Transcription

PostgreSQL when it's not your job. Christophe Pettus, PostgreSQL Experts, Inc. DjangoCon Europe 2012

The DevOps World. "Integration between development and operations." "Cross-functional skill sharing." "Maximum automation of development and deployment processes." "We're way too cheap to hire real operations staff. Anyway: Cloud!"

Thus. No experienced DBA on staff. Have you seen how much those people cost, anyway? Development staff pressed into duty as database administrators. But it's OK, it's PostgreSQL!

Everyone Loves PostgreSQL. Robust, feature-rich, fully ACID-compliant database. Very high performance; can handle hundreds of terabytes or more. Well supported by Python, Django, and associated tools. Open source under a permissive license.

But then you hear: "It's hard to configure." "It requires a lot of on-going maintenance." "It requires powerful hardware to get good performance." "It's SQL, and everyone knows how old and boring that is. Also: it's not Web Scale." "Elephants scare me."

We’re All Going To Die.

It Can Be Like This.

(Photo caption, partially legible: "This machine was bought in 1997. It is running…")

PostgreSQL when it is not your job. Basic configuration. Easy performance boosts (and avoiding pitfalls). On-going maintenance. Hardware selection.

Hi, I'm Christophe. PostgreSQL person since 1997. Django person since 2008. Consultant with PostgreSQL Experts, Inc. since 2009. Django guy.

No time to explain!

Just do this!

The philosophy of this talk. It's hard to seriously misconfigure PostgreSQL. Almost all performance problems are application problems. Don't obsess about tuning.

PostgreSQL configuration. Logging. Memory. Checkpoints. Planner. You’re done. No, really, you’re done!

Logging. Be generous with logging; it's very low impact on the system. It's your best source of information for finding performance problems.

Where to log? syslog — if you have a syslog infrastructure you like already. Standard format to files — if you are using tools that need standard format. Otherwise, CSV format to files.

What to log?

log_destination = 'csvlog'
log_directory = 'pg_log'
logging_collector = on
log_filename = 'postgres-%Y-%m-%d_%H%M%S'
log_rotation_age = 1d
log_rotation_size = 1GB
log_min_duration_statement = 250ms
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0

Memory configuration. shared_buffers. work_mem. maintenance_work_mem.

shared_buffers. If total system memory is below 2GB, set to 20% of total system memory. Below 32GB, set to 25% of total system memory. Above 32GB (lucky you!), set to 8GB. Done.

work_mem. Start low: 32-64MB. Look for 'temporary file' lines in logs (an example follows). Set to 2-3x the largest temp file you see. Can cause a huge speed-up if set properly! But be careful: it can use that amount of memory per planner node.
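
With log_temp_files = 0 (as above), a sort or hash that spills to disk produces a log entry roughly like the following (path and size will vary):

LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 201326592

Here the query spilled about 192MB to disk, so under the 2-3x rule a work_mem somewhere around 400-600MB would be the starting point, if the system has the memory to spare.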

maintenance_work_mem. 10% of system memory, up to 1GB. Maybe even higher if you are having VACUUM problems.

effective_cache_size. Set to the amount of file system cache available. If you don't know, set it to 50% of total system memory. And you're done with memory settings.

About checkpoints. A complete flush of dirty buffers to disk. Potentially a lot of I/O. Done when the first of two thresholds is hit: a particular number of WAL segments have been written, or a timeout occurs.

Checkpoint settings, part 1.

wal_buffers = 16MB
checkpoint_completion_target = 0.9
checkpoint_timeout = 10m-30m  # Depends on restart time
checkpoint_segments = 32      # To start.

Checkpoint settings, part 2. Look for checkpoint entries in the logs. Happening more often than checkpoint_timeout? Adjust checkpoint_segments so that checkpoints happen due to timeouts rather than due to filling segments. And you're done with checkpoint settings.

Checkpoint settings, part 3. The WAL can take up to 3 x 16MB x checkpoint_segments on disk; with checkpoint_segments = 32, that is up to 1.5GB. Restarting PostgreSQL can take up to checkpoint_timeout (but usually less).

Planner settings. effective_io_concurrency — set it to the number of I/O channels; otherwise, ignore it. random_page_cost — 3.0 for a typical RAID10 array, 2.0 for a SAN, 1.1 for Amazon EBS. And you're done with planner settings.

Easy performance boosts. General system stuff. Stupid database tricks. SQL pathologies. Indexes. Tuning VACUUM.

General system stuff. Do not run anything besides PostgreSQL on the host. If PostgreSQL is in a VM, remember all of the other VMs on the same host. Disable the Linux OOM killer.

Stupid database tricks, 1. Sessions in the database. Constantly-updated accumulator records. Task queues in the database. Using the database as a filesystem. Frequently-locked singleton records. Very long-running transactions.

Stupid database tricks, 2. Using INSERT instead of COPY for bulk-loading data. psycopg2 has a very good COPY interface (see the sketch below). Mixing transactional and data warehouse queries on the same database.
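
A minimal sketch of that COPY interface, using psycopg2's copy_from (the connection string, table, and column names here are hypothetical):

import io
import psycopg2

conn = psycopg2.connect("dbname=app")
cur = conn.cursor()

# Tab-separated rows; for large loads, stream from a file instead of memory.
buf = io.StringIO("1\talice\n2\tbob\n")

# copy_from wraps COPY ... FROM STDIN: one network round-trip,
# far faster than issuing row-at-a-time INSERTs.
cur.copy_from(buf, "users", columns=("id", "username"))
conn.commit()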

One schema trick. If one model has a constantly-updated section and a rarely-updated section (like a user record with a name and a "last seen on site" field), split those into two models (and thus two database tables). You'll thank me later.
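
A hedged sketch of that split as Django models (names are made up for illustration); every "last seen" write now touches only the small, hot table:

from django.db import models

class UserProfile(models.Model):
    # Rarely-updated identity data.
    name = models.CharField(max_length=100)

class UserPresence(models.Model):
    # Constantly-updated row, kept separate from the wide profile row.
    profile = models.OneToOneField(UserProfile, on_delete=models.CASCADE)
    last_seen = models.DateTimeField(auto_now=True)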

SQL pathologies. Gigantic IN clauses (a typical Django antipattern). Unanchored text queries like '%this%'; use the built-in full text search instead. Small, high-volume queries processed by the application.
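
One common source of gigantic IN clauses is materializing a queryset into a Python list before filtering; passing the queryset itself lets Django emit a single subquery instead (hypothetical models):

# Antipattern: pulls every id into Python, then ships a huge IN (...) back.
author_ids = list(Author.objects.filter(active=True).values_list("id", flat=True))
books = Book.objects.filter(author_id__in=author_ids)

# Better: pass the queryset; Django emits one query with a subselect.
books = Book.objects.filter(
    author_id__in=Author.objects.filter(active=True).values_list("id", flat=True)
)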

Indexing, part 1. What is a good index? A good index: has high selectivity on commonly performed queries; or, is required to enforce a constraint.

Indexing, part 2. What's a bad index? Everything else. Non-selective / rarely used / expensive to maintain. Only the first column of a multi-column index can be used separately.

Indexing, part 3. Don't go randomly creating indexes on a hunch. That's my job. pg_stat_user_tables — shows sequential scans. pg_stat_user_indexes — shows index usage.
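
For example, a quick check for heavily seq-scanned tables from Python (the statistics view and its columns are standard PostgreSQL; the connection string is hypothetical):

import psycopg2

conn = psycopg2.connect("dbname=app")
cur = conn.cursor()
# Tables with many sequential scans and few index scans are candidates
# for a closer look; they are not automatically missing an index.
cur.execute("""
    SELECT relname, seq_scan, idx_scan
    FROM pg_stat_user_tables
    ORDER BY seq_scan DESC
    LIMIT 10
""")
for relname, seq_scan, idx_scan in cur.fetchall():
    print(relname, seq_scan, idx_scan)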

VACUUM. autovacuum slowing the system down? Increase autovacuum_vacuum_cost_limit (default is 200). If load is periodic, do manual VACUUMing instead at low-load times. You must VACUUM regularly!
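
If you script those manual VACUUMs, note that VACUUM cannot run inside a transaction block, so psycopg2 needs autocommit first (a sketch; database and table names hypothetical):

import psycopg2

conn = psycopg2.connect("dbname=app")
conn.autocommit = True  # VACUUM refuses to run inside a transaction block.
cur = conn.cursor()
cur.execute("VACUUM ANALYZE big_table")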

ANALYZE. Collects statistics on the data to help the planner choose a good plan. Done automatically as part of autovacuum. Always do it manually after substantial database changes (loads, etc.). Remember to do it as part of any manual VACUUM process.

On-going maintenance. Monitoring. Backups. Disaster recovery. Schema migrations.

Monitoring. Always monitor PostgreSQL. At least disk space and system load. Memory and I/O utilization are very handy. Use 1-minute bins. check_postgres.pl at bucardo.org.

pg_dump. Easiest PostgreSQL backup tool. Very low impact on the database being backed up. Makes a copy of the database. Becomes impractical as the database gets big (in the tens of GB).

Streaming replication, 1. Best solution for large databases. Easy to set up. Maintains an exact copy of the database on a different host. Make sure it really is a different host! Does not guard against application-level failures, however.

Streaming replication, 2. Replicas can be used for read-only queries. If you are getting query cancellations, increase max_standby_streaming_delay to 200% of the longest query execution time. You can pg_dump a streaming replica.

Streaming replication, 3. Streaming replication is all-or-nothing. If you need partial replication, you need trigger-based replication (Slony, Bucardo). These are not part-time jobs!

WAL archiving. Maintains a set of base backups and WAL segments on a (remote) server. Can be used for point-in-time recovery in case of an application (or DBA) failure. Slightly more complex to set up, but well worth the security. Can be used alongside streaming replication.

Pitfalls. Encoding. Schema migrations. IDLE IN TRANSACTION. VACUUM FREEZE.

Encoding. Character encoding is fixed in a database when it is created. The defaults are probably not what you want. Use UTF-8 encoding (with an appropriate locale). C locale sometimes makes sense.
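
A sketch of creating a database with an explicit encoding from psycopg2 (the database name is hypothetical; CREATE DATABASE, like VACUUM, must run outside a transaction block):

import psycopg2

conn = psycopg2.connect("dbname=postgres")
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction.
cur = conn.cursor()
# template0 is the safe template when specifying the encoding explicitly.
cur.execute("CREATE DATABASE app ENCODING 'UTF8' TEMPLATE template0")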

Who has done this? Add a column to a large table. Push out to production using South or something. Watch the production system fall over and go boom as PostgreSQL appears to freeze? I've heard about that happening.

Schema migrations. All modifications to a table take an exclusive lock on that table while the modification is being done. If you add a column with a default value, the table will be rewritten. This can be very, very bad.

Schema migration solutions. Create new columns as nullable (not NOT NULL), with no default; add the NOT NULL constraint later, once the field is populated. That still takes a lock, but a much shorter one (see the sketch below). Or: create a new table and copy values into it (the old table can be read but not written in the meantime).
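
A hedged sketch of that sequence as raw SQL run from psycopg2 (table and column names hypothetical):

import psycopg2

conn = psycopg2.connect("dbname=app")
cur = conn.cursor()

# 1. Add the column nullable, with no default: brief lock, no table rewrite.
cur.execute("ALTER TABLE users ADD COLUMN is_active boolean")
conn.commit()

# 2. Backfill the value (for very large tables, do this in batches).
cur.execute("UPDATE users SET is_active = true WHERE is_active IS NULL")
conn.commit()

# 3. Enforce the constraint: takes a lock and a scan, but no rewrite.
cur.execute("ALTER TABLE users ALTER COLUMN is_active SET NOT NULL")
conn.commit()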

IDLE IN TRANSACTION. A session state when a transaction is in progress, but the session isn't doing anything. Common in Django applications. Be careful about your transaction model. Don't accept Django's default transaction behavior.
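
In the Django of this era (1.3/1.4), one way to keep transactions short was to scope them explicitly rather than relying on the defaults; a sketch using the then-current decorator (view and model names hypothetical; modern Django replaces this with transaction.atomic):

from django.db import transaction

@transaction.commit_on_success  # Commits on success, rolls back on exception.
def update_name(request):
    # The transaction lasts only as long as this function body, so the
    # connection does not sit IDLE IN TRANSACTION between requests.
    profile = UserProfile.objects.get(pk=request.user.pk)
    profile.name = request.POST["name"]
    profile.save()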

VACUUM FREEZE. Once in a long while, PostgreSQL needs to scan (and sometimes write) every table. This can be a big surprise. Once every few months, pick a (very) slack period and do a VACUUM FREEZE.

Hardware selection, one year ago. "Here are the kinds of I/O subsystems to avoid, and to get." "You need blah about this much memory..." "And you should think about cores and this and that and this other thing blah blah blah..."

The Cloud.

Hardware in the cloud. Get as much memory as you can. Get one CPU core for each two active connections. Usually, few connections are active. Hope the I/O subsystem can keep up with your traffic. Eventually, it won't.

Your own hardware. Get lots of (ECC) RAM. CPU is usually not as vital as RAM. First step is hardware RAID, with: RAID10 for the main database. RAID1 for the transaction logs. RAID1 for the boot disk.

Considered harmful. Parity-disk RAID (RAID 5/6, Drobo, etc.). iSCSI, especially for transaction logs. SANs, unless you can afford multichannel fibre. Long network hauls between the app server and database server.

AWS Survival Guide. Biggest instance you can afford. EBS for the data and transaction logs. Don't use instance storage for any database data; OK for text logs. random_page_cost = 1.1. Set up streaming replication.

Additional tools. www.repmgr.org. WAL-E from Heroku. pgFouine (log analyzer). pgbouncer (part of Skype's SkyTools).

Additional reading. thebuild.com pgexperts.com

Questions?

Thank you!