Improving PostgreSQL performance on AWS EC2

November 19, 2012

Questions periodically come up on the PostgreSQL mailing list regarding Amazon EC2 and how to get PostgreSQL to perform well on it. The general feeling on the list appears to have been that EC2 performs very poorly for database workloads, at least with PostgreSQL, and it doesn’t seem to be taken particularly seriously as a platform. I certainly thought of it as a last resort myself, for when other constraints prevent you from using a proper VPS or real hardware.

I had the chance to meet with a high-level AWS support engineer last week, and that meeting prompted me to write up the basics of configuring EC2 instances for decent PostgreSQL performance. I haven't yet had the chance to back the following advice with hard numbers and benchmarks, so remember: always test everything with a simulation of your workload.

Before I can get into the configuration details, I need to outline how EC2 storage works.

EC2 storage types

EC2 instances have two very different storage options. These are explained quite well in the storage documentation, so I won't repeat the details. It's sufficient to say that the instance store is local to the VM, backed by local hard drive or SSD storage, and is unrecoverably lost if the VM is stopped. EBS, by contrast, is durable and isn't lost when the VM is stopped. It is more like NBD or iSCSI than local storage: it's a network block device protocol, with the latency and throughput issues that entails.
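If you're not sure which of an instance's devices are instance store and which are EBS, the instance metadata service will tell you. A quick check from inside the instance (the exact mapping entries vary by instance type and AMI):

```
# List the block device mapping; entries named "ephemeral0", "ephemeral1"
# etc. are instance store, while "ebs..." entries are attached EBS volumes.
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/

# Show which device node a given mapping is attached as:
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
```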

If you’ve been facing performance issues, you might’ve seen the High I/O instance types and thought “That sounds ideal for my database workload”. I thought so myself – but they won’t actually make any difference if your database storage is on EBS volumes. The High I/O instances have fast instance store storage, but are no different from other instances when it comes to EBS. So if you’ve been using a High I/O instance with a database that’s on EBS volumes, you’re not gaining any benefit from the enhanced instance store I/O you’re paying for, and would be better off on an EBS-optimized large instance.

Durable databases

For a durable database where you care about your data, what you want instead of a High I/O instance is an EBS-optimized instance, which has guaranteed network bandwidth to the EBS storage servers. Use EBS volumes with provisioned IOPS and, for best results, stripe a group of EBS volumes into a RAID10 array. See increasing EBS performance.
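As a sketch of the striping step, here's how you might assemble four such volumes with mdadm; the device names and mount point are illustrative and will vary with how you attach the volumes:

```
# Build a RAID10 array from four provisioned-IOPS EBS volumes attached
# as /dev/xvdf through /dev/xvdi (names illustrative).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

# Filesystem and mount point for PGDATA; noatime avoids a metadata
# write on every read.
mkfs.xfs /dev/md0
mkdir -p /mnt/pgdata
mount -o noatime /dev/md0 /mnt/pgdata
```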

You might also want to consider putting your temporary tablespaces on the instance store, rather than EBS. That way things like on-disk sorts won’t compete with other data for EBS bandwidth, and will get to use probably-faster local storage. (See also tablespaces).
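For example, with the instance store already formatted and mounted (the paths here are illustrative), you could create a dedicated tablespace for temporary objects:

```
-- Run as a superuser; the directory must exist and be owned by the
-- postgres operating system user.
CREATE TABLESPACE ephemeral_temp LOCATION '/mnt/instance-store/pg_temp';
```

Then point temp_tablespaces = 'ephemeral_temp' at it in postgresql.conf. Keep in mind that the directory vanishes if the instance stops, so it has to be re-created before PostgreSQL starts.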

As always, if you care about your data, back it up and use replication and/or PITR.
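A minimal sketch of WAL archiving for PITR in postgresql.conf, assuming PostgreSQL 9.x and an archive host (the hostname and path are hypothetical):

```
# Ship completed WAL segments off the instance as they're filled.
wal_level = archive
archive_mode = on
archive_command = 'rsync -a %p postgres@archive.example.com:/var/lib/pgsql/wal_archive/%f'
```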

These are just the basics – there’s a lot more tuning, testing and performance work that can be done from here, not to mention work on HA, automated backup, and other data protection measures. Just using the right instance type will get you to vaguely reasonable performance; you won’t get the most out of your (virtual) iron without a lot more work.

Non-durable instances

If you have data you can re-generate, or you’re running streaming replicas that you can simply re-clone from the master, you don’t necessarily need durable storage.

In this case, consider using the instance store, which is available on every instance type except Micro.
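Getting PGDATA onto the instance store is straightforward; a sketch, assuming the first ephemeral volume shows up as /dev/xvdb (check the metadata service, and treat all paths here as illustrative):

```
# Format and mount the ephemeral volume, then initialize a cluster on it.
mkfs.xfs /dev/xvdb
mkdir -p /mnt/instance-store
mount -o noatime /dev/xvdb /mnt/instance-store
mkdir /mnt/instance-store/pgdata
chown postgres:postgres /mnt/instance-store/pgdata
sudo -u postgres initdb -D /mnt/instance-store/pgdata
```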

Since you have no hope of recovering the database if the instance terminates or stops anyway, you might as well set fsync=off and full_page_writes=off in postgresql.conf, saving a lot of I/O and synchronization overhead. This is one of the few situations where turning fsync off is acceptable; never do it for data you expect to preserve, as you’re effectively giving the database permission to corrupt your data if the power fails or the host OS crashes.
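In postgresql.conf that looks like:

```
# For disposable, re-creatable data ONLY. These trade away all crash
# safety; an OS crash or power loss can corrupt the whole cluster.
fsync = off
full_page_writes = off
```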

Since you’re using the instance store, you can also potentially benefit from one of the High I/O instance types, using an array of SSDs for extremely fast seeks and high IOPS.

Do not use the instance store for data you want to preserve! Use it only for analytics work on data sets you can re-generate, or for replicas of data where the master lives elsewhere.

A non-durable storage based setup needs extensive testing, and is likely to need tuning and configuration quite different from a normal PostgreSQL install. Don’t expect amazing performance out of the box; you’ll need to do more than just fire up a High I/O instance and set up a default Pg install on it.

Future work

The next step, time permitting, is to quantify the above information with hard numbers. How does a High I/O instance with the DB on the instance store perform compared to an EBS-optimized xlarge instance with a RAID10 stripe of provisioned-IOPS EBS volumes? Or to the ubiquitous micro instance?

I’d like to look at integrating S3 and Glacier support into barman and repmgr down the track, as it’d be really interesting to have basebackups and WAL automatically stored in S3, archived to Glacier, used to provision new instance-store based replica servers, and more.
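In the meantime there's nothing stopping you from experimenting along those lines yourself; a rough sketch using the s3cmd tool (the bucket name is hypothetical, and retries, error handling, and the matching restore_command are left out):

```
# postgresql.conf: push each completed WAL segment straight to S3.
archive_command = 's3cmd put %p s3://my-wal-bucket/wal/%f'
```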

I’ll be doing more with EC2 in the coming months, so I expect to be able to offer more insight into its performance in future posts.
