Prioritizing databases by separating into multiple clusters

From PostgreSQL wiki

(This page is presently draft quality. It reflects the current state as of 8.4 and mid 8.5 development).

Prioritizing databases by separating clusters

On the pgsql-general mailing list, someone wanted to know how to assign different priorities to different databases (or sets of databases) so that long-running queries on one database wouldn't delay important work in other databases.

PostgreSQL doesn't provide any built-in priority controls, so you need to use operating-system-level facilities instead. There are a few complications that make it trickier than just using "renice" - see Priorities for an explanation. Essentially, to achieve really effective prioritization you need to run separate PostgreSQL "clusters" (i.e. a postmaster with its own data directory, listening on its own port or socket file), one for each priority level.
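As a rough sketch of what that looks like in practice - assuming the PostgreSQL binaries are on the PATH, and using purely illustrative paths, ports and database names:

```shell
# Create two independent clusters, each with its own data directory
# (paths are illustrative).
initdb -D /srv/pg/high
initdb -D /srv/pg/low

# Start one postmaster per cluster, each listening on its own port.
pg_ctl -D /srv/pg/high -o '-p 5432' start
pg_ctl -D /srv/pg/low  -o '-p 5433' start

# Clients pick a priority level by picking a port.
psql -p 5432 -d critical_db
psql -p 5433 -d reporting_db
```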

Do you really need to separate them?

Before spending the time and effort required to separate your databases into different clusters, see if a bit of query optimisation and database tuning can get you the performance you need. See Performance_Optimization.

What's a cluster?

In this context, "cluster" doesn't mean a cluster of separate machines working together, as in the usual usage. Rather, when talking about PostgreSQL installs, a "cluster" is a set of PostgreSQL databases controlled by one postmaster. The cluster shares a postmaster, data directory, background writer, block of shared memory, etc. The same postgresql.conf and pg_hba.conf apply to the entire cluster. All connections to any database in the cluster come in on the same port or unix socket to the postmaster, which starts a backend on the appropriate database to service the connection.

While it's typical to have only one PostgreSQL cluster on a machine, and most canned installs of PostgreSQL only provide for one cluster, nothing stops you from creating more than one. Multiple clusters mean more administrative work, since each requires separate backups, separate autovacuum tuning, etc., but they also give you the freedom to tune each cluster differently, assign them different backup priorities, and so on. In particular, you can enable PITR-style continuous archiving for only some of the clusters, so that your critical databases are always on warm standby while your reporting databases just get backed up occasionally.

Because each cluster has its own postmaster, clients must connect to a different port (or unix socket) to access the different clusters. If that's undesirable, you can bind a cluster to a particular IP address in postgresql.conf, so each cluster listens on the same port but on different IP addresses of a machine with multiple IP addresses.
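For example, the two clusters' postgresql.conf files might differ only in the address each one binds (the addresses here are purely illustrative, taken from the 192.0.2.0/24 documentation range):

```
# high-priority cluster's postgresql.conf
listen_addresses = '192.0.2.10'
port = 5432

# low-priority cluster's postgresql.conf
listen_addresses = '192.0.2.11'
port = 5432
```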

There's one big downside with running multiple PostgreSQL clusters on a machine: each cluster needs its own shared memory allocation, which is fixed in size and reserved at cluster startup. That means that your high-priority cluster cannot borrow memory from your low-priority cluster when it needs it. Thankfully, these days RAM is cheap.

This separation of resources means that unless you add more RAM, the total performance of the two (or more) clusters is likely to be lower than that of a single cluster doing the same work. However, your high priority queries should still run faster unless they need lots of memory.

(Debian-derived Linux systems, including Ubuntu, make managing multiple clusters easy. When you install PostgreSQL, the Debian packaging system will create a default cluster named "main". You can use the Debian-specific pg_createcluster command to create new clusters. See the man page.)
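For example, on a Debian-derived system (the version number and cluster name here are illustrative; see the pg_createcluster, pg_ctlcluster and pg_lsclusters man pages):

```shell
# Create a second cluster named "reporting" next to the default "main",
# listening on its own port, then start it and list all clusters.
pg_createcluster --port 5433 8.4 reporting
pg_ctlcluster 8.4 reporting start
pg_lsclusters
```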

Separate machines

Your best option is to separate PostgreSQL instances onto different physical machines. They won't compete for resources at all.

Multiple clusters on one machine with separate disk arrays

If you can't have separate machines, multiple cluster instances on the same machine with different physically separate disk I/O subsystems and enough CPUs to go around will help. You can use "nice" to prioritize CPU access very effectively by setting the nice level when starting the postmaster and letting all its children inherit that nice level.
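A small runnable illustration of that inheritance (the level 10 is an arbitrary example): a child process started under "nice" reports its parent's nice level, which is exactly how backends forked by a postmaster started under "nice" pick up its priority.

```shell
# Start a child under "nice -n 10" and have it print the nice level it
# inherited (10 is added to the parent's level, so it prints 10 when
# starting from the usual level of 0).
nice -n 10 sh -c 'nice'
```

The same applies to the server: start the low-priority cluster with something like `nice -n 10 pg_ctl -D /srv/pg/low start` (path illustrative) and every backend it forks runs at that nice level.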

You'll usually want a separate RAID controller for each cluster (if you use battery-backed hardware RAID), since the clusters have to share the RAID controller's write cache, and the RAID controller knows nothing about the OS's priorities. Similarly, if you use software RAID, you'll want completely separate arrays for each cluster.

To separate the disk I/O, just put each cluster's data directory on a file system that's stored on a different RAID array.

Multiple clusters on one machine with shared CPUs and disk array

If you can't separate the I/O subsystems either, you'll have to fall back on I/O priority systems like "ionice" to give the clusters different I/O priorities too. Again, you can set the I/O nice level when starting the postmaster for the cluster, and all its children will inherit the priority. You shouldn't expect great results from this, though, as disk I/O throughput is hurt severely by the increased seeking caused by multiple competing users.
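The inheritance works the same way as with "nice" - here's a small Linux-only illustration (assuming the util-linux ionice tool is installed): a child process started under ionice reports the I/O scheduling class it inherited, just as backends forked by the postmaster would.

```shell
# Put a shell in the "idle" I/O scheduling class, then have a child
# report the class it inherited (Linux only; requires util-linux).
ionice -c 3 sh -c 'ionice'   # the child reports the idle class
```

Starting the low-priority postmaster the same way - for instance `ionice -c 2 -n 7 pg_ctl -D /srv/pg/low start` (path illustrative) - gives every backend it forks the same reduced I/O priority.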

Credits

Page initially by Ringerc 02:23, 26 November 2009 (UTC)