Regression Testing with pgbench
pgbench ships with PostgreSQL and can be used to test several aspects of database performance. As the primary benchmarking tool included with the main distribution, it provides the easiest way to run simple performance tests. This makes it particularly useful for regression testing: confirming that a new version wasn't slowed down by changes made since the previous one.
Setup
If you are building your PostgreSQL from source, note that the default "make" target does not compile pgbench or other modules in the contrib/ directory tree. You'll need to do that yourself like this:
cd contrib
make
make install
(Alternatively, you can just go into contrib/pgbench and run the make/install steps from that directory.)
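Following the same pattern, that per-directory build looks like this:

cd contrib/pgbench
make
make install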
You can confirm you're getting the right version like this example:
$ pgbench --version
pgbench (PostgreSQL) 8.5alpha2
Non-debug builds
One critical thing to be aware of is that many testing builds of PostgreSQL, including the pre-release RPM versions, are compiled with assertion code turned on. Assertions help find bugs, but they will substantially slow down pgbench tests, particularly if you've increased shared_buffers to a large value. After you've started your database, you can check whether asserts are on using the SHOW command; this is what you should see:
postgres=# show debug_assertions;
 debug_assertions
------------------
 off
(1 row)
You will not get useful regression data unless this setting is off.
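If you drive your tests from a script, a small guard like this sketch can catch an assert-enabled build before you waste a run (the psql flags are standard; the surrounding script structure is just one way to do it):

# Abort before benchmarking if this build was compiled with assertions
ASSERTS=$(psql -At -c "SHOW debug_assertions;" postgres)
if [ "$ASSERTS" != "off" ]; then
    echo "debug_assertions is on; regression numbers would be misleading" >&2
    exit 1
fi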
postgresql.conf configuration changes
The default settings in postgresql.conf give pretty low performance for pgbench. Two settings you can usefully increase are:
* shared_buffers: Setting this definitely helps pgbench performance on reads and writes. Using at least 256MB gets you a good-sized portion of the improvement possible here.
* checkpoint_segments: If you're running the default or other write-heavy tests, you want this to be much larger to get more useful results. At least 16 will improve a lot over the baseline.
Many other parameters that impact general performance, including effective_cache_size and work_mem, have no impact on the simple tests pgbench runs.
Here's a script that creates a database cluster with useful starting parameters, presuming you've set PGDATA and PGLOG to where you want the cluster and startup log file to go, respectively:
initdb
SHARED_BUFFERS="256MB"
WAL_BUFFERS="16MB"
CHECKPOINT_SEGMENTS="16"
echo \# Changes for pgbench testing >> $PGDATA/postgresql.conf
echo shared_buffers=$SHARED_BUFFERS >> $PGDATA/postgresql.conf
echo wal_buffers=$WAL_BUFFERS >> $PGDATA/postgresql.conf
echo checkpoint_segments=$CHECKPOINT_SEGMENTS >> $PGDATA/postgresql.conf
pg_ctl start -l $PGLOG
Creating pgbench sample database
The simplest approach to get useful regression testing is to use a small database (one that fits in shared_buffers) that has a scale greater than the number of cores on your system, then run the select-only test on that.
- Set shared_buffers='256MB'
- Create a database with a scale of 10 (approximately 160MB):
createdb pgbench
pgbench -i -s 10 pgbench
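To double-check that the result really fits in shared_buffers, you can ask the server for the database size (a quick sanity check; the "pgbench" database name is carried over from the commands above):

psql -c "SELECT pg_size_pretty(pg_database_size('pgbench'));" pgbench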
Baseline
Here's a sample with 2 jobs and 4 clients, suitable for a 2-core system, that runs for 30 seconds:
pgbench -T 30 -j 2 -S -c 4 pgbench
Note that the number of clients (-c) must be a multiple of the number of threads (-j), so with "-j 2" we can't test performance with a single client.
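If you do want a single-client data point, drop to one thread for that run:

pgbench -T 30 -j 1 -S -c 1 pgbench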
Here's a script that tries a number of client loads:
CORES="2"
CLIENTS="2 4 8 16 32"
for c in $CLIENTS; do
    pgbench -T 30 -j $CORES -S -c $c pgbench
done
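To make runs easier to compare later, a variation like this sketch captures just the transaction rate from each pass. It assumes the "tps = ..." summary line pgbench normally prints (taking the first such line per run), and the results.txt file name is just an example:

CORES="2"
CLIENTS="2 4 8 16 32"
for c in $CLIENTS; do
    # Keep only the client count and the reported tps figure
    pgbench -T 30 -j $CORES -S -c $c pgbench | \
        awk -v c=$c '/^tps/ {print c, $3; exit}' >> results.txt
done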
Caching issues
Note that if you run the pgbench SELECT test right after the initialization step, you'll be testing with a "warm cache": one where all of the data you need is already in RAM. If you do something that clears the PostgreSQL and OS caches, such as rebooting, pgbench will be slow afterwards, because it has to read every block of the accounts table back into RAM in a way that involves random seeks rather than sequential reads. For regression testing purposes, you might as well just create a new database after every reboot, so that you're only ever comparing cached performance. Uncached behavior is much harder to compare systematically across multiple runs.
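One way to follow that advice, carrying over the database name and scale from the setup examples above:

# Rebuild the pgbench database from scratch so the run starts warm
psql -c "DROP DATABASE IF EXISTS pgbench;" postgres
createdb pgbench
pgbench -i -s 10 pgbench
pgbench -T 30 -j 2 -S -c 4 pgbench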
Comparisons
You can use pgbench-tools to script large numbers of tests and record the results for analysis in a database.
Resources
- The post "PostgreSQL Performance Pit Stop: Determining the right pgbench database scale size" and the presentation "Using and abusing pgbench" both cover many aspects of getting useful results from pgbench.