PostgreSQL Buildfarm Howto

From PostgreSQL wiki

PostgreSQL BuildFarm is a distributed build system designed to detect build failures on a large collection of platforms and configurations. This software is written in Perl. If you're not comfortable with Perl then you possibly don't want to run this, even though the only adjustment you should ever need is to the config file (which is also Perl).

Get the Software

Download the latest client release from the buildfarm server. Unpack it and put it somewhere. You can put the config file in a different place from the script if you want to, but the simplest thing is to put it in the same place. Decide which user you will run the script as - it must be a user who can run the PostgreSQL server programs (on Unix that means it must *not* run as root). Do everything else here as that user.

Other Prerequisites

  • Git - must be version 1.6 or later.
  • All tools required for building Postgres from a Git checkout: GNU make, bison, flex, etc. See the Postgres documentation.
  • ccache - this isn't absolutely necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage.

All the command line options

This list is complete as of release 17 of the client

  • --config=/path/to/file - location of config file, default build-farm.conf
  • --nosend says don't send the results to the server
  • --nostatus says don't update the status files
  • --force says run the build even if it's not needed
  • --verbose[=n] says display information. verbosity level 1 (default if --verbose is specified) shows a line for each step as it starts. Any higher number causes the logs from the various stages to be sent to the standard output
  • --quiet - suppress error output
  • --test is short for --nosend --nostatus --force --verbose
  • --find-typedefs - obsolete way to trigger typedef analysis. This should now be done via the config file
  • --help - print help text
  • --keepall - keep build and installation directories if there is a failure
  • --from-source=/path - use source in path, not from SCM
  • --from-source-clean=/path - same as --from-source, run make distclean first
  • --skip-steps=list - skip certain steps
  • --only-steps=list - only do certain steps, not allowed with skip-steps
  • --schedule=name - use different schedule in check and installcheck
  • --tests=list - just run these tests in check and installcheck
  • --check-warnings - turn compiler warnings into errors
  • --delay-check - defer check step until after install steps
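As an illustration of combining these, --test bundles the most common debugging flags into one. A one-off local test of a single branch might look like this (the config path and branch name are just examples):

 ./run_build.pl --test --config=/path/to/build-farm.conf REL9_4_STABLE

This builds and tests the branch but reports nothing to the server.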

Choose a setup for a base git mirror that all your branches will pull from.

Most buildfarm members run on more than one branch, and if you do it's good practice to set up a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at

 https://git.postgresql.org/git/postgresql.git

and there is a mirror at

 https://github.com/postgres/postgres.git

Either should be suitable for cloning.

The simplest way to set up a mirror is simply to have the buildfarm script create and maintain it for you. If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might require a new build. To do that, all you need to do is set the following two options in your config file:

 git_keep_mirror => 'true',
 git_ignore_mirror_failure => 'true',

If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:

 scmrepo => 'https://github.com/postgres/postgres.git',

The mirror will be placed in your build root, above the branch directories.

You can also opt to create and maintain a git mirror yourself, something like this:

 git clone --mirror https://git.postgresql.org/git/postgresql.git pgsql-base.git

When that is done, add an entry to your crontab to keep it up to date, something like:

 20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q

One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.

To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:

 scmrepo => '/path/to/pgsql-base.git',

Of course, in this case you don't set the git_keep_mirror option.

Create a directory where builds will run.

This should be dedicated to the use of the build farm. Make sure there's plenty of space - on my machine each branch can use up to about 700Mb during a build. You can use the directory where the script lives, or a subdirectory of it, or a completely different directory.

If you're using ccache, the cache directory can use up to 1Gb by default. You can reduce that if you like (see the ccache documentation), but it's good to allow at least 100Mb per active branch.
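If you need to adjust the cache size, ccache's -M option sets the maximum; the figure below is only an illustration (five active branches at roughly 100Mb each):

 ccache -M 500M

You can check current cache usage at any time with "ccache -s".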

Edit the build-farm.conf file

Notable things you probably need to set include the following:

  • scmrepo - set this to indicate the path to your Git mirror.
  • scm_url - if you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be https://github.com/postgres/postgres/commit/ - don't forget the trailing "/".

Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:

  • animal - this will need to be set to the animal name you were given by the Buildfarm coordinators.
  • secret - this must be the password indicated by the Buildfarm coordinators.

Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with.

You should not need to adjust other variables.

You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.
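To see what that check catches, here is a self-contained sketch: it writes a minimal, hypothetical stand-in for the config file (these variable names are placeholders, not the real sample config) and syntax-checks it the same way. perl -cw prints "syntax OK" when the file is legitimate Perl:

```shell
# Write a minimal, hypothetical stand-in for build-farm.conf
cat > /tmp/mini-build-farm.conf <<'EOF'
use strict;
our %conf = (
    animal => 'CHANGEME',
    secret => 'CHANGEME',
);
EOF

# -c compiles without running; -w enables warnings.
# A stray comma or unbalanced quote in the real config shows up here as an error.
if perl -cw /tmp/mini-build-farm.conf 2>&1 | grep -q 'syntax OK'; then
    echo "config parses cleanly"
fi
```

A broken edit (say, a missing closing quote) would instead produce a Perl compile error pointing at the offending line.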

Alerts and Status Notifications

Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:

  • for every build
  • for every build that fails
  • for every build that changes the status
  • for every build that changes the status if the change is to or from OK (green)
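These four choices correspond to keys in the mail_events stanza of the config file. A sketch of that stanza (the keys follow the sample config; the address is a placeholder):

 mail_events => {
     all    => [],                    # every build
     fail   => [],                    # every build that fails
     change => ['me@example.com'],    # every status change
     green  => [],                    # changes to or from OK
 },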

Change the shebang line in the scripts.

If the path to your perl installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. This is the ONLY line in those files you should ever need to edit.

Check that required perl modules are present.

Run "perl -cw run_build.pl". If you get errors about missing perl modules you will need to install them. Most of the required modules are standard modules in any perl distribution. The rest are all standard CPAN modules, and available either from there or from your OS distribution. When you don't get an error any more, run the same test on run_branches.pl, and also on run_web_txn.pl if you plan to use that (see below).

If you are using an https URL for the buildfarm server (which you should be!), make sure that LWP::Protocol::https and Mozilla::CA are installed as well; the above test does not catch these requirements.
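A quick way to test for those two modules (or any others the -cw check complained about) is to try loading each one directly; this loop is a sketch that reports each module as present or missing:

```shell
for mod in LWP::Protocol::https Mozilla::CA; do
    # -M loads the module, -e 1 runs a no-op program, so this only tests loadability
    if perl -M"$mod" -e 1 2>/dev/null; then
        echo "$mod: present"
    else
        echo "$mod: MISSING - install it from CPAN or your OS packages"
    fi
done
```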

When all is clear you are ready to start testing.

Run in test mode.

With a PATH that matches what you will have when running from cron, run the script in no-send, no-status, verbose mode. Something like this:

 ./run_build.pl --nosend --nostatus --verbose

and watch the fun begin. If this results in failures because it can't find some executables (especially gmake and git), you might need to change the config file again, this time changing the "build_env" with another setting something like:

 PATH => "/usr/local/bin:$ENV{PATH}",

Also, if you put the config file somewhere else, you will need to use the --config=/path/to/build-farm.conf option.

If trying to diagnose problems, interesting summary information may be found in the log files in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/

If particular steps of a build failed, logs for those steps may be found in that same directory.

Test running from cron

When you have that running, it's time to try with cron. Put a line in your crontab that looks something like this:

 43 * * * * cd /location/of/script && ./run_build.pl --nosend --verbose

Again, add the --config option if needed. Notice that this time we didn't specify --nostatus. That means that (after the first run) the script won't do any build work unless the Git repo has changed. Check that your cron job runs (it should email you the results, unless you tell it to send them elsewhere).

You can, and probably should, drop the --verbose option once things are working.

The frequency with which the cron job is launched is up to you, though we do suggest that active branches get built at least once a day. The build script will automatically exit if it finds a previous invocation still running, so you do not need to worry about scheduling jobs too close together. Think of the cron frequency as how often the buildfarm animal will wake up to see if there have been changes in the Git repo.

Choose which branches you want to build

By default run_build.pl builds the HEAD branch. If you want to build some other branch, you can do so by specifying the name on the command line, e.g.

 ./run_build.pl REL9_4_STABLE

The old way to build multiple branches was to create a cron job for each active branch, along the lines of:

6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE

But there's a better way ...


There is a wrapper script, run_branches.pl, that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the options that run_build.pl does, and passes them through. So now your crontab could just look like this:

 6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all

One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.

The branches that are built are controlled by the branches_to_build setting in the global section of the config file. The sample config file's setting is 'ALL'.

If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n, or a fixed list of branches. In the last case you will probably need to adjust the list whenever the PostgreSQL developers start a new branch or declare an old branch to be at End Of Life.
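For instance (the branch names below are illustrative - use whatever is current), any of these forms works:

 branches_to_build => 'ALL',
 branches_to_build => 'HEAD_PLUS_LATEST2',
 branches_to_build => ['HEAD', 'REL_17_STABLE', 'REL_16_STABLE'],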

Register your new buildfarm member and subscribe to the mailing list.

Once this is all running happily, you can register to upload your results to the central server. Registration can be done on the buildfarm server at https://buildfarm.postgresql.org/. When you receive your approval by email, you will edit the "animal" and "secret" lines in your config file, remove the --nosend flags, and you are done.

Please also join the buildfarm-members mailing list. This is a low-traffic list for owners of buildfarm members, and every buildfarm owner should be subscribed.

Status Mailing Lists

There are also two mailing lists that report status from all builds, not just your own animals. This is useful for developers who want to be notified of events rather than having to monitor the server's dashboard.

  • buildfarm-status-failures, which gets an email any time a buildfarm animal reports a failed run.
  • buildfarm-status-green-chgs, which gets an email any time the status of a buildfarm animal changes to or from green (i.e. success). This is the status list most people find useful.


Please file bug reports concerning the buildfarm client (but not Postgres itself) on the buildfarm members mailing list.

Running on Windows

There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.


Cygwin

There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.
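In config-file terms, those Cygwin settings go into the build_env stanza, something like:

 build_env => {
     MAX_CONNECTIONS => '3',
     CYGWIN          => 'server',
 },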


MinGW/MSys

For MinGW/MSys, you need both the MSys DTK version of perl installed, and a native Windows perl - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script (run_web_txn.pl) using native Perl. That means you need to change the first line of run_web_txn.pl so it reads something like:

 #!/c/perl/bin/perl

You should make sure that the PATH is set in your config file to put the Native perl ahead of the MSys DTK perl. It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:

 @echo off
 cd \msys\1.0\bin
 c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"

Set up a non-privileged Windows user to run these jobs as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.

Microsoft Visual C++

For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:

 @echo off
 cd \prog\bf
 c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt

You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection. The site it is distributed from is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.

Running multiple buildfarm members on a single machine

Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.

The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever in the config will be different from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).

Running in Parallel

If you run a single animal, you can run all the branches in parallel just by changing run_branches.pl's --run-all option to --run-parallel. This will launch each branch's run, spaced out by 60 seconds from launch to launch.

The long story: parallelism is controlled by a number of configuration parameters in the global section of the config file. The first is parallel_lockdir. By default this is the global_lock_dir, which in turn defaults to the build_root. This directory is where run_branches.pl puts a lock file for each running branch. The second is max_parallel. The script will launch a new branch as long as the number of live locks is less than this number. The default is 10. Lastly, the setting parallel_stagger determines how long the script will wait before starting a new branch, unless one finishes in the meantime. The default is 60 seconds.
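Putting those three parameters together, the global section of a parallel-running config might contain something like this (the values are illustrations, not recommendations):

 parallel_lockdir => '/home/bf/buildroot',  # one lock file per running branch
 max_parallel     => 4,                     # at most 4 branches at once
 parallel_stagger => 120,                   # wait 2 minutes between launches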

If you want to run multiple animals and use parallelism between them the best way is to use a separate build_root for each animal. Then don't set the global_lock_dir for each animal, but do set the parallel_lockdir for each animal to point to the same directory, probably the build_root of one of the animals. Then you could have a crontab something like this:

 2-59/15 * * * * cd curly && ./run_branches.pl --run-parallel --config=curly.conf
 7-59/15 * * * * cd larry && ./run_branches.pl --run-parallel --config=larry.conf
 12-59/15 * * * * cd moe && ./run_branches.pl --run-parallel --config=moe.conf

Tips and Tricks

You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following will force a build on all the stable branches of my animal crake:

 cd root
 for f in REL* ; do
   touch $f/crake.force-one-run
 done
When the run is done this file will be removed automatically.

Testing Additional Software

In addition to testing core Postgres code, you can test addon software such as Extensions and Foreign Data Wrappers. To do that you need to create a Module file in the PGBuild/Modules directory. Say you're going to test a Foreign Data Wrapper called UltraCoolFDW. Copy the Skeleton.pm file in that directory to UltraCoolFDW.pm. Inside UltraCoolFDW.pm, change the package name to "PGBuild::Modules::UltraCoolFDW".

Then add your new module to the "modules" section in your config file.

At this stage your new module will register and run. It just won't do anything, but if you run in verbose mode you will see the traces of its subroutines being called.

To make it do some things you need to fill in a bit of code. But not very much. The most important are the setup() subroutine, and the checkout(), setup_target(), build(), install(), installcheck() and cleanup() subroutines.

In setup() you normally need to create an SCM object to handle checking out your code, and stash info on where it's going to be built. That extra code for UltraCoolFDW will look something like this, just before the register_module_hooks() call:

   my $scmconf = {
        scm             => 'git',
        scmrepo         => 'git://example.com/ultracoolfdw.git',  # hypothetical repo URL
        git_reference   => undef,
        git_keep_mirror => 'true',
        git_ignore_mirror_failure => 'true',
        build_root                => $self->{buildroot},
    };

    $self->{scm} = PGBuild::SCM->new($scmconf, 'ultracoolfdw');
    my $where = $self->{scm}->get_build_path();
    $self->{where} = $where;

You might only want to run this module on some branches. Say you only want to run it on 'HEAD' (our name for git master). You would put something like this at the top of the setup function:

    return unless $branch eq 'HEAD';

In checkout() you need to check the code out. Replace the push() line with lines like this:

    my $scmlog = $self->{scm}->checkout($self->{pgbranch});
    push(@$savescmlog,
        "------------- $MODULE checkout ----------------\n", @$scmlog);

This code works if your FDW code has branches that mirror the Postgres branches. If instead you have a single branch, say "main", that works for all Postgres branches, use that name instead of $self->{pgbranch}. The branch name "HEAD" can also be used: it will map to whatever the default branch is of your git repo.

setup_target() normally just needs the addition of this line:

    $self->{scm}->copy_source(undef);

These next functions all assume (correctly) that Postgres has been successfully built and installed in the standard place, i.e. "../inst" relative to your build directory.

build() and install() are pretty similar. Essentially they simply invoke your code's Makefile to run these tasks. The code for build should look something like this:

    my $cmd = "PATH=../inst:$ENV{PATH} make USE_PGXS=1";
    my @makeout = run_log("cd $self->{where} && $cmd");
    my $status = $? >> 8;
    writelog("$MODULE-build", \@makeout);
    print "======== make log ===========\n", @makeout if ($verbose > 1);
    $status ||= check_make_log_warnings("$MODULE-build", $verbose)
      if $check_warnings;
    send_result("$MODULE-build", $status, \@makeout) if $status;

while the code for install() looks something like this:

    my $cmd = "PATH=../inst:$ENV{PATH} make USE_PGXS=1 install";
    my @log = run_log("cd $self->{where} && $cmd");
    my $status = $? >> 8;
    writelog("$MODULE-install", \@log);
    print "======== install log ===========\n", @log if ($verbose > 1);
    send_result("$MODULE-install", $status, \@log) if $status;

If you get a perl complaint about $MODULE being undefined, add a line like this near the top of your module, just after use warnings;

(my $MODULE = __PACKAGE__) =~ s/PGBuild::Modules:://;

installcheck() is the most complicated subroutine. That's because in addition to running the installcheck procedure it needs to gather up all the log files, regression differences etc. Here's an example of the additional code needed in this subroutine:

    my $make = $self->{bfconf}->{make};
    print time_str(), "install-checking $MODULE\n" if $verbose;
    my $cmd = "$make USE_PGXS=1 USE_MODULE_DB=1 installcheck";
    my @log = run_log("cd $self->{where} && $cmd");
    my $log = PGBuild::Log->new("$MODULE-installcheck-$locale");
    my $status     = $? >> 8;
    my $installdir = "$self->{buildroot}/$self->{pgbranch}/inst";
    my @logfiles   = ("$self->{where}/regression.diffs", "$installdir/logfile");
    if ($status)
    {
        $log->add_log($_) foreach (@logfiles);
    }
    push(@log, $log->log_string);
    writelog("$MODULE-installcheck-$locale", \@log);
    print "======== installcheck ($locale) log ===========\n", @log
      if ($verbose > 1);
    send_result("$MODULE-installcheck-$locale", $status, \@log) if $status;

Finally in cleanup(), add any cleanup required. Usually this can just be the removal of the build directory, something like:

    rmtree($self->{where});

Remove any reference to unneeded subroutines in the $hooks, and you are done.