Getting a stack trace of a running PostgreSQL backend on Linux/BSD

Linux and BSD

Linux and BSD systems generally use the GNU Compiler Collection and the GNU Debugger ("gdb"). It's pretty trivial to get a stack trace of a process.

(If you want more than just a stack trace, take a look at the Developer FAQ which covers interactive debugging).

Installing external debug symbols

(BSD users who installed from ports can skip this)

On many Linux systems, debugging info is separated out from program binaries and stored separately. It's often not installed when you install a package, so if you want to debug the program (say, get a stack trace) you will need to install debug info packages. Unfortunately, the names of these packages vary depending on your distro, as does the procedure for installing them.

Some generic instructions (unrelated to PostgreSQL) are maintained on the GNOME Wiki.

On Debian

Debian Squeeze (6.x) users will need to install gdb 7.3 from backports, as the gdb shipped in Squeeze doesn't understand the PIE executables used in newer PostgreSQL builds.

On Ubuntu

First, follow the instructions on the Ubuntu wiki entry DebuggingProgramCrash.

Once you've finished enabling the use of debug info packages as described, you will need to use the script linked to on that wiki article to get a list of debug packages you need. Installing the debug package for postgresql alone is not sufficient.

After following the instructions on the Ubuntu wiki, download the script to your desktop, open a terminal, and run:

$ sudo apt-get install $(sudo bash Desktop/ -t -p $(pidof -s postgres))

On Fedora

All Fedora versions: see the StackTraces page on the Fedora wiki.

Other distros

In general, you need to install at least the debug symbol packages for the PostgreSQL server and client as well as any common package that may exist, and the debug symbol package for libc. It's a good idea to add debug symbols for the other libraries PostgreSQL uses in case the problem you're having arises in or touches on one of those libraries.
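As a starting point for working out which library debug packages you might need, you can list the shared libraries the server binary links against with ldd. This is only a sketch: the postgres path below is the example path used elsewhere on this page and will differ on your system, and the fallback to sh merely keeps the snippet runnable anywhere.

```shell
# List the shared libraries a binary links against, so you know which
# debug-symbol packages to look for. Substitute your real postgres path;
# sh is only a stand-in so the example runs on any Linux box.
BIN="${PG_BIN:-/usr/lib/postgresql/8.4/bin/postgres}"
[ -x "$BIN" ] || BIN="$(command -v sh)"
ldd "$BIN" | awk '{print $1}'
```

Each library printed (libc, libssl, libxml2, etc.) is a candidate for a matching debug-symbol package in your distro.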

Collecting a stack trace

How to tell if a stack trace is any good

Read this section and keep it in mind as you collect information using the instructions below. Making sure the information you collect is actually useful will save you, and everybody else, time and hassle.

It is vitally important to have debugging symbols available to get a useful stack trace. If you do not have the required symbols installed, backtraces will contain lots of entries like this:

#1  0x00686a3d in ?? ()
#2  0x00d3d406 in ?? ()
#3  0x00bf0ba4 in ?? ()
#4  0x00d3663b in ?? ()
#5  0x00d39782 in ?? ()

... which are completely useless for debugging without access to your system (and almost useless with access). If you see results like the above, you need to install debugging symbol packages, or even re-build PostgreSQL with debugging enabled. Do not bother collecting such backtraces; they are not useful.

Sometimes you'll get backtraces that contain just the function name and the executable it's within, not source code file names and line numbers or parameters. Such output will have lines like this:

#11 0x00d3afbe in PostmasterMain () from /usr/lib/postgresql/8.4/bin/postgres

This isn't ideal, but is a lot better than nothing. Installing debug information packages should give an even more detailed stack trace with line number and argument information, like this:

#9  0xb758d97e in PostmasterMain (argc=5, argv=0xb813a0e8) at postmaster.c:1040

... which is the most useful for tracking down your problem. Note the reference to a source file and line number instead of just an executable name.

Identifying the backend to connect to

You need to know the process ID of the PostgreSQL backend to connect to. If you're interested in a backend that's using lots of CPU, it might show up in top. If you have a current connection to the backend you're interested in, use select pg_backend_pid() to get its process ID. Otherwise, the pg_catalog.pg_stat_activity and/or pg_catalog.pg_locks views may be useful in identifying the backend of interest; see the "procpid" column in those views (renamed to "pid" in PostgreSQL 9.2 and later).
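The catalog queries just described, collected here for copy/paste into psql. Note the column renames in 9.2.

```shell
# Queries for finding the backend PID; paste these into psql.
# "procpid" and "current_query" were renamed to "pid" and "query"
# in PostgreSQL 9.2.
cat <<'SQL'
SELECT pg_backend_pid();            -- PID of the backend for this connection
SELECT procpid, current_query
  FROM pg_catalog.pg_stat_activity; -- every backend and what it is running
SQL
```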

Attaching gdb to the backend

Once you know the process ID to connect to, run:

sudo gdb -p pid

where "pid" is the process ID of the backend. GDB will pause execution of the process you specified and drop you into interactive mode (the (gdb) prompt) after showing the call the backend is currently executing, e.g.:

0xb7c73424 in __kernel_vsyscall ()

You'll want to tell gdb to save a log of the session to a file, so at the gdb prompt enter:

(gdb) set pagination off
(gdb) set logging file debuglog.txt
(gdb) set logging on

gdb is now saving all input and output to a file, debuglog.txt, in the directory in which you started gdb.

At this point execution of the backend is still paused. It can even hold up other backends, so I recommend that you tell it to resume executing normally with the "cont" command:

(gdb) cont

The backend is now running normally, as if gdb wasn't connected to it.

Getting the trace

OK, with gdb connected you're ready to get a useful stack trace.

In addition to the instructions below, you can find some useful tips about using gdb with postgresql backends on the Developer FAQ.

Getting representative traces from a running backend

If you're concerned with a case that's taking way too long to execute a query, is using too much CPU, or appears to be in an infinite loop, you'll want to repeatedly interrupt its execution, get a stack trace, and let it resume executing. Having a collection of several stack traces helps provide a better idea of where it's spending its time.

You interrupt the backend and get back to the gdb command line with ^C (control-C). Once at the gdb command line, you use the "bt" command to get a backtrace, then the "cont" command to resume normal backend execution.
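The ^C / bt / cont cycle can also be scripted with gdb's batch mode rather than typed by hand. The sketch below writes a small gdb command file; the actual invocation is shown commented out, since it needs a live backend and root, and the PID is a placeholder.

```shell
# Write a gdb command file that takes one backtrace and detaches cleanly.
cat > /tmp/pg_bt.gdb <<'EOF'
set pagination off
bt
detach
quit
EOF
# On a real system, run this a few times against the backend's PID and
# collect the output, e.g.:
#   sudo gdb -batch -x /tmp/pg_bt.gdb -p <pid-of-backend>
```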

Once you've collected a few backtraces, detach from the backend and exit gdb at the interactive prompt:

(gdb) detach
Detaching from program: /usr/lib/postgresql/8.3/bin/postgres, process 12912
(gdb) quit

An alternative approach is to use the gcore program to save a series of core dumps of the running program without disrupting its execution. Those core dumps may then be examined at your leisure, giving you time to get more than just a backtrace because you're not holding up the backend's execution while you think and type.

Getting a trace from the point of an error report

If you are trying to find out the cause of an unexpected error, the most useful thing to do is to set a breakpoint at errfinish before you let the backend continue:

(gdb) b errfinish
Breakpoint 1 at 0x80ced0: file elog.c, line 414.
(gdb) cont

Now, in your connected psql session, run whatever query is needed to provoke the error. When it happens, the backend will stop execution at errfinish. Collect your backtrace with bt, then quit (or, possibly, cont if you want to do it again).

A breakpoint at errfinish will capture generation of not only ERROR reports, but also NOTICE, LOG, and any other message that isn't suppressed by client_min_messages or log_min_messages. You may want to adjust those settings to avoid having to continue through a bunch of unrelated messages.
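Settings to paste into the psql session that provokes the error, so the errfinish breakpoint mostly fires for the ERROR itself (setting log_min_messages requires superuser privileges).

```shell
# Suppress lower-severity message traffic before reproducing the error,
# so you aren't continually continuing through NOTICE/LOG breakpoint hits.
cat <<'SQL'
SET client_min_messages = error;
SET log_min_messages = error;
SQL
```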

Getting a trace from a reproducibly crashing backend

GDB will automatically interrupt the execution of a program if it detects a crash. So, once you've attached gdb to the backend you expect to crash, just let it continue execution as normal and do whatever is needed to make the backend crash.

gdb will drop you into interactive mode as the backend crashes. At the gdb prompt you can enter the bt command to get a stack trace of the crash, then cont to continue execution. When gdb reports the process has exited, use the quit command.

Alternately, you can collect a core file as explained below, but it's probably more hassle than it's worth if you know which backend to attach gdb to before it crashes.

Getting a trace from a randomly crashing backend

It's a lot harder to get a stack trace from a backend that's crashing when you don't know why it's crashing, what causes a backend to crash, or which backends will crash when. For this, you generally need to enable the generation of core files, which are debuggable dumps of a program's state that are generated by the operating system when the program crashes.

Enabling core dumps

This article provides a useful primer on core dumps on Linux.

On a Linux system you can check to see if core file generation is enabled for a process by examining /proc/$pid/limits, where $pid is the process ID of interest. "Max core file size" should be non-zero.
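A quick sketch of that check. For an arbitrary PID, read /proc/&lt;pid&gt;/limits; here we default to the current shell so the snippet runs anywhere on Linux.

```shell
# Check the core-size limit for a process. "Max core file size" should be
# non-zero ("unlimited" is ideal). Defaults to the current shell's PID.
PID="${1:-$$}"
grep 'Max core file size' "/proc/$PID/limits" 2>/dev/null || ulimit -c
```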

Generally, adding "ulimit -c unlimited" to the top of the PostgreSQL startup script and restarting postgresql is sufficient to enable core dump collection. Make sure you have plenty of free space in your PostgreSQL data directory, because that's where the core dumps will be written and they can be fairly big due to Pg's use of shared memory. It may be useful to temporarily reduce the size of shared_buffers within postgresql.conf. This avoids core dumps that make the system unresponsive for minutes at a time, which can happen when shared_buffers is more than a few gigabytes. Reducing shared_buffers significantly will usually not make the server intolerably slow, since PostgreSQL will make increased use of the filesystem cache.

On a Linux system it's also worth changing the file name format used for core dumps so that core dumps don't overwrite each other. The /proc/sys/kernel/core_pattern file controls this. I suggest core.%p.sig%s.%ts, which records the process's PID, the signal that killed it, and the timestamp at which the core was generated. See man 5 core. To apply the change, run echo 'core.%p.sig%s.%ts' | sudo tee /proc/sys/kernel/core_pattern. (The file holds a single pattern, so overwrite it rather than appending with -a.)

You can test whether core dumps are enabled by starting a psql session, finding the backend PID for it using the instructions given above, then killing it with "kill -ABRT pidofbackend" (where pidofbackend is the PID of the postgres backend, NOT the PID of psql). You should see a core file appear in your PostgreSQL data directory.

Debugging the core dump

Once you've enabled core dumps, you need to wait until you see a backend crash. A core dump will be generated by the operating system, and you'll be able to attach gdb to it to collect a stack trace or other information.

You need to tell gdb what executable file generated the core if you want to get useful backtraces and other debugging information. To do this, just specify the postgres executable path then the core file path when invoking gdb, as shown below. If you do not know the location of the postgres executable, you can get it by examining /proc/$pid/exe for a running postgres instance. For example:

$ for f in `pgrep postgres`; do ls -l /proc/$f/exe; done
lrwxrwxrwx 1 postgres postgres 0 2010-04-19 10:30 /proc/10621/exe -> /usr/lib/postgresql/8.4/bin/postgres
lrwxrwxrwx 1 postgres postgres 0 2010-04-19 10:51 /proc/11052/exe -> /usr/lib/postgresql/8.4/bin/postgres
lrwxrwxrwx 1 postgres postgres 0 2010-04-19 10:51 /proc/11053/exe -> /usr/lib/postgresql/8.4/bin/postgres
lrwxrwxrwx 1 postgres postgres 0 2010-04-19 10:51 /proc/11054/exe -> /usr/lib/postgresql/8.4/bin/postgres
lrwxrwxrwx 1 postgres postgres 0 2010-04-19 10:51 /proc/11055/exe -> /usr/lib/postgresql/8.4/bin/postgres

... we can see from the above that the postgres executable on my (Ubuntu) system is /usr/lib/postgresql/8.4/bin/postgres.

Once you know the executable path and the core file location, just run gdb with those as arguments, i.e. gdb -q /path/to/postgres /path/to/core. Now you can debug it as if it were a normal running postgres, as discussed in the sections above.
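If you only want a backtrace, gdb's batch mode avoids the interactive session entirely. This is a sketch: the paths below are the example paths used elsewhere on this page, so substitute your own, and the command is only echoed here rather than executed.

```shell
# One-shot backtrace from a core file, non-interactively. Paths are the
# example paths from this page; substitute your own. Echoed rather than
# run, since it needs a real core file and the postgres OS user.
BIN=/usr/lib/postgresql/8.4/bin/postgres
CORE=/var/lib/postgresql/8.4/main/core.10780.sig6.1271644870s
echo sudo -u postgres gdb -q --batch -ex bt "$BIN" "$CORE"
```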

Debugging the core dump - example

For example, having just forced a postgres backend to crash with kill -ABRT, I have a core file named core.10780.sig6.1271644870s in /var/lib/postgresql/8.4/main, which is the data directory on my Ubuntu system. I've used /proc to find out that the executable for postgres on my system is /usr/lib/postgresql/8.4/bin/postgres.

It's now easy to run GDB against it and request a backtrace:

$ sudo -u postgres gdb -q -c /var/lib/postgresql/8.4/main/core.10780.sig6.1271644870s /usr/lib/postgresql/8.4/bin/postgres
Core was generated by `postgres: wal writer process                                                  '.
Program terminated with signal 6, Aborted.
#0  0x00a65422 in __kernel_vsyscall ()
(gdb) bt
#0  0x00a65422 in __kernel_vsyscall ()
#1  0x00686a3d in ___newselect_nocancel () from /lib/tls/i686/cmov/
#2  0x00e68d25 in pg_usleep () from /usr/lib/postgresql/8.4/bin/postgres
#3  0x00d3d406 in WalWriterMain () from /usr/lib/postgresql/8.4/bin/postgres
#4  0x00bf0ba4 in AuxiliaryProcessMain () from /usr/lib/postgresql/8.4/bin/postgres
#5  0x00d3663b in ?? () from /usr/lib/postgresql/8.4/bin/postgres
#6  0x00d39782 in ?? () from /usr/lib/postgresql/8.4/bin/postgres
#7  <signal handler called>
#8  0x00a65422 in __kernel_vsyscall ()
#9  0x00686a3d in ___newselect_nocancel () from /lib/tls/i686/cmov/
#10 0x00d37bee in ?? () from /usr/lib/postgresql/8.4/bin/postgres
#11 0x00d3afbe in PostmasterMain () from /usr/lib/postgresql/8.4/bin/postgres
#12 0x00cdc0dc in main () from /usr/lib/postgresql/8.4/bin/postgres

This example shows a stack trace that does not include function arguments. Whether function arguments appear on your system depends on obscure details largely outside your control, such as whether Postgres was originally built to omit frame pointers, the DWARF version, and so on. In general, the situation with getting backtraces on mainstream Linux platforms has improved significantly since this example backtrace was originally added. These days it is often better to use "bt full" instead of "bt", since it provides even more information (the values of local/stack variables at the time of the crash). In general, the more information you can provide for debugging, the better.

If you don't have proper symbols installed, specify the wrong executable to gdb, or fail to specify an executable at all, you'll see a useless backtrace like the following:

$ sudo -u postgres gdb -q -c /var/lib/postgresql/8.4/main/core.10780.sig6.1271644870s 
Core was generated by `postgres: wal writer process                                                  '.
Program terminated with signal 6, Aborted.
#0  0x00a65422 in __kernel_vsyscall ()
(gdb) bt
#0  0x00a65422 in __kernel_vsyscall ()
#1  0x00686a3d in ?? ()
#2  0x00d3d406 in ?? ()
#3  0x00bf0ba4 in ?? ()
#4  0x00d3663b in ?? ()
#5  0x00d39782 in ?? ()
#6  <signal handler called>
#7  0x00a65422 in __kernel_vsyscall ()
#8  0x00686a3d in ?? ()
#9  0x00d3afbe in ?? ()
#10 0x00cdc0dc in ?? ()
#11 0x005d7b56 in ?? ()
#12 0x00b8fad1 in ?? ()

If you get something like that, don't bother sending it in. If you didn't just get the executable path wrong, you'll probably need to install debugging symbols for PostgreSQL (or even re-build PostgreSQL with debugging enabled) and try again.

Tracing problems when creating a cluster

If you're running into a crash while trying to create a database cluster using initdb, that may leave behind a core dump that you can analyze with gdb as described above. This should be the case if there's an assertion failure for example. You will probably need to give the --no-clean option to initdb to keep it from deleting the new data directory and the core file along with it.

Another technique for finding bootstrap-time bugs is to manually feed the bootstrapping commands into bootstrap mode or single-user mode, with a data directory left over from initdb --no-clean. This can help if there has been no PANIC that leaves a core dump, but just a FATAL or ERROR, for example. It's easy to attach GDB to such a backend.

Also, try creating the data directory using initdb from unpatched master, then triggering the crash with the patched backend rather than during initdb.

Dumping a page image from within GDB

It is sometimes useful to post a file containing a raw page image when reporting a problem on a community mailing list. Both tables and indexes consist of 8KiB blocks/pages, which can be thought of as the fundamental unit of data storage. This is particularly likely to be helpful when the integrity of the data is suspect, such as when an assertion fails due to a bug that corrupts data. GDB makes it easy to do this from an interactive session or from a core dump (though core dumps may have issues with dumping shared memory).


Breakpoint 1, _bt_split (rel=0x7f555b6f3460, itup_key=0x55d03a745d40, buf=232, cbuf=0, firstright=366, newitemoff=216, newitemsz=16, newitem=0x55d03a745d18, newitemonleft=true) at nbtinsert.c:1205
1205	{
(gdb) n
1215		Buffer		sbuf = InvalidBuffer;
1216		Page		spage = NULL;
1217		BTPageOpaque sopaque = NULL;
1227		int			indnatts = IndexRelationGetNumberOfAttributes(rel);
1228		int			indnkeyatts = IndexRelationGetNumberOfKeyAttributes(rel);
1231		rbuf = _bt_getbuf(rel, P_NEW, BT_WRITE);
1244		origpage = BufferGetPage(buf);
1245		leftpage = PageGetTempPage(origpage);
1246		rightpage = BufferGetPage(rbuf);
1248		origpagenumber = BufferGetBlockNumber(buf);
1249		rightpagenumber = BufferGetBlockNumber(rbuf);
(gdb) dump binary memory /tmp/ origpage (origpage + 8192)

The contents of the page "origpage" are now dumped to the file "/tmp/", which will be precisely 8192 bytes in size. This works wherever the "Page" C type appears ("Page" is a typedef defined in bufpage.h; an unadorned Page is just a char pointer). A Page variable is a raw pointer to a page image, typically the authoritative/current page stored in shared_buffers.


Note also that the Postgres hex editor tool pg_hexedit can quickly visualize page images from within GDB, with intuitive tags and annotations. pg_hexedit may be easier to use when it isn't initially clear which page images are of interest, or when multiple images of the same block need to be captured over time as a test case runs.

contrib/pageinspect page dump

When it isn't convenient to use GDB, and when it isn't necessary to get a page image that is exactly current at the time of a crash, it is possible to dump an arbitrary page to a file in a more lightweight fashion using contrib/pageinspect. For example, the following interactive shell session dumps the current page image in block 42 for the index 'pgbench_pkey':

$ psql -c "create extension pageinspect"
$ psql -XAtc "SELECT encode(get_raw_page('pgbench_pkey', 42),'base64')" | base64 -d >

This assumes that it is possible to connect as a superuser using psql, and that the base64 program is in the user's $PATH. The GNU coreutils package generally includes base64, so it will already be available on most Linux installations. Note that it may be necessary to install an operating system package named "postgresql-contrib" or similar before the pageinspect extension will be available to install.

Typically, the easiest way of following this procedure is to become the postgres operating system user first (e.g., through "su postgres").

Starting Postgres under GDB

Debugging multi-process applications like PostgreSQL has historically been very painful with GDB. Thankfully with recent 7.x releases, this has been improved greatly by "inferiors" (GDB's term for multiple debugged processes).

NB! This is still quite fragile, so don't expect to be able to do this in production.

# Stop server
pg_ctl -D /path/to/data stop -m fast
# Launch postgres via gdb
gdb --args postgres -D /path/to/data

Now, in the GDB shell, use these commands to set up an environment:

# We have scroll bars in the year 2012!
set pagination off
# Attach to both parent and child on fork
set detach-on-fork off
# Stop/resume all processes
set schedule-multiple on

# Usually don't care about these signals
handle SIGUSR1 noprint nostop
handle SIGUSR2 noprint nostop

# Make GDB's expression evaluation work with most common Postgres macros
# on Linux. Many macros work if these are defined (useful for TOAST stuff,
# varlena stuff, etc):
macro define __builtin_offsetof(T, F) ((int) &(((T *) 0)->F))
macro define __extension__

# Ugly hack so we don't break on process exit
python gdb.events.exited.connect(lambda x: [gdb.execute('inferior 1'), gdb.post_event(lambda: gdb.execute('continue'))])

# Phew! Run it.
run

To get a list of processes, run info inferior. To switch to another process, run inferior NUM.

Recording Postgres using rr Record and Replay Framework

PostgreSQL 13 can be debugged using the rr debugging recorder. This section describes some useful workflows for using rr to debug Postgres. It is primarily written for Postgres hackers, though rr could also be used when reporting a bug.

Version compatibility

Commit fc3f4453a2bc95549682e23600b22e658cb2d6d7 resolved an issue that made it hard to use rr with earlier Postgres versions, so there might be problems on those versions. Also, earlier versions of rr distributed with older/LTS Linux OS versions might not have support for syscalls that are used by Postgres, such as sync_file_range(). All of these issues probably have fairly straightforward workarounds (e.g. you could start Postgres with --wal_writer_flush_after=0 --backend_flush_after=0 --bgwriter_flush_after=0 --checkpoint_flush_after=0).

Postgres settings

A script that records a postgres session using rr might consist of the following example snippet:

rr record -M /code/postgresql/$BRANCH/install/bin/postgres \
  -D /code/postgresql/$BRANCH/data \
  --log_line_prefix="%m %p " \
  --effective_cache_size=1GB \
  --random_page_cost=4.0 \
  --work_mem=4MB \
  --maintenance_work_mem=64MB \
  --fsync=off \
  --log_statement=all \
  --log_min_messages=DEBUG5 \
  --max_connections=50

Most of the details here are somewhat arbitrary. The general idea is to make log output as verbose as possible, and to keep the amount of memory used by the server low.

It is quite practical to run "make installcheck" against the server when Postgres is run with "rr record", recording the entire execution. This is not much slower than just running the tests against a regular debug build of Postgres. It's still much faster than Valgrind, for example. Replaying the recording seems to be where having a high end machine helps a lot.

Event numbers in the log

Once the tests are done, stop Postgres in the usual way (e.g. Ctrl + C). The recording is saved to the $HOME/.local/share/rr/ directory on most Linux distros. rr creates a directory for each distinct recording in this parent directory. rr also maintains a symlink (latest-trace) that points to the latest recording directory, which is often used when replaying a recording. Be careful to avoid accidentally leaving too many recordings around. They can be rather large.
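To keep an eye on how much disk the saved recordings are using, something like the following sketch works; the path is rr's default trace directory on most Linux distros.

```shell
# Show the size of each saved rr recording, largest last. Harmless to run
# when no recordings exist yet (the redirect swallows the glob error).
du -sh "$HOME/.local/share/rr"/* 2>/dev/null | sort -h || true
```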

The record/Postgres terminal has output that looks like this (when the example "rr record" recipe is used):

[rr 1786705 1241867]2020-04-04 21:55:05.018 PDT 1786705 DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 63992/1/2
[rr 1786705 1241898]2020-04-04 21:55:05.019 PDT 1786705 DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
[rr 1786705 1241902]2020-04-04 21:55:05.019 PDT 1786705 LOG: statement: CREATE TYPE test_type_empty AS ();
[rr 1786705 1241906]2020-04-04 21:55:05.020 PDT 1786705 DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 63993/1/1
[rr 1786705 1241936]2020-04-04 21:55:05.020 PDT 1786705 DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
[rr 1786705 1241940]2020-04-04 21:55:05.020 PDT 1786705 LOG: statement: DROP TYPE test_type_empty;
[rr 1786705 1241944]2020-04-04 21:55:05.021 PDT 1786705 DEBUG:  drop auto-cascades to composite type test_type_empty
[rr 1786705 1241948]2020-04-04 21:55:05.021 PDT 1786705 DEBUG:  drop auto-cascades to type test_type_empty[]
[rr 1786705 1241952]2020-04-04 21:55:05.021 PDT 1786705 DEBUG: MultiXact: setting OldestMember[2] = 9
[rr 1786705 1241956]2020-04-04 21:55:05.021 PDT 1786705 DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 63994/1/3

The part of each log line in square brackets comes from rr (since we used -M when recording) -- the first number is a PID, the second an event number. You probably won't care about the PIDs, though, since the event number alone unambiguously identifies a particular "event" in a particular backend (rr recordings are single threaded, even when there are multiple threads or processes). Suppose you want to get to the CREATE TYPE test_type_empty AS () query -- you can get to the end of the query by replaying the recording with this option:

$ rr replay -M -g 1241902

Replaying the recording like this will take you to the point where the Postgres backend prints the log message at the end of executing the example query -- you will get a gdb debug server (rr implements a gdb backend) and an interactive gdb session. This isn't precisely the point of execution that will be of interest to you, but it's close enough. You can easily set a breakpoint at the precise function you're interested in, and then reverse-continue to get there by executing backwards.

You can also find the point where a particular backend starts by using the fork option instead. So for the PID 1786705, that would look like:

$ rr replay -M -f 1786705

(Don't try to use the similar -p option, since that starts a debug server when the pid has been exec'd.)

Note that saving the output of a recording using standard tools like "tee" seems to have some issues [1]. It may be helpful to get log output (complete with these event numbers) by doing an "autopilot" replay, like this:

$ rr replay -M -a &> rr.log

You now have a log file that can be searched for a good event number, as a starting point. This may be a practical necessity when running "make installcheck" or a custom test suite, since there might be megabytes of log output. You usually don't need to bother to generate logs in this way, though. It might take a few minutes to do an autopilot replay, since rr will replay everything that was recorded in sub-realtime.
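When grepping such a log, you need to recover the event number from the matching line. A small sed expression does it; the sample line below is taken from the rr output shown earlier in this section.

```shell
# Extract the rr event number from a log line of the form
# "[rr <pid> <event>]...". Sample line copied from the output above.
LINE='[rr 1786705 1241902]2020-04-04 21:55:05.019 PDT 1786705 LOG: statement: CREATE TYPE test_type_empty AS ();'
echo "$LINE" | sed -E 's/^\[rr [0-9]+ ([0-9]+)\].*/\1/'   # prints 1241902
```

The printed number is what you pass to rr replay -M -g.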

Jumping back and forth through a recording using GDB commands

Once you have a rough idea of where and when a bug manifests itself in your rr recording, you'll need to actually debug the issue using gdb. Often the natural approach is to jump back and forth through the recording to track the issue down in whatever backend is known to be misbehaving.

You can check the current event number once connected to gdb using gdb's "when" command, which can be useful when determining which point of execution you've reached relative to the high level output from "make check" (assuming the -M option was used to get event numbers there):

(rr) when
Current event: 379377

Since event numbers are shared by processes/threads, which are always executed serially during recording, event numbers are a generic way of reasoning about how far along the recording is, within and across processes. We are not limited to attaching our debugger to processes that happen to be Postgres backends.

rr also supports gdb's checkpoint, restart and delete checkpoint commands; see the relevant section of the GDB docs. These are useful because they allow gdb to track interesting points in execution directly, at a finer granularity than "event number"; a new event number is created when there is a syscall, which might be far too coarse a granularity to be useful when actually zeroing in on a problem in one particular backend/process.

Watchpoints and reverse execution

Because rr supports reverse debugging, watchpoints are much more useful. Note that you should generally use watch -l expr rather than just using watch expr. Without -l, reverse execution is often very slow or apparently buggy, because gdb will try to reevaluate the expression as the program executes through different scopes.

Debugging tap tests

rr really shines when debugging things like tap tests, where there is complex scaffolding that may run multiple Postgres servers. You can run an entire "rr record make check", without having to worry about how that scaffolding works. Once you have useful PIDs (or event numbers) to work off of, it won't take too long to get an interactive debugging session in the backend of interest. You could get a PID for a backend of interest from the logs that appear in the ./tmp_check/log directory once you're done with recording "make check" execution. From there, you can start "rr replay" by passing the relevant PID as the -f argument.

Example replay of a "make check" session:

$ rr replay -M -f 2247718
[rr 2246854 304]make -C ../../../src/backend generated-headers
[rr 2246855 629]make[1]: Entering directory '/code/postgresql/patch/build/src/backend'
[rr 2246855 631]make -C catalog distprep generated-header-symlinks
[rr 2246856 984]make[2]: Entering directory '/code/postgresql/patch/build/src/backend/catalog'

*** SNIP -- Remaining "make check" output omitted for brevity ***

 ---> Reached target process 2247718 at event 379377.
Reading symbols from /usr/bin/../lib/rr/
Reading symbols from /lib/x86_64-linux-gnu/
Reading symbols from /usr/lib/debug/.build-id/0b/4031a3ab06ec61be1546960b4d1dad979d15ce.debug...

*** SNIP ***

(No debugging symbols found in /usr/lib/x86_64-linux-gnu/
Reading symbols from /lib/x86_64-linux-gnu/
Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/
0x0000000070000002 in ?? ()
(rr) bt
#0  0x0000000070000002 in ?? ()
#1  0x00007f0d2c25c3b6 in _raw_syscall () at raw_syscall.S:120
#2  0x00007f0d2c2582ff in traced_raw_syscall (call=call@entry=0x681fffa0) at syscallbuf.c:229
#3  0x00007f0d2c259978 in sys_fcntl (call=<optimized out>) at syscallbuf.c:1291
#4  syscall_hook_internal (call=0x681fffa0) at syscallbuf.c:2855
#5  syscall_hook (call=0x681fffa0) at syscallbuf.c:2987
#6  0x00007f0d2c2581da in _syscall_hook_trampoline () at syscall_hook.S:282
#7  0x00007f0d2c25820a in __morestack () at syscall_hook.S:417
#8  0x00007f0d2c258225 in _syscall_hook_trampoline_48_3d_00_f0_ff_ff () at syscall_hook.S:428
#9  0x00007f0d2b5a9f15 in arch_fork (ctid=0x7f0d297bee50) at arch-fork.h:49
#10 __libc_fork () at fork.c:76
#11 0x00005620ae898e53 in fork_process () at fork_process.c:62
#12 0x00005620ae8aab39 in BackendStartup (port=0x5620b0c1f600) at postmaster.c:4187
#13 0x00005620ae8a6d29 in ServerLoop () at postmaster.c:1727
#14 0x00005620ae8a64c2 in PostmasterMain (argc=4, argv=0x5620b0bf19e0) at postmaster.c:1400
#15 0x00005620ae7a8247 in main (argc=4, argv=0x5620b0bf19e0) at main.c:210

Debugging race conditions

rr can be used to isolate hard-to-reproduce race condition bugs. The single-threaded nature of rr recording/execution seems to make it harder to reproduce bugs involving concurrent execution. However, using rr's chaos mode (the -h argument to rr record) seems to increase the odds of successfully reproducing a problem. It might still take a few attempts, but you only have to get lucky once.

Packing a recording

rr pack can be used to save a recording in a fairly stable format -- it copies the needed files into the trace:

$ rr pack

This could be useful if you want to keep a recording for more than a day or two. Because every detail of the recording (e.g. pointers, PIDs) is stable, you can treat a recording as a fully self-contained artifact.

rr resources

Usage - rr wiki

Debugging protips - rr wiki