Developer FAQ

Getting Involved

How do I get involved in PostgreSQL development?

Download the code and have a look around. See downloading the source tree.

Subscribe to and read the pgsql-hackers mailing list (often termed "hackers"). This is where the major contributors and core members of the project discuss development. This list of commonly-used terms might be helpful.

How do I download/update the current source tree?

There are several ways to obtain the source tree. Occasional developers can just get the most recent source tree snapshot from ftp://ftp.postgresql.org/pub/snapshot/.

Regular developers might want to take advantage of anonymous access to our source code management system. The source tree is currently hosted in git. For details of how to obtain the source from git see the documentation and Working with Git.

What development environment is required to develop code?

PostgreSQL is developed mostly in the C programming language. The source code is targeted at most of the popular Unix platforms and the Windows environment (Windows 2000, XP, and later).

Most developers run a Unix-like operating system and use an open source tool chain with GCC, GNU Make, GDB, Autoconf, and so on. If you have contributed to open source software before, you will probably be familiar with these tools. Developers using this tool chain on Windows make use of MinGW, though most development on Windows is currently done with the Microsoft Visual Studio 2005 (version 8) development environment and associated tools.

The complete list of required software to build PostgreSQL can be found in the installation instructions.

Developers who regularly rebuild the source often pass the --enable-depend flag to configure. As a result, if you modify a C header file, all files that depend on it are also rebuilt.

src/Makefile.custom can be used to set environment variables, like CUSTOM_COPT, that are used for every compile.

How do I get involved in PostgreSQL web site development?

PostgreSQL website development is discussed on the pgsql-www mailing list and organized by the Infrastructure team. Source code for the postgresql.org web site is stored in a Git repository.

Development Tools and Help

How is the source code organized?

If you point your browser at Backend Flowchart, you will see a few paragraphs describing the data flow, the backend components in a flow chart, and a description of the shared memory area. You can click on any flowchart box to see a description. If you then click on the directory name, you will be taken to the source directory, to browse the actual source code behind it. We also have several README files in some source directories to describe the function of each module. The browser will also display these when you enter the directory.

What information is available to learn PostgreSQL internals?

What tools are available to learn about/inspect the PostgreSQL on-disk format?

What tools are available for developers?

First, all the files in the src/tools directory are designed for developers.

   RELEASE_CHANGES changes we have to make for each release
   ccsym           find standard defines made by your compiler
   copyright       fixes copyright notices
   entab           converts spaces to tabs, used by pgindent
   find_static     finds functions that could be made static
   find_typedef    finds typedefs in the source code
   find_badmacros  finds macros that use braces incorrectly
   fsync           a script to provide information about the cost of cache
                    syncing system calls
   make_ctags      make vi 'tags' file in each directory
   make_diff       make *.orig and diffs of source
   make_etags      make emacs 'etags' files
   make_keywords   make comparison of our keywords and SQL'92
   make_mkid       make mkid ID files
   git_changelog   used to generate a list of changes for each release
   pginclude       scripts for adding/removing include files
   pgindent        indents source files
   pgtest          a semi-automated build system
   thread          a thread testing script

In src/include/catalog:

   unused_oids     a script that finds unused OIDs for use in system catalogs
   duplicate_oids  finds duplicate OIDs in system catalog definitions

tools/backend was already described in the question-and-answer above.

Second, you really should have an editor that can handle tags, so you can tag a function call to see the function definition, and then tag inside that function to see an even lower-level function, and then back out twice to return to the original function. Most editors support this via tags or etags files.

Third, you need to get id-utils from ftp://ftp.gnu.org/gnu/idutils/

By running tools/make_mkid, an archive of source symbols can be created that can be rapidly queried.

Some developers make use of cscope, which can be found at http://cscope.sf.net/. Others use glimpse.

tools/make_diff has tools to create patch diff files that can be applied to the distribution. This produces diffs for easier readability.

pgindent is used to fix the source code style to conform to our standards, and is normally run at the end of each development cycle; see this question for more information on our style.

pginclude contains scripts used to add needed #include's to include files, and to remove unneeded #include's.

When adding built-in objects such as types or functions, you will need to assign OIDs to them. Our convention is that all hand-assigned OIDs are distinct values in the range 1-9999. (It would work mechanically for them to be unique within individual system catalogs, but for clarity we require them to be unique across the whole system.) There is a script called unused_oids in src/include/catalog that shows the currently unused OIDs. To assign a new OID, pick one that is free according to unused_oids. The script will recommend a range to you, looking like this:

   Patches should use a more-or-less consecutive range of OIDs.
   Best practice is to start with a random choice in the range 8000-9999.
   Suggested random unused OID: 9209 (46 consecutive OID(s) available starting here)

and it's normally best to take its recommendation. See also the duplicate_oids script, which will complain if you made a mistake.

What's the formatting style used in PostgreSQL source code?

Our standard format is BSD style, with each level of code indented one tab stop, where each tab stop is four columns. You will need to set your editor or file viewer to display tabs as four spaces.

The src/tools/editors directory of the latest sources contains sample settings that can be used with the emacs, vim and compatible editors, to assist in keeping to PostgreSQL coding standards.

Vim users will also find useful tips in the article Configuring vim for postgres development.

When viewing the source with less or more, specify -x4 to get the correct indentation.

We use the pgindent pretty-printer to format code to improve layout consistency. pgindent is run on all source files at least once per development cycle, and many developers make a point of running modified code through pgindent before submitting or committing patches. See the src/tools/pgindent source directory.

One thing pgindent does is to re-flow text within comments. While this is often helpful, it can destroy carefully made manual layout. Comment blocks that need specific line breaks should be written as block comments, where the comment starts as /*------. Such comments will not be reformatted in any way. Comment blocks starting in column 1, such as file or function header comments, won't be reflowed either.
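
For example, a comment block laid out like this (the contents are just an illustration) keeps its manual line breaks through pgindent:

    /*----------
     * This comment has deliberate line breaks:
     *   step 1: parse the statement
     *   step 2: plan it
     * pgindent leaves the layout alone because of the leading dashes.
     *----------
     */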

See also the Formatting section in the documentation. This thread talks about our naming of variable and function names.

If you're wondering why we bother with this, this article describes the value of a consistent coding style.

Is there a diagram of the system catalogs available?

Yes, we have at least one for v8.3 (SVG version), and several for v10.

What books are good for developers?

There are five good books:

  • An Introduction to Database Systems, by C.J. Date, Addison-Wesley
  • A Guide to the SQL Standard, by C.J. Date, et al., Addison-Wesley
  • Fundamentals of Database Systems, by Elmasri and Navathe
  • Transaction Processing, by Jim Gray and Andreas Reuter, Morgan Kaufmann
  • Transactional Information Systems, by Gerhard Weikum and Gottfried Vossen, Morgan Kaufmann

What is configure all about?

The files configure and configure.in are part of the GNU autoconf package. Configure allows us to test for various capabilities of the OS, and to set variables that can then be tested in C programs and Makefiles. Autoconf is installed on the PostgreSQL main server. To add options to configure, edit configure.in, and then run autoconf to generate configure.

When configure is run by the user, it tests various OS capabilities, stores those in config.status and config.cache, and modifies a list of *.in files. For example, if there exists a Makefile.in, configure generates a Makefile that contains substitutions for all @var@ parameters found by configure.

When you need to edit files, make sure you don't waste time modifying files generated by configure. Edit the *.in file, and re-run configure to recreate the needed file. If you run make distclean from the top-level source directory, all files derived by configure are removed, so you see only the files contained in the source distribution.

How do I add a new port?

There are a variety of places that need to be modified to add a new port. First, start in the src/template directory. Add an appropriate entry for your OS. Also, use src/config.guess to add your OS to src/template/.similar. You shouldn't match the OS version exactly: the configure test will look for an exact OS version number, and if that is not found, fall back to a match without a version number. Edit src/configure.in to add your new OS. (See the configure item above.) You will need to run autoconf, or patch src/configure too.

Then, check src/include/port and add your new OS file, with appropriate values. Hopefully, there is already locking code in src/include/storage/s_lock.h for your CPU. There is also a src/makefiles directory for port-specific Makefile handling. There is a backend/port directory if you need special files for your OS.

Why don't you use raw devices, async-I/O, <insert your favorite whiz-bang feature here>?

There is always a temptation to use the newest operating system features as soon as they arrive. We resist that temptation.

First, we support 15+ operating systems, so any new feature has to be well established before we will consider it. Second, most new whiz-bang features don't provide dramatic improvements. Third, they usually have some downside, such as decreased reliability or additional code required. Therefore, we don't rush to use new features but rather wait for the feature to be established, then ask for testing to show that a measurable improvement is possible.

As an example, threads are not yet used instead of multiple processes for backends because:

  • Historically, threads were poorly supported and buggy.
  • An error in one backend can corrupt other backends if they're threads within a single process.
  • Speed improvements using threads are small compared to the remaining backend startup time.
  • The backend code would be more complex.
  • Terminating backend processes allows the OS to cleanly and quickly free all resources, protecting against memory and file descriptor leaks and making backend shutdown cheaper and faster.
  • Debugging threaded programs is much harder than debugging worker processes, and core dumps are much less useful.
  • Sharing of read-only executable mappings and the use of shared_buffers means processes, like threads, are very memory efficient.
  • Regular creation and destruction of processes helps protect against memory fragmentation, which can be hard to manage in long-running processes.

(Whether individual backend processes should use multiple threads to make use of multiple cores for single queries is a separate question not covered here).

So, we are not ignorant of new features. It is just that we are cautious about their adoption. The TODO list often contains links to discussions showing our reasoning in these areas.

Even some modern platforms have surprising problems with widely used functionality. For example, Linux's AIO layer offers no reliable asynchronous way to do fsync() and get completion notification.

How are branches managed?

See Using Back Branches and Committing with Git for information about how branches and backporting are handled.

Where can I get a copy of the SQL standards?

You are supposed to buy them from ISO or an ISO member such as ANSI. Search for ISO/ANSI 9075. ANSI's offer is less expensive, but the contents of the documents are the same between the two organizations.

Since buying an official copy of the standard is quite expensive, most developers rely on one of the various draft versions available on the Internet. Some of these are:

The PostgreSQL documentation contains information about PostgreSQL and SQL conformance.

Some further web pages about the SQL standard are:

Note that having access to a copy of the SQL standard is not necessary to become a useful contributor to PostgreSQL development. Interpreting the standard is difficult and requires years of experience. As the standard is silent on many useful features like indexing, there is a good bit of development happening outside its bounds.

See also SQL standard for more information about getting the standard and participating in its development.

Are there known deviations from the SQL Standard in PostgreSQL?

Certainly. We list them here.

Where can I get technical assistance?

Many technical questions held by those new to the code have been answered on the pgsql-hackers mailing list - the archives of which can be found at http://archives.postgresql.org/pgsql-hackers/.

If you cannot find discussion of your particular question, feel free to put it to the list.

Major contributors also answer technical questions, including questions about development of new features, on IRC at irc.freenode.net in the #postgresql channel.

Development Process

What do I do after choosing an item to work on?

Send an email to pgsql-hackers with a proposal for what you want to do (assuming your contribution is not trivial). Working in isolation is not advisable because experience has shown that there are often requirements that are not obvious, and if those are not agreed on beforehand it leads to wasted effort. In the email, discuss both the internal implementation method you plan to use, and any user-visible changes (new syntax, etc). For complex patches, it is important to get community feedback on your proposal before starting work. Failure to do so might mean your patch is rejected. If your work is being sponsored by a company, read this article for tips on being more effective.

Our queue of patches to be reviewed is maintained via a custom CommitFest web application at https://commitfest.postgresql.org.

How do I test my changes?

Basic system testing

The easiest way to test your code is to ensure that it builds against the latest version of the code and that it does not generate compiler warnings.

It is advisable to pass --enable-cassert to configure. This turns on assertions within the source, which often make bugs visible closer to where they happen, before they cause silent data corruption or hard-to-trace segmentation violations. This generally makes debugging much easier.

Then, perform run time testing via psql.

Runtime environment

To test your modified version of PostgreSQL, it's convenient to install PostgreSQL into a local directory (in your home directory, for instance) to avoid conflicting with a system wide installation. Use the --prefix= option to configure to specify an installation location; --with-pgport to specify a non-standard default port is helpful as well. To run this instance, you will need to make sure that the correct binaries are used; depending on your operating system, environment variables like PATH and LD_LIBRARY_PATH (on most Linux/Unix-like systems) need to be set. Setting PGDATA will also be useful.

To avoid having to set this environment up manually, you may want to use Greg Smith's peg scripts, or the scripts that are used on the buildfarm.

Regression test suite

The next step is to test your changes against the existing regression test suite. To do this, issue "make check" in the root directory of the source tree. If any tests fail, investigate.

The regression tests and control program are in src/test/regress.

The control program is pg_regress, but you usually run it via make rather than directly.

You may find it useful to use PG_REGRESS_DIFF_OPTS=-ud make check to get unified diffs, rather than the default context diffs that pg_regress produces.

If you've deliberately changed existing behavior, this change might cause a regression test failure without indicating any actual regression. If so, you should patch the regression test suite as well.

To change the options PostgreSQL runs with for a given regression test execution you can use the PGOPTIONS environment variable, e.g.

   PGOPTIONS="-c log_error_verbosity=verbose -c log_min_messages=debug2" make check

Isolation tests

For concurrency issues, PostgreSQL includes an "isolation tester" in src/test/isolation. This tool supports multiple connections and is useful if you are trying to reproduce concurrency-related bugs or test new functionality.

Valgrind

To use Valgrind, edit src/include/pg_config_manual.h to set #define USE_VALGRIND, then run the postmaster under Valgrind with the supplied suppressions.

See Valgrind.

Other run time testing

Some developers make use of tools such as perf (from the Linux kernel), gprof (which comes with the GNU binutils suite), ftrace, dtrace, and oprofile (http://oprofile.sourceforge.net/), along with other related profiling tools.

What about unit testing, static analysis, model checking...?

There have been a number of discussions about other testing frameworks and some developers are exploring these ideas.

Keep in mind the Makefiles do not have the proper dependencies for include files. You have to do a make clean and then another make. If you are using GCC you can use the --enable-depend option of configure to have the compiler compute the dependencies automatically.

I have developed a patch, what next?

You will need to submit the patch to pgsql-hackers@postgresql.org. To help ensure your patch is reviewed and committed in a timely fashion, please try to follow the guidelines at Submitting a Patch.

What happens to my patch once it is submitted?

It will be reviewed by other contributors to the project and will be either accepted or sent back for further work. The process is explained in more detail at Submitting a Patch.

How do I help with reviewing patches?

If you would like to contribute by reviewing a patch in the CommitFest queue, you are most welcome to do so. Please read the guide at Reviewing a Patch for more information.

Do I need to sign a copyright assignment?

No, contributors keep their copyright (as is the case in most European countries anyway). They simply consider themselves to be part of the Postgres Global Development Group. (It's not even possible to assign copyright to PGDG, as it's not a legal entity.) This is the same way that the Linux kernel and many other open source projects work.

May I add my own copyright notice where appropriate?

No, please don't. We like to keep the legal information short and crisp. Additionally, we've heard that it could possibly pose problems for corporate users.

Doesn't the PostgreSQL license itself require the copyright notice to be kept intact?

Yes, it does. And it is kept intact, because the PostgreSQL Global Development Group covers all copyright holders. Also note that, like most European laws, US law doesn't require a copyright notice for copyright to be granted.

Technical Questions

How do I efficiently access information in system catalogs from the backend code?

You first need to find the tuples (rows) you are interested in. There are two ways. First, SearchSysCache() and related functions allow you to query the system catalogs using predefined indexes on the catalogs. This is the preferred way to access system tables, because the first call to the cache loads the needed rows, and future requests can return the results without accessing the base table. A list of available caches is located in src/backend/utils/cache/syscache.c. src/backend/utils/cache/lsyscache.c contains many column-specific cache lookup functions.

The rows returned are cache-owned versions of the heap rows. Therefore, you must not modify or delete the tuple returned by SearchSysCache(). What you should do is release it with ReleaseSysCache() when you are done using it; this informs the cache that it can discard that tuple if necessary. If you neglect to call ReleaseSysCache(), then the cache entry will remain locked in the cache until end of transaction, which is tolerable during development but not considered acceptable for release-worthy code.
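
For illustration, here is a minimal sketch of a syscache lookup (header and function names follow recent branches; SearchSysCache1() is the one-key convenience wrapper around SearchSysCache(), and get_type_len() is just an illustrative name; lsyscache.c already provides get_typlen() for this particular job):

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "catalog/pg_type.h"
    #include "utils/syscache.h"

    /* look up pg_type.typlen for a given type OID via the TYPEOID cache */
    static int16
    get_type_len(Oid typid)
    {
        HeapTuple   tup;
        int16       typlen;

        tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
        if (!HeapTupleIsValid(tup))
            elog(ERROR, "cache lookup failed for type %u", typid);
        typlen = ((Form_pg_type) GETSTRUCT(tup))->typlen;
        ReleaseSysCache(tup);   /* tell the cache we are done with this tuple */
        return typlen;
    }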

If you can't use the system cache, you will need to retrieve the data directly from the heap table, using the buffer cache that is shared by all backends. The backend automatically takes care of loading the rows into the buffer cache. To do this, open the table with heap_open(). You can then start a table scan with heap_beginscan(), then use heap_getnext() and continue as long as HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be assigned to the scan. No indexes are used, so all rows are going to be compared to the keys, and only the valid rows returned.
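
A rough sketch of such a scan, using the pre-v12 heap_open()/heap_beginscan() API described above (exact signatures differ between branches, and v12 and later spell these table_open()/table_beginscan(); the function name scan_whole_table is illustrative):

    #include "postgres.h"
    #include "access/heapam.h"
    #include "utils/snapmgr.h"

    static void
    scan_whole_table(Oid relid)
    {
        Relation        rel;
        HeapScanDesc    scan;
        HeapTuple       tup;

        rel = heap_open(relid, AccessShareLock);
        scan = heap_beginscan(rel, GetTransactionSnapshot(), 0, NULL);

        /* heap_getnext() returns an invalid tuple at end of table */
        while (HeapTupleIsValid(tup = heap_getnext(scan, ForwardScanDirection)))
        {
            /* examine tup here, e.g. with GETSTRUCT() or heap_getattr() */
        }

        heap_endscan(scan);
        heap_close(rel, AccessShareLock);
    }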

You can also use heap_fetch() to fetch rows by block number/offset. While scans automatically lock/unlock rows from the buffer cache, with heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it when completed.

Once you have the row, you can get data that is common to all tuples, like t_self and t_oid, by merely accessing the HeapTuple structure entries. If you need a table-specific column, you should take the HeapTuple pointer, and use the GETSTRUCT() macro to access the table-specific start of the tuple. You then cast the pointer, for example as a Form_pg_proc pointer if you are accessing the pg_proc table, or Form_pg_type if you are accessing pg_type. You can then access fields of the tuple by using the structure pointer:

((Form_pg_class) GETSTRUCT(tuple))->relnatts

Note however that this only works for columns that are fixed-width and never null, and only when all earlier columns are likewise fixed-width and never null. Otherwise the column's location is variable and you must use heap_getattr() or related functions to extract it from the tuple.

Also, avoid storing directly into struct fields as a means of changing live tuples. The best way is to use heap_modifytuple() and pass it your original tuple, plus the values you want changed. It returns a palloc'ed tuple, which you pass to heap_update(). You can delete tuples by passing the tuple's t_self to heap_delete(). You use t_self for heap_update() too. Remember, tuples can be system cache copies, which might go away after you call ReleaseSysCache(), or be read directly from disk buffers, which go away when you call heap_getnext() again, heap_endscan(), or, in the heap_fetch() case, ReleaseBuffer(). Or a tuple may be a palloc'ed copy, which you must pfree() when finished.

Why are table, column, type, function, view names sometimes referenced as Name or NameData, and sometimes as char *?

Table, column, type, function, and view names are stored in system tables in columns of type Name. Name is a fixed-length, null-terminated type of NAMEDATALEN bytes. (The default value for NAMEDATALEN is 64 bytes.)

   typedef struct nameData
   {
       char        data[NAMEDATALEN];
   } NameData;
   typedef NameData *Name;

Table, column, type, function, and view names that come into the backend via user queries are stored as variable-length, null-terminated character strings.

Many functions are called with both types of names, e.g. heap_open(). Because the Name type is null-terminated, it is safe to pass it to a function expecting a char *. Because there are many cases where on-disk names (Name) are compared to user-supplied names (char *), there are many cases where Name and char * are used interchangeably.
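
For example (a small sketch, assuming tuple is a pg_class heap tuple obtained as described in the previous answer), the NameStr() macro yields a plain char * when you need one:

    Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);

    /* NameStr() returns a char *, so ordinary C string functions work */
    if (strcmp(NameStr(classForm->relname), "pg_class") == 0)
        elog(DEBUG1, "found relation %s", NameStr(classForm->relname));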

Why do we use Node and List to make data structures?

We do this because it provides a consistent and flexible way to pass data around inside the backend. Every node has a NodeTag which specifies what type of data is inside the Node. Lists are groups of Nodes chained together as a forward-linked list. The ordering of the list elements might or might not be significant, depending on the usage of the particular list.

Here are some of the List manipulation commands:

lfirst(i)
lfirst_int(i)
lfirst_oid(i)
return the data (a pointer, integer or OID respectively) of list cell i.
lnext(i)
return the next list cell after i.
foreach(i, list)
loop through list, assigning each list cell to i.

It is important to note that i is a ListCell *, not the data in the List cell. You need to use one of the lfirst variants to get at the cell's data.

Here is a typical code snippet that loops through a List containing Var * cells and processes each one:

           List        *list;
           ListCell    *i;
           ...
           foreach(i, list)
           {
               Var *var = (Var *) lfirst(i);
               ...
               /* process var here */
           }
lcons(node, list)
add node to the front of list, or create a new list with node if list is NIL.
lappend(list, node)
add node to the end of list.
list_concat(list1, list2)
Concatenate list2 on to the end of list1.
list_length(list)
return the length of the list.
list_nth(list, i)
return the i'th element in list, counting from zero.
lcons_int, ...
There are integer versions of these: lcons_int, lappend_int, etc. Also versions for OID lists: lcons_oid, lappend_oid, etc.
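
Putting a few of these together, a typical pattern for building and walking an OID list looks like this (a minimal sketch; firstOid and secondOid stand in for whatever OIDs you actually have):

    List       *relids = NIL;
    ListCell   *lc;

    /* lappend_oid() creates the list on first use, since relids starts as NIL */
    relids = lappend_oid(relids, firstOid);
    relids = lappend_oid(relids, secondOid);

    foreach(lc, relids)
    {
        Oid     relid = lfirst_oid(lc);

        /* process relid here */
    }

    list_free(relids);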

You can print nodes easily inside gdb. First, to disable output truncation when you use the gdb print command:

(gdb) set print elements 0

Instead of printing values in gdb format, you can use the next two commands to print out List, Node, and structure contents in a verbose format that is easier to understand. Lists are unrolled into nodes, and nodes are printed in detail. The first prints in a short format, and the second in a long format:

(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)

The output appears in the server log file, or on your screen if you are running a backend directly without a postmaster.

I just added a field to a structure. What else should I do?

The structures passed around in the parser, rewriter, optimizer, and executor require quite a bit of support. Most structures have support routines in src/backend/nodes used to create, copy, read, and output those structures -- in particular, most node types need support in the files copyfuncs.c and equalfuncs.c, and some need support in outfuncs.c and possibly readfuncs.c. Make sure you add support for your new field to these files. Find any other places the structure might need code for your new field -- searching for references to existing fields of the struct is a good way to do that. mkid is helpful with this (see available tools).
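
For instance, if you added a scalar field to an existing node type, the node's _copy function in copyfuncs.c needs a matching line. A sketch, where MyNode, existingfield and mynewfield are placeholder names and COPY_SCALAR_FIELD is the file-local helper macro used throughout copyfuncs.c:

    static MyNode *
    _copyMyNode(const MyNode *from)
    {
        MyNode     *newnode = makeNode(MyNode);

        COPY_SCALAR_FIELD(existingfield);
        COPY_SCALAR_FIELD(mynewfield);      /* the field you just added */

        return newnode;
    }

equalfuncs.c has an analogous _equal function using COMPARE_SCALAR_FIELD, and outfuncs.c/readfuncs.c have their own per-field macros where they apply.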

Why do we use palloc() and pfree() to allocate memory?

palloc() and pfree() are used in place of malloc() and free() because we find it easier to automatically free all memory allocated when a query completes. This assures us that all memory that was allocated gets freed even if we have lost track of where we allocated it. There are special non-query contexts that memory can be allocated in. These affect when the allocated memory is freed by the backend.

You can dump information about these memory contexts, which can be useful when hunting leaks. See #Examining backend memory use.
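
A minimal sketch of the usual pattern (names as in recent branches; ALLOCSET_DEFAULT_SIZES is a convenience macro that may not exist in older ones, and scratch_work() is just an illustrative function):

    #include "postgres.h"
    #include "utils/memutils.h"

    static void
    scratch_work(void)
    {
        MemoryContext mycxt;
        MemoryContext oldcxt;
        char         *buf;

        /* goes into the current (usually per-query) context; freed automatically */
        buf = palloc(128);

        /* a private context lets a whole group of allocations be freed at once */
        mycxt = AllocSetContextCreate(CurrentMemoryContext,
                                      "my scratch context",
                                      ALLOCSET_DEFAULT_SIZES);
        oldcxt = MemoryContextSwitchTo(mycxt);
        /* palloc() freely here; it all goes into mycxt */
        MemoryContextSwitchTo(oldcxt);
        MemoryContextDelete(mycxt);     /* frees everything allocated in mycxt */

        pfree(buf);                     /* explicit freeing is still possible */
    }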

What is ereport()?

ereport() is used to send messages to the front-end, and optionally terminate the current query being processed. See here for more details on how to use it.
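
A typical call looks like this (the error code, message text and the variable newval are illustrative):

    ereport(ERROR,
            (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
             errmsg("value %d is out of range", newval),
             errhint("Valid values are between 1 and 100.")));

Note that ereport(ERROR, ...) does not return to the caller; levels below ERROR, such as WARNING or DEBUG1, do.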

What is CommandCounterIncrement()?

Normally, statements cannot see the rows they modify. This allows UPDATE foo SET x = x + 1 to work correctly.

However, there are cases where a transaction needs to see rows affected in previous parts of the transaction. This is accomplished using a Command Counter. Incrementing the counter allows transactions to be broken into pieces so each piece can see rows modified by previous pieces. CommandCounterIncrement() increments the Command Counter, creating a new part of the transaction.
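
A common shape for this in DDL code is sketched below (rel and tup stand for a catalog relation and tuple prepared earlier; CatalogTupleInsert() is the helper used in recent branches, while older code used simple_heap_insert() plus index updates):

    /* insert a new row into a system catalog ... */
    CatalogTupleInsert(rel, tup);

    /* ... and make it visible to the rest of this command */
    CommandCounterIncrement();

    /* later lookups, e.g. via SearchSysCache(), can now see the new row */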

I need to do some changes to query parsing. Can you succinctly explain the parser files?

The parser files live in the 'src/backend/parser' directory.

scan.l defines the lexer, i.e. the algorithm that splits a string (containing an SQL statement) into a stream of tokens. A token is usually a single word (i.e., doesn't contain spaces but is delimited by spaces), but can also be a whole single or double-quoted string for example. The lexer is basically defined in terms of regular expressions which describe the different token types.

gram.y defines the grammar (the syntactical structure) of SQL statements, using the tokens generated by the lexer as basic building blocks. The grammar is defined in BNF notation. BNF resembles regular expressions but works on the level of tokens, not characters. Also, patterns (called rules or productions in BNF) are named, and may be recursive, i.e. use themselves as sub-patterns.

The actual lexer is generated from scan.l by a tool called flex. You can find the manual at https://westes.github.io/flex/manual/index.html

The actual parser is generated from gram.y by a tool called bison. You can find the manual at http://www.gnu.org/s/bison/.

Beware, though, that you'll have a rather steep learning curve ahead of you if you've never used flex or bison before.

I get a shift/reduce conflict that I don't know how to deal with

See Fixing_shift/reduce_conflicts_in_Bison

How do I look at a query plan or parsed query?

It's often desirable to examine the structure of a parsed query or a query plan. PostgreSQL stores these as hierarchical trees, which it can print out in a custom format.

The pprint function is used to dump these trees to the backend's stderr, where you can capture them from the logs. You usually invoke it by attaching a debugger like gdb or MSVC to the backend of interest before you run a query, setting a breakpoint at the place in the parser/rewriter/optimizer/executor where you want to see the query state, and then running the query. When the breakpoint triggers, just run:

   call pprint(theQueryVariable)

where theQueryVariable is any Node* of a type that pprint understands. Usually you'll call it on a Query* but it's also common to dump various sub-parts of a query, like a target-list, etc.

This feature can be very useful in conjunction with gdb or MSVC tracepoints.

What debugging features are available?

Compile-time

First, if you are developing new C code you should ALWAYS work in a build configured with the --enable-cassert and --enable-debug options. Enabling asserts turns on many sanity checks. Enabling debug symbols supports the use of debuggers (such as gdb) to trace through misbehaving code. When compiling with gcc, the additional flags -ggdb -Og -g3 -fno-omit-frame-pointer are also useful, as they include much more debugging detail. You can pass them to configure with something like:

   ./configure --enable-cassert --enable-debug CFLAGS="-ggdb -Og -g3 -fno-omit-frame-pointer"

Using -O0 instead of -Og disables most compiler optimisation, including inlining, but -Og performs almost as well as the usual optimiser flags like -O2 or -Os while providing much more debug information. You'll see far fewer <value optimised out> variables and much less of the confusing, hard-to-follow re-ordering of execution, while performance remains quite usable. -ggdb -g3 tells gcc to include the maximum amount of debug information in the produced binaries, including things like macro definitions.

-fno-omit-frame-pointer is useful when using tracing and profiling tools like perf, as frame pointers allow these tools to capture the call stack, not just the top function on the stack.

Run-time

The postgres server has a -d option that allows detailed information to be logged (elog or ereport DEBUGn printouts). The -d option takes a number that specifies the debug level. Be warned that high debug level values generate large log files. This option isn't available when starting the server via pg_ctl, but you can use -o log_min_messages=debug4 or similar instead.

When adding print statements for debugging, keep in mind that logging_collector = on must be set in your postgresql.conf (the default is off) for stdout/stderr to be captured and logged to a file. Consider using either elog() or fprintf(stderr, "Log\n") instead of printf("Log\n"), since usually stdout is fully buffered while stderr is only line-buffered. If you print to stdout, you'll need to call fflush frequently to keep the output in sync with error/log messages (which go through stderr).

gdb

If the postmaster is running, start psql in one window, then find the PID of the postgres process used by psql using SELECT pg_backend_pid(). Use a debugger to attach to the postgres PID - gdb -p 1234 or, within a running gdb, attach 1234. You might also find the gdblive script useful. You can set breakpoints in the debugger and then issue queries from the psql session.

If you are looking to find the location that is generating an error or log message, set a breakpoint at errfinish. This will trap on all elog and ereport calls for enabled log levels, so it may be triggered a lot. If you're only interested in ERROR/FATAL/PANIC, use a gdb conditional breakpoint for errordata[errordata_stack_depth].elevel >= 20, or set a source-line breakpoint within the cases for PANIC, FATAL, and ERROR in errfinish. Note that not all errors go through errfinish; in particular, permissions checks are thrown separately. If your breakpoint doesn't trigger, git grep for the error text and see where it's thrown from.
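
For example, the conditional breakpoint described above can be set like this:

    (gdb) break errfinish if errordata[errordata_stack_depth].elevel >= 20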

If you are debugging something that happens during session startup, you can set PGOPTIONS="-W n", then start psql. This will cause startup to delay for n seconds so you can attach to the process with the debugger, set appropriate breakpoints, then continue through the startup sequence.

You can also sometimes figure out the target process for debugging by looking at pg_stat_activity, the logs, pg_locks, pg_stat_replication, etc.

Tools

There are some helpful sets of gdb macros and Python scripts to help with PostgreSQL debugging, such as:

You can also call PostgreSQL functions like pprint from within gdb to inspect data structures.

All these tools and techniques work within gdb wrappers like the Eclipse CDT standalone graphical debugger.

core dumps

If it's too hard to predict which process will be the problem but you can reliably get it to crash (maybe by adding an appropriate Assert(...) and compiling with --enable-cassert), you can debug a core dump instead. On Linux you'll want to make sure /proc/sys/kernel/core_pattern has a sensible value like core.%e.%p.SIG%s.%t and, in the shell you launch PostgreSQL from, run:

ulimit -c unlimited

Unless you're working with a large shared_buffers you probably also want to set core dumps (and gdb's gcore) to include shared memory, using:

echo 127 > /proc/self/coredump_filter

Core dumps will be output in the PostgreSQL data directory unless your kernel's core_pattern says otherwise.

rr record and replay debugger

PostgreSQL 13 can be debugged using the rr debugging recorder. You can think of rr as a powerful framework for using GDB with replayable "recordings" of a program's execution. See the guide to using rr to debug Postgres for further details.

Standalone backend

If the postmaster is not running, you can actually run the postgres backend from the command line, and type your SQL statement directly. This is almost always a bad way to do things, however, since the usage environment isn't nearly as friendly as psql (no command history for instance) and there's no chance to study concurrent behavior. You might have to use this method if you broke initdb, but otherwise it has nothing to recommend it.

I broke initdb, how do I debug it?

Sometimes a patch will cause initdb failures. These are rarely in initdb itself; more often a failure occurs in a postgres backend launched by initdb to do some setup work.

If one of these is crashing or triggering an assertion, attaching gdb to initdb isn't going to do much by itself. initdb itself isn't crashing, so gdb won't break.

What you need to do is run initdb under gdb, set a breakpoint on fork, then continue execution. When you trigger the breakpoint, finish the function. gdb will report that a child process was created, but that child is not the one you want; it's the shell that launches the real postgres instance.

While initdb is paused, use ps to find the postgres instance it started. pstree -p can be useful for this. When you've found it, attach a separate gdb session to it with gdb -p $the_postgres_pid. At this point you can safely detach gdb from initdb and debug the postgres instance that's failing.

See also Tracing_problems_when_creating_a_cluster

Profiling to analyse performance, CPU use

There are many options for profiling PostgreSQL, but one of the most popular now is perf, the Linux kernel profiling tool. See Profiling with perf.

perf is extremely powerful and not limited to CPU profiling; it's a useful tracing tool too.

You can also compile PostgreSQL with profiling enabled to see what functions are taking execution time. Configuring with --enable-profiling is the recommended way to set this up. Profile files from server processes will be deposited in the pgsql/data directory. Profile files from clients such as psql will be put in the client's current directory.

You usually shouldn't use --enable-cassert or debug-oriented optimisation flags like -Og / -O0 when studying performance issues. The checks cassert enables are not always cheap, so they'll distort your profile data. Compiler optimisations are important to make sure you're profiling the same thing you'll actually be running.

--enable-debug is fine when profiling with gcc; for other compilers, it should be avoided.

perf is a less intrusive alternative to --enable-profiling on modern Linux systems.

Examining backend memory use

PostgreSQL's palloc is a hierarchical memory allocator that wraps the platform allocator. See #Why do we use palloc() and pfree() to allocate memory?.

Memory allocated with palloc is assigned to a memory context that's part of a hierarchy rooted at TopMemoryContext. Each context has a name.

You can dump stats about a memory context and its children using the MemoryContextStats(MemoryContext*) function. In the most common usage, that's:

   gdb -p $the_backend_pid
   (gdb) p MemoryContextStats(TopMemoryContext)

The output is written to stderr.

This may appear in the main server log file, in a secondary log used by the init system for output produced before PostgreSQL's logging collector starts, in journald, or on your screen if you are running a backend directly without a postmaster.

Starting with v14, you can use the view pg_backend_memory_contexts or function pg_log_backend_memory_contexts to access this same information without the use of gdb.

gdb/MSVC tracepoints

Sometimes you want to trace execution and capture information without having to constantly switch to gdb every time you hit a breakpoint.

Both MSVC and gdb offer tracepoints for this. They're much more powerful than those offered by tools like perf - with the tradeoff that they're much more intrusive and require a debugger. For gdb, see gdb tracepoints. You can use debugger tracepoints to do things like fire a memory context dump every time a tracepoint is hit, or print a query parse tree, etc.

A viable alternative for some simpler cases is now to use perf to capture function calls, local variables, etc. See Profiling with perf.

Why are my variables full of 0x7f bytes?

In a debugger or a crash dump you may see memory full of 0x7f bytes - 0x7f7f words, 0x7f7f7f7f7f7f7f7f longs, etc.

This is because builds with CLOBBER_FREED_MEMORY defined will overwrite memory when it, or its containing memory context, is freed. This isn't necessarily associated with an explicit pfree - it can happen as a result of a MemoryContextReset or similar, possibly on memory you implicitly allocated to the current memory context by calling palloc, or allocated indirectly via a call to another function.

CLOBBER_FREED_MEMORY is enabled by passing --enable-cassert.

See src/backend/utils/mmgr/aset.c for details.


How do I stop gdb getting interrupted by SIGUSR1 all the time?

PostgreSQL uses SIGUSR1 for latch setting on backends, for SetLatch / WaitLatch / WaitLatchOrSocket etc.

gdb breaks on SIGUSR1 by default, making debugging hard.

Just

   handle SIGUSR1 noprint pass

to make it silently pass SIGUSR1 to the program and not pause. Or start it like:

   gdb -ex 'handle SIGUSR1 nostop'

How do I attach gdb and set a breakpoint in a background worker / helper proc?

If you're trying to debug autovacuum, some arbitrary background worker, etc, it can be hard to get gdb attached when you want. Especially if the proc is short-lived.

A handy trick here is to inject an infinite loop that prints the pid until you attach gdb and change the loop-variable to allow debugging to continue. For example, if you add this just before the call to the function you want to debug:

   /* You may need to #include "miscadmin.h" and <unistd.h> */
   
   bool continue_sleep = true;
   do {
       sleep(1);
       elog(LOG, "zzzzz %d", MyProcPid);
   } while (continue_sleep);
   
    func_to_debug();

You can grep the logs for "zzzz" until it appears, attach to the pid of interest, set a breakpoint, and continue execution.

   $ gdb -p $the-pid
   (gdb) break func_to_debug
   (gdb) p continue_sleep=0
   (gdb) cont

Note that it's bad practice to use sleep in PostgreSQL backends; use WaitLatch with a timeout instead. This is OK for debugging though.

Another option can be to have PostgreSQL delay all processes on start with the postgres -W <seconds> option, but this works poorly when you're debugging an issue in complex groups of bgworkers or something that only happens after extended runtime.