Developer FAQ

Getting Involved

How do I get involved in PostgreSQL development?

Download the code and have a look around. See downloading the source tree.

Subscribe to and read the pgsql-hackers mailing list (often termed "hackers"). This is where the major contributors and core members of the project discuss development.

How do I download/update the current source tree?

There are several ways to obtain the source tree. Occasional developers can just get the most recent source tree snapshot from ftp://ftp.postgresql.org/pub/snapshot/.

Regular developers might want to take advantage of anonymous access to our source code management system. The source tree is currently hosted in CVS. For details of how to obtain the source from CVS see http://developer.postgresql.org/docs/postgres/cvs.html.

We also have a complete Git mirror of CVS, with full history imported; see the gitweb interface at: http://git.postgresql.org/?p=postgresql.git

What development environment is required to develop code?

PostgreSQL is developed mostly in the C programming language. The source code is targeted at most of the popular Unix platforms and the Windows environment (Windows 2000, XP, and up).

Most developers run a Unix-like operating system and use an open source tool chain with GCC, GNU Make, GDB, Autoconf, and so on. If you have contributed to open source software before, you will probably be familiar with these tools. Developers using this tool chain on Windows make use of MinGW, though most development on Windows is currently done with the Microsoft Visual Studio 2005 (version 8) development environment and associated tools.

The complete list of required software to build PostgreSQL can be found in the installation instructions.

Developers who regularly rebuild the source often pass the --enable-depend flag to configure. The result is that if you modify a C header file, all files that depend on it are also rebuilt.

src/Makefile.custom can be used to set environment variables, like CUSTOM_COPT, that are used for every compile.

What areas need work?

Outstanding features are detailed in Todo.

You can learn more about these features by consulting the archives, the SQL standards, and the recommended texts (see books for developers).

How do I get involved in PostgreSQL web site development?

PostgreSQL website development is discussed on the pgsql-www mailing list. There is a project page where the source code is available at http://pgweb.postgresql.org/.

Development Tools and Help

How is the source code organized?

If you point your browser at How PostgreSQL Processes a Query (also available in a checkout of the source code, under src/tools/backend/index.html), you will see a few paragraphs describing the data flow, the backend components in a flow chart, and a description of the shared memory area. You can click on any flowchart box to see a description. If you then click on the directory name, you will be taken to the source directory, where you can browse the actual source code behind it. We also have several README files in some source directories that describe the function of the module; the browser displays these as well when you enter the directory.

Other than documentation in the source tree itself, you can find some papers/presentations discussing the code at http://www.postgresql.org/developer/coding. An excellent presentation is at http://neilconway.org/talks/hacking/

What tools are available for developers?

First, all the files in the src/tools directory are designed for developers.

   RELEASE_CHANGES changes we have to make for each release
   backend         description/flowchart of the backend directories
   ccsym           find standard defines made by your compiler
   copyright       fixes copyright notices
   entab           converts spaces to tabs, used by pgindent
   find_static     finds functions that could be made static
   find_typedef    finds typedefs in the source code
   find_badmacros  finds macros that use braces incorrectly
   fsync           a script to provide information about the cost of cache
                    syncing system calls
   make_ctags      make vi 'tags' file in each directory
   make_diff       make *.orig and diffs of source
   make_etags      make emacs 'etags' files
   make_keywords   make comparison of our keywords and SQL'92
   make_mkid       make mkid ID files
   pgcvslog        used to generate a list of changes for each release
   pginclude       scripts for adding/removing include files
   pgindent        indents source files
   pgtest          a semi-automated build system
   thread          a thread testing script

In src/include/catalog:

   unused_oids     a script that finds unused OIDs for use in system catalogs
   duplicate_oids  finds duplicate OIDs in system catalog definitions

tools/backend was already described in the question-and-answer above.

Second, you really should have an editor that can handle tags, so you can tag a function call to see the function definition, and then tag inside that function to see an even lower-level function, and then back out twice to return to the original function. Most editors support this via tags or etags files.

Third, you need to get id-utils from ftp://ftp.gnu.org/gnu/id-utils/

By running tools/make_mkid, an archive of source symbols can be created that can be rapidly queried.

Some developers make use of cscope, which can be found at http://cscope.sf.net/. Others use glimpse, which can be found at http://webglimpse.net/.

tools/make_diff has tools to create patch diff files that can be applied to the distribution. This produces context diffs, which is our preferred format.

pgindent is used to fix the source code style to conform to our standards, and is normally run at the end of each development cycle; see this question for more information on our style.

pginclude contains scripts used to add needed #include's and remove unneeded #include's from source files.

When adding built-in objects such as types or functions, you will need to assign OIDs to them. Our convention is that all hand-assigned OIDs are distinct values in the range 1-9999. (It would work mechanically for them to be unique only within individual system catalogs, but for clarity we require them to be unique across the whole system.) There is a script called unused_oids in src/include/catalog that shows the currently unused OIDs. To assign a new OID, pick one that is free according to unused_oids, and for bonus points pick one near related existing objects. See also the duplicate_oids script, which will complain if you made a mistake.

What's the formatting style used in PostgreSQL source code?

Our standard format is BSD style, with each level of code indented one tab, where each tab is four spaces. You will need to set your editor or file viewer to display tabs as four spaces:

For vi (in .exrc or .vimrc):

set tabstop=4 shiftwidth=4 noexpandtab

For less or more, specify -x4 to get the correct indentation.

The tools/editors directory of the latest sources contains sample settings for the emacs, xemacs and vim editors that assist in keeping to PostgreSQL coding standards.

pgindent formats code by passing flags to your operating system's indent utility. pgindent is run on all source files just before each beta test period; it auto-formats all source files to make them consistent. Comment blocks that need specific line breaks should be formatted as block comments, where the comment starts with /*------. Such comments will not be reformatted in any way.

See also the Formatting section in the documentation. This posting talks about our conventions for variable and function names.

If you're wondering why we bother with this, this article describes the value of a consistent coding style.

Is there a diagram of the system catalogs available?

Yes, we have at least one (SVG version).

What books are good for developers?

There are five good books:

  • An Introduction to Database Systems, by C.J. Date, Addison-Wesley
  • A Guide to the SQL Standard, by C.J. Date, et al., Addison-Wesley
  • Fundamentals of Database Systems, by Elmasri and Navathe
  • Transaction Processing, by Jim Gray and Andreas Reuter, Morgan Kaufmann
  • Transactional Information Systems, by Gerhard Weikum and Gottfried Vossen, Morgan Kaufmann

What is configure all about?

The files configure and configure.in are part of the GNU autoconf package. Configure allows us to test for various capabilities of the OS, and to set variables that can then be tested in C programs and Makefiles. Autoconf is installed on the PostgreSQL main server. To add options to configure, edit configure.in, and then run autoconf to generate configure.

When configure is run by the user, it tests various OS capabilities, stores the results in config.status and config.cache, and processes a list of *.in files. For example, if there is a Makefile.in, configure generates a Makefile that contains substitutions for all @var@ parameters found by configure.

When you need to edit files, make sure you don't waste time modifying files generated by configure. Edit the *.in file, and re-run configure to recreate the needed file. If you run make distclean from the top-level source directory, all files derived by configure are removed, so you see only the files contained in the source distribution.

How do I add a new port?

There are a variety of places that need to be modified to add a new port. First, start in the src/template directory. Add an appropriate entry for your OS. Also, use src/config.guess to add your OS to src/template/.similar. You shouldn't match the OS version exactly. The configure test will look for an exact OS version number, and if not found, find a match without version number. Edit src/configure.in to add your new OS. (See configure item above.) You will need to run autoconf, or patch src/configure too.

Then, check src/include/port and add your new OS file, with appropriate values. Hopefully, there is already locking code in src/include/storage/s_lock.h for your CPU. There is also a src/makefiles directory for port-specific Makefile handling. There is a backend/port directory if you need special files for your OS.

Why don't you use threads, raw devices, async-I/O, <insert your favorite whiz-bang feature here>?

There is always a temptation to use the newest operating system features as soon as they arrive. We resist that temptation.

First, we support 15+ operating systems, so any new feature has to be well established before we will consider it. Second, most new whiz-bang features don't provide dramatic improvements. Third, they usually have some downside, such as decreased reliability or additional required code. Therefore, we don't rush to use new features but rather wait for the feature to be established, then ask for testing to show that a measurable improvement is possible.

As an example, threads are not currently used instead of multiple processes for backends because:

  • Historically, threads were unsupported and buggy.
  • An error in one backend can corrupt other backends if they're threads within a single process.
  • Speed improvements using threads are small compared to the remaining backend startup time.
  • The backend code would be more complex.

(Whether individual backend processes should use multiple threads to make use of multiple cores for single queries is a separate question not covered here).

So, we are not ignorant of new features. It is just that we are cautious about their adoption. The TODO list often contains links to discussions showing our reasoning in these areas.

How are CVS branches managed?

See CVS Branch Management.

Where can I get a copy of the SQL standards?

You are supposed to buy them from ISO or ANSI. Search for ISO/ANSI 9075. ANSI's offer is less expensive, but the contents of the documents are the same between the two organizations.

Since buying an official copy of the standard is quite expensive, most developers rely on one of the various draft versions available on the Internet.

The PostgreSQL documentation contains information about PostgreSQL and SQL conformance.

Note that having access to a copy of the SQL standard is not necessary to become a useful contributor to PostgreSQL development. Interpreting the standard is difficult and needs years of experience. And most features in PostgreSQL are not specified in the standard anyway.

Where can I get technical assistance?

Many of the technical questions that newcomers to the code have are already answered on the pgsql-hackers mailing list; the archives can be found at http://archives.postgresql.org/pgsql-hackers/.

If you cannot find discussion of your particular question, feel free to put it to the list.

Major contributors also answer technical questions, including questions about development of new features, on IRC at irc.freenode.net in the #postgresql channel.

Why haven't you replaced CVS with SVN, Git, Monotone, VSS, <insert your favorite SCMS here>?

We will switch to Git in August 2010.

In the meantime, there is an official Git clone of the authoritative CVS repository, with full history imported, available at http://git.postgresql.org/gitweb?p=postgresql.git;a=summary

Development Process

What do I do after choosing an item to work on?

Send an email to pgsql-hackers with a proposal for what you want to do (assuming your contribution is not trivial). Working in isolation is not advisable because others might be working on the same TODO item, or you might have misunderstood the TODO item. In the email, discuss both the internal implementation method you plan to use, and any user-visible changes (new syntax, etc). For complex patches, it is important to get community feedback on your proposal before starting work. Failure to do so might mean your patch is rejected. If your work is being sponsored by a company, read this article for tips on being more effective.

Our queue of patches to be reviewed is maintained via a custom CommitFest web application at http://commitfest.postgresql.org.

How do I test my changes?

Basic system testing

The easiest way to test your code is to ensure that it builds against the latest version of the code and that it does not generate compiler warnings.

It is advisable to pass --enable-cassert to configure. This turns on assertions within the source, which often make bugs more visible because problems that would otherwise silently cause data corruption or segmentation violations are caught earlier. This generally makes debugging much easier.

Then, perform run time testing via psql.

Regression test suite

The next step is to test your changes against the existing regression test suite. To do this, issue "make check" in the root directory of the source tree. If any tests fail, investigate.

If you've deliberately changed existing behavior, this change might cause a regression test failure but not any actual regression. If so, you should also patch the regression test suite.

Other run time testing

Some developers make use of tools such as valgrind (http://valgrind.kde.org) for memory testing, and gprof (which comes with the GNU binutils suite) and oprofile (http://oprofile.sourceforge.net/) for profiling, among other related tools.

What about unit testing, static analysis, model checking...?

There have been a number of discussions about other testing frameworks and some developers are exploring these ideas.

Keep in mind that the Makefiles do not have the proper dependencies for include files, so you have to do a make clean and then another make. If you are using GCC, you can use the --enable-depend option of configure to have the compiler compute the dependencies automatically.

I have developed a patch, what next?

You will need to submit the patch to pgsql-hackers@postgresql.org. To help ensure your patch is reviewed and committed in a timely fashion, please try to follow the guidelines at Submitting a Patch.

What happens to my patch once it is submitted?

It will be reviewed by other contributors to the project and will be either accepted or sent back for further work. The process is explained in more detail at Submitting a Patch.

How do I help with reviewing patches?

If you would like to contribute by reviewing a patch in the CommitFest queue, you are most welcome to do so. Please read the guide at Reviewing a Patch for more information.

Technical Questions

How do I efficiently access information in system catalogs from the backend code?

You first need to find the tuples (rows) you are interested in. There are two ways. First, SearchSysCache() and related functions allow you to query the system catalogs using predefined indexes on the catalogs. This is the preferred way to access system tables, because the first call to the cache loads the needed rows, and future requests can return the results without accessing the base table. A list of available caches is located in src/backend/utils/cache/syscache.c. src/backend/utils/cache/lsyscache.c contains many column-specific cache lookup functions.

The rows returned are cache-owned versions of the heap rows. Therefore, you must not modify or delete the tuple returned by SearchSysCache(). What you should do is release it with ReleaseSysCache() when you are done using it; this informs the cache that it can discard that tuple if necessary. If you neglect to call ReleaseSysCache(), then the cache entry will remain locked in the cache until end of transaction, which is tolerable during development but not considered acceptable for release-worthy code.
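
As an illustration, here is a minimal sketch of a cache lookup, assuming the 8.4-era five-argument SearchSysCache() interface; get_type_name() is a hypothetical helper, not an existing backend function:

    #include "postgres.h"
    #include "catalog/pg_type.h"
    #include "utils/syscache.h"

    /* Hypothetical helper: look up a type's name by its OID. */
    static char *
    get_type_name(Oid typid)
    {
        HeapTuple   tuple;
        char       *result = NULL;

        /* The first call loads the row into the cache; later calls are cheap. */
        tuple = SearchSysCache(TYPEOID, ObjectIdGetDatum(typid), 0, 0, 0);
        if (HeapTupleIsValid(tuple))
        {
            Form_pg_type typeForm = (Form_pg_type) GETSTRUCT(tuple);

            result = pstrdup(NameStr(typeForm->typname));
            /* Tell the cache we are done with the tuple. */
            ReleaseSysCache(tuple);
        }
        return result;
    }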

If you can't use the system cache, you will need to retrieve the data directly from the heap table, using the buffer cache that is shared by all backends. The backend automatically takes care of loading the rows into the buffer cache. To do this, open the table with heap_open(). You can then start a table scan with heap_beginscan(), then use heap_getnext() and continue as long as HeapTupleIsValid() returns true, and finally do a heap_endscan(). Keys can be assigned to the scan; no indexes are used, so every row is compared against the keys and only the matching rows are returned.
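
As a sketch, a sequential scan over pg_class might look like this (8.4-era API; SnapshotNow was the usual snapshot for catalog scans at the time):

    #include "postgres.h"
    #include "access/heapam.h"
    #include "catalog/pg_class.h"
    #include "storage/lock.h"
    #include "utils/tqual.h"

    /* Sketch: log the name of every relation in pg_class. */
    static void
    scan_pg_class(void)
    {
        Relation     rel;
        HeapScanDesc scan;
        HeapTuple    tuple;

        rel = heap_open(RelationRelationId, AccessShareLock);
        scan = heap_beginscan(rel, SnapshotNow, 0, NULL);

        while (HeapTupleIsValid(tuple = heap_getnext(scan, ForwardScanDirection)))
        {
            Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);

            elog(DEBUG1, "found relation %s", NameStr(classForm->relname));
        }

        heap_endscan(scan);
        heap_close(rel, AccessShareLock);
    }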

You can also use heap_fetch() to fetch rows by block number/offset. While scans automatically lock/unlock rows from the buffer cache, with heap_fetch() you must pass a Buffer pointer and ReleaseBuffer() it when completed.

Once you have the row, you can get data that is common to all tuples, like t_self and t_oid, by merely accessing the HeapTuple structure entries. If you need a table-specific column, you should take the HeapTuple pointer, and use the GETSTRUCT() macro to access the table-specific start of the tuple. You then cast the pointer, for example as a Form_pg_proc pointer if you are accessing the pg_proc table, or Form_pg_type if you are accessing pg_type. You can then access fields of the tuple by using the structure pointer:

((Form_pg_class) GETSTRUCT(tuple))->relnatts

Note however that this only works for columns that are fixed-width and never null, and only when all earlier columns are likewise fixed-width and never null. Otherwise the column's location is variable and you must use heap_getattr() or related functions to extract it from the tuple.
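
For example, a nullable column such as pg_class.relacl must be fetched this way (a sketch; it assumes a pg_class tuple and its open Relation are already at hand):

    bool    isnull;
    Datum   aclDatum;

    /* relacl can be null, so GETSTRUCT-style access is not safe here. */
    aclDatum = heap_getattr(tuple, Anum_pg_class_relacl,
                            RelationGetDescr(rel), &isnull);
    if (!isnull)
    {
        /* ... use DatumGetAclP(aclDatum) ... */
    }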

Also, avoid storing directly into struct fields as a means of changing live tuples. The best way is to use heap_modifytuple() and pass it your original tuple plus the values you want changed. It returns a palloc'ed tuple, which you pass to heap_update(). You can delete tuples by passing the tuple's t_self to heap_delete(); you use t_self for heap_update() too. Remember, tuples can be system cache copies, which might go away after you call ReleaseSysCache(); rows read directly from disk buffers, which go away when you call heap_getnext(), heap_endscan(), or (in the heap_fetch() case) ReleaseBuffer(); or palloc'ed tuples, which you must pfree() when finished.
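
Here is a sketch of such an update, using heap_modify_tuple() (the newer spelling of heap_modifytuple()) together with the simple_heap_update() and CatalogUpdateIndexes() helpers typical for catalog changes; rel and oldtup are assumed to be at hand:

    Datum       values[Natts_pg_class];
    bool        nulls[Natts_pg_class];
    bool        replaces[Natts_pg_class];
    HeapTuple   newtup;

    memset(nulls, 0, sizeof(nulls));
    memset(replaces, 0, sizeof(replaces));

    /* Change just the relhasindex column. */
    values[Anum_pg_class_relhasindex - 1] = BoolGetDatum(true);
    replaces[Anum_pg_class_relhasindex - 1] = true;

    newtup = heap_modify_tuple(oldtup, RelationGetDescr(rel),
                               values, nulls, replaces);
    simple_heap_update(rel, &newtup->t_self, newtup);
    CatalogUpdateIndexes(rel, newtup);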

Why are table, column, type, function, view names sometimes referenced as Name or NameData, and sometimes as char *?

Table, column, type, function, and view names are stored in system tables in columns of type Name. Name is a fixed-length, null-terminated type of NAMEDATALEN bytes. (The default value for NAMEDATALEN is 64 bytes.)

   typedef struct nameData
   {
       char        data[NAMEDATALEN];
   } NameData;

   typedef NameData *Name;

Table, column, type, function, and view names that come into the backend via user queries are stored as variable-length, null-terminated character strings.

Many functions are called with both types of names, e.g. heap_open(). Because the Name type is null-terminated, it is safe to pass it to a function expecting a char *. Because there are many cases where on-disk names (Name) are compared to user-supplied names (char *), Name and char * are often used interchangeably.
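
For instance (a sketch; user_name is a hypothetical char * that arrived in a query, and classForm is a Form_pg_class pointer):

    NameData    newname;

    /* Compare an on-disk Name with a user-supplied string: */
    if (strcmp(NameStr(classForm->relname), user_name) == 0)
    {
        /* names match */
    }

    /* Fill a NameData from a C string, truncating if necessary: */
    namestrcpy(&newname, user_name);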

Why do we use Node and List to make data structures?

We do this because it provides a consistent and flexible way to pass data around inside the backend. Every node has a NodeTag which specifies what type of data is inside the Node. Lists are groups of Nodes chained together as a forward-linked list. The ordering of the list elements might or might not be significant, depending on the usage of the particular list.

Here are some of the List manipulation commands:

lfirst(i), lfirst_int(i), lfirst_oid(i)
    return the data (a pointer, integer, or OID respectively) of list cell i.
lnext(i)
    return the next list cell after i.
foreach(i, list)
    loop through list, assigning each list cell to i.

It is important to note that i is a ListCell *, not the data in the List cell. You need to use one of the lfirst variants to get at the cell's data.

Here is a typical code snippet that loops through a List containing Var *'s and processes each one:

           List        *list;
           ListCell    *i;
           ...
           foreach(i, list)
           {
               Var *var = (Var *) lfirst(i);
               ...
               /* process var here */
           }

lcons(node, list)
    add node to the front of list, or create a new list with node if list is NIL.
lappend(list, node)
    add node to the end of list.
list_concat(list1, list2)
    concatenate list2 onto the end of list1.
list_length(list)
    return the length of the list.
list_nth(list, i)
    return the i'th element in list, counting from zero.
lcons_int, lappend_int, ...
    integer versions of these exist (lcons_int, lappend_int, etc.), as do versions for OID lists (lcons_oid, lappend_oid, etc.).
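
A small sketch tying these together (it assumes backend context so elog() is available):

    List       *lst = NIL;
    ListCell   *lc;

    lst = lappend_int(lst, 4);      /* (4) */
    lst = lappend_int(lst, 8);      /* (4 8) */
    lst = lcons_int(2, lst);        /* (2 4 8) */

    foreach(lc, lst)
        elog(DEBUG1, "member: %d", lfirst_int(lc));

    elog(DEBUG1, "length: %d", list_length(lst));
    list_free(lst);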

You can print nodes easily inside gdb. First, to disable output truncation when you use the gdb print command:

(gdb) set print elements 0

Instead of printing values in gdb format, you can use the next two commands to print out List, Node, and structure contents in a verbose format that is easier to understand. Lists are unrolled into nodes, and nodes are printed in detail. The first prints in a short format, and the second in a long format:

(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)

The output appears in the server log file, or on your screen if you are running a backend directly without a postmaster.

I just added a field to a structure. What else should I do?

The structures passed around in the parser, rewriter, optimizer, and executor require quite a bit of support. Most structures have support routines in src/backend/nodes used to create, copy, read, and output those structures -- in particular, most node types need support in the files copyfuncs.c and equalfuncs.c, and some need support in outfuncs.c and possibly readfuncs.c. Make sure you add support for your new field to these files. Find any other places the structure might need code for your new field -- searching for references to existing fields of the struct is a good way to do that. mkid is helpful with this (see available tools).
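
For a hypothetical node type MyNode gaining a field int newfield, the additions would look roughly like this (the macro names are the real ones used in those files; the node and field names here are made up):

    /* copyfuncs.c, inside _copyMyNode(): */
    COPY_SCALAR_FIELD(newfield);

    /* equalfuncs.c, inside _equalMyNode(): */
    COMPARE_SCALAR_FIELD(newfield);

    /* outfuncs.c, inside _outMyNode(): */
    WRITE_INT_FIELD(newfield);

    /* readfuncs.c, inside _readMyNode() (only if the node is ever read back in): */
    READ_INT_FIELD(newfield);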

Why do we use palloc() and pfree() to allocate memory?

palloc() and pfree() are used in place of malloc() and free() because we find it easier to automatically free all memory allocated when a query completes. This assures us that all memory that was allocated gets freed even if we have lost track of where we allocated it. There are special non-query contexts that memory can be allocated in. These affect when the allocated memory is freed by the backend.
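
A sketch of the typical pattern (MyState is a hypothetical struct; TopMemoryContext is the real long-lived context that survives across queries):

    char       *buf;
    MyState    *state;
    MemoryContext oldcxt;

    /* Ordinary allocations live in the current (usually per-query) context: */
    buf = palloc(128);          /* freed automatically at end of query */

    /* Data that must outlive the query needs an explicit context switch: */
    oldcxt = MemoryContextSwitchTo(TopMemoryContext);
    state = palloc0(sizeof(MyState));
    MemoryContextSwitchTo(oldcxt);

    /* pfree() is still available for early release: */
    pfree(buf);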

What is ereport()?

ereport() is used to send messages to the front-end, and optionally terminate the current query being processed. See here for more details on how to use it.
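
A sketch of typical usage (val is a hypothetical variable; error codes come from utils/errcodes.h; ERROR aborts the current query, WARNING does not):

    /* Report and continue: */
    ereport(WARNING,
            (errmsg("unexpected setting: %d", val)));

    /* Report and abort the current query: */
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("value %d is out of range", val),
             errhint("Values must be between 1 and 100.")));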

What is CommandCounterIncrement()?

Normally, statements cannot see the rows they modify. This allows UPDATE foo SET x = x + 1 to work correctly.

However, there are cases where a transaction needs to see rows affected in previous parts of the transaction. This is accomplished using a Command Counter. Incrementing the counter allows transactions to be broken into pieces so each piece can see rows modified by previous pieces. CommandCounterIncrement() increments the Command Counter, creating a new part of the transaction.
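
For example, code that inserts a catalog row and then expects to look it up later in the same command must increment the counter in between (a sketch, using era-typical catalog helpers; rel and tuple are assumed to be at hand):

    /* Insert a new catalog row. */
    simple_heap_insert(rel, tuple);
    CatalogUpdateIndexes(rel, tuple);

    /*
     * Without this, SearchSysCache() and catalog scans later in this same
     * command would not see the row we just inserted.
     */
    CommandCounterIncrement();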

What debugging features are available?

First, if you are developing new C code you should ALWAYS work in a build configured with the --enable-cassert and --enable-debug options. Enabling asserts turns on many sanity checking options. Enabling debug symbols supports use of debuggers (such as gdb) to trace through misbehaving code.

The postgres server has a -d option that allows detailed information to be logged (elog or ereport DEBUGn printouts). The -d option takes a number that specifies the debug level. Be warned that high debug level values generate large log files.

If the postmaster is running, start psql in one window, then find the PID of the postgres process used by psql using SELECT pg_backend_pid(). Use a debugger to attach to the postgres PID. You can set breakpoints in the debugger and then issue queries from the psql session. If you are looking to find the location that is generating an error or log message, set a breakpoint at errfinish. If you are debugging something that happens during session startup, you can set PGOPTIONS="-W n", then start psql. This will cause startup to delay for n seconds so you can attach to the process with the debugger, set appropriate breakpoints, then continue through the startup sequence.

If the postmaster is not running, you can actually run the postgres backend from the command line, and type your SQL statement directly. This is almost always a bad way to do things, however, since the usage environment isn't nearly as friendly as psql (no command history for instance) and there's no chance to study concurrent behavior. You might have to use this method if you broke initdb, but otherwise it has nothing to recommend it.

You can also compile with profiling to see what functions are taking execution time --- configuring with --enable-profiling is the recommended way to set this up. (You usually shouldn't use --enable-cassert when studying performance issues, since the checks it enables are not always cheap.) Profile files from server processes will be deposited in the pgsql/data directory. Profile files from clients such as psql will be put in the client's current directory.
