https://wiki.postgresql.org/api.php?action=feedcontributions&user=Adunstan&feedformat=atomPostgreSQL wiki - User contributions [en]2024-03-30T04:06:07ZUser contributionsMediaWiki 1.35.13https://wiki.postgresql.org/index.php?title=Mailing_Lists&diff=38456Mailing Lists2023-12-05T15:04:55Z<p>Adunstan: /* Email etiquette mechanics */</p>
<hr />
<div>=== Accessing the Mailing Lists === <br />
<br />
The public mailing lists are open for both viewing and participation. <br />
<br />
The archives can be accessed [http://www.postgresql.org/list/ here]. If you wish to join the discussion, please read on.<br />
<br />
=== Mailing List Culture === <br />
<br />
The PostgreSQL community exists world-wide on our mailing lists. As you dive into our community, you will encounter people with wildly varying levels of expertise in databases, software development and system administration. Excellent technical and professional advice is given freely on the mailing lists, but there is no guarantee or expectation that anyone can solve any particular problem. Flaming or personal attacks are not tolerated on our mailing lists, IRC or related forums connected to the postgresql.org site. <br />
<br />
Above all, the PostgreSQL community's expectation is that each person treats the other with respect, and grants each other the benefit-of-the-doubt when it comes to terse or critical language. The Robustness Principle applies to participation in our community: Be conservative in what you send; be liberal in what you accept.<br />
<br />
That said, our community is known for its aggressive and technical discussion style. For those unfamiliar with our community, our discussions can come across as insulting or overly critical. Please keep in mind that as a new contributor, you are encountering a new culture. Every culture has different rules about appropriate behavior, social norms, and expectations. Much like when learning a new language or visiting a new, unfamiliar country, your experiences while joining the PostgreSQL community will undoubtedly include an "adjustment cycle". That can and likely will include high and low moments, friendly or otherwise.<br />
<br />
As with any encounter with unfamiliar culture, you must take some time to get acquainted. Take extra time to communicate clearly. Ask for clarification if you're confused or a response doesn't make sense to you. Be careful to avoid personal attacks if someone makes a mistake. If there's one universal constant, it is that everyone makes mistakes.<br />
<br />
Remember that we are a learning community, and with few exceptions, people are communicating with the intention of learning, sharing and refining ideas.<br />
<br />
=== Email etiquette mechanics ===<br />
<br />
Signatures that include "confidentiality notices" are useless in the context of PostgreSQL mailing lists. All messages to our lists are archived publicly, are immediately available worldwide and will not be removed from our archives. Please remove the notices from your email to our lists, particularly when posting code that you wish to be contributed or shared with our community.<br />
<br />
When replying, please be respectful and use appropriate quoting. See the [https://web.archive.org/web/20170426175120/http://www.gweep.ca/~edmonds/usenet/ml-etiquette.html Mailing List Etiquette FAQ] for details about what constitutes appropriate quoting when replying to mailing lists. <br />
<br />
Our mailing lists are generally set to "reply to sender", but the preferred way to participate in threads is to "reply all". That means that you'll include both the email address of the sender and the mailing list in your response. Also, please do not send HTML-enriched email to the mailing lists.<br />
<br />
Finally, our community generally does not "top post" in response to mailing list threads (see [https://en.wikipedia.org/wiki/Posting_style#Top-posting Wikipedia: Top Posting] for a definition of top posting, and [http://web.archive.org/web/20230608210806/idallen.com/topposting.html Top Posting Deprecated] for discussion of why we discourage it).<br />
<br />
=== Using the discussion lists ===<br />
<br />
You can send an email directly to any of the mailing lists, without subscribing first. <br />
Any responses you receive or send should go to the list ''and'' CC the other correspondents.<br />
<br />
If you wish to receive the mail traffic sent to a list, you can join using the [http://www.postgresql.org/community/lists/subscribe/ subscribe] form. You should receive an email in response from the [https://wiki.postgresql.org/wiki/PGLister_Announce PGLister] mailing list manager software that handles the lists. If you wish to change the various settings associated with your subscription, or to unsubscribe, you can do so using either the [https://lists.postgresql.org/manage web] interface, or by sending commands to PGLister via email following [https://wiki.postgresql.org/wiki/PGLister_Announce#Unsubscribing_without_a_community_account these instructions].<br />
<br />
If you follow discussion through the web interface instead of subscribing,<br />
you will at some point wish to reply to a message sent to the list. '''Do not''' simply copy<br />
the message body and paste it into a message with a similar subject as a way to join the conversation.<br />
The mailing list relies on the "In-Reply-To" mail header in order to associate individual messages<br />
to their thread. If you don't know how to add this header manually, you should instead make use<br />
of the "raw" link [http://www.postgresql.org/message-id/CA+OCxoxAm_iEh21sxHiYzZxK9_3JjdzHLX4ib--ZbH73yfb_zA@mail.gmail.com provided] on every message view to download the message as a file<br />
(in mbox format), then import it into your favorite email client and use the usual "Reply All"<br />
way of responding to mailing list messages. To download the "raw" file, a simple authentication step is <br />
required to protect it against bots. The username/password to use is provided in the prompt, <br />
but some recent browsers do not display this message. In that case, try a different browser.<br />
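For reference, threading works because each reply carries an <code>In-Reply-To</code> header naming the <code>Message-ID</code> of the message it answers, roughly like this (the IDs shown here are hypothetical):

```
Message-ID: <reply-xyz@example.com>
In-Reply-To: <original-abc@example.com>
References: <original-abc@example.com>
```

Replying with "Reply All" from a mail client sets these headers automatically; that is why pasting a message body into a fresh email breaks the thread.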
<br />
=== Overview of discussion lists ===<br />
<br />
We have two primary lists. [https://www.postgresql.org/list/pgsql-general/ pgsql-general@postgresql.org] is for developers, DBAs and admins who have a question or problem using PostgreSQL.<br />
[https://www.postgresql.org/list/pgsql-hackers/ pgsql-hackers@postgresql.org] is for developers to submit and discuss patches, for bug reports or issues with unreleased versions (development snapshots, betas or release candidates), and for discussion about database internals. We also have the [https://www.postgresql.org/list/pgsql-novice/ pgsql-novice@postgresql.org] list if you would like to try posting a question to a smaller list, with a group of people who are there specifically to answer very basic questions.<br />
<br />
If you are primarily interested in performance tuning, benchmarking or case studies from existing users regarding performance, [https://www.postgresql.org/list/pgsql-performance/ pgsql-performance@postgresql.org] is a great list to join.<br />
<br />
If you're interested in contributing to website maintenance or editing, or system administration of PostgreSQL infrastructure, join the [https://www.postgresql.org/list/pgsql-www/ pgsql-www@postgresql.org] mailing list.<br />
<br />
If you have something to contribute to the PostgreSQL documentation, join the [https://www.postgresql.org/list/pgsql-docs/ pgsql-docs@postgresql.org] mailing list. The documentation is always in need of copy editors, testers and example generation.<br />
<br />
If you're interested in staffing booths at conferences, giving talks at conferences, starting a user group or participating in a user group, join the [https://www.postgresql.org/list/pgsql-advocacy/ pgsql-advocacy@postgresql.org] mailing list. We are always in need of booth volunteers, speakers, case study writers and bloggers.<br />
<br />
If you think you've found a bug in PostgreSQL and are new to our project, we suggest you ask about it on the [https://www.postgresql.org/list/pgsql-general/ pgsql-general] list first; then read our [http://www.postgresql.org/docs/current/static/bug-reporting.html Bug Submission Guidelines] and submit it via our [http://www.postgresql.org/support/submitbug Bug Reporting form].<br />
<br />
We also have User Group mailing lists, language-specific lists and some other specific projects with their own communities. You can find a comprehensive list of these at: [http://www.postgresql.org/community/lists/ http://www.postgresql.org/community/lists/]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Working_with_Git&diff=37459Working with Git2023-01-27T14:23:33Z<p>Adunstan: Add section on git hooks, with example of use with pgindent</p>
<hr />
<div>This page collects various wisdom on working with the [https://git.postgresql.org/ PostgreSQL Git repository]. There are also [[Other Git Repositories]] you might work with, most notably the official [https://github.com/postgres Github mirror] which you might fork on that site.<br />
<br />
==Getting Started==<br />
<br />
A simple way to get started might look like this:<br />
<br />
git clone https://git.postgresql.org/git/postgresql.git<br />
cd postgresql<br />
git checkout -b my-cool-feature<br />
$EDITOR<br />
git commit -a<br />
git diff --patience master my-cool-feature > ../my-cool-feature.patch<br />
<br />
Note that <code>git checkout -b my-cool-feature</code> creates a new branch and checks it out at the same time. Typically, you would develop each feature in a separate branch.<br />
<br />
See the documentation and tutorials at https://git-scm.com/doc/ext for a more detailed Git introduction. For an even more detailed lesson, check out [https://git-scm.com/book/en/v2 the Pro Git book] and maybe get a hardcopy to help support the site.<br />
<br />
You may wish to put some of the entries listed at [[GitExclude]] into your <code>.git/info/exclude</code> file.<br />
Now that the master repository has been converted to git, the standard<br />
.gitignore files should cover all build products, so you don't need<br />
most of what is listed in that reference. You might still want to<br />
exclude <code>*~</code>, <code>tags</code>, and <code>/cscope.out</code>, though.<br />
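A minimal <code>.git/info/exclude</code> along those lines might be just:

```
# personal excludes; unlike .gitignore, this file is not committed
*~
tags
/cscope.out
```

Entries here behave like <code>.gitignore</code> patterns but apply only to your local checkout.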
<br />
=== Keeping your local master branch synchronized ===<br />
<br />
First, add the origin as a remote. You only need to do this once:<br />
<br />
git remote add origin https://git.postgresql.org/git/postgresql.git<br />
<br />
Next, fetch any new commits from the main repository:<br />
<br />
git fetch origin master<br />
<br />
Then merge them into your local master branch:<br />
<br />
git merge FETCH_HEAD<br />
<br />
Now check that it still compiles, passes regression, etc. Make sure you've<br />
invoked ./configure, and then:<br />
<br />
make check<br />
make maintainer-clean<br />
<br />
Assuming all that's good, do a dry run.<br />
<br />
git push --dry-run origin master<br />
<br />
If that's happy, push it out to your public repository.<br />
<br />
git push origin master<br />
<br />
If not, fix any merge failures, do another dry run, and push.<br />
<br />
=== Tracking Other Branches ===<br />
<br />
Let's say you're happy tracking master, but you'd really like to track one of the other branches at git.postgresql.org:<br />
<br />
git remote add super-fun-branch https://git.postgresql.org/super-fun-branch.git<br />
git fetch super-fun-branch<br />
git checkout super-fun-branch #this stages your remote branch for a local checkout<br />
git checkout -b super-fun-branch-name #the name can be whatever you choose<br />
<br />
Now you have a local branch within your local git repo tracking a different branch's history. Most importantly, you can now push to that repo if you have to without making an explicit clone to track the history. It's pretty much impossible not to share some common history with the master branch.<br />
<br />
=== Using Back Branches ===<br />
<br />
Since the git repository contains branches for each of the major versions of PostgreSQL, it's easy to work on the latest code from an older version instead of the current one. Here's how you might list the possibilities and checkout an older version:<br />
<br />
git branch -r<br />
git checkout -b REL_15_STABLE origin/REL_15_STABLE<br />
<br />
Note that if you've already checked out and used a later version, you might need to clean up some of the files left behind by it. It's suggested to run:<br />
<br />
make maintainer-clean<br />
<br />
to get rid of as many of those as possible. Even then, you might need to delete some leftover files before git will allow you to do the checkout (src/interfaces/ecpg/preproc/preproc.y can be a problem with the specific example above).<br />
<br />
=== Testing a patch ===<br />
<br />
This is a typical setup to review a patch text file, as typically sent by e-mail:<br />
<br />
git checkout -b feature-to-review<br />
patch -p1 < feature.patch<br />
<br />
If the patch fails to apply, there will be file.rej files left behind showing the part that didn't apply. If your directory tree is clean of build information, you can easily find these later using:<br />
<br />
git status<br />
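If the tree isn't clean, a plain <code>find</code> will locate the reject files regardless of what <code>git status</code> reports:

```shell
# list leftover reject files anywhere under the current directory
find . -name '*.rej'
```

Delete them once you've resolved the corresponding hunks by hand, so they don't get picked up by a later commit.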
<br />
=== Patch cleanup ===<br />
<br />
Patch diff submission works best when the author does a round of self-review of the actual patch--not just the code, but the physical diff file produced. [[Creating Clean Patches]] covers practices commonly used to produce better patch diff output.<br />
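A quick self-review pass might look like this (the branch name reuses the hypothetical <code>my-cool-feature</code> example from Getting Started):

```shell
# write the patch, then read the diff file itself, not just the code:
# look for unrelated hunks, whitespace churn and leftover debugging output
git diff master my-cool-feature > ../my-cool-feature.patch

# have git flag whitespace errors (trailing spaces, space-before-tab)
git diff --check master my-cool-feature
```

<code>git diff --check</code> exits non-zero when it finds whitespace problems, so it is also convenient in scripts or hooks.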
<br />
==Publishing Your Work==<br />
<br />
If you develop a feature over a longer period of time, you want to allow for intermediate review. The traditional approach to that has been emailing huge patches around. The more advanced approach that we want to try (see also Peter Eisentraut's [http://petereisentraut.blogspot.com/2008/02/on-patch-review.html blog entry]) is that you push your Git branches to a private area on <code>git.postgresql.org</code>, where others can pull your work, operate on it using the familiar Git tools, and perhaps even send you improvements as Git-formatted patches. See [https://git.postgresql.org/adm/help the git.postgresql.org site] for instructions on how to sign up, and how to use the repository. You may need to eventually create a patch via e-mail as part of officially [[Submitting a Patch]].<br />
<br />
==Pushing New Branches==<br />
<br />
If you create a new branch, generally for a new feature test, you'll need to push it to git.postgresql.org. <br />
<br />
git push origin new_feature_branch<br />
<br />
Note that if you start from a completely blank repository, not even the "master" branch will exist, and it too will need to be pushed.<br />
<br />
If you ''are'' working with the postgresql core code, do NOT casually make up your own branches and push them, without clearing it on the pgsql-hackers list first. Generally, you want to use your private repo area instead.<br />
<br />
==Removing a Branch==<br />
<br />
Once your feature has been committed to the PostgreSQL repository, you can usually remove your local feature branch. This works as follows:<br />
<br />
# switch to a different branch<br />
git checkout master<br />
git branch -D my-cool-feature<br />
<br />
==Using git hooks==<br />
<br />
Git hooks are scripts that run when certain events, such as a commit or a push, happen. They are placed in your <code>.git/hooks</code> directory. Here is a sample script that checks at commit time whether your code has been properly indented, and optionally re-indents it for you. To use it, place it in <code>.git/hooks/pre-commit</code>.<br />
<br />
<syntaxhighlight lang="shell"><br />
#!/bin/sh<br />
set -u<br />
: ${PGAUTOINDENT:=no} <br />
<br />
# the branch we're committing to<br />
branch=$(git rev-parse --abbrev-ref HEAD)<br />
# the files in the commit<br />
files=$(git diff --cached --name-only --diff-filter=ACMR)<br />
<br />
check_indent () {<br />
# no need to filter files - pgindent ignores everything that isn't a<br />
# .c or .h file<br />
<br />
src/tools/pgindent/pgindent --silent-diff $files && return 0<br />
exec 2>&1<br />
if [ "$PGAUTOINDENT" = yes ] ; then<br />
echo "Running pgindent on changed files"<br />
src/tools/pgindent/pgindent $files<br />
echo "Commit abandoned. Rerun git commit to adopt pgindent changes"<br />
else<br />
echo 'You need a pgindent run, e.g:'<br />
echo -n 'src/tools/pgindent/pgindent '<br />
echo '`git diff --name-only --diff-filter=ACMR`'<br />
fi<br />
exit 1<br />
}<br />
<br />
# nothing to do if there are no files<br />
test -z "$files" && exit 0<br />
check_indent<br />
</syntaxhighlight><br />
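One easy mistake: git silently ignores hook scripts that are not executable, so after installing the script, set the execute bit:

```shell
# git only runs .git/hooks/pre-commit if it is executable
chmod +x .git/hooks/pre-commit
```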
<br />
==Working with the users/foo/postgres.git==<br />
<br />
One option while requesting a project at git.postgresql.org is to have a clone of the main postgresql repository.<br />
<br />
That is a very nice feature, but how do you keep it in sync with the upstream code?<br />
<br />
One method is to create a git clone in your own repository and add a new remote to handle the syncing:<br />
<br />
# clone your repos<br />
git clone ssh://git@git.postgresql.org/users/foo/postgres.git my_postgres<br />
<br />
# add a new remote<br />
git remote add pgmaster https://git.postgresql.org/git/postgresql.git<br />
<br />
# track some old versions<br />
git checkout -b REL8_3_STABLE origin/REL8_3_STABLE<br />
git checkout -b REL8_4_STABLE origin/REL8_4_STABLE<br />
<br />
# change the remote of master and our old versions tracked<br />
git config branch.REL8_3_STABLE.remote pgmaster<br />
git config branch.REL8_4_STABLE.remote pgmaster<br />
git config branch.master.remote pgmaster<br />
<br />
# pull from postgres official git for each branch<br />
# and finally push to origin<br />
git checkout master<br />
git pull<br />
git push origin<br />
git checkout REL8_3_STABLE<br />
git pull<br />
git push origin<br />
git checkout REL8_4_STABLE<br />
git pull<br />
git push origin<br />
<br />
<br />
This way, each branch of PostgreSQL is easy to keep in sync: pull from the official repository and push to your own.<br />
<br />
Create your own branch and work as usual. Users who have a local clone of the postgresql.git can add your branch in their repository and happily merge, just as you do.<br />
<br />
==Using the Web Interface==<br />
<br />
Try the web interface at https://git.postgresql.org/. It offers browsing, "blame" functionality, snapshots, and other advanced features, and it is much faster than CVSweb. Even if you don't care for Git or version control systems, you will probably enjoy the web interface.<br />
<br />
==RSS Feeds==<br />
<br />
The Git service provides RSS feeds that report about commits to the repositories. Some people may find this to be an alternative to subscribing to the pgsql-committers mailing list. The URL for the RSS feed from the PostgreSQL repository is https://git.postgresql.org/gitweb/?p=postgresql.git;a=rss. Other options are available; they can be found via the [https://git.postgresql.org/ home page] of the web interface.<br />
<br />
==PostgreSQL Style==<br />
<br />
The PostgreSQL source uses 4-character tabs, which makes the output from <code>git diff</code> look odd. You can fix that by putting this into your <code>.git/config</code> file:<br />
<br />
[core]<br />
pager = less -x4<br />
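The same setting can be made from the command line instead of editing the file (run inside the repository; add <code>--global</code> to apply it to all your repositories):

```shell
# equivalent to adding "pager = less -x4" under [core] in .git/config
git config core.pager 'less -x4'
```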
<br />
==Continuing the "rsync the CVSROOT" workflow==<br />
<br />
Aidan van Dyk {{messageLink|20090602162347.GF23972@yugib.highrise.ca|published a nice tutorial}} on how to keep several branches using a single copy of historical objects. This is roughly equivalent to keeping several checkouts of a rsync'ed copy of CVSROOT, which is what some committers were used to doing with CVS.<br />
<br />
<br />
[[Category:Git]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=36740PostgreSQL Buildfarm Howto2022-02-15T15:04:50Z<p>Adunstan: /* Testing Additional Software more tiny edits */</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download the software from [http://buildfarm.postgresql.org/downloads the buildfarm server].<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of config file, default build-farm.conf<br />
* --nosend says don't send the results to the server<br />
* --nostatus says don't update the status files<br />
* --force says run the build even if it's not needed<br />
* --verbose[=n] says display information. verbosity level 1 (default if --verbose is specified) shows a line for each step as it starts. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test is short for --nosend --no-status --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef analysis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is simply to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700MB during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1GB by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100MB per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>http://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
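<br />
For illustration, after adjusting those variables the relevant part of %conf might look something like this (the values here are only examples, not recommendations - choose what suits your platform):<br />
 make => 'gmake',<br />
 config_opts => [qw(--enable-cassert --enable-debug --with-perl)],<br />
 config_env => { CC => 'ccache gcc' },<br />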
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
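<br />
For illustration, the relevant config sections might end up looking something like this (the address is made up, and I believe the alert intervals are expressed in hours - see the comments in the sample config file):<br />
 alerts => {<br />
     HEAD => { alert_after => 72, alert_every => 24 },<br />
 },<br />
 mail_events => {<br />
     all => [],<br />
     fail => [],<br />
     change => [],<br />
     green => ['me@example.com'],<br />
 },<br />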
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
<br />
If you are using an https URL for the buildfarm server (which you should be!), make<br />
sure that LWP::Protocol::https and Mozilla::CA are installed as well; the above test<br />
does not catch these requirements.<br />
<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the command line,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
The branches that are built are controlled by the <code>branches_to_build</code> setting in the <code>global</code> section of the config file. The sample config file's setting is 'ALL'.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n, or a fixed list of branches. In the last case you will probably need to adjust the list whenever the PostgreSQL developers start a new branch or declare an old branch to be at End Of Life.<br />
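<br />
For example, any one of these settings would work:<br />
 branches_to_build => 'ALL',<br />
 branches_to_build => 'HEAD_PLUS_LATEST2',<br />
 branches_to_build => [qw(HEAD REL9_4_STABLE)],<br />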
<br />
=== Register your new buildfarm member and subscribe to the mailing list. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at https://buildfarm.postgresql.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the buildfarm-members mailing list at<br />
https://lists.postgresql.org<br />
This is a low-traffic list for owners of buildfarm members, and every buildfarm owner should be subscribed.<br />
<br />
=== Status Mailing Lists ===<br />
<br />
There are also two mailing lists that report status from all builds, not just your own animals. This is useful for developers who want to be notified of events rather than having to monitor the server's dashboard.<br />
<br />
* <b><code>buildfarm-status-failures</code></b>, which gets an email any time a buildfarm animal reports a failed run.<br />
* <b><code>buildfarm-status-green-chgs</code></b>, which gets an email any time the status of a buildfarm animal changes to or from green (i.e. success). This is the status list most people find useful.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm client (but not Postgres itself)<br />
on the buildfarm members mailing list.<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
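<br />
In config file terms, that build_env stanza looks something like this:<br />
 build_env => {<br />
     CYGWIN => 'server',<br />
     MAX_CONNECTIONS => '3',<br />
 },<br />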
<br />
==== MinGW/MSys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl and a native Windows perl installed - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using the native perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, plus whatever else in the config will differ from the first member. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
<br />
=== Running in Parallel ===<br />
<br />
If you run a single animal, you can run all the branches in parallel just by changing <code>run_branches.pl</code>'s <code>--run-all</code> to <code>--run-parallel</code>. This will launch each branch's run, spaced out by 60 seconds from launch to launch. <br />
<br />
The long story: parallelism is controlled by a number of configuration parameters in the <code>global</code> section of the config file. The first is <code>parallel_lockdir</code>. By default this is the <code>global_lock_dir</code> which in turn defaults to the <code>build_root</code>. This directory is where <code>run_branches.pl</code> puts a lock file for each running branch. The second is <code>max_parallel</code>. The script will launch a new branch as long as the number of live locks is less than this number. The default is 10. Lastly the setting <code>parallel_stagger</code> determines how long the script will wait before starting a new branch, unless one finishes in the meantime. The default is 60 seconds.<br />
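<br />
Putting that together, the defaults correspond to a global section containing something like:<br />
 max_parallel => 10,        # upper limit on concurrently building branches<br />
 parallel_stagger => 60,    # seconds between launching branches<br />
 # parallel_lockdir defaults to global_lock_dir, which defaults to the build root<br />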
<br />
If you want to run multiple animals and use parallelism between them, the best way is to use a separate <code>build_root</code> for each animal. Then don't set the <code>global_lock_dir</code> for each animal, but do set the <code>parallel_lockdir</code> for each animal to point to the same directory, probably the <code>build_root</code> of one of the animals. Then you could have a crontab something like this:<br />
<br />
2-59/15 * * * * cd curly && run_branches.pl --run-parallel --config=curly.conf<br />
7-59/15 * * * * cd larry && run_branches.pl --run-parallel --config=larry.conf<br />
12-59/15 * * * * cd moe && run_branches.pl --run-parallel --config=moe.conf<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following forces a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
=== Testing Additional Software ===<br />
<br />
In addition to testing core Postgres code, you can test add-on software such as extensions and Foreign Data Wrappers. To do that you need to create a module file in the PGBuild/Modules directory. Say you're going to test a Foreign Data Wrapper called UltraCoolFDW. Copy the Skeleton.pm file in that directory to UltraCoolFDW.pm. Inside, change the package name to "PGBuild::Modules::UltraCoolFDW".<br />
<br />
Then add your new module to the "modules" section in your config file.<br />
<br />
At this stage your new module will register and run. It just won't do anything, but if you run in verbose mode you will see the traces of its subroutines being called.<br />
<br />
To make it do some things you need to fill in a bit of code - but not very much. The most important pieces are the <code>setup()</code>, <code>checkout()</code>, <code>setup_target()</code>, <code>build()</code>, <code>install()</code>, <code>installcheck()</code> and <code>cleanup()</code> subroutines.<br />
<br />
In <code>setup()</code> you normally need to create an SCM object to handle checking out your code, and stash info on where it's going to be built. The extra code for UltraCoolFDW will look something like this, just before the <code>register_module_hooks()</code> call:<br />
<syntaxhighlight lang="perl"><br />
my $scmconf = {<br />
scm => 'git',<br />
scmrepo => 'git://my.gitrepo.org/myname/ultracoolfdw.git',<br />
git_reference => undef,<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
build_root => $self->{buildroot},<br />
};<br />
<br />
$self->{scm} = PGBuild::SCM->new($scmconf, 'ultracoolfdw');<br />
my $where = $self->{scm}->get_build_path();<br />
$self->{where} = $where;<br />
</syntaxhighlight><br />
<br />
You might only want to run this module on some branches. Say you only want to run it on 'HEAD' (our name for git master). You would put something like this at the top of the <code>setup()</code> function:<br />
<br />
<syntaxhighlight lang="perl"><br />
return unless $branch eq 'HEAD';<br />
</syntaxhighlight><br />
<br />
In <code>checkout()</code> you need to check the code out. Replace the <code>push()</code> line with lines like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $scmlog = $self->{scm}->checkout($self->{pgbranch});<br />
push(@$savescmlog,<br />
"------------- $MODULE checkout ----------------\n", @$scmlog);<br />
</syntaxhighlight><br />
<br />
This code works if your FDW code has branches that mirror the Postgres branches. If instead you have a single branch, say "main", that works for all Postgres branches, use that name instead of <code>$self->{pgbranch}</code>. The branch name "HEAD" can also be used: it will map to whatever the default branch is of your git repo.<br />
<br />
<code>setup_target()</code> normally just needs the addition of this line:<br />
<br />
<syntaxhighlight lang="perl"><br />
$self->{scm}->copy_source(undef);<br />
</syntaxhighlight><br />
<br />
These next functions all assume (correctly) that Postgres has been successfully built and installed in the standard place, i.e. "../inst" relative to your build directory.<br />
<br />
<code>build()</code> and <code>install()</code> are pretty similar. Essentially they simply invoke your code's Makefile to run these tasks. The code for <code>build</code> should look something like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $cmd = "PATH=../inst:$ENV{PATH} make USE_PGXS=1";<br />
my @makeout = run_log("cd $self->{where} && $cmd");<br />
my $status = $? >> 8;<br />
writelog("$MODULE-build", \@makeout);<br />
print "======== make log ===========\n", @makeout if ($verbose > 1);<br />
$status ||= check_make_log_warnings("$MODULE-build", $verbose)<br />
if $check_warnings;<br />
send_result("$MODULE-build", $status, \@makeout) if $status;<br />
</syntaxhighlight><br />
<br />
while the code for <code>install()</code> looks something like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $cmd = "PATH=../inst:$ENV{PATH} make USE_PGXS=1 install";<br />
my @log = run_log("cd $self->{where} && $cmd");<br />
my $status = $? >> 8;<br />
writelog("$MODULE-install", \@log);<br />
print "======== install log ===========\n", @log if ($verbose > 1);<br />
send_result("$MODULE-install", $status, \@log) if $status;<br />
</syntaxhighlight><br />
<br />
If you get a perl complaint about $MODULE being undefined, add a line like this near the top of your module, just after <code>use warnings;</code><br />
<br />
<syntaxhighlight lang="perl"><br />
(my $MODULE = __PACKAGE__) =~ s/PGBuild::Modules:://;<br />
</syntaxhighlight><br />
<br />
<code>installcheck()</code> is the most complicated subroutine. That's because in addition to running the installcheck procedure it needs to gather up all the log files, regression differences etc. Here's an example of the additional code needed in this subroutine:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $make = $self->{bfconf}->{make};<br />
print time_str(), "install-checking $MODULE\n" if $verbose;<br />
my $cmd = "$make USE_PGXS=1 USE_MODULE_DB=1 installcheck";<br />
my @log = run_log("cd $self->{where} && $cmd");<br />
my $log = PGBuild::Log->new("$MODULE-installcheck-$locale");<br />
my $status = $? >> 8;<br />
my $installdir = "$self->{buildroot}/$self->{pgbranch}/inst";<br />
my @logfiles = ("$self->{where}/regression.diffs", "$installdir/logfile");<br />
if ($status)<br />
{<br />
$log->add_log($_) foreach (@logfiles);<br />
}<br />
push(@log, $log->log_string);<br />
writelog("$MODULE-installcheck-$locale", \@log);<br />
print "======== installcheck ($locale) log ===========\n", @log<br />
if ($verbose > 1);<br />
send_result("$MODULE-installcheck-$locale", $status, \@log) if $status;<br />
</syntaxhighlight><br />
<br />
Finally in <code>cleanup()</code>, add any cleanup required. Usually this can just be the removal of the build directory, something like:<br />
<br />
<syntaxhighlight lang="perl"><br />
rmtree($self->{where});<br />
</syntaxhighlight><br />
<br />
Remove any reference to unneeded subroutines in the <code>$hooks</code>, and you are done.<br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=36739PostgreSQL Buildfarm Howto2022-02-15T14:54:53Z<p>Adunstan: /* Testing Additional Software typos etc. */</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download the client from [http://buildfarm.postgresql.org/downloads the buildfarm server].<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc.<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of config file, default build-farm.conf<br />
* --nosend says don't send the results to the server<br />
* --nostatus says don't update the status files<br />
* --force says run the build even if it's not needed<br />
* --verbose[=n] says display information. verbosity level 1 (default if --verbose is specified) shows a line for each step as it starts. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test is short for --nosend --no-status --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef anaylsis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is simply to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
<br />
If you are using an https URL for the buildfarm server (which you should be!), make<br />
sure that LWP::Protocol::https and Mozilla::CA are installed as well; the above test<br />
does not catch these requirements.<br />
<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', located in a build-specific directory of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the commandline,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
The branches that are built are controlled by the <code>branches_to_build</code> setting in the <code>global</code> section of the config file. The sample config file's setting is 'ALL'.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n, or a fixed list of branches. In the last case you will probably need to adjust the list whenever the PostgreSQL developers start a new branch or declare an old branch to be at End Of Life.<br />
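As a sketch, the setting might look like one of these (branch names and the exact spelling of a fixed list are illustrative):<br />
<syntaxhighlight lang="perl"><br />
branches_to_build => 'ALL',                 # every branch the project maintains<br />
# branches_to_build => 'HEAD_PLUS_LATEST2', # HEAD plus the two newest stable branches<br />
# branches_to_build => [qw(HEAD REL_16_STABLE REL_15_STABLE)],  # fixed list; needs manual upkeep<br />
</syntaxhighlight><br />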
<br />
=== Register your new buildfarm member and subscribe to the mailing list. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at https://buildfarm.postgresql.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the buildfarm-members mailing list at<br />
https://lists.postgresql.org<br />
This is a low-traffic list for owners of buildfarm members, and every buildfarm owner should be subscribed.<br />
<br />
=== Status Mailing Lists ===<br />
<br />
There are also two mailing lists that report status from all builds, not just your own animals. This is useful for developers who want to be notified of events rather than having to monitor the server's dashboard.<br />
<br />
* <b><code>buildfarm-status-failures</code></b>, which gets an email any time a buildfarm animal reports a failed run.<br />
* <b><code>buildfarm-status-green-chgs</code></b>, which gets an email any time the status of a buildfarm animal changes to or from green (i.e. success). This is the status list most people find useful.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm client (but not Postgres itself)<br />
on the buildfarm members mailing list.<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
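Those two settings go in the build_env stanza of the config file, something like this:<br />
<syntaxhighlight lang="perl"><br />
build_env => {<br />
    MAX_CONNECTIONS => 3,<br />
    CYGWIN => 'server',<br />
},<br />
</syntaxhighlight><br />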
<br />
==== MinGW/MSys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl installed, and a native Windows perl - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using the native Perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the Native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever in the config will be different from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
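For example, the second member's config file might differ from the first only in lines like these (the animal name is made up, and setting CCACHE_DIR in build_env is just one way of keeping the caches apart):<br />
<syntaxhighlight lang="perl"><br />
animal => 'larry',<br />
secret => 'not-the-first-animals-secret',<br />
build_env => {<br />
    # ccache honours this environment variable, so each member<br />
    # gets its own cache directory<br />
    CCACHE_DIR => "$ENV{HOME}/.ccache-larry",<br />
},<br />
</syntaxhighlight><br />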
<br />
=== Running in Parallel ===<br />
<br />
If you run a single animal, you can run all the branches in parallel just by changing <code>run_branches.pl</code>'s <code>--run-all</code> to <code>--run-parallel</code>. This will launch each branch's run, spaced out by 60 seconds from launch to launch. <br />
<br />
The long story: parallelism is controlled by a number of configuration parameters in the <code>global</code> section of the config file. The first is <code>parallel_lockdir</code>. By default this is the <code>global_lock_dir</code> which in turn defaults to the <code>build_root</code>. This directory is where <code>run_branches.pl</code> puts a lock file for each running branch. The second is <code>max_parallel</code>. The script will launch a new branch as long as the number of live locks is less than this number. The default is 10. Lastly the setting <code>parallel_stagger</code> determines how long the script will wait before starting a new branch, unless one finishes in the meantime. The default is 60 seconds.<br />
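Putting those together, the global section might spell out the defaults explicitly, something like this (the lock directory path is illustrative):<br />
<syntaxhighlight lang="perl"><br />
parallel_lockdir => '/home/bf/buildroot',  # defaults to global_lock_dir, then build_root<br />
max_parallel     => 10,                    # maximum concurrently running branches<br />
parallel_stagger => 60,                    # seconds between branch launches<br />
</syntaxhighlight><br />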
<br />
If you want to run multiple animals and use parallelism between them the best way is to use a separate <code>build_root</code> for each animal. Then don't set the <code>global_lock_dir</code> for each animal, but do set the <code>parallel_lockdir</code> for each animal to point to the same directory, probably the <code>build_root</code> of one of the animals. Then you could have a crontab something like this:<br />
<br />
2-59/15 * * * * cd curly && run_branches.pl --run-parallel --config=curly.conf<br />
7-59/15 * * * * cd larry && run_branches.pl --run-parallel --config=larry.conf<br />
12-59/15 * * * * cd moe && run_branches.pl --run-parallel --config=moe.conf<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
=== Testing Additional Software ===<br />
<br />
In addition to testing core Postgres code, you can test addon software such as Extensions and Foreign Data Wrappers. To do that you need to create a Module file in the PGBuild/Modules directory. Say you're going to test a Foreign Data Wrapper called UltraCoolFDW. Copy the Skeleton.pm file in that directory to UltraCoolFDW.pm. Inside, change the package name to "PGBuild::Modules::UltraCoolFDW".<br />
<br />
Then add your new module to the "modules" section in your config file.<br />
<br />
At this stage your new module will register and run. It just won't do anything, but if you run in verbose mode you will see the traces of its subroutines being called.<br />
<br />
To make it do some things you need to fill in a bit of code, but not very much. The most important are the <code>setup()</code> subroutine, and the <code>checkout()</code>, <code>setup_target()</code>, <code>build()</code>, <code>install()</code>, <code>installcheck()</code> and <code>cleanup()</code> subroutines.<br />
<br />
In <code>setup()</code> you normally need to create an SCM object to handle checking out your code, and stash information about where it's going to be built. The extra code for UltraCoolFDW will look something like this, just before the <code>register_module_hooks()</code> call:<br />
<syntaxhighlight lang="perl"><br />
my $scmconf = {<br />
scm => 'git',<br />
scmrepo => 'git://my.gitrepo.org/myname/ultracoolfdw.git',<br />
git_reference => undef,<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
build_root => $self->{buildroot},<br />
};<br />
<br />
$self->{scm} = PGBuild::SCM->new($scmconf, 'ultracoolfdw');<br />
my $where = $self->{scm}->get_build_path();<br />
$self->{where} = $where;<br />
</syntaxhighlight><br />
<br />
You might only want to run this module on some branches. Say you only want to run it on 'HEAD' (our name for git master). You would put something like this at the top of the setup function:<br />
<br />
<syntaxhighlight lang="perl"><br />
return unless $branch eq 'HEAD';<br />
</syntaxhighlight><br />
<br />
In <code>checkout()</code> you need to check the code out. Replace the <code>push()</code> line with lines like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $scmlog = $self->{scm}->checkout($self->{pgbranch});<br />
push(@$savescmlog,<br />
"------------- $MODULE checkout ----------------\n", @$scmlog);<br />
</syntaxhighlight><br />
<br />
This code works if your FDW code has branches that mirror the Postgres branches. If instead you have a single branch, say "main", that works for all Postgres branches, use that name instead of <code>$self->{pgbranch}</code>.<br />
<br />
<code>setup_target()</code> normally just needs the addition of this line:<br />
<br />
<syntaxhighlight lang="perl"><br />
$self->{scm}->copy_source(undef);<br />
</syntaxhighlight><br />
<br />
These next functions all assume (correctly) that Postgres has been successfully built and installed in the standard place, i.e. "../inst" relative to your build directory.<br />
<br />
<code>build()</code> and <code>install()</code> are pretty similar. Essentially they simply invoke your code's Makefile to run these tasks. The code for <code>build</code> should look something like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $cmd = "PATH=../inst:$ENV{PATH} make USE_PGXS=1";<br />
my @makeout = run_log("cd $self->{where} && $cmd");<br />
my $status = $? >> 8;<br />
writelog("$MODULE-build", \@makeout);<br />
print "======== make log ===========\n", @makeout if ($verbose > 1);<br />
$status ||= check_make_log_warnings("$MODULE-build", $verbose)<br />
if $check_warnings;<br />
send_result("$MODULE-build", $status, \@makeout) if $status;<br />
</syntaxhighlight><br />
<br />
while the code for <code>install()</code> looks something like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $cmd = "PATH=../inst:$ENV{PATH} make USE_PGXS=1 install";<br />
my @log = run_log("cd $self->{where} && $cmd");<br />
my $status = $? >> 8;<br />
writelog("$MODULE-install", \@log);<br />
print "======== install log ===========\n", @log if ($verbose > 1);<br />
send_result("$MODULE-install", $status, \@log) if $status;<br />
</syntaxhighlight><br />
<br />
If you get a perl complaint about $MODULE being undefined, add a line like this near the top of your module, just after <code>use warnings;</code><br />
<br />
<syntaxhighlight lang="perl"><br />
(my $MODULE = __PACKAGE__) =~ s/PGBuild::Modules:://;<br />
</syntaxhighlight><br />
<br />
<code>installcheck()</code> is the most complicated subroutine. That's because in addition to running the installcheck procedure it needs to gather up all the log files, regression differences etc. Here's an example of the additional code needed in this subroutine:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $make = $self->{bfconf}->{make};<br />
print time_str(), "install-checking $MODULE\n" if $verbose;<br />
my $cmd = "$make USE_PGXS=1 USE_MODULE_DB=1 installcheck";<br />
my @log = run_log("cd $self->{where} && $cmd");<br />
my $log = PGBuild::Log->new("$MODULE-installcheck-$locale");<br />
my $status = $? >> 8;<br />
my $installdir = "$self->{buildroot}/$self->{pgbranch}/inst";<br />
my @logfiles = ("$self->{where}/regression.diffs", "$installdir/logfile");<br />
if ($status)<br />
{<br />
$log->add_log($_) foreach (@logfiles);<br />
}<br />
push(@log, $log->log_string);<br />
writelog("$MODULE-installcheck-$locale", \@log);<br />
print "======== installcheck ($locale) log ===========\n", @log<br />
if ($verbose > 1);<br />
send_result("$MODULE-installcheck-$locale", $status, \@log) if $status;<br />
</syntaxhighlight><br />
<br />
Finally in <code>cleanup()</code>, add any cleanup required. Usually this can just be the removal of the build directory, something like:<br />
<br />
<syntaxhighlight lang="perl"><br />
rmtree($self->{where});<br />
</syntaxhighlight><br />
<br />
Remove any reference to unneeded subroutines in the <code>$hooks</code>, and you are done.<br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=36738PostgreSQL Buildfarm Howto2022-02-14T20:59:56Z<p>Adunstan: Document use of modules to test extensions etc.</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download from [http://buildfarm.postgresql.org/downloads the buildfarm server].<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of config file, default build-farm.conf<br />
* --nosend says don't send the results to the server<br />
* --nostatus says don't update the status files<br />
* --force says run the build even if it's not needed<br />
* --verbose[=n] says display information. verbosity level 1 (default if --verbose is specified) shows a line for each step as it starts. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test is short for --nosend --nostatus --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef analysis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
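For instance, a Unix member's settings might look something like this (the option values are illustrative; config_opts as a list of configure switches follows the sample config file):<br />
<syntaxhighlight lang="perl"><br />
make => 'gmake',    # or just 'make' where GNU make is the default<br />
config_opts => [<br />
    qw(--enable-cassert --enable-debug --with-perl --with-libxml)<br />
],<br />
</syntaxhighlight><br />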
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
<br />
If you are using an https URL for the buildfarm server (which you should be!), make<br />
sure that LWP::Protocol::https and Mozilla::CA are installed as well; the above test<br />
does not catch these requirements.<br />
<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the commandline,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
The branches that are built are controlled by the <code>branches_to_build</code> setting in the <code>global</code> section of the config file. The sample config file's setting is 'ALL'.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n, or a fixed list of branches. In the last case you will probably need to adjust the list whenever the PostgreSQL developers start a new branch or declare an old branch to be at End Of Life.<br />
<br />
=== Register your new buildfarm member and subscribe to the mailing list. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at https://buildfarm.postgresql.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the buildfarm-members mailing list at<br />
https://lists.postgresql.org<br />
This is a low-traffic list for owners of buildfarm members, and every buildfarm owner should be subscribed.<br />
<br />
=== Status Mailing Lists ===<br />
<br />
There are also two mailing lists that report status from all builds, not just your own animals. This is useful for developers who want to be notified of events rather than having to monitor the server's dashboard.<br />
<br />
* <b><code>buildfarm-status-failures</code></b>, which gets an email any time a buildfarm animal reports a failed run.<br />
* <b><code>buildfarm-status-green-chgs</code></b>, which gets an email any time the status of a buildfarm animal changes to or from green (i.e. success). This is the status list most people find useful.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm client (but not Postgres itself)<br />
on the buildfarm members mailing list.<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
<br />
==== MinGW/Msys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl installed, and a native Windows perl - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using native Perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the Native perl ahead of the MSys DTK perl.<br />
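For example, in the <code>build_env</code> stanza (the perl path here is hypothetical; use the location of your own native perl):<br />
<syntaxhighlight lang="perl"><br />
# put native Windows perl ahead of the MSys DTK perl<br />
build_env => {<br />
    PATH => '/c/perl/bin:/usr/local/bin:/usr/bin:/bin',<br />
},<br />
</syntaxhighlight><br />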
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specifically for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever else in the config needs to differ from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
<br />
=== Running in Parallel ===<br />
<br />
If you run a single animal, you can run all the branches in parallel just by changing <code>run_branches.pl</code>'s <code>--run-all</code> to <code>--run-parallel</code>. This will launch each branch's run, spaced out by 60 seconds from launch to launch. <br />
<br />
The long story: parallelism is controlled by a number of configuration parameters in the <code>global</code> section of the config file. The first is <code>parallel_lockdir</code>. By default this is the <code>global_lock_dir</code> which in turn defaults to the <code>build_root</code>. This directory is where <code>run_branches.pl</code> puts a lock file for each running branch. The second is <code>max_parallel</code>. The script will launch a new branch as long as the number of live locks is less than this number. The default is 10. Lastly the setting <code>parallel_stagger</code> determines how long the script will wait before starting a new branch, unless one finishes in the meantime. The default is 60 seconds.<br />
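Putting those together, the <code>global</code> section of the config file might contain something like this (illustrative values; the lock directory path is hypothetical):<br />
<syntaxhighlight lang="perl"><br />
# run up to 4 branches at once, staggered by 30 seconds,<br />
# with all lock files kept in one directory<br />
parallel_lockdir => '/home/bf/buildroot',<br />
max_parallel     => 4,<br />
parallel_stagger => 30,<br />
</syntaxhighlight><br />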
<br />
If you want to run multiple animals and use parallelism between them, the best way is to use a separate <code>build_root</code> for each animal. Then don't set the <code>global_lock_dir</code> for each animal, but do set the <code>parallel_lockdir</code> for each animal to point to the same directory, probably the <code>build_root</code> of one of the animals. Then you could have a crontab something like this:<br />
<br />
2-59/15 * * * * cd curly && run_branches.pl --run-parallel --config=curly.conf<br />
7-59/15 * * * * cd larry && run_branches.pl --run-parallel --config=larry.conf<br />
12-59/15 * * * * cd moe && run_branches.pl --run-parallel --config=moe.conf<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
=== Testing Additional Software ===<br />
<br />
In addition to testing core Postgres code, you can test add-on software such as Extensions and Foreign Data Wrappers. To do that you need to create a Module file in the PGBuild/Modules directory. Say you're going to test a Foreign Data Wrapper called UltraCoolFDW. Copy the Skeleton.pm file in that directory to UltraCoolFDW.pm. Inside, change the package name to "PGBuild::Modules::UltraCoolFDW".<br />
<br />
Then add your new module to the "modules" section in your config file.<br />
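Assuming your config follows the sample file's layout, that can be as simple as:<br />
<syntaxhighlight lang="perl"><br />
# enable the new module (add it to any modules already listed)<br />
modules => [qw(UltraCoolFDW)],<br />
</syntaxhighlight><br />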
<br />
At this stage your new module will register and run. It just won't do anything, but if you run in verbose mode you will see the traces of its subroutines being called.<br />
<br />
To make it do some things you need to fill in a bit of code, but not very much. The most important are the <code>setup()</code>, <code>checkout()</code>, <code>setup_target()</code>, <code>build()</code>, <code>install()</code>, <code>installcheck()</code> and <code>cleanup()</code> subroutines.<br />
<br />
In <code>setup()</code> you normally need to create an SCM object to handle checking out your code, and stash information about where it's going to be built. The code for UltraCoolFDW will look something like this, just before the <code>register_module_hooks()</code> call:<br />
<syntaxhighlight lang="perl"><br />
my $scmconf = {<br />
scm => 'git',<br />
scmrepo => 'git://my.gitrepo.org/myname/ultracoolfdw.git',<br />
git_reference => undef,<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
build_root => $self->{buildroot},<br />
};<br />
<br />
$self->{scm} = PGBuild::SCM->new($scmconf, 'ultracoolfdw');<br />
my $where = $self->{scm}->get_build_path();<br />
$self->{where} = $where;<br />
</syntaxhighlight><br />
<br />
You might only want to run this module on some branches. Say you only want to run it on 'HEAD' (our name for git master). You would put something like this at the top of the setup function:<br />
<br />
<syntaxhighlight lang="perl"><br />
return unless $branch eq 'HEAD';<br />
</syntaxhighlight><br />
<br />
In <code>checkout()</code> you need to check the code out. Replace the <code>push()</code> line with lines like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $scmlog = $self->{scm}->checkout($self->{pgbranch});<br />
push(@$savescmlog,<br />
"------------- $MODULE checkout ----------------\n", @$scmlog);<br />
</syntaxhighlight><br />
<br />
This code works if your FDW code has branches that mirror the Postgres branches. If instead you have a single branch, say "main", that works for all Postgres branches, use that name instead of <code>$self->{pgbranch}</code>.<br />
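In that case the checkout line would become something like:<br />
<syntaxhighlight lang="perl"><br />
# always check out the FDW's "main" branch, whatever the Postgres branch is<br />
my $scmlog = $self->{scm}->checkout('main');<br />
</syntaxhighlight><br />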
<br />
<code>setup_target()</code> normally just needs the addition of this line:<br />
<br />
<syntaxhighlight lang="perl"><br />
$self->{scm}->copy_source(undef);<br />
</syntaxhighlight><br />
<br />
These next functions all assume (correctly) that Postgres has been successfully built and installed in the standard place, i.e. "../inst" relative to your build directory.<br />
<br />
<code>build()</code> and <code>install()</code> are pretty similar. Essentially they simply invoke your code's Makefile to run these tasks. The code for <code>build</code> should look something like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $cmd = "PATH=../inst:$ENV{PATH} make USE_PGXS=1";<br />
my @makeout = run_log("cd $self->{where} && $cmd");<br />
my $status = $? >> 8;<br />
writelog("$MODULE-build", \@makeout);<br />
print "======== make log ===========\n", @makeout if ($verbose > 1);<br />
$status ||= check_make_log_warnings("$MODULE-build", $verbose)<br />
if $check_warnings;<br />
send_result("$MODULE-build", $status, \@makeout) if $status;<br />
</syntaxhighlight><br />
<br />
while the code for <code>install()</code> looks something like this:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $cmd = "PATH=../inst:$ENV{PATH} make USE_PGXS=1 install";<br />
my @log = run_log("cd $self->{where} && $cmd");<br />
my $status = $? >> 8;<br />
writelog("$MODULE-install", \@log);<br />
print "======== install log ===========\n", @log if ($verbose > 1);<br />
send_result("$MODULE-install", $status, \@log) if $status;<br />
</syntaxhighlight><br />
<br />
If you get a perl complaint about $MODULE being undefined, add a line like this near the top of your module, just after <code>use warnings;</code><br />
<br />
<syntaxhighlight lang="perl"><br />
(my $MODULE = __PACKAGE__) =~ s/PGBuild::Modules:://;<br />
</syntaxhighlight><br />
<br />
<code>installcheck()</code> is the most complicated subroutine. That's because in addition to running the installcheck procedure it needs to gather up all the log files, regression differences, etc. Here's an example of the additional code needed in this subroutine:<br />
<br />
<syntaxhighlight lang="perl"><br />
my $make = $self->{bfconf}->{make};<br />
print time_str(), "install-checking $MODULE\n" if $verbose;<br />
my $cmd = "$make USE_PGXS=1 USE_MODULE_DB=1 installcheck";<br />
my @log = run_log("cd $self->{where} && $cmd");<br />
my $log = PGBuild::Log->new("$MODULE-installcheck-$locale");<br />
my $status = $? >> 8;<br />
my $installdir = "$self->{buildroot}/$self->{pgbranch}/inst";<br />
my @logfiles = ("$self->{where}/regression.diffs", "$installdir/logfile");<br />
if ($status)<br />
{<br />
$log->add_log($_) foreach (@logfiles);<br />
}<br />
push(@log, $log->log_string);<br />
writelog("$MODULE-installcheck-$locale", \@log);<br />
print "======== installcheck ($locale) log ===========\n", @log<br />
if ($verbose > 1);<br />
send_result("$MODULE-installcheck-$locale", $status, \@log) if $status;<br />
</syntaxhighlight><br />
<br />
Finally in <code>cleanup()</code>, add any cleanup required. Usually this can just be the removal of the build directory, something like:<br />
<br />
<syntaxhighlight lang="perl"><br />
rmtree($self->{where});<br />
</syntaxhighlight><br />
<br />
Remove any reference to unneeded subroutines in the <code>$hooks</code>, and you are done.<br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&diff=36432PostgreSQL 14 Open Items2021-09-15T15:53:53Z<p>Adunstan: move non-14 item to "nothing to do" section</p>
<hr />
<div>== Open Issues ==<br />
<br />
'''NOTE''': Please place new open items at the end of the list.<br />
<br />
* [https://www.postgresql.org/message-id/20210817091420.u3vgqjh43lnpjntk%40alap3.anarazel.de pgstat_send_connstats() introduces unnecessary timestamp and UDP overhead]<br />
** Owner: Magnus Hagander<br />
<br />
* [https://www.postgresql.org/message-id/5bafa66ad529e11860339565c9e7c166%40oss.nttdata.com EXPLAIN VERBOSE fails on query with SEARCH BREADTH FIRST]<br />
** Owner: Peter Eisentraut<br />
<br />
== Decisions to Recheck Mid-Beta ==<br />
<br />
== Older bugs affecting stable branches ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com REINDEX on a system catalog can leave index with two index tuples whose heap TIDs match]<br />
** In other words, there is a rare case where the HOT invariant is violated. Same HOT chain is indexed twice due to confusion about which precise heap tuple should be indexed.<br />
** Unclear what the user impact is.<br />
** Affects all stable branches.<br />
<br />
* [https://www.postgresql.org/message-id/20201016135230.GA23633%40alvherre.pgsql CREATE TABLE .. PARTITION OF fails to preserve tgenabled for inherited row triggers]<br />
** tgenabled lost on CREATE TABLE .. PARTITION OF, and on pg_dump, and comments on child triggers lost during pg_dump;<br />
** Those are resolved by f0e21f2f6 and df80fa2ee, but there's another issue with psql \d of non-inherited triggers<br />
<br />
* [https://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com memory leak with JIT inlining]<br />
** [https://www.postgresql.org/message-id/flat/20210331040751.GU4431%40telsasoft.com#cc34872765add8e483e05009212d9d39 Another report of (same?) issue and reproducer]<br />
** [https://www.postgresql.org/message-id/flat/9f73e655-14b8-feaf-bd66-c0f506224b9e%40stephans-server.de Another report]<br />
** [https://www.postgresql.org/message-id/flat/16707-f5df308978a55bf8%40postgresql.org Another report]<br />
<br />
* [https://www.postgresql.org/message-id/CAEudQAoR5e7=uMZ0otzuCVb25zTC8QQBe+2Dt1JRsa3u+XuwJg@mail.gmail.com could not rename temporary statistics file on Windows]<br />
** See {{PgCommitURL|909b449e00fc2f71e1a38569bbddbb6457d28485}} that has fixed a similar symptom for WAL segments. Most reporters of the WAL segment problem complained about this renaming issue as well.<br />
<br />
* [https://www.postgresql.org/message-id/20210422203603.fdnh3fu2mmfp2iov@alap3.anarazel.de Incorrect snapshot calculation when 2PC is in use]<br />
** Seems to be an old problem.<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633%403c-app-gmx-bap78 hashagg broken by failing to spill grouping columns]<br />
** Fixed at: {{PgCommitURL|0ff865fbe50e82f17df8a9280fa01faf270b7f3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com PITR promote bug: Checkpointer writes to older timeline]<br />
** Fixed at: {{PgCommitURL|595b9cba2ab0cdd057e02d3c23f34a8bcfd90a2d}}<br />
<br />
* [https://www.postgresql.org/message-id/YFBcRbnBiPdGZvfW%40paquier.xyz Permission failures with WAL files in 13~ on Windows]<br />
** Fixed at: {{PgCommitURL|78c24e97dd189f62187a99ef84016d0eb35a7978}}<br />
<br />
* [https://www.postgresql.org/message-id/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com CLOBBER_CACHE Server crashed with segfault 11 while executing clusterdb]<br />
** Fixed at: {{PgCommitURL|9d523119fd38fd205cb9c8ea8e7cceeb54355818}}<br />
<br />
* [https://www.postgresql.org/message-id/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com Reference leak with tupledescs in plpgsql simple expressions]<br />
** Fixed at: {{PgCommitURL|c2db458c1036efae503ce5e451f8369e64c99541}}<br />
<br />
* [https://www.postgresql.org/message-id/a3be61d9-f44b-7fce-3dc8-d700fdfb6f48%402ndquadrant.com extract(julian) is undocumented and gives wrong result]<br />
** Fixed by documentation change at: {{PgCommitURL|79a5928ebcb726b7061bf265b5c6990e835e8c4f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com lwlock dtrace probes do unnecessary work if dtrace is compiled in but disabled]<br />
** Fixed at: {{PgCommitURL|b94409a02f6122d77b5154e481c0819fed6b4c95}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/15990-eee2ac466b11293d%40postgresql.org Detoast failures after commit/rollback in plpgsql]<br />
** Fixed at: {{PgCommitURL|f21fadafaf0fb5ea4c9622d915972651273d62ce}} and {{PgCommitURL|84f5c2908dad81e8622b0406beea580e40bb03ac}}<br />
<br />
* [https://www.postgresql.org/message-id/3382681.1621381328%40sss.pgh.pa.us Subscription tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Fixed at: {{PgCommitURL|b39630fd41f25b414d0ea9b30804f4105f2a0aff}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/534fca83789c4a378c7de379e9067d4f%40politie.nl XX000: unknown type of jsonb container.]<br />
** Fixed at: {{PgCommitURL|6ee41a301e70fc8e4ad383bad22d695f66ccb0ac}}<br />
<br />
* [https://www.postgresql.org/message-id/1884374.1617898865%40sss.pgh.pa.us Buildfarm does not test pg_stat_statements]<br />
** Fixed by buildfarm client change<br />
<br />
* [https://www.postgresql.org/message-id/17064-bb0d7904ef72add3%40postgresql.org Parallel VACUUM operations cause the error "global/pg_filenode.map contains incorrect checksum"]<br />
** Fixed at: {{PgCommitURL|b6d8d207}} and {{PgCommitURL|9b8ed0f52}}<br />
<br />
* [https://www.postgresql.org/message-id/378885e4-f85f-fc28-6c91-c4d1c080bf26%40amazon.com Assertion failure in HEAD and 13 after calling COMMIT in a stored proc]<br />
** Fixed at: {{PgCommitURL|d102aafb6259a6a412803d4b1d8c4f00aa17f67e}}<br />
<br />
* [https://www.postgresql.org/message-id/4aa370cb91ecf2f9885d98b80ad1109c%40postgrespro.ru Add PortalDrop in exec_execute_message]<br />
** Fixed at: {{PgCommitURL|bb4aed46a}} and {{PgCommitURL|4efcf47053}}<br />
<br />
* [https://www.postgresql.org/message-id/2591376.1621196582%40sss.pgh.pa.us snapshot-scalability logic fails after pg_upgrade, due to pg_resetwal issue]<br />
** Now seems likely that this is an old issue affecting every release, and that the snapshot-scalability work is not at fault<br />
** [https://commitfest.postgresql.org/33/3105/ Pending fix for pg_upgrade + pg_resetwal]<br />
** Fixed at: {{PgCommitURL|74cf7d46a91d601e0f8d957a7edbaeeb7df83efc}}<br />
<br />
* [https://www.postgresql.org/message-id/b5146fb1-ad9e-7d6e-f980-98ed68744a7c%40amazon.com Logical Decoding of relation rewrite with toast does not reset toast_hash]<br />
** Problem exists since v11.<br />
** Fixed at: {{PgCommitURL|29b5905470285bf730f6fe7cc5ddb3513d0e6945}}<br />
<br />
=== Nothing to do ===<br />
<br />
* [https://www.postgresql.org/message-id/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com Consider back-patching typmod casting behavior change to stable branches]<br />
** Fixed in HEAD/v14 at: {{PgCommitURL|5c056b0c2519e602c2e98bace5b16d2ecde6454b}}<br />
<br />
== Non-bugs ==<br />
<br />
* [https://www.postgresql.org/message-id/20210216064214.GI28165%40telsasoft.com progress reporting for partitioned REINDEX]<br />
* [https://www.postgresql.org/message-id/YFnWBYinNf1s0Y6v@msg.df7cb.de pg_regress and tablespace removal]<br />
** [https://www.postgresql.org/message-id/YG/tf6HTZFj4hWlb@paquier.xyz Some patch]<br />
<br />
== Resolved Issues ==<br />
<br />
=== resolved before 14beta4 (?) ===<br />
<br />
* [https://www.postgresql.org/message-id/4170264.1620321747%40sss.pgh.pa.us Should we undo libpq change that leaves PQerrorMessage() nonempty after successful connect?]<br />
** Fixed at: {{PgCommitURL|138531f1bbc333745bd8422371c07e7e108d5528}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAApHDvpbusiKMV%3DvZypdpHHu81u0zMVAp6hu1vg-%3DgQLBBKUPA%40mail.gmail.com#8386c8d37ec1f9f9386cbf528bd9af5c default setting of enable_memoize]<br />
** No change required. (Discussed on Releases list)<br />
** Owner: David Rowley<br />
<br />
* [https://www.postgresql.org/message-id/58cbfa74-9356-778b-3e10-94e3075c5807@enterprisedb.com extended statistics: reject single-var expressions]<br />
** Fixed at: {{PgCommitURL|13380e1476490932c7b15530ead1f649a16e1125}} - Extra parenthesis<br />
** Fixed at: {{PgCommitURL|537ca68db}} - reject single-var expressions<br />
** Owner: Tomas Vondra<br />
<br />
* [https://www.postgresql.org/message-id/20210820125513.GQ10479@telsasoft.com pg_stats includes partitioned tables, but always shows analyze_count=0]<br />
** Fixed at: {{PgCommitURL|e1efc5b465c844969a0ed0d07e1364f3ce424d8c}}<br />
<br />
* [https://www.postgresql.org/message-id/20210730010355.6yodvn2ag3arfihi@alap3.anarazel.de Issues around autovacuum for partitioned tables]<br />
** Feature reverted: {{PgCommitURL|b3d24cc0f0aa882ceec0a74a99f94166c6fc3247}}<br />
<br />
* [https://www.postgresql.org/message-id/TYAPR01MB5866BA57688DF2770E2F95C6F5069@TYAPR01MB5866.jpnprd01.prod.outlook.com DECLARE STATEMENT and DEALLOCATE/DESCRIBE]<br />
** Fixed at: {{PgCommitURL|399edafa2aba562a8013fbe039f3cbf3a41a436e}}<br />
** Fixed at: {{PgCommitURL|f576de1db1eeca63180b1ffa4b42b1e360f88577}}<br />
<br />
* [https://www.postgresql.org/message-id/1629039545467.80333%40nidsa.net Performance regression with hex refactoring code]<br />
** Fixed at: {{PgCommitURL|2576dcfb76aa71e4222bac5a3a43f71875bfa9e8}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/20210807234407.icku2rnqyapsb3io%40alap3.anarazel.de elog.c query_id support vs shutdown]<br />
** Fixed at: {{PgCommitURL|bed5eac2d50eb86a254861dcdea7b064d10c72cf}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com wrong refresh when ALTER SUBSCRIPTION ADD/DROP PUBLICATION]<br />
** Fixed at: {{PgCommitURL|1046a69b3087a6417e85cae9b6bc76caa22f913b}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org BUG #17158 (type RECORD is not always hashable)]<br />
** Fixed at: {{PgCommitURL|054adca641ac1279dc8d9b74fda41948ac35e9a9}}<br />
<br />
=== resolved before 14beta3 ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/20210530172418.GO2082%40telsasoft.com#d6544e507234cc76b9bc0a50026cd74b \dX doesn't check pg_statistics_obj_is_visible()]<br />
** Fixed at: {{PgCommitURL|f68b609230689f9886a46e5d9ab8d6cdd947e0dc}}<br />
<br />
* [https://www.postgresql.org/message-id/e1b4f05d-54ec-4f51-832b-c18cf5a161c0@www.fastmail.com remove_temp_files_after_crash should be a DEVELOPER GUC]<br />
** Fixed at: {{PgCommitURL|797b0fc0b078c7b4c46ef9f2d9e47aa2d98c6c63}}<br />
<br />
* [https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com recovery_init_sync_method should be PGC_SIGHUP?]<br />
** Fixed at: {{PgCommitURL|34a8b64b4e5f0cd818e5cc7f98846de57938ea57}}<br />
<br />
* [https://www.postgresql.org/message-id/YNZ2mnsbDVJQrA/a@paquier.xyz OOM on palloc() when parsing service file would cause libpq to exit() without reporting a failure]<br />
** Fixed at: {{PgCommitURL|8ec00dc5cd70e0e579e9fbf8661bc46f5ccd8078}}<br />
** Additional defenses added at: {{PgCommitURL|dc227eb82ea8bf6919cd81a182a084589ddce7f3}}<br />
<br />
* [https://www.postgresql.org/message-id/17076-89a16ae835d329b9%40postgresql.org incorrect code for reporting the hash partition associated with a particular modulus]<br />
** Fixed at: {{PgCommitURL|dd2364ced98553e0217bfe8f621cd4b0970db74a}}<br />
<br />
* [https://www.postgresql.org/message-id/c5269c65-f967-77c5-ff7c-15e621c47f6a%40gmail.com Bug in multirange selectivity estimation]<br />
** Fixed at: {{PgCommitURL|322e82b77ef4acb9697c6e4259292f5671cb85bb}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/704fb6fb99ec9864a4dbeda2478337d2%40postgrespro.ru autoanalyze of partitioned table causes it to lose its relhasindex]<br />
** Fixed at: {{PgCommitURL|d700518d744e53994fdded14b23ebc15b031b0dd}}<br />
<br />
* [https://www.postgresql.org/message-id/CAF7igB1r6wRfSCEAB-iZBKxkowWY6+dFF2jObSdd9+iVK+vHJg@mail.gmail.com Incorrect time maths in pgbench] and [https://www.postgresql.org/message-id/CAHLJuCW_8Vpcr0=t6O_gozrg3wXXWXZXDioYJd3NhvKriqgpfQ@mail.gmail.com second thread]<br />
** Fixed at: {{PgCommitURL|0e39a608ed5545cc6b9d538ac937c3c1ee8cdc36}}<br />
<br />
* [https://www.postgresql.org/message-id/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org unnesting multirange data types]<br />
** Fixed at: {{PgCommitURL|244ad5415557812a6ac4dc5d6e2ae908361d82c3}}<br />
<br />
* [https://www.postgresql.org/message-id/17066-16a37f6223a8470b@postgresql.org Cache lookup failed when null (unknown) is passed as anycompatiblemultirange]<br />
** Fixed at: {{PgCommitURL|336ea6e6ff1109e7b83370565e3cb211804fda0c}}<br />
<br />
* [https://www.postgresql.org/message-id/530153.1627425648%40sss.pgh.pa.us Degraded out-of-memory handling in libpq]<br />
** Fixed at: {{PgCommitURL|514b4c11d24701d2cc90ad75ed787bf1380af673}}<br />
<br />
* [https://www.postgresql.org/message-id/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com Crash in regexp with {0}]<br />
** Fixed at: {{PgCommitURL|cc1868799c8311ed1cc3674df2c5e1374c914deb}}<br />
<br />
=== resolved before 14beta2 ===<br />
<br />
* [https://www.postgresql.org/message-id/20210609184506.rqm5rikoikm47csf%40alap3.anarazel.de Snapshot scalability OldestXmin issue (can cause infinite loop during system catalog VACUUM)]<br />
** Fixed at: {{PgCommitURL|5a1e1d83022b976ebdec5cfa8f255c4278b75b8e}}<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkCYR0U7zXqXo0CgFaFwUDz1WbKq8ngjzKi4+AQ5f-mYQ@mail.gmail.com Generalize INDEX_CLEANUP to allow the user to disable the optimization that has VACUUM skip indexes in marginal cases with very few LP_DEAD items/deletable TIDs.]<br />
** Fixed at: {{PgCommitURL|3499df0dee8c4ea51d264a674df5b5e31991319a}}<br />
<br />
* [https://www.postgresql.org/message-id/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de Questions about pg_stat_wal] also [https://www.postgresql.org/message-id/E3774ACD-7894-451E-9F13-71E097D10595@oss.nttdata.com]<br />
** Fixed at: {{PgCommitURL|d8735b8b4651f5ed50afc472e236a8e6120f07f2}}<br />
** Fixed at: {{PgCommitURL|d780d7c0882fe9a385102b292907baaceb505ed0}}<br />
<br />
* [https://www.postgresql.org/message-id/YKMO%2B2gD8R8I2O5b%40paquier.xyz pg_dumpall misses --no-toast-compression]<br />
** Fixed at: {{PgCommitURL|694da1983e9569b2a2f96cd786ead6b8dba31f1d}} <br />
<br />
* [https://www.postgresql.org/message-id/YKQnUoYV63GRJBDD%40msg.df7cb.de portability issue with pgbench's permute() function]<br />
** Fixed at: {{PgCommitURL|0f516d039d8023163e82fa51104052306068dd69}}<br />
<br />
* [https://www.postgresql.org/message-id/35457b09-36f8-add3-1d07-6034fa585ca8@oss.nttdata.com compute_query_id and pg_stat_statements]<br />
** Fixed at {{PgCommitURL|cafde58b33}} and {{PgCommitURL|354f32d01d}}<br />
<br />
* [https://www.postgresql.org/message-id/CAOxo6X+dy-V58iEPFgst8ahPKEU+38NZzUuc+a7wDBZd4TrHMQ@mail.gmail.com Result Cache works incorrectly with unique joins]<br />
** Fixed at {{PgCommitURL|9e215378d7fbb7d4615be917917c52f246cc6c61}}<br />
<br />
* [https://www.postgresql.org/message-id/20210517204803.iyk5wwvwgtjcmc5w%40alap3.anarazel.de Move pg_attribute.attcompression to earlier in struct for reduced size?]<br />
** Fixed at {{PgCommitURL|f5024d8d7b04de2f5f4742ab433cc38160354861}}<br />
<br />
* [https://www.postgresql.org/message-id/17030-5844aecae42fe223@postgresql.org EXPLAIN can suffer from cannot decompile join alias var in plan tree]<br />
** Fixed at {{PgCommitURL|cba5c70b956810c61b3778f7041f92fbb8065acb}}<br />
<br />
* [https://www.postgresql.org/message-id/20210521211929.pcehg6f23icwstdb@alap3.anarazel.de Memory leak when rewriting tuples with recompressed toast values]<br />
** Fixed at {{PgCommitURL|fb0f5f0172edf9f63c8f70ea9c1ec043b61c770e}}<br />
<br />
* [https://www.postgresql.org/message-id/626613.1621787110%40sss.pgh.pa.us Redefine pg_attribute.attcompression]<br />
** Fixed at {{PgCommitURL|e6241d8e030fbd2746b3ea3f44e728224298f35b}}<br />
<br />
* [https://www.postgresql.org/message-id/1665197.1622065382%40sss.pgh.pa.us Undo bump of FirstBootstrapObjectId]<br />
** Fixed at {{PgCommitURL|a4390abecf0f5152cff864e82b67e5f6c8489698}}<br />
<br />
* [https://www.postgresql.org/message-id/CABOikdN-_858zojYN-2tNcHiVTw-nhxPwoQS4quExeweQfG1Ug%40mail.gmail.com Assertion failure while streaming toasted data]<br />
** Fixed at {{PgCommitURL|6f4bdf81529fdaf6744875b0be99ecb9bfb3b7e0}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/7817fb9ebd6661cdf9b67dec6e129a78%40postgrespro.ru Join pushdown issue in postgres_fdw updates]<br />
** Fixed at {{PgCommitURL|f61db909dfb94f3411f8719916601a11a905b95e}}<br />
<br />
* [https://www.postgresql.org/message-id/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com Performance degradation of REFRESH MATERIALIZED VIEW]<br />
** Fixed at {{PgCommitURL|8e03eb92e9ad54e2f1ed8b5a73617601f6262f81}}<br />
<br />
* [https://www.postgresql.org/message-id/CAPmGK16Q4B2_KY%2BJH7rb7wQbw54AUprp7TMekGTd2T1B62yysQ%40mail.gmail.com Rescan of async Appends is broken when do_exec_prune=false]<br />
** Fixed at {{PgCommitURL|f3baaf28a6da588987b94a05a725894805c3eae9}}<br />
<br />
* [https://www.postgresql.org/message-id/504c276ab6eee000bb23d571ea9b0ced4250774e.camel%40vmware.com libpq dumps core while making an SSL connection to a server specified by hostaddr]<br />
** Fixed at {{PgCommitURL|37e1cce4ddf0be362e3093cee55493aee41bc423}}<br />
<br />
* [https://www.postgresql.org/message-id/B4A3AF82-79ED-4F4C-A4E5-CD2622098972%40enterprisedb.com logical replication of truncate command with trigger causes Assert]<br />
** Fixed at {{PgCommitURL|3a09d75b4f6cabc8331e228b6988dbfcd9afdfbe}}<br />
<br />
* [https://www.postgresql.org/message-id/3742981.1621533210%40sss.pgh.pa.us Reconsider catalog representation and uniqueness rules for procedures with output-only arguments]<br />
** Fixed at {{PgCommitURL|e56bce5d43789cce95d099554ae9593ada92b3b7}}<br />
<br />
* [https://www.postgresql.org/message-id/20210527003144.xxqppojoiwurc2iz@alap3.anarazel.de Performance regression of VACUUM FULL with the addition of recompression path in tuple rewrite]<br />
** Fixed at {{PgCommitURL|dbab0c07e5ba1f19a991da2d72972a8fe9a41bda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210525161458.GZ3676%40telsasoft.com Document incompatibility with aggregates using system functions using anycompatiblearray]<br />
** Fixed at {{PgCommitURL|25dfb5a831a1b8a83a8a68453b4bbe38a5ef737e}}<br />
<br />
=== resolved before 14beta1 ===<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611340CBD300A7C4FD6B6101FB5F9@OS0PR01MB6113.jpnprd01.prod.outlook.com FailedAssertion reported in lazy_scan_heap() when running logical replication]<br />
** Fixed at: {{PgCommitURL|c9787385db47ba423d845b34d58e158551c6335d}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gCXDSmFs2c%3DR%2BVGgn7FiYcLCsEFEuDNNLGfoha%3DpBE_g%40mail.gmail.com Assertion fail with window function and nested partitioned tables]<br />
** [https://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu Older report]<br />
** Fixed at: {{PgCommitURL|fb2d645dd53ff571572d830e830fc8c368063802}}<br />
<br />
* [https://www.postgresql.org/message-id/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com Atomic initialization of waitStart done at backend startup]<br />
** Fixed at: {{PgCommitURL|f05ed5a5cfa55878baa77a1e39d68cb09793b477}}<br />
<br />
* [https://www.postgresql.org/message-id/20210117215940.GE8560%40telsasoft.com pg_collation_actual_version() ERROR: cache lookup failed for collation 123]<br />
** Fixed at: {{PgCommitURL|0fb0a0503bfc125764c8dba4f515058145dc7f8b}}<br />
<br />
* [https://www.postgresql.org/message-id/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com cryptohash: missing locking functions for OpenSSL <= 1.0.2?]<br />
** Fixed at: {{PgCommitURL|2c0cefcd18161549e9e8b103f46c0f65fca84d99}}<br />
<br />
* [https://www.postgresql.org/message-id/CAHut%2BPuPGGASnh2Dy37VYODKULVQo-5oE%3DShc6gwtRizDt%3D%3DcA%40mail.gmail.com pg_subscription - substream column?]<br />
** Fixed at: {{PgCommitURL|7efeb214ad832fa96ea950d0906b1d2b96316d15}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk%3DS3V8DOKT%3DLZC1L36Q%40mail.gmail.com TOAST compression method of index columns]<br />
** Fixed at: {{PgCommitURL|5db1fd7823a1a12e2bdad98abc8e102fd71ffbda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to Crash with encoding conversion functions]<br />
** Fixed at: {{PgCommitURL|c4c393b3ec83ceb4b4d7f37cdd5302126377d069}}<br />
<br />
* [https://www.postgresql.org/message-id/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ@mail.gmail.com Crash with extended stats on expressions]<br />
** Fixed at: {{PgCommitURL|518442c7f334f3b05ea28b7ef50f1b551cfcc23e}}<br />
<br />
* [https://postgr.es/m/CA+TgmobwnGawnxufvqLCrcTy4HRhMepFiXQLY8YpVD+PTuwagA@mail.gmail.com Update TOAST documentation for LZ4 compression]<br />
** Fixed at: {{PgCommitURL|e8c435a824e123f43067ce6f69d66f14cfb8815e}}<br />
<br />
* [https://www.postgresql.org/message-id/20210404220802.GA728316@rfd.leadboat.com Behavior of pg_dump --extension with schemas]<br />
** Fixed at: {{PgCommitURL|344487e2db03f3cec13685a839dbc8a0e2a36750}}<br />
<br />
* [https://www.postgresql.org/message-id/OSZPR01MB631017521EE6887ADC9492E8FD759@OSZPR01MB6310.jpnprd01.prod.outlook.com psql query cancellation is broken], as are [https://www.postgresql.org/message-id/2671235.1618154047%40sss.pgh.pa.us autocommit], and [https://www.postgresql.org/message-id/YHTYOFBHDuGaz2gy@paquier.xyz error reporting]<br />
** Reverted by: {{PgCommitURL|fae65629cec824738ee11bf60f757239906d64fa}}<br />
<br />
* On Windows, collation version lookup (sometimes?) fails for names like "English_United States.1252", but works for names like "en-US".<br />
** Fixed at: {{PgCommitURL|9f12a3b95dd56c897f1aa3d756d8fb419e84a187}} -- this commit tolerates failure so at least we don't raise an error, but unfortunately we have no version information<br />
** Fixed at: {{PgCommitURL|1bf946bd43e545b86e567588b791311fe4e36a8c}} -- this commit documents the limitation<br />
<br />
* [https://www.postgresql.org/message-id/1820954.1617860500@sss.pgh.pa.us Handling of querystring inconsistent for parallel execution of SQL function bodies]<br />
** Fixed at: {{PgCommitURL|1111b2668d89bfcb6f502789158b1233ab4217a6}}<br />
<br />
* [https://www.postgresql.org/message-id/YHPkU8hFi4no4NSw@paquier.xyz Problems around compute_query_id]<br />
** Fixed at: {{PgCommitURL|db01f797dd48f826c62e1b8eea70f11fe7ff3efc}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com Table reference leak in logical replication]<br />
** Fixed at: {{PgCommitURL|f3b141c482552a57866c72919007d6481cd59ee3}}<br />
<br />
* [https://www.postgresql.org/message-id/20210410184226.GY6592%40telsasoft.com DETACH PARTITION CONCURRENTLY: Avoid adding redundant constraint]<br />
** Fixed at: {{PgCommitURL|7b357cc6ae}}<br />
<br />
* [https://www.postgresql.org/message-id/CC3F964B-8FA1-4A23-9D3E-6EA00BBFF0EE@enterprisedb.com Issues in PostgresNode and older major versions with multi-install]<br />
** Fixed at {{PgCommitURL|95c3a1956ec9eac686c1b69b033dd79211b72343}} and {{PgCommitURL|4c4eaf3d19201c5e2d9efebc590903dfaba0d3e5}}<br />
<br />
* [https://www.postgresql.org/message-id/3269784.1617215412%40sss.pgh.pa.us DETACH PARTITION CONCURRENTLY tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Fixed at: {{PgCommitURL|8aba9322511f}}<br />
<br />
* [https://www.postgresql.org/message-id/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com TRUNCATE on foreign tables and ONLY clause]<br />
** Fixed at: {{PgCommitURL|8e9ea08bae93a754d5075b7bc9c0b2bc71958bfd}}<br />
<br />
* [https://www.postgresql.org/message-id/CAMkU=1zKGWEJdBbYKw7Tn7cJmYR_UjgdcXTPDqJj=dNwCETBCQ@mail.gmail.com handling of character continuation in psql broken by sql body patch]<br />
** Fixed at: {{PgCommitURL|d9a9f4b4b92ad39e3c4e6600dc61d5603ddd6e24}}<br />
<br />
* [https://www.postgresql.org/message-id/20210505210947.GA27406%40telsasoft.com cache lookup failed for statistics object 123]<br />
** Fixed at: {{PgCommitURL|8d4b311d2494ca592e30aed03b29854d864eb846}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAFj8pRCL_Rjw-MCR6J7VX9OF7MR6PA5K8qUbrMvprW_e-aHkfQ%40mail.gmail.com batch fdw insert bug]<br />
** Fixed at: {{PgCommitURL|c6a01d924939306e95c8deafd09352be6a955648}}<br />
<br />
* [https://www.postgresql.org/message-id/3564817.1618420687@sss.pgh.pa.us Bogus collation version recording in recordMultipleDependencies]<br />
** Fixed at: {{PgCommitURL|ec48314708262d8ea6cdcb83f803fc83dd89e721}} (Feature revert)<br />
<br />
* [https://www.postgresql.org/message-id/773932.1619022622@sss.pgh.pa.us Corruption issues with WAL prefetch?]<br />
** Fixed at: {{PgCommitURL|c2dc19342e05e081dc13b296787baa38352681ef}} (Feature revert)<br />
<br />
* [https://www.postgresql.org/message-id/YIetoZGq31L84v5d@paquier.xyz Small issues with CREATE TABLE COMPRESSION]<br />
** MSVC scripts don't support builds with lz4: fixed at {{PgCommitURL|9ca40dcd4d0cad43d95a9a253fafaa9a9ba7de24}}<br />
** pg_dump includes no tests with compression methods of attributes and --no-toast-compression: fixed at {{PgCommitURL|63db0ac3f9e6bae313da67f640c95c0045b7f0ee}}<br />
** Documentation missing for --with-lz4 in installation instructions: fixed at {{PgCommitURL|02a93e7ef9612788081ef07ea1bbd0a8cc99ae99}}<br />
<br />
* [https://www.postgresql.org/message-id/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de Replication slot stats misgivings]<br />
** Fixed at: {{PgCommitURL|3fa17d37716f978f80dfcdab4e7c73f3a24e7a48}}<br />
** Fixed at: {{PgCommitURL|592f00f8dec68038301467a904ac514eddabf6cd}}<br />
** Fixed at: {{PgCommitURL|cca57c1d9bf7eeba5b81115e0b82651cf3d8e4ea}}<br />
** Fixed at: {{PgCommitURL|f5fc2f5b23d1b1dff60f8ca5dc211161df47eda4}}<br />
<br />
* [https://www.postgresql.org/message-id/CAPmGK158e9sJOfuWxfn%2B0ynrspXQU3JhNjSCbaoeSzMvnga%2Bbw%40mail.gmail.com FDW: crash with DDL and async/batch option]<br />
** Fixed at: {{PgCommitURL|a784859f4480ceaa05a00ca35311071ca33483d1}}<br />
<br />
* [https://www.postgresql.org/message-id/20210409213155.GA23912%40alvherre.pgsql should autoanalyze for partitioned tables handle ATTACH/DETACH/DROP?]<br />
** Fixed at: {{PgCommitURL|1b5617eb844cd2470a334c1d2eec66cf9b39c41a}} (docs)<br />
<br />
* [https://www.postgresql.org/message-id/CALT9ZEE7OiszofHELnjPhX%3DhV92PiKn8haSZ4_FWBAw4diaRdQ%40mail.gmail.com OOM in spgist insert]<br />
** Fixed at: {{PgCommitURL|c3c35a733c77b298d3cf7e7de2eeb4aea540a631}}<br />
<br />
== Won't Fix ==<br />
<br />
* [https://www.postgresql.org/message-id/92408.1618772924%40sss.pgh.pa.us SQL-standard function body: pg_dump should handle circular dependencies]<br />
** Owner: Peter Eisentraut<br />
* [https://www.postgresql.org/message-id/17061-dd7f4825b7da3a9d%40postgresql.org SEARCH BREADTH FIRST produces a composite column whose fields can't be accessed]<br />
** Owner: Peter Eisentraut<br />
<br />
== Important Dates ==<br />
<br />
Current schedule:<br />
<br />
* Feature Freeze: April 7, 2021 ('''Last Day to Commit Features''')<br />
* Beta 1: May 20, 2021<br />
* Beta 2: June 24, 2021<br />
* Beta 3: August 12, 2021<br />
* RC 1: <br />
* GA: <br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Foreign_data_wrappers&diff=36426Foreign data wrappers2021-09-08T14:42:40Z<p>Adunstan: /* Specific SQL Database Wrappers */ add db2 fdw</p>
<hr />
<div>= Foreign Data Wrappers =<br />
In 2003, a new specification called [[SQL/MED]] ("SQL Management of External Data") was added to the SQL standard. It is a standardized way of handling access to remote objects from SQL databases. In 2011, PostgreSQL 9.1 was released with read-only support of this standard, and in 2013 write support was added with PostgreSQL 9.3.<br />
<br />
A variety of Foreign Data Wrappers (FDWs) are now available which enable PostgreSQL to connect to different remote data stores, ranging from other SQL databases through to flat files. This page lists some of the wrappers currently available. Another [https://pgxn.org/tag/fdw/ fdw list] can be found at [https://pgxn.org/ the PGXN website].<br />
<br />
Please keep in mind that most of these wrappers are '''not officially supported by the PostgreSQL Global Development Group''' (PGDG) and that some of these projects are '''still in beta'''. Use carefully!<br />
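<br />
The general SQL/MED workflow is the same for every wrapper listed below: install the extension, define a server, map a local role to remote credentials, and declare foreign tables. A minimal sketch using the built-in postgres_fdw (the server name, connection options and table definition are placeholders):<br />
<source lang="sql">
-- Install the wrapper (postgres_fdw ships with PostgreSQL as a contrib extension)
CREATE EXTENSION postgres_fdw;

-- Describe how to reach the remote data store
CREATE SERVER remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'example.invalid', port '5432', dbname 'remotedb');

-- Map the local role to credentials on the remote side
CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_pg
    OPTIONS (user 'remote_user', password 'secret');

-- Declare a local relation backed by the remote table
CREATE FOREIGN TABLE remote_orders (
    id    integer,
    total numeric
) SERVER remote_pg OPTIONS (schema_name 'public', table_name 'orders');

-- Query it like any ordinary table
SELECT count(*) FROM remote_orders;
</source>
Wrappers differ mainly in which server- and table-level OPTIONS they accept; consult each project's documentation for the exact option names.<br />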
<br />
<br />
== Generic SQL Database Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|ODBC<br />
|Native<br />
|<br />
|[https://github.com/CartoDB/odbc_fdw github]<br />
|<br />
|<br />
|CartoDB took over active development of the ODBC FDW for PG 9.5+<br />
|-<br />
|JDBC<br />
|Native<br />
|<br />
|[https://github.com/atris/JDBC_FDW github]<br />
|<br />
|<br />
| Possibly unmaintained<br />
|-<br />
|JDBC2<br />
|Native<br />
|<br />
|[https://github.com/heimir-sverrisson/jdbc2_fdw github]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://www.sqlalchemy.org/ SQL_Alchemy]<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#sqlalchemy-foreign-data-wrapper documentation]<br />
| Can be used to access data stored in any database supported by the sqlalchemy python toolkit.<br />
|-<br />
| [https://gdal.org/drivers/vector/index.html GDAL/OGR]<br />
| Native<br />
| MIT<br />
| [https://github.com/pramsey/pgsql-ogr-fdw GitHub]<br />
| yum.postgresql.org, apt.postgresql.org, and part of PostGIS windows bundle (application stackbuilder)<br />
| <br />
| Can access many kinds of data sources (Relational databases, spreadsheets, CSV files, web feature services, etc). Uses the [https://gdal.org/ GDAL library] which supports hundreds of formats to access the data. Exposes vector data as PostGIS geometry columns if you have PostGIS installed. Works great with both spatial and non-spatial data.<br />
|-<br />
| VirtDB<br />
| Native<br />
| GPL<br />
| [https://github.com/virtdb/virtdb-fdw GitHub]<br />
|<br />
|<br />
| A generic FDW to access VirtDB data sources (SAP ERP, Oracle RDBMS)<br />
|}<br />
<br />
== Specific SQL Database Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|[https://www.postgresql.org/ PostgreSQL]<br />
|Native<br />
|PostgreSQL<br />
|[https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/postgres_fdw;hb=HEAD git.postgresql.org]<br />
|<br />
|[https://www.postgresql.org/docs/current/postgres-fdw.html documentation]<br />
|<br />
|-<br />
|[https://www.oracle.com/index.html Oracle]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/laurenz/oracle_fdw github]<br />
|[https://pgxn.org/dist/oracle_fdw/ PGXN]<br />
|[http://laurenz.github.io/oracle_fdw/ website]<br />
|<br />
|-<br />
|[https://www.mysql.com/ MySQL]<br />
|Native<br />
|<br />
|[https://github.com/EnterpriseDB/mysql_fdw github]<br />
|[https://pgxn.org/dist/mysql_fdw/ PGXN]<br />
|[https://www.enterprisedb.com/blog/new-oss-tool-links-postgres-and-mysql example]<br />
|FDW for MySQL<br />
|-<br />
|Informix<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/credativ/informix_fdw github]<br />
|<br />
|<br />
|<br />
|-<br />
|DB2<br />
|Native<br />
|<br />
|[https://github.com/wolfgangbrandl/db2_fdw github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://www.firebirdsql.org/ Firebird]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/ibarwick/firebird_fdw/ github]<br />
|[https://pgxn.org/dist/firebird_fdw/ PGXN]<br />
|[https://github.com/ibarwick/firebird_fdw/blob/master/README.md README]<br />
|version [https://github.com/ibarwick/firebird_fdw/releases/tag/1.2.0 1.2.0] released (2020-10)<br />
|-<br />
|[https://www.sqlite.org/index.html SQLite]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/pgspider/sqlite_fdw github]<br />
|[https://pgxn.org/dist/sqlite_fdw PGXN]<br />
|[https://github.com/pgspider/sqlite_fdw/blob/master/README.md README]<br />
|An FDW for SQLite3 (write support and several pushdown optimization)<br />
|-<br />
|Sybase / MS SQL Server<br />
|Native<br />
|<br />
|[https://github.com/tds-fdw/tds_fdw github]<br />
|[https://pgxn.org/dist/tds_fdw/ PGXN]<br />
|<br />
|An FDW for Sybase and Microsoft SQL server<br />
|-<br />
|[https://www.monetdb.org/ MonetDB]<br />
|Native<br />
|<br />
|[https://github.com/snaga/monetdb_fdw github]<br />
|<br />
|<br />
|<br />
|}<br />
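<br />
Writing CREATE FOREIGN TABLE statements by hand gets tedious against a large remote schema. postgres_fdw (on PostgreSQL 9.5 and later) supports IMPORT FOREIGN SCHEMA, which pulls in the remote definitions automatically; support in the third-party wrappers above varies, so check each project's README. A sketch, assuming a postgres_fdw server named remote_pg has already been created:<br />
<source lang="sql">
-- Import foreign table definitions for an entire remote schema
IMPORT FOREIGN SCHEMA public
    LIMIT TO (orders, customers)   -- optional; omit to import every table
    FROM SERVER remote_pg
    INTO local_schema;
</source>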
<br />
== NoSQL Database Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|[https://cloud.google.com/bigtable/ BigTable or HBase]<br />
|[https://github.com/posix4e/rpgffi Native Rust Binding (RPGFFI)]<br />
|MIT<br />
|[https://github.com/durch/google-bigtable-postgres-fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[http://cassandra.apache.org/ Cassandra]<br />
|[https://multicorn.org/ Multicorn]<br />
|MIT<br />
|[https://github.com/rankactive/cassandra-fdw Github]<br />
|[https://rankactive.com/resources/postgresql-cassandra-fdw Rankactive]<br />
|<br />
|<br />
|-<br />
| Cassandra2<br />
| Native<br />
| MIT<br />
|[https://github.com/jaiminpan/cassandra2_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
| [http://cassandra.apache.org Cassandra]<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
|[https://github.com/wjch-krl/pgCassandra Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://clickhouse.yandex/ ClickHouse]<br />
|[https://multicorn.org/ Multicorn]<br />
|BSD<br />
|[https://github.com/Infinidat/infi.clickhouse_fdw/ Github]<br />
|<br />
|[https://github.com/Infinidat/infi.clickhouse_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
|[https://clickhouse.yandex/ ClickHouse]<br />
|Native<br />
|Apache<br />
|[https://github.com/adjust/clickhouse_fdw Github]<br />
|<br />
|[https://github.com/adjust/clickhouse_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
|[http://couchdb.apache.org/ CouchDB]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/ZhengYang/couchdb_fdw Github]<br />
|[https://pgxn.org/dist/couchdb_fdw/ PGXN]<br />
|<br />
| Original version<br />
|-<br />
|[http://couchdb.apache.org/ CouchDB]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/golgauth/couchdb_fdw Github]<br />
|<br />
|<br />
| golgauth version (9.1 - 9.2+ compatible)<br />
|-<br />
| [https://github.com/griddb/griddb_nosql GridDB]<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/pgspider/griddb_fdw Github]<br />
|<br />
| [https://github.com/pgspider/griddb_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
| InfluxDB<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/pgspider/influxdb_fdw Github]<br />
|<br />
| [https://github.com/pgspider/influxdb_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
| [https://kafka.apache.org/ Kafka]<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/adjust/kafka_fdw GitHub]<br />
|<br />
| [https://github.com/adjust/kafka_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
|[https://fallabs.com/kyototycoon/ Kyoto Tycoon ]<br />
|Native<br />
|MIT<br />
|[https://github.com/cloudflare/kt_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://www.mongodb.com/ MongoDB]<br />
|Native<br />
|GPL3+<br />
|[https://github.com/EnterpriseDB/mongo_fdw Github]<br />
|[https://pgxn.org/dist/mongo_fdw/ PGXN]<br />
|[https://github.com/EnterpriseDB/mongo_fdw/blob/master/README.md README]<br />
|EDB version<br />
|-<br />
|[https://www.mongodb.com/ MongoDB]<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/dwa/mongoose_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://www.mongodb.com/ MongoDB]<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/asya999/yam_fdw Github]<br />
|<br />
|<br />
| Yet Another Postgres FDW for MongoDB<br />
|-<br />
|[https://neo4j.com/ Neo4j]<br />
|[https://multicorn.org/ Multicorn]<br />
|GPLv3<br />
|[https://github.com/sim51/neo4j-fdw Github]<br />
|<br />
|[https://github.com/sim51/neo4j-fdw/blob/master/README.adoc README]<br />
|FDW for Neo4j; also adds a Cypher function to PostgreSQL<br />
|-<br />
|[https://neo4j.com/ Neo4j]<br />
|Native<br />
|?<br />
|[https://github.com/nuko-yokohama/neo4j_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[http://quasar-analytics.org/ Quasar]<br />
|Native<br />
|Apache<br />
|[https://github.com/slamdata/quasar-fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://redis.io/ Redis]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/pg-redis-fdw/redis_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://redis.io/ Redis]<br />
| Native<br />
| BSD<br />
| [https://github.com/nahanni/rw_redis_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://rethinkdb.com/ RethinkDB]<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/rotten/rethinkdb-multicorn-postgresql-fdw Github]<br />
|<br />
| [https://rethinkdb.com/blog/postgres-foreign-data-wrapper/ blog]<br />
|<br />
|-<br />
| [https://github.com/basho/riak Riak]<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/kiskovacs/riak-multicorn-pg-fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[http://whitedb.org/ WhiteDB]<br />
| Native<br />
| MIT<br />
| [https://github.com/Kentik/wdb_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://github.com/facebook/rocksdb RocksDB]<br />
|Native<br />
|Apache<br />
|[https://github.com/vidardb/PostgresForeignDataWrapper Github]<br />
|<br />
|[https://github.com/vidardb/PostgresForeignDataWrapper/blob/master/README.md README]<br />
|FDW for RocksDB<br />
|}<br />
<br />
== File Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| CSV<br />
| Native<br />
| PostgreSQL<br />
|[https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/file_fdw;hb=HEAD git.postgresql.org]<br />
|<br />
| [https://www.postgresql.org/docs/current/file-fdw.html documentation]<br />
| Delivered as an official extension of PostgreSQL 9.1 / [https://www.depesz.com/2011/03/14/waiting-for-9-1-foreign-data-wrapper/ example] / [http://www.postgresonline.com/journal/archives/250-File-FDW-Family-Part-1-file_fdw.html Another example]<br />
|-<br />
| CSV<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#csv-foreign-data-wrapper documentation]<br />
| Each column defined in the table will be mapped, in order, against columns in the CSV file.<br />
|-<br />
| CSV / Text Array<br />
| Native<br />
|<br />
| [https://github.com/adunstan/file_text_array_fdw GitHub]<br />
|<br />
| [http://www.postgresonline.com/journal/archives/251-File-FDW-Family-Part-2-file_textarray_fdw-Foreign-Data-Wrapper.html How to]<br />
| Another CSV wrapper<br />
|-<br />
| CSV / Fixed-length<br />
| Native<br />
|<br />
| [https://github.com/adunstan/file_fixed_length_record_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| CSV / gzipped<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/dialogbox/py_csvgz_fdw GitHub]<br />
|<br />
|<br />
| PostgreSQL foreign data wrapper for gzipped CSV files<br />
|-<br />
| Compressed File<br />
| Native<br />
|<br />
| [https://github.com/gokhankici/compressedfile_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Document Collection<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/ZhengYang/dc_fdw GitHub]<br />
|<br />
| [https://github.com/ZhengYang/dc_fdw/wiki wiki]<br />
|<br />
|-<br />
| JSON<br />
| Native<br />
| GPL3<br />
| [https://github.com/nkhorman/json_fdw GitHub]<br />
|<br />
| [https://www.citusdata.com/blog/2013/05/30/run-sql-on-json-files-without-any-data-loads/ Example]<br />
|<br />
|-<br />
| Multi-File<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#filesystem-foreign-data-wrapper doc]<br />
| Access data stored in various files in a filesystem. The files are looked up based on a pattern, and parts of the file's path are mapped to various columns, as well as the file's content itself.<br />
|-<br />
| Multi CDR<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/theirix/multicdr_fdw GitHub]<br />
| [https://pgxn.org/dist/multicdr_fdw/ PGXN]<br />
|<br />
|<br />
|-<br />
| Parquet<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/adjust/parquet_fdw GitHub]<br />
|<br />
|<br />
| Foreign data wrapper for reading Parquet files using libarrow/libparquet<br />
|-<br />
| pg_dump<br />
| Native<br />
| New BSD<br />
| [https://github.com/MeetMe/dump_fdw GitHub]<br />
|<br />
|<br />
| Allows querying data directly from custom-format dump files created by pg_dump<br />
|-<br />
| TAR Files<br />
| Native<br />
|<br />
| [https://github.com/beargiles/tarfile-fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| XML<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
|<br />
|<br />
|-<br />
| ZIP Files<br />
| Native<br />
|<br />
| [https://github.com/beargiles/zipfile-fdw GitHub]<br />
|<br />
|<br />
|<br />
|}<br />
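<br />
As an example of the file wrappers above, the core file_fdw can expose a CSV file as a read-only table (a minimal sketch; the file path and column list are placeholders, and the file must be readable by the server process):<br />
<source lang="sql">
-- file_fdw ships with PostgreSQL as a contrib extension
CREATE EXTENSION file_fdw;

CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE measurements (
    taken_at timestamptz,
    value    double precision
) SERVER csv_files
  OPTIONS (filename '/path/to/measurements.csv', format 'csv', header 'true');

-- Query it like a regular table
SELECT avg(value) FROM measurements;
</source>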
<br />
== Geo Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|[https://www.gdal.org GDAL/OGR]<br />
|Native<br />
|MIT<br />
|[https://github.com/pramsey/pgsql-ogr-fdw GitHub]<br />
|<br />
|<br />
|A wrapper for data sources with a [https://www.gdal.org GDAL/OGR] driver, including databases like Oracle, Informix, SQLite, SQL Server, ODBC as well as file formats like Shape, FGDB, MapInfo, CSV, Excel, OpenOffice, OpenStreetMap PBF and XML, OGC WebServices, [https://www.gdal.org/ogr_formats.html and more]. Spatial columns are linked in as PostGIS geometry if PostGIS is installed.<br />
|-<br />
| Geocode / GeoJSON<br />
| [https://multicorn.org/ Multicorn]<br />
| GPL<br />
| [https://github.com/bosth/geofdw GitHub]<br />
|<br />
|<br />
| a collection of PostGIS-related foreign data wrappers<br />
|-<br />
| [https://wiki.openstreetmap.org/wiki/PBF_Format Open Street Map PBF]<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/vpikulik/postgres_osm_pbf_fdw GitHub]<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== LDAP Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| LDAP<br />
| Native<br />
|<br />
| [https://github.com/guedes/ldap_fdw GitHub]<br />
| [https://pgxn.org/dist/ldap_fdw/ PGXN]<br />
|<br />
| Allows querying an LDAP server to retrieve data from a pre-configured Organizational Unit<br />
|-<br />
| LDAP<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#idldap-foreign-data-wrapper documentation]<br />
|<br />
|}<br />
<br />
== Generic Web Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Git<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
|<br />
|<br />
|-<br />
| Git<br />
| Native<br />
| MIT<br />
| [https://github.com/franckverrot/git_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| ICAL<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/daamien/Multicorn/blob/master/python/multicorn/icalfdw.py GitHub]<br />
|<br />
| [https://wiki.postgresql.org/images/7/7e/Conferences-write_a_foreign_data_wrapper_in_15_minutes-presentation.pdf pdf]<br />
|<br />
|-<br />
| IMAP<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#idimap-foreign-data-wrapper documentation]<br />
|<br />
|-<br />
| RSS<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#idrss-foreign-data-wrapper documentation]<br />
| This fdw can be used to access items from an rss feed.<br />
|-<br />
| www<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/cyga/www_fdw/ GitHub]<br />
| [https://pgxn.org/dist/www_fdw/ PGXN]<br />
| [https://github.com/cyga/www_fdw/wiki wiki]<br />
| Allows querying different web services<br />
|-<br />
| pgsql-http<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/pramsey/pgsql-http GitHub]<br />
| Compile<br />
| <br />
| Allows querying any HTTP resource using the cURL libraries. By Paul Ramsey<br />
<br />
|}<br />
<br />
== Specific Web Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Database.com<br />
| [https://multicorn.org/ Multicorn]<br />
| BSD<br />
| [https://github.com/metadaddy/Database.com-FDW-for-PostgreSQL GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Dun & Bradstreet<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/dpmorel/dnb_fdw GitHub]<br />
|<br />
|<br />
| Access to the [https://fr.wikipedia.org/wiki/Data_Universal_Numbering_System Data Universal Numbering System] (DUNS)<br />
|-<br />
| DynamoDB<br />
| [https://multicorn.org/ Multicorn]<br />
| GPL<br />
| [https://github.com/avances123/dynamodb_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Facebook<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/mrwilson/fb-psql GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Fixer.io<br />
| based on www_fdw<br />
|<br />
| [https://github.com/hakanensari/frankfurter GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Google<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
|<br />
|<br />
|-<br />
| Heroku dataclips<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/petergeoghegan/dataclips_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Keycloak<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/schne324/foreign-keycloak-wrapper GitHub]<br />
| [https://pgxn.org/dist/foreign-keycloak-wrapper/ PGXN]<br />
| [https://github.com/schne324/foreign-keycloak-wrapper/blob/master/README.md README]<br />
| Direct database integration with the [https://www.keycloak.org Keycloak] open-source Identity/Access Management solution.<br />
|-<br />
| Mailchimp<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/daamien/mailchimp_fdw GitHub]<br />
|<br />
|<br />
| Beta<br />
|-<br />
| [http://parseplatform.org/ Parse]<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/spacialdb/parse_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| S3<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/umitanuki/s3_fdw GitHub]<br />
| [https://pgxn.org/dist/s3_fdw/ PGXN]<br />
|<br />
|<br />
|-<br />
| S3CSV<br />
| [https://multicorn.org/ Multicorn]<br />
| GPL 3<br />
| [https://github.com/eligoenergy/s3csv_fdw GitHub]<br />
|<br />
|<br />
| Meant to replace s3_fdw, which is not supported on PostgreSQL 9.2+<br />
|-<br />
| Telegram<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/guedes/telegram_fdw GitHub]<br />
|<br />
|<br />
| telegram_fdw is a Telegram BOT implemented using the PostgreSQL foreign data wrapper interface.<br />
|-<br />
| Twitter<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/umitanuki/twitter_fdw GitHub]<br />
| [https://pgxn.org/dist/twitter_fdw/ PGXN]<br />
|<br />
| A wrapper fetching text messages from Twitter over the Internet and returning a table<br />
|-<br />
| [https://www.treasuredata.com/ Treasure Data]<br />
| Native<br />
| Apache<br />
| [https://github.com/komamitsu/treasuredata_fdw GitHub]<br />
| [https://pgxn.org/dist/treasuredata_fdw PGXN]<br />
|<br />
| An FDW for Treasure Data, internally using a Rust library<br />
|-<br />
| [https://www.treasuredata.com/ Treasure Data]<br />
| [https://multicorn.org/ Multicorn]<br />
| Apache<br />
| [https://github.com/komamitsu/td-fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Google Spreadsheets<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/lincolnturner/gspreadsheet_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Open Weather Map<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/ycku/owmfdw GitHub]<br />
|<br />
|<br />
| An FDW for Open Weather Map (single city)<br />
|}<br />
<br />
== Big Data Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|Elasticsearch<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/matthewfranglen/postgres-elasticsearch-fdw GitHub]<br />
|<br />
|<br />
| Supports up to PG 13, ES 7.<br />
|-<br />
| Google BigQuery<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
|[https://github.com/gabfl/bigquery_fdw GitHub]<br />
|<br />
|[https://github.com/gabfl/bigquery_fdw/blob/master/docs/README.md Documentation]<br />
|bigquery_fdw is a BigQuery FDW compatible with PostgreSQL >= 9.5<br />
|-<br />
| file_fdw-gds (Hadoop)<br />
| Native<br />
|<br />
| [https://github.com/wat4dog/pg-file-fdw-gds GitHub]<br />
|<br />
|<br />
| Hadoop file_fdw is a slightly modified version of PostgreSQL 9.3's file_fdw module.<br />
|-<br />
| Hadoop<br />
| Native<br />
| PostgreSQL<br />
| [https://www.openscg.com/bigsql/hadoopfdw/ Bitbucket]<br />
|<br />
|<br />
| Allows read and write access to HBase as well as to HDFS via Hive.<br />
|-<br />
| HDFS<br />
| Native<br />
| Apache<br />
| [https://github.com/EnterpriseDB/hdfs_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Hive<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/youngwookim/hive-fdw-for-postgresql GitHub]<br />
|<br />
|<br />
| Used to access Apache Hive tables.<br />
|-<br />
| Hive / ORC File<br />
| Native<br />
|<br />
| [https://github.com/gokhankici/orc_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| [http://impala.apache.org/ Impala]<br />
| Native<br />
| BSD<br />
| [https://github.com/lapug/impala_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://arrow.apache.org/ Apache Arrow]<br />
| Native<br />
| GPLv2<br />
| [https://github.com/heterodb/pg-strom GitHub]<br />
|<br />
|<br />
| Part of the PG-Strom feature set; serves as a columnar data source with SSD-to-GPU Direct SQL support<br />
|}<br />
<br />
== Column-Oriented Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|Columnar Store<br />
|Native<br />
|<br />
|[https://github.com/citusdata/cstore_fdw github]<br />
|[https://www.citusdata.com/blog/2014/04/03/columnar-store-for-analytics/ example]<br />
|<br />
|A Columnar Store for PostgreSQL.<br />
|-<br />
|[https://www.monetdb.org/ MonetDB]<br />
|Native<br />
|<br />
|[https://github.com/snaga/monetdb_fdw github]<br />
|<br />
|<br />
|<br />
|-<br />
|GPU Memory Store<br />
|Native<br />
|GPL v2<br />
|[https://github.com/heterodb/pg-strom github]<br />
|<br />
|<br />
|FDW to GPU device memory; part of the PG-Strom feature set for PL/CUDA<br />
|}<br />
<br />
== Scientific Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Ambry<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/nmb10/ambryfdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| ROOT files<br />
| Native<br />
|<br />
| [https://github.com/miguel-branco/root_fdw GitHub]<br />
|<br />
|<br />
| https://root.cern.ch<br />
|-<br />
| VCF files (Genotype)<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/smithijk/vcf_fdw_postgresql GitHub]<br />
|<br />
|<br />
| https://en.wikipedia.org/wiki/Variant_Call_Format<br />
|}<br />
<br />
== Operating System Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Docker<br />
| [https://multicorn.org/ Multicorn]<br />
| Expat<br />
| [https://github.com/paultag/dockerfdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Log files<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/rdunklau/logfdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| OpenStack / Telemetry<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/CSCfi/telemetry-fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| OS Query<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/shish/pgosquery GitHub]<br />
|<br />
|<br />
| Like Facebook's OSQuery, but for Postgres<br />
|-<br />
| Passwd<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/beargiles/passwd-fdw GitHub]<br />
|<br />
|<br />
| Reads Linux/Unix password and group files.<br />
|-<br />
| Process<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn/blob/master/python/multicorn/processfdw.py GitHub]<br />
|<br />
|<br />
| A foreign data wrapper for querying system stats, based on [https://libstatgrab.org/ statgrab]<br />
|-<br />
| Environment Variables<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/pgsql-tw/envfdw GitHub]<br />
|<br />
|<br />
| envFDW is a foreign data wrapper for processing environment variables<br />
|}<br />
<br />
== Exotic Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| faker_fdw<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/guedes/faker_fdw GitHub]<br />
|<br />
|<br />
| faker_fdw is a foreign data wrapper for PostgreSQL that generates fake data.<br />
|-<br />
| fdw_fdw<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/daamien/fdw_fdw GitHub]<br />
|<br />
|<br />
| The meta FDW! Reads this wiki page and returns the list of all the FDWs<br />
|-<br />
| PPG<br />
| Native<br />
|<br />
| [https://github.com/scarbrofair/ppg_fdw GitHub]<br />
|<br />
|<br />
| Distributed parallel query engine based on FDWs and PostgreSQL hooks<br />
|-<br />
| Open Civic Data<br />
| [https://multicorn.org/ Multicorn]<br />
| Expat<br />
| [https://github.com/paultag/sunlightfdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://www2.meethue.com/en-us/philips-hue-benefits Philips Hue Lighting Systems]<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/rotten/hue-multicorn-postgresql-fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Random Number<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/yieldsfalsehood/rng_fdw GitHub]<br />
|<br />
|<br />
| A random number generator foreign data wrapper for PostgreSQL<br />
|-<br />
| Rotfang<br />
| Native<br />
| PostgreSQL<br />
| [https://bitbucket.org/adunstan/rotfang-fdw BitBucket]<br />
|<br />
| [https://drive.google.com/file/d/0B3XVAFFWEFN0aURac0dzSFQyZzA/view slides]<br />
| Advanced random number generator<br />
|-<br />
| Template Tables<br />
| Native<br />
| BSD<br />
| [https://github.com/okbob/template_fdw GitHub]<br />
|<br />
|<br />
| Foreign data wrapper for template tables; all DML and SELECT operations are disallowed<br />
|-<br />
| VMware vSphere<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/ycku/vspherefdw GitHub]<br />
|<br />
|<br />
| A PostgreSQL FDW to query your VMware vSphere service<br />
|}<br />
<br />
== Example Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Dummy<br />
| Native<br />
| BSD<br />
| [https://github.com/slaught/dummy_fdw GitHub]<br />
|<br />
|<br />
| Readable null FDW for testing<br />
|-<br />
| Hello World<br />
|<br />
|<br />
| [https://github.com/wikrsh/hello_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Black Hole<br />
|<br />
|<br />
| [https://bitbucket.org/adunstan/blackhole_fdw bitbucket]<br />
|<br />
|<br />
| A skeleton FDW pre-populated with relevant excerpts from the documentation<br />
|}<br />
<br />
=Writing Foreign Database Wrappers=<br />
<br />
* [https://multicorn.org/ Multicorn] is an extension that allows you to write FDWs in Python<br />
* [https://github.com/franckverrot/holycorn Holycorn] is an extension that allows you to write FDWs in Ruby<br />
* [https://www.postgresql.org/docs/current/fdwhandler.html Documentation: Writing a Foreign Data Wrapper]<br />
* [https://bitbucket.org/adunstan/blackhole_fdw Black Hole FDW] - a skeleton FDW pre-populated with relevant excerpts from the documentation<br />
* [http://blog.guillaume.lelarge.info/index.php/post/2013/06/25/The-handler-and-the-validator-functions-of-a-FDW FDW tutorial by Guillaume Lelarge]<br />
* [https://github.com/nautilebleu/django-fdw django-fdw] A sample project to test django and Postgres Foreign Data Wrapper<br />
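To give a flavor of the Multicorn approach mentioned above, here is a minimal sketch of a read-only FDW in Python. The <code>ConstantFDW</code> class and its rows are invented for illustration; only the interface shape (a constructor taking options and columns, and an <code>execute</code> method yielding one dict per row) follows Multicorn's documented API. A stub base class stands in when the multicorn package is not installed, so the sketch is self-contained.<br />

```python
# Minimal read-only FDW sketch for Multicorn. The real base class lives in
# the multicorn package; a stand-in with the same constructor shape is used
# here when multicorn is not installed, so the file can be run anywhere.
try:
    from multicorn import ForeignDataWrapper
except ImportError:
    class ForeignDataWrapper(object):
        def __init__(self, options, columns):
            self.options = options
            self.columns = columns


class ConstantFDW(ForeignDataWrapper):
    """Hypothetical wrapper returning a fixed set of three rows."""

    def __init__(self, options, columns):
        super(ConstantFDW, self).__init__(options, columns)
        self.columns = columns

    def execute(self, quals, columns):
        # Multicorn calls execute() once per foreign scan. Each yielded dict
        # is one row, keyed by column name; `columns` lists the columns the
        # planner actually needs. Qual pushdown is optional and skipped here.
        for i in range(3):
            yield {col: '{0}-{1}'.format(col, i) for col in columns}
```

On the SQL side such a class would be referenced through Multicorn's server options, e.g. <code>CREATE SERVER ... FOREIGN DATA WRAPPER multicorn OPTIONS (wrapper 'mymodule.ConstantFDW')</code> (module path hypothetical).<br />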
<br />
<br />
[[Category:Foreign-data wrapper|!]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Foreign_data_wrappers&diff=36425Foreign data wrappers2021-09-08T14:13:51Z<p>Adunstan: /* Exotic Wrappers */</p>
<hr />
<div>= Foreign Data Wrappers =<br />
In 2003, a new specification called [[SQL/MED]] ("SQL Management of External Data") was added to the SQL standard. It is a standardized way of handling access to remote objects from SQL databases. In 2011, PostgreSQL 9.1 was released with read-only support of this standard, and in 2013 write support was added with PostgreSQL 9.3.<br />
<br />
There are now a variety of Foreign Data Wrappers (FDWs) available which enable the PostgreSQL server to connect to different remote data stores, ranging from other SQL databases to flat files. This page lists some of the wrappers currently available. Another [https://pgxn.org/tag/fdw/ fdw list] can be found at [https://pgxn.org/ the PGXN website].<br />
<br />
Please keep in mind that most of these wrappers are '''not officially supported by the PostgreSQL Global Development Group''' (PGDG) and that some of these projects are '''still in beta'''. Use them carefully!<br />
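Most wrappers listed below follow the same SQL-level pattern: install the extension, declare a server, map users, and create foreign tables. As a minimal sketch, here is that pattern with the built-in postgres_fdw; the server name, connection options, and table definition are illustrative only.<br />

```sql
-- Load the wrapper (postgres_fdw ships with PostgreSQL as a contrib extension)
CREATE EXTENSION postgres_fdw;

-- Declare the remote server; host and dbname here are placeholders
CREATE SERVER remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'sales', port '5432');

-- Map the local role to credentials on the remote side
CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_pg
    OPTIONS (user 'app', password 'secret');

-- Expose one remote table locally; it can then be queried like any table
CREATE FOREIGN TABLE remote_orders (
    id    integer,
    total numeric
) SERVER remote_pg OPTIONS (schema_name 'public', table_name 'orders');

SELECT count(*) FROM remote_orders;
```

Other wrappers differ mainly in the OPTIONS accepted by CREATE SERVER and CREATE FOREIGN TABLE; consult each project's README for specifics.<br />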
<br />
<br />
== Generic SQL Database Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|ODBC<br />
|Native<br />
|<br />
|[https://github.com/CartoDB/odbc_fdw github]<br />
|<br />
|<br />
|CartoDB took over active development of the ODBC FDW for PG 9.5+<br />
|-<br />
|JDBC<br />
|Native<br />
|<br />
|[https://github.com/atris/JDBC_FDW github]<br />
|<br />
|<br />
| Possibly unmaintained<br />
|-<br />
|JDBC2<br />
|Native<br />
|<br />
|[https://github.com/heimir-sverrisson/jdbc2_fdw github]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://www.sqlalchemy.org/ SQL_Alchemy]<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#sqlalchemy-foreign-data-wrapper documentation]<br />
| Can be used to access data stored in any database supported by the SQLAlchemy Python toolkit.<br />
|-<br />
| [https://gdal.org/drivers/vector/index.html GDAL/OGR]<br />
| Native<br />
| MIT<br />
| [https://github.com/pramsey/pgsql-ogr-fdw GitHub]<br />
| yum.postgresql.org, apt.postgresql.org, and part of PostGIS windows bundle (application stackbuilder)<br />
| <br />
| Can access many kinds of data sources (relational databases, spreadsheets, CSV files, web feature services, etc.). Uses the [https://gdal.org/ GDAL library], which supports hundreds of formats. Exposes vector data as PostGIS geometry columns if you have PostGIS installed. Works well with both spatial and non-spatial data.<br />
|-<br />
| VirtDB<br />
| Native<br />
| GPL<br />
| [https://github.com/virtdb/virtdb-fdw GitHub]<br />
|<br />
|<br />
| A generic FDW to access VirtDB data sources (SAP ERP, Oracle RDBMS)<br />
|}<br />
<br />
== Specific SQL Database Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|[https://www.postgresql.org/ PostgreSQL]<br />
|Native<br />
|PostgreSQL<br />
|[https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/postgres_fdw;hb=HEAD git.postgresql.org]<br />
|<br />
|[https://www.postgresql.org/docs/current/postgres-fdw.html documentation]<br />
|<br />
|-<br />
|[https://www.oracle.com/index.html Oracle]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/laurenz/oracle_fdw github]<br />
|[https://pgxn.org/dist/oracle_fdw/ PGXN]<br />
|[http://laurenz.github.io/oracle_fdw/ website]<br />
|<br />
|-<br />
|[https://www.mysql.com/ MySQL]<br />
|Native<br />
|<br />
|[https://github.com/EnterpriseDB/mysql_fdw github]<br />
|[https://pgxn.org/dist/mysql_fdw/ PGXN]<br />
|[https://www.enterprisedb.com/blog/new-oss-tool-links-postgres-and-mysql example]<br />
|FDW for MySQL<br />
|-<br />
|Informix<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/credativ/informix_fdw github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://www.firebirdsql.org/ Firebird]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/ibarwick/firebird_fdw/ github]<br />
|[https://pgxn.org/dist/firebird_fdw/ PGXN]<br />
|[https://github.com/ibarwick/firebird_fdw/blob/master/README.md README]<br />
|version [https://github.com/ibarwick/firebird_fdw/releases/tag/1.2.0 1.2.0] released (2020-10)<br />
|-<br />
|[https://www.sqlite.org/index.html SQLite]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/pgspider/sqlite_fdw github]<br />
|[https://pgxn.org/dist/sqlite_fdw PGXN]<br />
|[https://github.com/pgspider/sqlite_fdw/blob/master/README.md README]<br />
|An FDW for SQLite3 (write support and several pushdown optimizations)<br />
|-<br />
|Sybase / MS SQL Server<br />
|Native<br />
|<br />
|[https://github.com/tds-fdw/tds_fdw github]<br />
|[https://pgxn.org/dist/tds_fdw/ PGXN]<br />
|<br />
|An FDW for Sybase and Microsoft SQL Server<br />
|-<br />
|[https://www.monetdb.org/ MonetDB]<br />
|Native<br />
|<br />
|[https://github.com/snaga/monetdb_fdw github]<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== NoSQL Database Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|[https://cloud.google.com/bigtable/ BigTable or HBase]<br />
|[https://github.com/posix4e/rpgffi Native Rust Binding (RPGFFI)]<br />
|MIT<br />
|[https://github.com/durch/google-bigtable-postgres-fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[http://cassandra.apache.org/ Cassandra]<br />
|[https://multicorn.org/ Multicorn]<br />
|MIT<br />
|[https://github.com/rankactive/cassandra-fdw Github]<br />
|[https://rankactive.com/resources/postgresql-cassandra-fdw Rankactive]<br />
|<br />
|<br />
|-<br />
| Cassandra2<br />
| Native<br />
| MIT<br />
|[https://github.com/jaiminpan/cassandra2_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
| [http://cassandra.apache.org Cassandra]<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
|[https://github.com/wjch-krl/pgCassandra Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://clickhouse.yandex/ ClickHouse]<br />
|[https://multicorn.org/ Multicorn]<br />
|BSD<br />
|[https://github.com/Infinidat/infi.clickhouse_fdw/ Github]<br />
|<br />
|[https://github.com/Infinidat/infi.clickhouse_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
|[https://clickhouse.yandex/ ClickHouse]<br />
|Native<br />
|Apache<br />
|[https://github.com/adjust/clickhouse_fdw Github]<br />
|<br />
|[https://github.com/adjust/clickhouse_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
|[http://couchdb.apache.org/ CouchDB]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/ZhengYang/couchdb_fdw Github]<br />
|[https://pgxn.org/dist/couchdb_fdw/ PGXN]<br />
|<br />
| Original version<br />
|-<br />
|[http://couchdb.apache.org/ CouchDB]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/golgauth/couchdb_fdw Github]<br />
|<br />
|<br />
| golgauth version (9.1 - 9.2+ compatible)<br />
|-<br />
| [https://github.com/griddb/griddb_nosql GridDB]<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/pgspider/griddb_fdw Github]<br />
|<br />
| [https://github.com/pgspider/griddb_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
| InfluxDB<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/pgspider/influxdb_fdw Github]<br />
|<br />
| [https://github.com/pgspider/influxdb_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
| [https://kafka.apache.org/ Kafka]<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/adjust/kafka_fdw GitHub]<br />
|<br />
| [https://github.com/adjust/kafka_fdw/blob/master/README.md README]<br />
|<br />
|-<br />
|[https://fallabs.com/kyototycoon/ Kyoto Tycoon ]<br />
|Native<br />
|MIT<br />
|[https://github.com/cloudflare/kt_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://www.mongodb.com/ MongoDB]<br />
|Native<br />
|GPL3+<br />
|[https://github.com/EnterpriseDB/mongo_fdw Github]<br />
|[https://pgxn.org/dist/mongo_fdw/ PGXN]<br />
|[https://github.com/EnterpriseDB/mongo_fdw/blob/master/README.md README]<br />
|EDB version<br />
|-<br />
|[https://www.mongodb.com/ MongoDB]<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/dwa/mongoose_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://www.mongodb.com/ MongoDB]<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/asya999/yam_fdw Github]<br />
|<br />
|<br />
| Yet Another Postgres FDW for MongoDB<br />
|-<br />
|[https://neo4j.com/ Neo4j]<br />
|[https://multicorn.org/ Multicorn]<br />
|GPLv3<br />
|[https://github.com/sim51/neo4j-fdw Github]<br />
|<br />
|[https://github.com/sim51/neo4j-fdw/blob/master/README.adoc README]<br />
|FDW for Neo4j; also adds a Cypher function to PostgreSQL<br />
|-<br />
|[https://neo4j.com/ Neo4j]<br />
|Native<br />
|?<br />
|[https://github.com/nuko-yokohama/neo4j_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[http://quasar-analytics.org/ Quasar]<br />
|Native<br />
|Apache<br />
|[https://github.com/slamdata/quasar-fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://redis.io/ Redis]<br />
|Native<br />
|PostgreSQL<br />
|[https://github.com/pg-redis-fdw/redis_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://redis.io/ Redis]<br />
| Native<br />
| BSD<br />
| [https://github.com/nahanni/rw_redis_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://rethinkdb.com/ RethinkDB]<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/rotten/rethinkdb-multicorn-postgresql-fdw Github]<br />
|<br />
| [https://rethinkdb.com/blog/postgres-foreign-data-wrapper/ blog]<br />
|<br />
|-<br />
| [https://github.com/basho/riak Riak]<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/kiskovacs/riak-multicorn-pg-fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[http://whitedb.org/ WhiteDB]<br />
| Native<br />
| MIT<br />
| [https://github.com/Kentik/wdb_fdw Github]<br />
|<br />
|<br />
|<br />
|-<br />
|[https://github.com/facebook/rocksdb RocksDB]<br />
|Native<br />
|Apache<br />
|[https://github.com/vidardb/PostgresForeignDataWrapper Github]<br />
|<br />
|[https://github.com/vidardb/PostgresForeignDataWrapper/blob/master/README.md README]<br />
|FDW for RocksDB<br />
|}<br />
<br />
== File Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| CSV<br />
| Native<br />
| PostgreSQL<br />
|[https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/file_fdw;hb=HEAD git.postgresql.org]<br />
|<br />
| [https://www.postgresql.org/docs/current/file-fdw.html documentation]<br />
| Delivered as an official extension of PostgreSQL 9.1 / [https://www.depesz.com/2011/03/14/waiting-for-9-1-foreign-data-wrapper/ example] / [http://www.postgresonline.com/journal/archives/250-File-FDW-Family-Part-1-file_fdw.html Another example]<br />
|-<br />
| CSV<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#csv-foreign-data-wrapper documentation]<br />
| Each column defined in the table will be mapped, in order, against columns in the CSV file.<br />
|-<br />
| CSV / Text Array<br />
| Native<br />
|<br />
| [https://github.com/adunstan/file_text_array_fdw GitHub]<br />
|<br />
| [http://www.postgresonline.com/journal/archives/251-File-FDW-Family-Part-2-file_textarray_fdw-Foreign-Data-Wrapper.html How to]<br />
| Another CSV wrapper<br />
|-<br />
| CSV / Fixed-length<br />
| Native<br />
|<br />
| [https://github.com/adunstan/file_fixed_length_record_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| CSV / gzipped<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/dialogbox/py_csvgz_fdw GitHub]<br />
|<br />
|<br />
| PostgreSQL foreign data wrapper for gzipped CSV files<br />
|-<br />
| Compressed File<br />
| Native<br />
|<br />
| [https://github.com/gokhankici/compressedfile_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Document Collection<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/ZhengYang/dc_fdw GitHub]<br />
|<br />
| [https://github.com/ZhengYang/dc_fdw/wiki wiki]<br />
|<br />
|-<br />
| JSON<br />
| Native<br />
| GPL3<br />
| [https://github.com/nkhorman/json_fdw GitHub]<br />
|<br />
| [https://www.citusdata.com/blog/2013/05/30/run-sql-on-json-files-without-any-data-loads/ Example]<br />
|<br />
|-<br />
| Multi-File<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#filesystem-foreign-data-wrapper doc]<br />
| Access data stored in various files in a filesystem. The files are looked up based on a pattern, and parts of the file's path are mapped to various columns, as well as the file's content itself.<br />
|-<br />
| Multi CDR<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/theirix/multicdr_fdw GitHub]<br />
| [https://pgxn.org/dist/multicdr_fdw/ PGXN]<br />
|<br />
|<br />
|-<br />
| Parquet<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/adjust/parquet_fdw GitHub]<br />
|<br />
|<br />
| Foreign data wrapper for reading Parquet files using libarrow/libparquet<br />
|-<br />
| pg_dump<br />
| Native<br />
| New BSD<br />
| [https://github.com/MeetMe/dump_fdw GitHub]<br />
|<br />
|<br />
| Allows querying data directly from PostgreSQL custom-format files created by pg_dump<br />
|-<br />
| TAR Files<br />
| Native<br />
|<br />
| [https://github.com/beargiles/tarfile-fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| XML<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
|<br />
|<br />
|-<br />
| ZIP Files<br />
| Native<br />
|<br />
| [https://github.com/beargiles/zipfile-fdw GitHub]<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== Geo Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|[https://www.gdal.org GDAL/OGR]<br />
|Native<br />
|MIT<br />
|[https://github.com/pramsey/pgsql-ogr-fdw GitHub]<br />
|<br />
|<br />
|A wrapper for data sources with a [https://www.gdal.org GDAL/OGR] driver, including databases like Oracle, Informix, SQLite, SQL Server and ODBC, as well as file formats like Shape, FGDB, MapInfo, CSV, Excel, OpenOffice, OpenStreetMap PBF and XML, OGC web services, [https://www.gdal.org/ogr_formats.html and more]. Spatial columns are linked in as PostGIS geometry if PostGIS is installed.<br />
|-<br />
| Geocode / GeoJSON<br />
| [https://multicorn.org/ Multicorn]<br />
| GPL<br />
| [https://github.com/bosth/geofdw GitHub]<br />
|<br />
|<br />
| A collection of PostGIS-related foreign data wrappers<br />
|-<br />
| [https://wiki.openstreetmap.org/wiki/PBF_Format Open Street Map PBF]<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/vpikulik/postgres_osm_pbf_fdw GitHub]<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== LDAP Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| LDAP<br />
| Native<br />
|<br />
| [https://github.com/guedes/ldap_fdw GitHub]<br />
| [https://pgxn.org/dist/ldap_fdw/ PGXN]<br />
|<br />
| Allows querying an LDAP server and retrieving data from a pre-configured organizational unit<br />
|-<br />
| LDAP<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#idldap-foreign-data-wrapper documentation]<br />
|<br />
|}<br />
<br />
== Generic Web Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Git<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
|<br />
|<br />
|-<br />
| Git<br />
| Native<br />
| MIT<br />
| [https://github.com/franckverrot/git_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| ICAL<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/daamien/Multicorn/blob/master/python/multicorn/icalfdw.py GitHub]<br />
|<br />
| [https://wiki.postgresql.org/images/7/7e/Conferences-write_a_foreign_data_wrapper_in_15_minutes-presentation.pdf pdf]<br />
|<br />
|-<br />
| IMAP<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#idimap-foreign-data-wrapper documentation]<br />
|<br />
|-<br />
| RSS<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
| [https://multicorn.org/foreign-data-wrappers/#idrss-foreign-data-wrapper documentation]<br />
| This FDW can be used to access items from an RSS feed.<br />
|-<br />
| www<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/cyga/www_fdw/ GitHub]<br />
| [https://pgxn.org/dist/www_fdw/ PGXN]<br />
| [https://github.com/cyga/www_fdw/wiki wiki]<br />
| Allows querying different web services<br />
|-<br />
| pgsql-http<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/pramsey/pgsql-http GitHub]<br />
| Compile<br />
| <br />
| Allows querying any HTTP resource using the cURL library. By Paul Ramsey<br />
<br />
|}<br />
<br />
== Specific Web Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Database.com<br />
| [https://multicorn.org/ Multicorn]<br />
| BSD<br />
| [https://github.com/metadaddy/Database.com-FDW-for-PostgreSQL GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Dun & Bradstreet<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/dpmorel/dnb_fdw GitHub]<br />
|<br />
|<br />
| Access to the [https://fr.wikipedia.org/wiki/Data_Universal_Numbering_System Data Universal Numbering System] (DUNS)<br />
|-<br />
| DynamoDB<br />
| [https://multicorn.org/ Multicorn]<br />
| GPL<br />
| [https://github.com/avances123/dynamodb_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Facebook<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/mrwilson/fb-psql GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Fixer.io<br />
| based on www_fdw<br />
|<br />
| [https://github.com/hakanensari/frankfurter GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Google<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn GitHub]<br />
| [https://pgxn.org/dist/multicorn/ PGXN]<br />
|<br />
|<br />
|-<br />
| Heroku dataclips<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/petergeoghegan/dataclips_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Keycloak<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/schne324/foreign-keycloak-wrapper GitHub]<br />
| [https://pgxn.org/dist/foreign-keycloak-wrapper/ PGXN]<br />
| [https://github.com/schne324/foreign-keycloak-wrapper/blob/master/README.md README]<br />
| Direct database integration with the [https://www.keycloak.org Keycloak] open-source Identity/Access Management solution.<br />
|-<br />
| Mailchimp<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/daamien/mailchimp_fdw GitHub]<br />
|<br />
|<br />
| Beta<br />
|-<br />
| [http://parseplatform.org/ Parse]<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/spacialdb/parse_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| S3<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/umitanuki/s3_fdw GitHub]<br />
| [https://pgxn.org/dist/s3_fdw/ PGXN]<br />
|<br />
|<br />
|-<br />
| S3CSV<br />
| [https://multicorn.org/ Multicorn]<br />
| GPL 3<br />
| [https://github.com/eligoenergy/s3csv_fdw GitHub]<br />
|<br />
|<br />
| Meant to replace s3_fdw, which is not supported on PostgreSQL 9.2+<br />
|-<br />
| Telegram<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/guedes/telegram_fdw GitHub]<br />
|<br />
|<br />
| telegram_fdw is a Telegram bot implemented using the PostgreSQL foreign data wrapper interface.<br />
|-<br />
| Twitter<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/umitanuki/twitter_fdw GitHub]<br />
| [https://pgxn.org/dist/twitter_fdw/ PGXN]<br />
|<br />
| A wrapper that fetches messages from Twitter and returns them as a table<br />
|-<br />
| [https://www.treasuredata.com/ Treasure Data]<br />
| Native<br />
| Apache<br />
| [https://github.com/komamitsu/treasuredata_fdw GitHub]<br />
| [https://pgxn.org/dist/treasuredata_fdw PGXN]<br />
|<br />
| An FDW for Treasure Data that internally uses a Rust library<br />
|-<br />
| [https://www.treasuredata.com/ Treasure Data]<br />
| [https://multicorn.org/ Multicorn]<br />
| Apache<br />
| [https://github.com/komamitsu/td-fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Google Spreadsheets<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/lincolnturner/gspreadsheet_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Open Weather Map<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/ycku/owmfdw GitHub]<br />
|<br />
|<br />
| An FDW for Open Weather Map (single city)<br />
|}<br />
<br />
== Big Data Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|Elasticsearch<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/matthewfranglen/postgres-elasticsearch-fdw GitHub]<br />
|<br />
|<br />
| Supports up to PG 13, ES 7.<br />
|-<br />
| Google BigQuery<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
|[https://github.com/gabfl/bigquery_fdw GitHub]<br />
|<br />
|[https://github.com/gabfl/bigquery_fdw/blob/master/docs/README.md Documentation]<br />
|bigquery_fdw is a BigQuery FDW compatible with PostgreSQL >= 9.5<br />
|-<br />
| file_fdw-gds (Hadoop)<br />
| Native<br />
|<br />
| [https://github.com/wat4dog/pg-file-fdw-gds GitHub]<br />
|<br />
|<br />
| Hadoop file_fdw is a slightly modified version of PostgreSQL 9.3's file_fdw module.<br />
|-<br />
| Hadoop<br />
| Native<br />
| PostgreSQL<br />
| [https://www.openscg.com/bigsql/hadoopfdw/ Bitbucket]<br />
|<br />
|<br />
| Allows read and write access to HBase as well as to HDFS via Hive.<br />
|-<br />
| HDFS<br />
| Native<br />
| Apache<br />
| [https://github.com/EnterpriseDB/hdfs_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Hive<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/youngwookim/hive-fdw-for-postgresql GitHub]<br />
|<br />
|<br />
| Used to access Apache Hive tables.<br />
|-<br />
| Hive / ORC File<br />
| Native<br />
|<br />
| [https://github.com/gokhankici/orc_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| [http://impala.apache.org/ Impala]<br />
| Native<br />
| BSD<br />
| [https://github.com/lapug/impala_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://arrow.apache.org/ Apache Arrow]<br />
| Native<br />
| GPLv2<br />
| [https://github.com/heterodb/pg-strom GitHub]<br />
|<br />
|<br />
| Part of PG-Strom; acts as a columnar data source with support for SSD-to-GPU Direct SQL<br />
|}<br />
<br />
== Column-Oriented Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
|Columnar Store<br />
|Native<br />
|<br />
|[https://github.com/citusdata/cstore_fdw GitHub]<br />
|[https://www.citusdata.com/blog/2014/04/03/columnar-store-for-analytics/ example]<br />
|<br />
|A Columnar Store for PostgreSQL.<br />
|-<br />
|[https://www.monetdb.org/ MonetDB]<br />
|Native<br />
|<br />
|[https://github.com/snaga/monetdb_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
|GPU Memory Store<br />
|Native<br />
|GPL v2<br />
|[https://github.com/heterodb/pg-strom GitHub]<br />
|<br />
|<br />
|An FDW to GPU device memory; part of the PG-Strom feature set for PL/CUDA<br />
|}<br />
<br />
== Scientific Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Ambry<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/nmb10/ambryfdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| ROOT files<br />
| Native<br />
|<br />
| [https://github.com/miguel-branco/root_fdw GitHub]<br />
|<br />
|<br />
| https://root.cern.ch<br />
|-<br />
| VCF files (Genotype)<br />
| [https://multicorn.org/ Multicorn]<br />
|<br />
| [https://github.com/smithijk/vcf_fdw_postgresql GitHub]<br />
|<br />
|<br />
| https://en.wikipedia.org/wiki/Variant_Call_Format<br />
|}<br />
<br />
== Operating System Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Docker<br />
| [https://multicorn.org/ Multicorn]<br />
| Expat<br />
| [https://github.com/paultag/dockerfdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Log files<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/rdunklau/logfdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| OpenStack / Telemetry<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/CSCfi/telemetry-fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| OS Query<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/shish/pgosquery GitHub]<br />
|<br />
|<br />
| Like Facebook's OSQuery, but for Postgres<br />
|-<br />
| Passwd<br />
| Native<br />
| PostgreSQL<br />
| [https://github.com/beargiles/passwd-fdw GitHub]<br />
|<br />
|<br />
| Reads Linux/Unix password and group files.<br />
|-<br />
| Process<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/Kozea/Multicorn/blob/master/python/multicorn/processfdw.py GitHub]<br />
|<br />
|<br />
| A foreign data wrapper for querying system stats based on [https://libstatgrab.org/ statgrab]<br />
|-<br />
| Environment Variables<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/pgsql-tw/envfdw GitHub]<br />
|<br />
|<br />
| envFDW is a foreign data wrapper for processing environment variables<br />
|}<br />
<br />
== Exotic Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| faker_fdw<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/guedes/faker_fdw GitHub]<br />
|<br />
|<br />
| faker_fdw is a foreign data wrapper for PostgreSQL that generates fake data.<br />
|-<br />
| fdw_fdw<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/daamien/fdw_fdw GitHub]<br />
|<br />
|<br />
| The meta FDW! Reads this page and returns the list of all the FDWs<br />
|-<br />
| PPG<br />
| Native<br />
|<br />
| [https://github.com/scarbrofair/ppg_fdw GitHub]<br />
|<br />
|<br />
| A distributed parallel query engine based on FDWs and PostgreSQL hooks<br />
|-<br />
| Open Civic Data<br />
| [https://multicorn.org/ Multicorn]<br />
| Expat<br />
| [https://github.com/paultag/sunlightfdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| [https://www2.meethue.com/en-us/philips-hue-benefits Philips Hue Lighting Systems]<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/rotten/hue-multicorn-postgresql-fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Random Number<br />
| [https://multicorn.org/ Multicorn]<br />
| PostgreSQL<br />
| [https://github.com/yieldsfalsehood/rng_fdw GitHub]<br />
|<br />
|<br />
| A random number generator foreign data wrapper for PostgreSQL<br />
|-<br />
| Rotfang<br />
| Native<br />
| PostgreSQL<br />
| [https://bitbucket.org/adunstan/rotfang-fdw Bitbucket]<br />
|<br />
| [https://drive.google.com/file/d/0B3XVAFFWEFN0aURac0dzSFQyZzA/view slides]<br />
| Advanced random number generator<br />
|-<br />
| Template Tables<br />
| Native<br />
| BSD<br />
| [https://github.com/okbob/template_fdw GitHub]<br />
|<br />
|<br />
| A PostgreSQL foreign data wrapper for template tables; all DML and SELECT operations are disallowed<br />
|-<br />
| VMware vSphere<br />
| [https://multicorn.org/ Multicorn]<br />
| MIT<br />
| [https://github.com/ycku/vspherefdw GitHub]<br />
|<br />
|<br />
| A PostgreSQL FDW to query your VMware vSphere service<br />
|}<br />
<br />
== Example Wrappers ==<br />
<br />
{| align="center" border="1" cellspacing="0" {{Prettytable}}<br />
|-<br />
!{{Hl2}} |Data Source<br />
!{{Hl2}} |Type<br />
!{{Hl2}} |License<br />
!{{Hl2}} |Code<br />
!{{Hl2}} |Install<br />
!{{Hl2}} |Doc<br />
!{{Hl2}} |Notes<br />
|-<br />
| Dummy<br />
| Native<br />
| BSD<br />
| [https://github.com/slaught/dummy_fdw GitHub]<br />
|<br />
|<br />
| Readable null FDW for testing<br />
|-<br />
| Hello World<br />
|<br />
|<br />
| [https://github.com/wikrsh/hello_fdw GitHub]<br />
|<br />
|<br />
|<br />
|-<br />
| Black Hole<br />
|<br />
|<br />
| [https://bitbucket.org/adunstan/blackhole_fdw Bitbucket]<br />
|<br />
|<br />
| A skeleton FDW pre-populated with relevant excerpts from the documentation<br />
|}<br />
<br />
=Writing Foreign Database Wrappers=<br />
<br />
* [https://multicorn.org/ Multicorn] is an extension that allows you to write FDWs in Python<br />
* [https://github.com/franckverrot/holycorn Holycorn] is an extension that allows you to write FDWs in Ruby<br />
* [https://www.postgresql.org/docs/current/fdwhandler.html Documentation: Writing a Foreign Data Wrapper]<br />
* [https://bitbucket.org/adunstan/blackhole_fdw Black Hole FDW] - a skeleton FDW pre-populated with relevant excerpts from the documentation<br />
* [http://blog.guillaume.lelarge.info/index.php/post/2013/06/25/The-handler-and-the-validator-functions-of-a-FDW FDW tutorial by Guillaume Lelarge]<br />
* [https://github.com/nautilebleu/django-fdw django-fdw] A sample project to test django and Postgres Foreign Data Wrapper<br />
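The list above names Multicorn as the usual route for writing an FDW in Python. As a rough sketch of what the interface looks like (the stub base class below stands in for <code>multicorn.ForeignDataWrapper</code> so the snippet runs on its own; in a real installation you would import the class that Multicorn provides inside the PostgreSQL backend):<br />

```python
# Minimal sketch of a Multicorn-style foreign data wrapper. The stub base
# class stands in for multicorn.ForeignDataWrapper, which is only available
# inside a PostgreSQL backend with the Multicorn extension installed.
class ForeignDataWrapper:
    def __init__(self, options, columns):
        self.options = options  # OPTIONS from CREATE FOREIGN TABLE
        self.columns = columns  # column definitions of the foreign table


class ConstantFdw(ForeignDataWrapper):
    """Yields a fixed number of identical rows -- the smallest useful FDW."""

    def __init__(self, options, columns):
        super().__init__(options, columns)
        self.rows = int(options.get("rows", 3))

    def execute(self, quals, columns):
        # Multicorn calls execute() once per foreign scan and consumes an
        # iterable of dicts keyed by column name.
        for i in range(self.rows):
            yield {col: "value-%d" % i for col in columns}


if __name__ == "__main__":
    for row in ConstantFdw({"rows": "2"}, ["a", "b"]).execute([], ["a", "b"]):
        print(row)
```

Wiring such a class up is then a matter of <code>CREATE EXTENSION multicorn;</code>, a <code>CREATE SERVER ... FOREIGN DATA WRAPPER multicorn OPTIONS (wrapper '...')</code> pointing at the module path of the class (the class and option names here are hypothetical), and a matching <code>CREATE FOREIGN TABLE</code>; see the Multicorn documentation above for the exact interface.<br />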
<br />
<br />
[[Category:Foreign-data wrapper|!]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&diff=36421PostgreSQL 14 Open Items2021-09-04T14:30:01Z<p>Adunstan: /* resolved before 14beta4 (?) */</p>
<hr />
<div>== Open Issues ==<br />
<br />
'''NOTE''': Please place new open items at the end of the list.<br />
<br />
* [https://www.postgresql.org/message-id/20210817091420.u3vgqjh43lnpjntk%40alap3.anarazel.de pgstat_send_connstats() introduces unnecessary timestamp and UDP overhead]<br />
** Owner: Magnus Hagander<br />
<br />
* [https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org BUG #17158 (type RECORD is not always hashable)]<br />
** Owner: Peter Eisentraut<br />
<br />
== Decisions to Recheck Mid-Beta ==<br />
<br />
* [https://www.postgresql.org/message-id/4170264.1620321747%40sss.pgh.pa.us Should we undo libpq change that leaves PQerrorMessage() nonempty after successful connect?]<br />
** Owner: Tom Lane<br />
<br />
* [https://www.postgresql.org/message-id/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com Consider back-patching typmod casting behavior change to stable branches]<br />
** Fixed in HEAD/v14 at: {{PgCommitURL|5c056b0c2519e602c2e98bace5b16d2ecde6454b}}<br />
** Owner: Tom Lane<br />
<br />
== Older bugs affecting stable branches ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com REINDEX on a system catalog can leave index with two index tuples whose heap TIDs match]<br />
** In other words, there is a rare case where the HOT invariant is violated. Same HOT chain is indexed twice due to confusion about which precise heap tuple should be indexed.<br />
** Unclear what the user impact is.<br />
** Affects all stable branches.<br />
<br />
* [https://www.postgresql.org/message-id/20201016135230.GA23633%40alvherre.pgsql CREATE TABLE .. PARTITION OF fails to preserve tgenabled for inherited row triggers]<br />
** tgenabled lost on CREATE TABLE .. PARTITION OF, and on pg_dump, and comments on child triggers lost during pg_dump;<br />
** Those are resolved by f0e21f2f6 and df80fa2ee, but there's another issue with psql \d of non-inherited triggers<br />
<br />
* [https://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com memory leak with JIT inlining]<br />
** [https://www.postgresql.org/message-id/flat/20210331040751.GU4431%40telsasoft.com#cc34872765add8e483e05009212d9d39 Another report of (same?) issue and reproducer]<br />
** [https://www.postgresql.org/message-id/flat/9f73e655-14b8-feaf-bd66-c0f506224b9e%40stephans-server.de Another report]<br />
** [https://www.postgresql.org/message-id/flat/16707-f5df308978a55bf8%40postgresql.org Another report]<br />
<br />
* [https://www.postgresql.org/message-id/CAEudQAoR5e7=uMZ0otzuCVb25zTC8QQBe+2Dt1JRsa3u+XuwJg@mail.gmail.com could not rename temporary statistics file on Windows]<br />
** See {{PgCommitURL|909b449e00fc2f71e1a38569bbddbb6457d28485}} that has fixed a similar symptom for WAL segments. Most reporters of the WAL segment problem complained about this renaming issue as well.<br />
<br />
* [https://www.postgresql.org/message-id/20210422203603.fdnh3fu2mmfp2iov@alap3.anarazel.de Incorrect snapshot calculation when 2PC is in use]<br />
** Seems to be an old problem.<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633%403c-app-gmx-bap78 hashagg broken by failing to spill grouping columns]<br />
** Fixed at: {{PgCommitURL|0ff865fbe50e82f17df8a9280fa01faf270b7f3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com PITR promote bug: Checkpointer writes to older timeline]<br />
** Fixed at: {{PgCommitURL|595b9cba2ab0cdd057e02d3c23f34a8bcfd90a2d}}<br />
<br />
* [https://www.postgresql.org/message-id/YFBcRbnBiPdGZvfW%40paquier.xyz Permission failures with WAL files in 13~ on Windows]<br />
** Fixed at: {{PgCommitURL|78c24e97dd189f62187a99ef84016d0eb35a7978}}<br />
<br />
* [https://www.postgresql.org/message-id/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com CLOBBER_CACHE Server crashed with segfault 11 while executing clusterdb]<br />
** Fixed at: {{PgCommitURL|9d523119fd38fd205cb9c8ea8e7cceeb54355818}}<br />
<br />
* [https://www.postgresql.org/message-id/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com Reference leak with tupledescs in plpgsql simple expressions]<br />
** Fixed at: {{PgCommitURL|c2db458c1036efae503ce5e451f8369e64c99541}}<br />
<br />
* [https://www.postgresql.org/message-id/a3be61d9-f44b-7fce-3dc8-d700fdfb6f48%402ndquadrant.com extract(julian) is undocumented and gives wrong result]<br />
** Fixed by documentation change at: {{PgCommitURL|79a5928ebcb726b7061bf265b5c6990e835e8c4f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com lwlock dtrace probes do unnecessary work if dtrace is compiled in but disabled]<br />
** Fixed at: {{PgCommitURL|b94409a02f6122d77b5154e481c0819fed6b4c95}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/15990-eee2ac466b11293d%40postgresql.org Detoast failures after commit/rollback in plpgsql]<br />
** Fixed at: {{PgCommitURL|f21fadafaf0fb5ea4c9622d915972651273d62ce}} and {{PgCommitURL|84f5c2908dad81e8622b0406beea580e40bb03ac}}<br />
<br />
* [https://www.postgresql.org/message-id/3382681.1621381328%40sss.pgh.pa.us Subscription tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Fixed at: {{PgCommitURL|b39630fd41f25b414d0ea9b30804f4105f2a0aff}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/534fca83789c4a378c7de379e9067d4f%40politie.nl XX000: unknown type of jsonb container.]<br />
** Fixed at: {{PgCommitURL|6ee41a301e70fc8e4ad383bad22d695f66ccb0ac}}<br />
<br />
* [https://www.postgresql.org/message-id/1884374.1617898865%40sss.pgh.pa.us Buildfarm does not test pg_stat_statements]<br />
** Fixed by buildfarm client change<br />
<br />
* [https://www.postgresql.org/message-id/17064-bb0d7904ef72add3%40postgresql.org Parallel VACUUM operations cause the error "global/pg_filenode.map contains incorrect checksum"]<br />
** Fixed at: {{PgCommitURL|b6d8d207}} and {{PgCommitURL|9b8ed0f52}}<br />
<br />
* [https://www.postgresql.org/message-id/378885e4-f85f-fc28-6c91-c4d1c080bf26%40amazon.com Assertion failure in HEAD and 13 after calling COMMIT in a stored proc]<br />
** Fixed at: {{PgCommitURL|d102aafb6259a6a412803d4b1d8c4f00aa17f67e}}<br />
<br />
* [https://www.postgresql.org/message-id/4aa370cb91ecf2f9885d98b80ad1109c%40postgrespro.ru Add PortalDrop in exec_execute_message]<br />
** Fixed at: {{PgCommitURL|bb4aed46a}} and {{PgCommitURL|4efcf47053}}<br />
<br />
* [https://www.postgresql.org/message-id/2591376.1621196582%40sss.pgh.pa.us snapshot-scalability logic fails after pg_upgrade, due to pg_resetwal issue]<br />
** Now seems likely that this is an old issue affecting every release, and that the snapshot-scalability work is not at fault<br />
** [https://commitfest.postgresql.org/33/3105/ Pending fix for pg_upgrade + pg_resetwal]<br />
** Fixed at: {{PgCommitURL|74cf7d46a91d601e0f8d957a7edbaeeb7df83efc}}<br />
<br />
* [https://www.postgresql.org/message-id/b5146fb1-ad9e-7d6e-f980-98ed68744a7c%40amazon.com Logical Decoding of relation rewrite with toast does not reset toast_hash]<br />
** Problem exists since v11.<br />
** Fixed at: {{PgCommitURL|29b5905470285bf730f6fe7cc5ddb3513d0e6945}}<br />
<br />
=== Nothing to do ===<br />
<br />
== Non-bugs ==<br />
<br />
* [https://www.postgresql.org/message-id/20210216064214.GI28165%40telsasoft.com progress reporting for partitioned REINDEX]<br />
* [https://www.postgresql.org/message-id/YFnWBYinNf1s0Y6v@msg.df7cb.de pg_regress and tablespace removal]<br />
** [https://www.postgresql.org/message-id/YG/tf6HTZFj4hWlb@paquier.xyz Some patch]<br />
<br />
== Resolved Issues ==<br />
<br />
=== resolved before 14beta4 (?) ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAApHDvpbusiKMV%3DvZypdpHHu81u0zMVAp6hu1vg-%3DgQLBBKUPA%40mail.gmail.com#8386c8d37ec1f9f9386cbf528bd9af5c default setting of enable_memoize]<br />
** No change required. (Discussed on Releases list)<br />
** Owner: David Rowley<br />
<br />
* [https://www.postgresql.org/message-id/58cbfa74-9356-778b-3e10-94e3075c5807@enterprisedb.com extended statistics: reject single-var expressions]<br />
** Fixed at: {{PgCommitURL|13380e1476490932c7b15530ead1f649a16e1125}} - Extra parenthesis<br />
** Fixed at: {{PgCommitURL|537ca68db}} - reject single-var expressions<br />
** Owner: Tomas Vondra<br />
<br />
* [https://www.postgresql.org/message-id/20210820125513.GQ10479@telsasoft.com pg_stats includes partitioned tables, but always shows analyze_count=0]<br />
** Fixed at: {{PgCommitURL|e1efc5b465c844969a0ed0d07e1364f3ce424d8c}}<br />
<br />
* [https://www.postgresql.org/message-id/20210730010355.6yodvn2ag3arfihi@alap3.anarazel.de Issues around autovacuum for partitioned tables]<br />
** Feature reverted: {{PgCommitURL|b3d24cc0f0aa882ceec0a74a99f94166c6fc3247}}<br />
<br />
* [https://www.postgresql.org/message-id/TYAPR01MB5866BA57688DF2770E2F95C6F5069@TYAPR01MB5866.jpnprd01.prod.outlook.com DECLARE STATEMENT and DEALLOCATE/DESCRIBE]<br />
** Fixed at: {{PgCommitURL|399edafa2aba562a8013fbe039f3cbf3a41a436e}}<br />
** Fixed at: {{PgCommitURL|f576de1db1eeca63180b1ffa4b42b1e360f88577}}<br />
<br />
* [https://www.postgresql.org/message-id/1629039545467.80333%40nidsa.net Performance regression with hex refactoring code]<br />
** Fixed at: {{PgCommitURL|2576dcfb76aa71e4222bac5a3a43f71875bfa9e8}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/20210807234407.icku2rnqyapsb3io%40alap3.anarazel.de elog.c query_id support vs shutdown]<br />
** Fixed at: {{PgCommitURL|bed5eac2d50eb86a254861dcdea7b064d10c72cf}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com wrong refresh when ALTER SUBSCRIPTION ADD/DROP PUBLICATION]<br />
** Fixed at: {{PgCommitURL|1046a69b3087a6417e85cae9b6bc76caa22f913b}}<br />
<br />
=== resolved before 14beta3 ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/20210530172418.GO2082%40telsasoft.com#d6544e507234cc76b9bc0a50026cd74b \dX doesn't check pg_statistics_obj_is_visible()]<br />
** Fixed at: {{PgCommitURL|f68b609230689f9886a46e5d9ab8d6cdd947e0dc}}<br />
<br />
* [https://www.postgresql.org/message-id/e1b4f05d-54ec-4f51-832b-c18cf5a161c0@www.fastmail.com remove_temp_files_after_crash should be a DEVELOPER GUC]<br />
** Fixed at: {{PgCommitURL|797b0fc0b078c7b4c46ef9f2d9e47aa2d98c6c63}}<br />
<br />
* [https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com recovery_init_sync_method should be PGC_SIGHUP?]<br />
** Fixed at: {{PgCommitURL|34a8b64b4e5f0cd818e5cc7f98846de57938ea57}}<br />
<br />
* [https://www.postgresql.org/message-id/YNZ2mnsbDVJQrA/a@paquier.xyz OOM on palloc() when parsing service file would cause libpq to exit() without reporting a failure]<br />
** Fixed at: {{PgCommitURL|8ec00dc5cd70e0e579e9fbf8661bc46f5ccd8078}}<br />
** Additional defenses added at: {{PgCommitURL|dc227eb82ea8bf6919cd81a182a084589ddce7f3}}<br />
<br />
* [https://www.postgresql.org/message-id/17076-89a16ae835d329b9%40postgresql.org incorrect code for reporting the hash partition associated with a particular modulus]<br />
** Fixed at: {{PgCommitURL|dd2364ced98553e0217bfe8f621cd4b0970db74a}}<br />
<br />
* [https://www.postgresql.org/message-id/c5269c65-f967-77c5-ff7c-15e621c47f6a%40gmail.com Bug in multirange selectivity estimation]<br />
** Fixed at: {{PgCommitURL|322e82b77ef4acb9697c6e4259292f5671cb85bb}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/704fb6fb99ec9864a4dbeda2478337d2%40postgrespro.ru autoanalyze of partitioned table causes it to lose its relhasindex]<br />
** Fixed at: {{PgCommitURL|d700518d744e53994fdded14b23ebc15b031b0dd}}<br />
<br />
* [https://www.postgresql.org/message-id/CAF7igB1r6wRfSCEAB-iZBKxkowWY6+dFF2jObSdd9+iVK+vHJg@mail.gmail.com Incorrect time maths in pgbench] and [https://www.postgresql.org/message-id/CAHLJuCW_8Vpcr0=t6O_gozrg3wXXWXZXDioYJd3NhvKriqgpfQ@mail.gmail.com second thread]<br />
** Fixed at: {{PgCommitURL|0e39a608ed5545cc6b9d538ac937c3c1ee8cdc36}}<br />
<br />
* [https://www.postgresql.org/message-id/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org unnesting multirange data types]<br />
** Fixed at: {{PgCommitURL|244ad5415557812a6ac4dc5d6e2ae908361d82c3}}<br />
<br />
* [https://www.postgresql.org/message-id/17066-16a37f6223a8470b@postgresql.org Cache lookup failed when null (unknown) is passed as anycompatiblemultirange]<br />
** Fixed at: {{PgCommitURL|336ea6e6ff1109e7b83370565e3cb211804fda0c}}<br />
<br />
* [https://www.postgresql.org/message-id/530153.1627425648%40sss.pgh.pa.us Degraded out-of-memory handling in libpq]<br />
** Fixed at: {{PgCommitURL|514b4c11d24701d2cc90ad75ed787bf1380af673}}<br />
<br />
* [https://www.postgresql.org/message-id/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com Crash in regexp with {0}]<br />
** Fixed at: {{PgCommitURL|cc1868799c8311ed1cc3674df2c5e1374c914deb}}<br />
<br />
=== resolved before 14beta2 ===<br />
<br />
* [https://www.postgresql.org/message-id/20210609184506.rqm5rikoikm47csf%40alap3.anarazel.de Snapshot scalability OldestXmin issue (can cause infinite loop during system catalog VACUUM)]<br />
** Fixed at: {{PgCommitURL|5a1e1d83022b976ebdec5cfa8f255c4278b75b8e}}<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkCYR0U7zXqXo0CgFaFwUDz1WbKq8ngjzKi4+AQ5f-mYQ@mail.gmail.com Generalize INDEX_CLEANUP to allow the user to disable the optimization that has VACUUM skip indexes in marginal cases with very few LP_DEAD items/deletable TIDs.]<br />
** Fixed at: {{PgCommitURL|3499df0dee8c4ea51d264a674df5b5e31991319a}}<br />
<br />
* [https://www.postgresql.org/message-id/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de Questions about pg_stat_wal] also [https://www.postgresql.org/message-id/E3774ACD-7894-451E-9F13-71E097D10595@oss.nttdata.com]<br />
** Fixed at: {{PgCommitURL|d8735b8b4651f5ed50afc472e236a8e6120f07f2}}<br />
** Fixed at: {{PgCommitURL|d780d7c0882fe9a385102b292907baaceb505ed0}}<br />
<br />
* [https://www.postgresql.org/message-id/YKMO%2B2gD8R8I2O5b%40paquier.xyz pg_dumpall misses --no-toast-compression]<br />
** Fixed at: {{PgCommitURL|694da1983e9569b2a2f96cd786ead6b8dba31f1d}} <br />
<br />
* [https://www.postgresql.org/message-id/YKQnUoYV63GRJBDD%40msg.df7cb.de portability issue with pgbench's permute() function]<br />
** Fixed at: {{PgCommitURL|0f516d039d8023163e82fa51104052306068dd69}}<br />
<br />
* [https://www.postgresql.org/message-id/35457b09-36f8-add3-1d07-6034fa585ca8@oss.nttdata.com compute_query_id and pg_stat_statements]<br />
** Fixed at {{PgCommitURL|cafde58b33}} and {{PgCommitURL|354f32d01d}}<br />
<br />
* [https://www.postgresql.org/message-id/CAOxo6X+dy-V58iEPFgst8ahPKEU+38NZzUuc+a7wDBZd4TrHMQ@mail.gmail.com Result Cache works incorrectly with unique joins]<br />
** Fixed at {{PgCommitURL|9e215378d7fbb7d4615be917917c52f246cc6c61}}<br />
<br />
* [https://www.postgresql.org/message-id/20210517204803.iyk5wwvwgtjcmc5w%40alap3.anarazel.de Move pg_attribute.attcompression to earlier in struct for reduced size?]<br />
** Fixed at {{PgCommitURL|f5024d8d7b04de2f5f4742ab433cc38160354861}}<br />
<br />
* [https://www.postgresql.org/message-id/17030-5844aecae42fe223@postgresql.org EXPLAIN can suffer from cannot decompile join alias var in plan tree]<br />
** Fixed at {{PgCommitURL|cba5c70b956810c61b3778f7041f92fbb8065acb}}<br />
<br />
* [https://www.postgresql.org/message-id/20210521211929.pcehg6f23icwstdb@alap3.anarazel.de Memory leak when rewriting tuples with recompressed toast values]<br />
** Fixed at {{PgCommitURL|fb0f5f0172edf9f63c8f70ea9c1ec043b61c770e}}<br />
<br />
* [https://www.postgresql.org/message-id/626613.1621787110%40sss.pgh.pa.us Redefine pg_attribute.attcompression]<br />
** Fixed at {{PgCommitURL|e6241d8e030fbd2746b3ea3f44e728224298f35b}}<br />
<br />
* [https://www.postgresql.org/message-id/1665197.1622065382%40sss.pgh.pa.us Undo bump of FirstBootstrapObjectId]<br />
** Fixed at {{PgCommitURL|a4390abecf0f5152cff864e82b67e5f6c8489698}}<br />
<br />
* [https://www.postgresql.org/message-id/CABOikdN-_858zojYN-2tNcHiVTw-nhxPwoQS4quExeweQfG1Ug%40mail.gmail.com Assertion failure while streaming toasted data]<br />
** Fixed at {{PgCommitURL|6f4bdf81529fdaf6744875b0be99ecb9bfb3b7e0}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/7817fb9ebd6661cdf9b67dec6e129a78%40postgrespro.ru Join pushdown issue in postgres_fdw updates]<br />
** Fixed at {{PgCommitURL|f61db909dfb94f3411f8719916601a11a905b95e}}<br />
<br />
* [https://www.postgresql.org/message-id/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com Performance degradation of REFRESH MATERIALIZED VIEW]<br />
** Fixed at {{PgCommitURL|8e03eb92e9ad54e2f1ed8b5a73617601f6262f81}}<br />
<br />
* [https://www.postgresql.org/message-id/CAPmGK16Q4B2_KY%2BJH7rb7wQbw54AUprp7TMekGTd2T1B62yysQ%40mail.gmail.com Rescan of async Appends is broken when do_exec_prune=false]<br />
** Fixed at {{PgCommitURL|f3baaf28a6da588987b94a05a725894805c3eae9}}<br />
<br />
* [https://www.postgresql.org/message-id/504c276ab6eee000bb23d571ea9b0ced4250774e.camel%40vmware.com libpq dumps core while making an SSL connection to a server specified by hostaddr]<br />
** Fixed at {{PgCommitURL|37e1cce4ddf0be362e3093cee55493aee41bc423}}<br />
<br />
* [https://www.postgresql.org/message-id/B4A3AF82-79ED-4F4C-A4E5-CD2622098972%40enterprisedb.com logical replication of truncate command with trigger causes Assert]<br />
** Fixed at {{PgCommitURL|3a09d75b4f6cabc8331e228b6988dbfcd9afdfbe}}<br />
<br />
* [https://www.postgresql.org/message-id/3742981.1621533210%40sss.pgh.pa.us Reconsider catalog representation and uniqueness rules for procedures with output-only arguments]<br />
** Fixed at {{PgCommitURL|e56bce5d43789cce95d099554ae9593ada92b3b7}}<br />
<br />
* [https://www.postgresql.org/message-id/20210527003144.xxqppojoiwurc2iz@alap3.anarazel.de Performance regression of VACUUM FULL with the addition of recompression path in tuple rewrite]<br />
** Fixed at {{PgCommitURL|dbab0c07e5ba1f19a991da2d72972a8fe9a41bda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210525161458.GZ3676%40telsasoft.com Document incompatibility with aggregates using system functions using anycompatiblearray]<br />
** Fixed at {{PgCommitURL|25dfb5a831a1b8a83a8a68453b4bbe38a5ef737e}}<br />
<br />
=== resolved before 14beta1 ===<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611340CBD300A7C4FD6B6101FB5F9@OS0PR01MB6113.jpnprd01.prod.outlook.com FailedAssertion reported in lazy_scan_heap() when running logical replication]<br />
** Fixed at: {{PgCommitURL|c9787385db47ba423d845b34d58e158551c6335d}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gCXDSmFs2c%3DR%2BVGgn7FiYcLCsEFEuDNNLGfoha%3DpBE_g%40mail.gmail.com Assertion fail with window function and nested partitioned tables]<br />
** [https://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu Older report]<br />
** Fixed at: {{PgCommitURL|fb2d645dd53ff571572d830e830fc8c368063802}}<br />
<br />
* [https://www.postgresql.org/message-id/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com Atomic initialization of waitStart done at backend startup]<br />
** Fixed at: {{PgCommitURL|f05ed5a5cfa55878baa77a1e39d68cb09793b477}}<br />
<br />
* [https://www.postgresql.org/message-id/20210117215940.GE8560%40telsasoft.com pg_collation_actual_version() ERROR: cache lookup failed for collation 123]<br />
** Fixed at: {{PgCommitURL|0fb0a0503bfc125764c8dba4f515058145dc7f8b}}<br />
<br />
* [https://www.postgresql.org/message-id/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com cryptohash: missing locking functions for OpenSSL <= 1.0.2?]<br />
** Fixed at: {{PgCommitURL|2c0cefcd18161549e9e8b103f46c0f65fca84d99}}<br />
<br />
* [https://www.postgresql.org/message-id/CAHut%2BPuPGGASnh2Dy37VYODKULVQo-5oE%3DShc6gwtRizDt%3D%3DcA%40mail.gmail.com pg_subscription - substream column?]<br />
** Fixed at: {{PgCommitURL|7efeb214ad832fa96ea950d0906b1d2b96316d15}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk%3DS3V8DOKT%3DLZC1L36Q%40mail.gmail.com TOAST compression method of index columns]<br />
** Fixed at: {{PgCommitURL|5db1fd7823a1a12e2bdad98abc8e102fd71ffbda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to Crash with encoding conversion functions]<br />
** Fixed at: {{PgCommitURL|c4c393b3ec83ceb4b4d7f37cdd5302126377d069}}<br />
<br />
* [https://www.postgresql.org/message-id/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ@mail.gmail.com Crash with extended stats on expressions]<br />
** Fixed at: {{PgCommitURL|518442c7f334f3b05ea28b7ef50f1b551cfcc23e}}<br />
<br />
* [https://postgr.es/m/CA+TgmobwnGawnxufvqLCrcTy4HRhMepFiXQLY8YpVD+PTuwagA@mail.gmail.com Update TOAST documentation for LZ4 compression]<br />
** Fixed at: {{PgCommitURL|e8c435a824e123f43067ce6f69d66f14cfb8815e}}<br />
<br />
* [https://www.postgresql.org/message-id/20210404220802.GA728316@rfd.leadboat.com Behavior of pg_dump --extension with schemas]<br />
** Fixed at: {{PgCommitURL|344487e2db03f3cec13685a839dbc8a0e2a36750}}<br />
<br />
* [https://www.postgresql.org/message-id/OSZPR01MB631017521EE6887ADC9492E8FD759@OSZPR01MB6310.jpnprd01.prod.outlook.com psql query cancellation is broken], as are [https://www.postgresql.org/message-id/2671235.1618154047%40sss.pgh.pa.us autocommit], and [https://www.postgresql.org/message-id/YHTYOFBHDuGaz2gy@paquier.xyz error reporting]<br />
** Reverted by: {{PgCommitURL|fae65629cec824738ee11bf60f757239906d64fa}}<br />
<br />
* On Windows, collation version lookup (sometimes?) fails for names like "English_United States.1252", but works for names like "en-US".<br />
** Fixed at: {{PgCommitURL|9f12a3b95dd56c897f1aa3d756d8fb419e84a187}} -- this commit tolerates failure so at least we don't raise an error, but unfortunately we have no version information<br />
** Fixed at: {{PgCommitURL|1bf946bd43e545b86e567588b791311fe4e36a8c}} -- this commit documents the limitation<br />
<br />
* [https://www.postgresql.org/message-id/1820954.1617860500@sss.pgh.pa.us Handling of querystring inconsistent for parallel execution of SQL function bodies]<br />
** Fixed at: {{PgCommitURL|1111b2668d89bfcb6f502789158b1233ab4217a6}}<br />
<br />
* [https://www.postgresql.org/message-id/YHPkU8hFi4no4NSw@paquier.xyz Problems around compute_query_id]<br />
** Fixed at: {{PgCommitURL|db01f797dd48f826c62e1b8eea70f11fe7ff3efc}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com Table reference leak in logical replication]<br />
** Fixed at: {{PgCommitURL|f3b141c482552a57866c72919007d6481cd59ee3}}<br />
<br />
* [https://www.postgresql.org/message-id/20210410184226.GY6592%40telsasoft.com DETACH PARTITION CONCURRENTLY: Avoid adding redundant constraint]<br />
** Fixed at: {{PgCommitURL|7b357cc6ae}}<br />
<br />
* [https://www.postgresql.org/message-id/CC3F964B-8FA1-4A23-9D3E-6EA00BBFF0EE@enterprisedb.com Issues in PostgresNode and older major versions with multi-install]<br />
** Fixed at {{PgCommitURL|95c3a1956ec9eac686c1b69b033dd79211b72343}} and {{PgCommitURL|4c4eaf3d19201c5e2d9efebc590903dfaba0d3e5}}<br />
<br />
* [https://www.postgresql.org/message-id/3269784.1617215412%40sss.pgh.pa.us DETACH PARTITION CONCURRENTLY tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Fixed at: {{PgCommitURL|8aba9322511f}}<br />
<br />
* [https://www.postgresql.org/message-id/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com TRUNCATE on foreign tables and ONLY clause]<br />
** Fixed at: {{PgCommitURL|8e9ea08bae93a754d5075b7bc9c0b2bc71958bfd}}<br />
<br />
* [https://www.postgresql.org/message-id/CAMkU=1zKGWEJdBbYKw7Tn7cJmYR_UjgdcXTPDqJj=dNwCETBCQ@mail.gmail.com handling of character continuation in psql broken by sql body patch]<br />
** Fixed at: {{PgCommitURL|d9a9f4b4b92ad39e3c4e6600dc61d5603ddd6e24}}<br />
<br />
* [https://www.postgresql.org/message-id/20210505210947.GA27406%40telsasoft.com cache lookup failed for statistics object 123]<br />
** Fixed at: {{PgCommitURL|8d4b311d2494ca592e30aed03b29854d864eb846}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAFj8pRCL_Rjw-MCR6J7VX9OF7MR6PA5K8qUbrMvprW_e-aHkfQ%40mail.gmail.com batch fdw insert bug]<br />
** Fixed at: {{PgCommitURL|c6a01d924939306e95c8deafd09352be6a955648}}<br />
<br />
* [https://www.postgresql.org/message-id/3564817.1618420687@sss.pgh.pa.us Bogus collation version recording in recordMultipleDependencies]<br />
** Fixed at: {{PgCommitURL|ec48314708262d8ea6cdcb83f803fc83dd89e721}} (Feature revert)<br />
<br />
* [https://www.postgresql.org/message-id/773932.1619022622@sss.pgh.pa.us Corruption issues with WAL prefetch?]<br />
** Fixed at: {{PgCommitURL|c2dc19342e05e081dc13b296787baa38352681ef}} (Feature revert)<br />
<br />
* [https://www.postgresql.org/message-id/YIetoZGq31L84v5d@paquier.xyz Small issues with CREATE TABLE COMPRESSION]<br />
** MSVC scripts don't support builds with lz4: fixed at {{PgCommitURL|9ca40dcd4d0cad43d95a9a253fafaa9a9ba7de24}}<br />
** pg_dump includes no tests with compression methods of attributes and --no-toast-compression: fixed at {{PgCommitURL|63db0ac3f9e6bae313da67f640c95c0045b7f0ee}}<br />
** Documentation missing for --with-lz4 in installation instructions: fixed at {{PgCommitURL|02a93e7ef9612788081ef07ea1bbd0a8cc99ae99}}<br />
<br />
* [https://www.postgresql.org/message-id/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de Replication slot stats misgivings]<br />
** Fixed at: {{PgCommitURL|3fa17d37716f978f80dfcdab4e7c73f3a24e7a48}}<br />
** Fixed at: {{PgCommitURL|592f00f8dec68038301467a904ac514eddabf6cd}}<br />
** Fixed at: {{PgCommitURL|cca57c1d9bf7eeba5b81115e0b82651cf3d8e4ea}}<br />
** Fixed at: {{PgCommitURL|f5fc2f5b23d1b1dff60f8ca5dc211161df47eda4}}<br />
<br />
* [https://www.postgresql.org/message-id/CAPmGK158e9sJOfuWxfn%2B0ynrspXQU3JhNjSCbaoeSzMvnga%2Bbw%40mail.gmail.com FDW: crash with DDL and async/batch option]<br />
** Fixed at: {{PgCommitURL|a784859f4480ceaa05a00ca35311071ca33483d1}}<br />
<br />
* [https://www.postgresql.org/message-id/20210409213155.GA23912%40alvherre.pgsql should autoanalyze for partitioned tables handle ATTACH/DETACH/DROP?]<br />
** Fixed at: {{PgCommitURL|1b5617eb844cd2470a334c1d2eec66cf9b39c41a}} (docs)<br />
<br />
* [https://www.postgresql.org/message-id/CALT9ZEE7OiszofHELnjPhX%3DhV92PiKn8haSZ4_FWBAw4diaRdQ%40mail.gmail.com OOM in spgist insert]<br />
** Fixed at: {{PgCommitURL|c3c35a733c77b298d3cf7e7de2eeb4aea540a631}}<br />
<br />
== Won't Fix ==<br />
<br />
* [https://www.postgresql.org/message-id/92408.1618772924%40sss.pgh.pa.us SQL-standard function body: pg_dump should handle circular dependencies]<br />
** Owner: Peter Eisentraut<br />
* [https://www.postgresql.org/message-id/17061-dd7f4825b7da3a9d%40postgresql.org SEARCH BREADTH FIRST produces a composite column whose fields can't be accessed]<br />
** Owner: Peter Eisentraut<br />
<br />
== Important Dates ==<br />
<br />
Current schedule:<br />
<br />
* Feature Freeze: April 7, 2021 ('''Last Day to Commit Features''')<br />
* Beta 1: May 20, 2021<br />
* Beta 2: June 24, 2021<br />
* Beta 3: August 12, 2021<br />
* RC 1: <br />
* GA: <br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&diff=36420PostgreSQL 14 Open Items2021-09-04T14:24:53Z<p>Adunstan: resolved issue re default setting of enable_memoize</p>
<hr />
<div>== Open Issues ==<br />
<br />
'''NOTE''': Please place new open items at the end of the list.<br />
<br />
* [https://www.postgresql.org/message-id/20210817091420.u3vgqjh43lnpjntk%40alap3.anarazel.de pgstat_send_connstats() introduces unnecessary timestamp and UDP overhead]<br />
** Owner: Magnus Hagander<br />
<br />
* [https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org BUG #17158 (type RECORD is not always hashable)]<br />
** Owner: Peter Eisentraut<br />
<br />
== Decisions to Recheck Mid-Beta ==<br />
<br />
* [https://www.postgresql.org/message-id/4170264.1620321747%40sss.pgh.pa.us Should we undo libpq change that leaves PQerrorMessage() nonempty after successful connect?]<br />
** Owner: Tom Lane<br />
<br />
* [https://www.postgresql.org/message-id/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com Consider back-patching typmod casting behavior change to stable branches]<br />
** Fixed in HEAD/v14 at: {{PgCommitURL|5c056b0c2519e602c2e98bace5b16d2ecde6454b}}<br />
** Owner: Tom Lane<br />
<br />
== Older bugs affecting stable branches ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com REINDEX on a system catalog can leave index with two index tuples whose heap TIDs match]<br />
** In other words, there is a rare case where the HOT invariant is violated. The same HOT chain is indexed twice due to confusion about which precise heap tuple should be indexed.<br />
** Unclear what the user impact is.<br />
** Affects all stable branches.<br />
<br />
* [https://www.postgresql.org/message-id/20201016135230.GA23633%40alvherre.pgsql CREATE TABLE .. PARTITION OF fails to preserve tgenabled for inherited row triggers]<br />
** tgenabled is lost on CREATE TABLE .. PARTITION OF and on pg_dump, and comments on child triggers are lost during pg_dump.<br />
** Those are resolved by f0e21f2f6 and df80fa2ee, but there is another issue with psql \d of non-inherited triggers.<br />
<br />
* [https://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com memory leak with JIT inlining]<br />
** [https://www.postgresql.org/message-id/flat/20210331040751.GU4431%40telsasoft.com#cc34872765add8e483e05009212d9d39 Another report of (same?) issue and reproducer]<br />
** [https://www.postgresql.org/message-id/flat/9f73e655-14b8-feaf-bd66-c0f506224b9e%40stephans-server.de Another report]<br />
** [https://www.postgresql.org/message-id/flat/16707-f5df308978a55bf8%40postgresql.org Another report]<br />
<br />
* [https://www.postgresql.org/message-id/CAEudQAoR5e7=uMZ0otzuCVb25zTC8QQBe+2Dt1JRsa3u+XuwJg@mail.gmail.com could not rename temporary statistics file on Windows]<br />
** See {{PgCommitURL|909b449e00fc2f71e1a38569bbddbb6457d28485}} that has fixed a similar symptom for WAL segments. Most reporters of the WAL segment problem complained about this renaming issue as well.<br />
<br />
* [https://www.postgresql.org/message-id/20210422203603.fdnh3fu2mmfp2iov@alap3.anarazel.de Incorrect snapshot calculation when 2PC is in use]<br />
** Seems to be an old problem.<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633%403c-app-gmx-bap78 hashagg broken by failing to spill grouping columns]<br />
** Fixed at: {{PgCommitURL|0ff865fbe50e82f17df8a9280fa01faf270b7f3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com PITR promote bug: Checkpointer writes to older timeline]<br />
** Fixed at: {{PgCommitURL|595b9cba2ab0cdd057e02d3c23f34a8bcfd90a2d}}<br />
<br />
* [https://www.postgresql.org/message-id/YFBcRbnBiPdGZvfW%40paquier.xyz Permission failures with WAL files in 13~ on Windows]<br />
** Fixed at: {{PgCommitURL|78c24e97dd189f62187a99ef84016d0eb35a7978}}<br />
<br />
* [https://www.postgresql.org/message-id/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com CLOBBER_CACHE Server crashed with segfault 11 while executing clusterdb]<br />
** Fixed at: {{PgCommitURL|9d523119fd38fd205cb9c8ea8e7cceeb54355818}}<br />
<br />
* [https://www.postgresql.org/message-id/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com Reference leak with tupledescs in plpgsql simple expressions]<br />
** Fixed at: {{PgCommitURL|c2db458c1036efae503ce5e451f8369e64c99541}}<br />
<br />
* [https://www.postgresql.org/message-id/a3be61d9-f44b-7fce-3dc8-d700fdfb6f48%402ndquadrant.com extract(julian) is undocumented and gives wrong result]<br />
** Fixed by documentation change at: {{PgCommitURL|79a5928ebcb726b7061bf265b5c6990e835e8c4f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com lwlock dtrace probes do unnecessary work if dtrace is compiled in but disabled]<br />
** Fixed at: {{PgCommitURL|b94409a02f6122d77b5154e481c0819fed6b4c95}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/15990-eee2ac466b11293d%40postgresql.org Detoast failures after commit/rollback in plpgsql]<br />
** Fixed at: {{PgCommitURL|f21fadafaf0fb5ea4c9622d915972651273d62ce}} and {{PgCommitURL|84f5c2908dad81e8622b0406beea580e40bb03ac}}<br />
<br />
* [https://www.postgresql.org/message-id/3382681.1621381328%40sss.pgh.pa.us Subscription tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Fixed at: {{PgCommitURL|b39630fd41f25b414d0ea9b30804f4105f2a0aff}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/534fca83789c4a378c7de379e9067d4f%40politie.nl XX000: unknown type of jsonb container.]<br />
** Fixed at: {{PgCommitURL|6ee41a301e70fc8e4ad383bad22d695f66ccb0ac}}<br />
<br />
* [https://www.postgresql.org/message-id/1884374.1617898865%40sss.pgh.pa.us Buildfarm does not test pg_stat_statements]<br />
** Fixed by buildfarm client change<br />
<br />
* [https://www.postgresql.org/message-id/17064-bb0d7904ef72add3%40postgresql.org Parallel VACUUM operations cause the error "global/pg_filenode.map contains incorrect checksum"]<br />
** Fixed at: {{PgCommitURL|b6d8d207}} and {{PgCommitURL|9b8ed0f52}}<br />
<br />
* [https://www.postgresql.org/message-id/378885e4-f85f-fc28-6c91-c4d1c080bf26%40amazon.com Assertion failure in HEAD and 13 after calling COMMIT in a stored proc]<br />
** Fixed at: {{PgCommitURL|d102aafb6259a6a412803d4b1d8c4f00aa17f67e}}<br />
<br />
* [https://www.postgresql.org/message-id/4aa370cb91ecf2f9885d98b80ad1109c%40postgrespro.ru Add PortalDrop in exec_execute_message]<br />
** Fixed at: {{PgCommitURL|bb4aed46a}} and {{PgCommitURL|4efcf47053}}<br />
<br />
* [https://www.postgresql.org/message-id/2591376.1621196582%40sss.pgh.pa.us snapshot-scalability logic fails after pg_upgrade, due to pg_resetwal issue]<br />
** Now seems likely that this is an old issue affecting every release, and that the snapshot-scalability work is not at fault<br />
** [https://commitfest.postgresql.org/33/3105/ Pending fix for pg_upgrade + pg_resetwal]<br />
** Fixed at: {{PgCommitURL|74cf7d46a91d601e0f8d957a7edbaeeb7df83efc}}<br />
<br />
* [https://www.postgresql.org/message-id/b5146fb1-ad9e-7d6e-f980-98ed68744a7c%40amazon.com Logical Decoding of relation rewrite with toast does not reset toast_hash]<br />
** Problem exists since v11.<br />
** Fixed at: {{PgCommitURL|29b5905470285bf730f6fe7cc5ddb3513d0e6945}}<br />
<br />
=== Nothing to do ===<br />
<br />
== Non-bugs ==<br />
<br />
* [https://www.postgresql.org/message-id/20210216064214.GI28165%40telsasoft.com progress reporting for partitioned REINDEX]<br />
* [https://www.postgresql.org/message-id/YFnWBYinNf1s0Y6v@msg.df7cb.de pg_regress and tablespace removal]<br />
** [https://www.postgresql.org/message-id/YG/tf6HTZFj4hWlb@paquier.xyz Some patch]<br />
<br />
== Resolved Issues ==<br />
<br />
=== resolved before 14beta4 (?) ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAApHDvpbusiKMV%3DvZypdpHHu81u0zMVAp6hu1vg-%3DgQLBBKUPA%40mail.gmail.com#8386c8d37ec1f9f9386cbf528bd9af5c default setting of enable_memoize]<br />
** No change required. See [https://www.postgresql.org/message-id/flat/CAApHDvrE3-vxtYxc1-p_iRSoVyne9PT_ntgOrno_W2AC_z32SQ@mail.gmail.com]<br />
** Owner: David Rowley<br />
<br />
* [https://www.postgresql.org/message-id/58cbfa74-9356-778b-3e10-94e3075c5807@enterprisedb.com extended statistics: reject single-var expressions]<br />
** Fixed at: {{PgCommitURL|13380e1476490932c7b15530ead1f649a16e1125}} - Extra parenthesis<br />
** Fixed at: {{PgCommitURL|537ca68db}} - reject single-var expressions<br />
** Owner: Tomas Vondra<br />
<br />
* [https://www.postgresql.org/message-id/20210820125513.GQ10479@telsasoft.com pg_stats includes partitioned tables, but always shows analyze_count=0]<br />
** Fixed at: {{PgCommitURL|e1efc5b465c844969a0ed0d07e1364f3ce424d8c}}<br />
<br />
* [https://www.postgresql.org/message-id/20210730010355.6yodvn2ag3arfihi@alap3.anarazel.de Issues around autovacuum for partitioned tables]<br />
** Feature reverted: {{PgCommitURL|b3d24cc0f0aa882ceec0a74a99f94166c6fc3247}}<br />
<br />
* [https://www.postgresql.org/message-id/TYAPR01MB5866BA57688DF2770E2F95C6F5069@TYAPR01MB5866.jpnprd01.prod.outlook.com DECLARE STATEMENT and DEALLOCATE/DESCRIBE]<br />
** Fixed at: {{PgCommitURL|399edafa2aba562a8013fbe039f3cbf3a41a436e}}<br />
** Fixed at: {{PgCommitURL|f576de1db1eeca63180b1ffa4b42b1e360f88577}}<br />
<br />
* [https://www.postgresql.org/message-id/1629039545467.80333%40nidsa.net Performance regression with hex refactoring code]<br />
** Fixed at: {{PgCommitURL|2576dcfb76aa71e4222bac5a3a43f71875bfa9e8}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/20210807234407.icku2rnqyapsb3io%40alap3.anarazel.de elog.c query_id support vs shutdown]<br />
** Fixed at: {{PgCommitURL|bed5eac2d50eb86a254861dcdea7b064d10c72cf}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com wrong refresh when ALTER SUBSCRIPTION ADD/DROP PUBLICATION]<br />
** Fixed at: {{PgCommitURL|1046a69b3087a6417e85cae9b6bc76caa22f913b}}<br />
<br />
=== resolved before 14beta3 ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/20210530172418.GO2082%40telsasoft.com#d6544e507234cc76b9bc0a50026cd74b \dX doesn't check pg_statistics_obj_is_visible()]<br />
** Fixed at: {{PgCommitURL|f68b609230689f9886a46e5d9ab8d6cdd947e0dc}}<br />
<br />
* [https://www.postgresql.org/message-id/e1b4f05d-54ec-4f51-832b-c18cf5a161c0@www.fastmail.com remove_temp_files_after_crash should be a DEVELOPER GUC]<br />
** Fixed at: {{PgCommitURL|797b0fc0b078c7b4c46ef9f2d9e47aa2d98c6c63}}<br />
<br />
* [https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com recovery_init_sync_method should be PGC_SIGHUP?]<br />
** Fixed at: {{PgCommitURL|34a8b64b4e5f0cd818e5cc7f98846de57938ea57}}<br />
<br />
* [https://www.postgresql.org/message-id/YNZ2mnsbDVJQrA/a@paquier.xyz OOM on palloc() when parsing service file would cause libpq to exit() without reporting a failure]<br />
** Fixed at: {{PgCommitURL|8ec00dc5cd70e0e579e9fbf8661bc46f5ccd8078}}<br />
** Additional defenses added at: {{PgCommitURL|dc227eb82ea8bf6919cd81a182a084589ddce7f3}}<br />
<br />
* [https://www.postgresql.org/message-id/17076-89a16ae835d329b9%40postgresql.org incorrect code for reporting the hash partition associated with a particular modulus]<br />
** Fixed at: {{PgCommitURL|dd2364ced98553e0217bfe8f621cd4b0970db74a}}<br />
<br />
* [https://www.postgresql.org/message-id/c5269c65-f967-77c5-ff7c-15e621c47f6a%40gmail.com Bug in multirange selectivity estimation]<br />
** Fixed at: {{PgCommitURL|322e82b77ef4acb9697c6e4259292f5671cb85bb}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/704fb6fb99ec9864a4dbeda2478337d2%40postgrespro.ru autoanalyze of partitioned table causes it to lose its relhasindex]<br />
** Fixed at: {{PgCommitURL|d700518d744e53994fdded14b23ebc15b031b0dd}}<br />
<br />
* [https://www.postgresql.org/message-id/CAF7igB1r6wRfSCEAB-iZBKxkowWY6+dFF2jObSdd9+iVK+vHJg@mail.gmail.com Incorrect time maths in pgbench] and [https://www.postgresql.org/message-id/CAHLJuCW_8Vpcr0=t6O_gozrg3wXXWXZXDioYJd3NhvKriqgpfQ@mail.gmail.com second thread]<br />
** Fixed at: {{PgCommitURL|0e39a608ed5545cc6b9d538ac937c3c1ee8cdc36}}<br />
<br />
* [https://www.postgresql.org/message-id/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org unnesting multirange data types]<br />
** Fixed at: {{PgCommitURL|244ad5415557812a6ac4dc5d6e2ae908361d82c3}}<br />
<br />
* [https://www.postgresql.org/message-id/17066-16a37f6223a8470b@postgresql.org Cache lookup failed when null (unknown) is passed as anycompatiblemultirange]<br />
** Fixed at: {{PgCommitURL|336ea6e6ff1109e7b83370565e3cb211804fda0c}}<br />
<br />
* [https://www.postgresql.org/message-id/530153.1627425648%40sss.pgh.pa.us Degraded out-of-memory handling in libpq]<br />
** Fixed at: {{PgCommitURL|514b4c11d24701d2cc90ad75ed787bf1380af673}}<br />
<br />
* [https://www.postgresql.org/message-id/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com Crash in regexp with {0}]<br />
** Fixed at: {{PgCommitURL|cc1868799c8311ed1cc3674df2c5e1374c914deb}}<br />
<br />
=== resolved before 14beta2 ===<br />
<br />
* [https://www.postgresql.org/message-id/20210609184506.rqm5rikoikm47csf%40alap3.anarazel.de Snapshot scalability OldestXmin issue (can cause infinite loop during system catalog VACUUM)]<br />
** Fixed at: {{PgCommitURL|5a1e1d83022b976ebdec5cfa8f255c4278b75b8e}}<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkCYR0U7zXqXo0CgFaFwUDz1WbKq8ngjzKi4+AQ5f-mYQ@mail.gmail.com Generalize INDEX_CLEANUP to allow the user to disable the optimization that has VACUUM skip indexes in marginal cases with very few LP_DEAD items/deletable TIDs.]<br />
** Fixed at: {{PgCommitURL|3499df0dee8c4ea51d264a674df5b5e31991319a}}<br />
<br />
* [https://www.postgresql.org/message-id/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de Questions about pg_stat_wal] also [https://www.postgresql.org/message-id/E3774ACD-7894-451E-9F13-71E097D10595@oss.nttdata.com]<br />
** Fixed at: {{PgCommitURL|d8735b8b4651f5ed50afc472e236a8e6120f07f2}}<br />
** Fixed at: {{PgCommitURL|d780d7c0882fe9a385102b292907baaceb505ed0}}<br />
<br />
* [https://www.postgresql.org/message-id/YKMO%2B2gD8R8I2O5b%40paquier.xyz pg_dumpall misses --no-toast-compression]<br />
** Fixed at: {{PgCommitURL|694da1983e9569b2a2f96cd786ead6b8dba31f1d}} <br />
<br />
* [https://www.postgresql.org/message-id/YKQnUoYV63GRJBDD%40msg.df7cb.de portability issue with pgbench's permute() function]<br />
** Fixed at: {{PgCommitURL|0f516d039d8023163e82fa51104052306068dd69}}<br />
<br />
* [https://www.postgresql.org/message-id/35457b09-36f8-add3-1d07-6034fa585ca8@oss.nttdata.com compute_query_id and pg_stat_statements]<br />
** Fixed at {{PgCommitURL|cafde58b33}} and {{PgCommitURL|354f32d01d}}<br />
<br />
* [https://www.postgresql.org/message-id/CAOxo6X+dy-V58iEPFgst8ahPKEU+38NZzUuc+a7wDBZd4TrHMQ@mail.gmail.com Result Cache works incorrectly with unique joins]<br />
** Fixed at {{PgCommitURL|9e215378d7fbb7d4615be917917c52f246cc6c61}}<br />
<br />
* [https://www.postgresql.org/message-id/20210517204803.iyk5wwvwgtjcmc5w%40alap3.anarazel.de Move pg_attribute.attcompression to earlier in struct for reduced size?]<br />
** Fixed at {{PgCommitURL|f5024d8d7b04de2f5f4742ab433cc38160354861}}<br />
<br />
* [https://www.postgresql.org/message-id/17030-5844aecae42fe223@postgresql.org EXPLAIN can suffer from cannot decompile join alias var in plan tree]<br />
** Fixed at {{PgCommitURL|cba5c70b956810c61b3778f7041f92fbb8065acb}}<br />
<br />
* [https://www.postgresql.org/message-id/20210521211929.pcehg6f23icwstdb@alap3.anarazel.de Memory leak when rewriting tuples with recompressed toast values]<br />
** Fixed at {{PgCommitURL|fb0f5f0172edf9f63c8f70ea9c1ec043b61c770e}}<br />
<br />
* [https://www.postgresql.org/message-id/626613.1621787110%40sss.pgh.pa.us Redefine pg_attribute.attcompression]<br />
** Fixed at {{PgCommitURL|e6241d8e030fbd2746b3ea3f44e728224298f35b}}<br />
<br />
* [https://www.postgresql.org/message-id/1665197.1622065382%40sss.pgh.pa.us Undo bump of FirstBootstrapObjectId]<br />
** Fixed at {{PgCommitURL|a4390abecf0f5152cff864e82b67e5f6c8489698}}<br />
<br />
* [https://www.postgresql.org/message-id/CABOikdN-_858zojYN-2tNcHiVTw-nhxPwoQS4quExeweQfG1Ug%40mail.gmail.com Assertion failure while streaming toasted data]<br />
** Fixed at {{PgCommitURL|6f4bdf81529fdaf6744875b0be99ecb9bfb3b7e0}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/7817fb9ebd6661cdf9b67dec6e129a78%40postgrespro.ru Join pushdown issue in postgres_fdw updates]<br />
** Fixed at {{PgCommitURL|f61db909dfb94f3411f8719916601a11a905b95e}}<br />
<br />
* [https://www.postgresql.org/message-id/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com Performance degradation of REFRESH MATERIALIZED VIEW]<br />
** Fixed at {{PgCommitURL|8e03eb92e9ad54e2f1ed8b5a73617601f6262f81}}<br />
<br />
* [https://www.postgresql.org/message-id/CAPmGK16Q4B2_KY%2BJH7rb7wQbw54AUprp7TMekGTd2T1B62yysQ%40mail.gmail.com Rescan of async Appends is broken when do_exec_prune=false]<br />
** Fixed at {{PgCommitURL|f3baaf28a6da588987b94a05a725894805c3eae9}}<br />
<br />
* [https://www.postgresql.org/message-id/504c276ab6eee000bb23d571ea9b0ced4250774e.camel%40vmware.com libpq dumps core while making an SSL connection to a server specified by hostaddr]<br />
** Fixed at {{PgCommitURL|37e1cce4ddf0be362e3093cee55493aee41bc423}}<br />
<br />
* [https://www.postgresql.org/message-id/B4A3AF82-79ED-4F4C-A4E5-CD2622098972%40enterprisedb.com logical replication of truncate command with trigger causes Assert]<br />
** Fixed at {{PgCommitURL|3a09d75b4f6cabc8331e228b6988dbfcd9afdfbe}}<br />
<br />
* [https://www.postgresql.org/message-id/3742981.1621533210%40sss.pgh.pa.us Reconsider catalog representation and uniqueness rules for procedures with output-only arguments]<br />
** Fixed at {{PgCommitURL|e56bce5d43789cce95d099554ae9593ada92b3b7}}<br />
<br />
* [https://www.postgresql.org/message-id/20210527003144.xxqppojoiwurc2iz@alap3.anarazel.de Performance regression of VACUUM FULL with the addition of recompression path in tuple rewrite]<br />
** Fixed at {{PgCommitURL|dbab0c07e5ba1f19a991da2d72972a8fe9a41bda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210525161458.GZ3676%40telsasoft.com Document incompatibility with aggregates using system functions using anycompatiblearray]<br />
** Fixed at {{PgCommitURL|25dfb5a831a1b8a83a8a68453b4bbe38a5ef737e}}<br />
<br />
=== resolved before 14beta1 ===<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611340CBD300A7C4FD6B6101FB5F9@OS0PR01MB6113.jpnprd01.prod.outlook.com FailedAssertion reported in lazy_scan_heap() when running logical replication]<br />
** Fixed at: {{PgCommitURL|c9787385db47ba423d845b34d58e158551c6335d}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gCXDSmFs2c%3DR%2BVGgn7FiYcLCsEFEuDNNLGfoha%3DpBE_g%40mail.gmail.com Assertion fail with window function and nested partitioned tables]<br />
** [https://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu Older report]<br />
** Fixed at: {{PgCommitURL|fb2d645dd53ff571572d830e830fc8c368063802}}<br />
<br />
* [https://www.postgresql.org/message-id/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com Atomic initialization of waitStart done at backend startup]<br />
** Fixed at: {{PgCommitURL|f05ed5a5cfa55878baa77a1e39d68cb09793b477}}<br />
<br />
* [https://www.postgresql.org/message-id/20210117215940.GE8560%40telsasoft.com pg_collation_actual_version() ERROR: cache lookup failed for collation 123]<br />
** Fixed at: {{PgCommitURL|0fb0a0503bfc125764c8dba4f515058145dc7f8b}}<br />
<br />
* [https://www.postgresql.org/message-id/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com cryptohash: missing locking functions for OpenSSL <= 1.0.2?]<br />
** Fixed at: {{PgCommitURL|2c0cefcd18161549e9e8b103f46c0f65fca84d99}}<br />
<br />
* [https://www.postgresql.org/message-id/CAHut%2BPuPGGASnh2Dy37VYODKULVQo-5oE%3DShc6gwtRizDt%3D%3DcA%40mail.gmail.com pg_subscription - substream column?]<br />
** Fixed at: {{PgCommitURL|7efeb214ad832fa96ea950d0906b1d2b96316d15}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk%3DS3V8DOKT%3DLZC1L36Q%40mail.gmail.com TOAST compression method of index columns]<br />
** Fixed at: {{PgCommitURL|5db1fd7823a1a12e2bdad98abc8e102fd71ffbda}}<br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&diff=36027PostgreSQL 14 Open Items2021-05-21T18:35:34Z<p>Adunstan: resolved compute_query_id item</p>
<hr />
<div>== Open Issues ==<br />
<br />
'''NOTE''': Please place new open items at the end of the list.<br />
<br />
* [https://www.postgresql.org/message-id/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com Performance degradation of REFRESH MATERIALIZED VIEW]<br />
** Owner: Tomas Vondra<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkCYR0U7zXqXo0CgFaFwUDz1WbKq8ngjzKi4+AQ5f-mYQ@mail.gmail.com Generalize INDEX_CLEANUP to allow the user to disable the optimization that has VACUUM skip indexes in marginal cases with very few LP_DEAD items/deletable TIDs.]<br />
** Owner: Peter Geoghegan<br />
** [https://www.postgresql.org/message-id/YJzU8wmVE0+TGAVP@paquier.xyz Patch]<br />
<br />
* [https://www.postgresql.org/message-id/4170264.1620321747%40sss.pgh.pa.us Should we undo libpq change that leaves PQerrorMessage() nonempty after successful connect?]<br />
** Owner: Tom Lane<br />
<br />
* [https://www.postgresql.org/message-id/2591376.1621196582%40sss.pgh.pa.us snapshot-scalability logic fails after pg_resetwal]<br />
** Owner: Andres Freund<br />
<br />
* [https://www.postgresql.org/message-id/20210517204803.iyk5wwvwgtjcmc5w%40alap3.anarazel.de Move pg_attribute.attcompression to earlier in struct for reduced size?]<br />
** Owner: Andres Freund, Robert Haas<br />
<br />
* [https://www.postgresql.org/message-id/3742981.1621533210%40sss.pgh.pa.us CALL versus procedures with output-only arguments]<br />
** Owner: Peter Eisentraut<br />
<br />
* [https://www.postgresql.org/message-id/CAOxo6X+dy-V58iEPFgst8ahPKEU+38NZzUuc+a7wDBZd4TrHMQ@mail.gmail.com Result Cache works incorrectly with unique joins]<br />
** Owner: David Rowley<br />
** [https://www.postgresql.org/message-id/CAApHDvrWsfc3naVQZxS0efU%3DvJOA7dG3NV7fGhkgo2%3DJ38OEpg%40mail.gmail.com Patch]<br />
<br />
== Older bugs affecting stable branches ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com REINDEX on a system catalog can leave index with two index tuples whose heap TIDs match]<br />
** In other words, there is a rare case where the HOT invariant is violated: the same HOT chain is indexed twice due to confusion about which precise heap tuple should be indexed.<br />
** Unclear what the user impact is.<br />
** Affects all stable branches.<br />
<br />
* [https://www.postgresql.org/message-id/20201016135230.GA23633%40alvherre.pgsql CREATE TABLE .. PARTITION OF fails to preserve tgenabled for inherited row triggers]<br />
** tgenabled is lost on CREATE TABLE .. PARTITION OF and during pg_dump; comments on child triggers are also lost during pg_dump.<br />
<br />
* [https://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com memory leak with JIT inlining]<br />
** [https://www.postgresql.org/message-id/flat/20210331040751.GU4431%40telsasoft.com#cc34872765add8e483e05009212d9d39 Another report of (same?) issue and reproducer]<br />
** [https://www.postgresql.org/message-id/flat/9f73e655-14b8-feaf-bd66-c0f506224b9e%40stephans-server.de Another report]<br />
** [https://www.postgresql.org/message-id/flat/16707-f5df308978a55bf8%40postgresql.org Another report]<br />
<br />
* [https://www.postgresql.org/message-id/1884374.1617898865%40sss.pgh.pa.us Buildfarm does not test pg_stat_statements]<br />
<br />
* [https://www.postgresql.org/message-id/CAEudQAoR5e7=uMZ0otzuCVb25zTC8QQBe+2Dt1JRsa3u+XuwJg@mail.gmail.com could not rename temporary statistics file on Windows]<br />
** See {{PgCommitURL|909b449e00fc2f71e1a38569bbddbb6457d28485}}, which fixed a similar symptom for WAL segments. Most reporters of the WAL segment problem also complained about this renaming issue.<br />
<br />
* [https://www.postgresql.org/message-id/20210422203603.fdnh3fu2mmfp2iov@alap3.anarazel.de Incorrect snapshot calculation when 2PC is in use]<br />
** Seems to be an old problem.<br />
<br />
* [https://www.postgresql.org/message-id/3382681.1621381328%40sss.pgh.pa.us Subscription tests fail under CLOBBER_CACHE_ALWAYS]<br />
<br />
* [https://www.postgresql.org/message-id/4aa370cb91ecf2f9885d98b80ad1109c%40postgrespro.ru Add PortalDrop in exec_execute_message]<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633%403c-app-gmx-bap78 hashagg broken by failing to spill grouping columns]<br />
** Fixed at: {{PgCommitURL|0ff865fbe50e82f17df8a9280fa01faf270b7f3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com PITR promote bug: Checkpointer writes to older timeline]<br />
** Fixed at: {{PgCommitURL|595b9cba2ab0cdd057e02d3c23f34a8bcfd90a2d}}<br />
<br />
* [https://www.postgresql.org/message-id/YFBcRbnBiPdGZvfW%40paquier.xyz Permission failures with WAL files in 13~ on Windows]<br />
** Fixed at: {{PgCommitURL|78c24e97dd189f62187a99ef84016d0eb35a7978}}<br />
<br />
* [https://www.postgresql.org/message-id/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com CLOBBER_CACHE Server crashed with segfault 11 while executing clusterdb]<br />
** Fixed at: {{PgCommitURL|9d523119fd38fd205cb9c8ea8e7cceeb54355818}}<br />
<br />
* [https://www.postgresql.org/message-id/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com Reference leak with tupledescs in plpgsql simple expressions]<br />
** Fixed at: {{PgCommitURL|c2db458c1036efae503ce5e451f8369e64c99541}}<br />
<br />
* [https://www.postgresql.org/message-id/a3be61d9-f44b-7fce-3dc8-d700fdfb6f48%402ndquadrant.com extract(julian) is undocumented and gives wrong result]<br />
** Fixed by documentation change at: {{PgCommitURL|79a5928ebcb726b7061bf265b5c6990e835e8c4f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com lwlock dtrace probes do unnecessary work if dtrace is compiled in but disabled]<br />
** Fixed at: {{PgCommitURL|b94409a02f6122d77b5154e481c0819fed6b4c95}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/15990-eee2ac466b11293d%40postgresql.org Detoast failures after commit/rollback in plpgsql]<br />
** Fixed at: {{PgCommitURL|f21fadafaf0fb5ea4c9622d915972651273d62ce}} and {{PgCommitURL|84f5c2908dad81e8622b0406beea580e40bb03ac}}<br />
<br />
=== Nothing to do ===<br />
<br />
== Non-bugs ==<br />
<br />
* [https://www.postgresql.org/message-id/20210216064214.GI28165%40telsasoft.com progress reporting for partitioned REINDEX]<br />
* [https://www.postgresql.org/message-id/YFnWBYinNf1s0Y6v@msg.df7cb.de pg_regress and tablespace removal]<br />
** [https://www.postgresql.org/message-id/YG/tf6HTZFj4hWlb@paquier.xyz Some patch]<br />
<br />
== Resolved Issues ==<br />
<br />
=== resolved before 14beta2 ===<br />
<br />
* [https://www.postgresql.org/message-id/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de Questions about pg_stat_wal] also [https://www.postgresql.org/message-id/E3774ACD-7894-451E-9F13-71E097D10595@oss.nttdata.com]<br />
** Fixed at: {{PgCommitURL|d8735b8b4651f5ed50afc472e236a8e6120f07f2}}<br />
** Fixed at: {{PgCommitURL|d780d7c0882fe9a385102b292907baaceb505ed0}}<br />
<br />
* [https://www.postgresql.org/message-id/YKMO%2B2gD8R8I2O5b%40paquier.xyz pg_dumpall misses --no-toast-compression]<br />
** Fixed at: {{PgCommitURL|694da1983e9569b2a2f96cd786ead6b8dba31f1d}} <br />
<br />
* [https://www.postgresql.org/message-id/YKQnUoYV63GRJBDD%40msg.df7cb.de portability issue with pgbench's permute() function]<br />
** Fixed at: {{PgCommitURL|0f516d039d8023163e82fa51104052306068dd69}}<br />
<br />
* [https://www.postgresql.org/message-id/35457b09-36f8-add3-1d07-6034fa585ca8@oss.nttdata.com compute_query_id and pg_stat_statements]<br />
** Fixed at {{PgCommitURL|cafde58b33}} and {{PgCommitURL|354f32d01d}}<br />
<br />
=== resolved before 14beta1 ===<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611340CBD300A7C4FD6B6101FB5F9@OS0PR01MB6113.jpnprd01.prod.outlook.com FailedAssertion reported in lazy_scan_heap() when running logical replication]<br />
** Fixed at: {{PgCommitURL|c9787385db47ba423d845b34d58e158551c6335d}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gCXDSmFs2c%3DR%2BVGgn7FiYcLCsEFEuDNNLGfoha%3DpBE_g%40mail.gmail.com Assertion fail with window function and nested partitioned tables]<br />
** [https://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu Older report]<br />
** Fixed at: {{PgCommitURL|fb2d645dd53ff571572d830e830fc8c368063802}}<br />
<br />
* [https://www.postgresql.org/message-id/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com Atomic initialization of waitStart done at backend startup]<br />
** Fixed at: {{PgCommitURL|f05ed5a5cfa55878baa77a1e39d68cb09793b477}}<br />
<br />
* [https://www.postgresql.org/message-id/20210117215940.GE8560%40telsasoft.com pg_collation_actual_version() ERROR: cache lookup failed for collation 123]<br />
** Fixed at: {{PgCommitURL|0fb0a0503bfc125764c8dba4f515058145dc7f8b}}<br />
<br />
* [https://www.postgresql.org/message-id/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com cryptohash: missing locking functions for OpenSSL <= 1.0.2?]<br />
** Fixed at: {{PgCommitURL|2c0cefcd18161549e9e8b103f46c0f65fca84d99}}<br />
<br />
* [https://www.postgresql.org/message-id/CAHut%2BPuPGGASnh2Dy37VYODKULVQo-5oE%3DShc6gwtRizDt%3D%3DcA%40mail.gmail.com pg_subscription - substream column?]<br />
** Fixed at: {{PgCommitURL|7efeb214ad832fa96ea950d0906b1d2b96316d15}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk%3DS3V8DOKT%3DLZC1L36Q%40mail.gmail.com TOAST compression method of index columns]<br />
** Fixed at: {{PgCommitURL|5db1fd7823a1a12e2bdad98abc8e102fd71ffbda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to Crash with encoding conversion functions]<br />
** Fixed at: {{PgCommitURL|c4c393b3ec83ceb4b4d7f37cdd5302126377d069}}<br />
<br />
* [https://www.postgresql.org/message-id/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ@mail.gmail.com Crash with extended stats on expressions]<br />
** Fixed at: {{PgCommitURL|518442c7f334f3b05ea28b7ef50f1b551cfcc23e}}<br />
<br />
* [https://postgr.es/m/CA+TgmobwnGawnxufvqLCrcTy4HRhMepFiXQLY8YpVD+PTuwagA@mail.gmail.com Update TOAST documentation for LZ4 compression]<br />
** Fixed at: {{PgCommitURL|e8c435a824e123f43067ce6f69d66f14cfb8815e}}<br />
<br />
* [https://www.postgresql.org/message-id/20210404220802.GA728316@rfd.leadboat.com Behavior of pg_dump --extension with schemas]<br />
** Fixed at: {{PgCommitURL|344487e2db03f3cec13685a839dbc8a0e2a36750}}<br />
<br />
* [https://www.postgresql.org/message-id/OSZPR01MB631017521EE6887ADC9492E8FD759@OSZPR01MB6310.jpnprd01.prod.outlook.com psql query cancellation is broken], as are [https://www.postgresql.org/message-id/2671235.1618154047%40sss.pgh.pa.us autocommit], and [https://www.postgresql.org/message-id/YHTYOFBHDuGaz2gy@paquier.xyz error reporting]<br />
** Reverted by: {{PgCommitURL|fae65629cec824738ee11bf60f757239906d64fa}}<br />
<br />
* On Windows, collation version lookup (sometimes?) fails for names like "English_United States.1252", but works for names like "en-US".<br />
** Fixed at: {{PgCommitURL|9f12a3b95dd56c897f1aa3d756d8fb419e84a187}} -- this commit tolerates failure so at least we don't raise an error, but unfortunately we have no version information<br />
** Fixed at: {{PgCommitURL|1bf946bd43e545b86e567588b791311fe4e36a8c}} -- this commit documents the limitation<br />
<br />
* [https://www.postgresql.org/message-id/1820954.1617860500@sss.pgh.pa.us Handling of querystring inconsistent for parallel execution of SQL function bodies]<br />
** Fixed at: {{PgCommitURL|1111b2668d89bfcb6f502789158b1233ab4217a6}}<br />
<br />
* [https://www.postgresql.org/message-id/YHPkU8hFi4no4NSw@paquier.xyz Problems around compute_query_id]<br />
** Fixed at: {{PgCommitURL|db01f797dd48f826c62e1b8eea70f11fe7ff3efc}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com Table reference leak in logical replication]<br />
** Fixed at: {{PgCommitURL|f3b141c482552a57866c72919007d6481cd59ee3}}<br />
<br />
* [https://www.postgresql.org/message-id/20210410184226.GY6592%40telsasoft.com DETACH PARTITION CONCURRENTLY: Avoid adding redundant constraint]<br />
** Fixed at: {{PgCommitURL|7b357cc6ae}}<br />
<br />
* [https://www.postgresql.org/message-id/CC3F964B-8FA1-4A23-9D3E-6EA00BBFF0EE@enterprisedb.com Issues in PostgresNode and older major versions with multi-install]<br />
** Fixed at {{PgCommitURL|95c3a1956ec9eac686c1b69b033dd79211b72343}} and {{PgCommitURL|4c4eaf3d19201c5e2d9efebc590903dfaba0d3e5}}<br />
<br />
* [https://www.postgresql.org/message-id/3269784.1617215412%40sss.pgh.pa.us DETACH PARTITION CONCURRENTLY tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Fixed at: {{PgCommitURL|8aba9322511f}}<br />
<br />
* [https://www.postgresql.org/message-id/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com TRUNCATE on foreign tables and ONLY clause]<br />
** Fixed at: {{PgCommitURL|8e9ea08bae93a754d5075b7bc9c0b2bc71958bfd}}<br />
<br />
* [https://www.postgresql.org/message-id/CAMkU=1zKGWEJdBbYKw7Tn7cJmYR_UjgdcXTPDqJj=dNwCETBCQ@mail.gmail.com handling of character continuation in psql broken by sql body patch]<br />
** Fixed at: {{PgCommitURL|d9a9f4b4b92ad39e3c4e6600dc61d5603ddd6e24}}<br />
<br />
* [https://www.postgresql.org/message-id/20210505210947.GA27406%40telsasoft.com cache lookup failed for statistics object 123]<br />
** Fixed at: {{PgCommitURL|8d4b311d2494ca592e30aed03b29854d864eb846}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAFj8pRCL_Rjw-MCR6J7VX9OF7MR6PA5K8qUbrMvprW_e-aHkfQ%40mail.gmail.com batch fdw insert bug]<br />
** Fixed at: {{PgCommitURL|c6a01d924939306e95c8deafd09352be6a955648}}<br />
<br />
* [https://www.postgresql.org/message-id/3564817.1618420687@sss.pgh.pa.us Bogus collation version recording in recordMultipleDependencies]<br />
** Fixed at: {{PgCommitURL|ec48314708262d8ea6cdcb83f803fc83dd89e721}} (Feature revert)<br />
<br />
* [https://www.postgresql.org/message-id/773932.1619022622@sss.pgh.pa.us Corruption issues with WAL prefetch?]<br />
** Fixed at: {{PgCommitURL|c2dc19342e05e081dc13b296787baa38352681ef}} (Feature revert)<br />
<br />
* [https://www.postgresql.org/message-id/YIetoZGq31L84v5d@paquier.xyz Small issues with CREATE TABLE COMPRESSION]<br />
** MSVC scripts don't support builds with lz4: fixed at {{PgCommitURL|9ca40dcd4d0cad43d95a9a253fafaa9a9ba7de24}}<br />
** pg_dump includes no tests with compression methods of attributes and --no-toast-compression: fixed at {{PgCommitURL|63db0ac3f9e6bae313da67f640c95c0045b7f0ee}}<br />
** Documentation missing for --with-lz4 in installation instructions: fixed at {{PgCommitURL|02a93e7ef9612788081ef07ea1bbd0a8cc99ae99}}<br />
<br />
* [https://www.postgresql.org/message-id/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de Replication slot stats misgivings]<br />
** Fixed at: {{PgCommitURL|3fa17d37716f978f80dfcdab4e7c73f3a24e7a48}}<br />
** Fixed at: {{PgCommitURL|592f00f8dec68038301467a904ac514eddabf6cd}}<br />
** Fixed at: {{PgCommitURL|cca57c1d9bf7eeba5b81115e0b82651cf3d8e4ea}}<br />
** Fixed at: {{PgCommitURL|f5fc2f5b23d1b1dff60f8ca5dc211161df47eda4}}<br />
<br />
* [https://www.postgresql.org/message-id/CAPmGK158e9sJOfuWxfn%2B0ynrspXQU3JhNjSCbaoeSzMvnga%2Bbw%40mail.gmail.com FDW: crash with DDL and async/batch option]<br />
** Fixed at: {{PgCommitURL|a784859f4480ceaa05a00ca35311071ca33483d1}}<br />
<br />
* [https://www.postgresql.org/message-id/20210409213155.GA23912%40alvherre.pgsql should autoanalyze for partitioned tables handle ATTACH/DETACH/DROP?]<br />
** Fixed at: {{PgCommitURL|1b5617eb844cd2470a334c1d2eec66cf9b39c41a}} (docs)<br />
<br />
* [https://www.postgresql.org/message-id/CALT9ZEE7OiszofHELnjPhX%3DhV92PiKn8haSZ4_FWBAw4diaRdQ%40mail.gmail.com OOM in spgist insert]<br />
** Fixed at: {{PgCommitURL|c3c35a733c77b298d3cf7e7de2eeb4aea540a631}}<br />
<br />
== Won't Fix ==<br />
<br />
* [https://www.postgresql.org/message-id/92408.1618772924%40sss.pgh.pa.us SQL-standard function body: pg_dump should handle circular dependencies]<br />
** Owner: Peter Eisentraut<br />
<br />
== Important Dates ==<br />
<br />
Current schedule:<br />
<br />
* Feature Freeze: April 7, 2021 ('''Last Day to Commit Features''')<br />
* Beta 1: May 20, 2021<br />
* Beta 2: <br />
* Beta 3: <br />
* RC 1: <br />
* GA: <br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&diff=35968PostgreSQL 14 Open Items2021-05-05T21:54:06Z<p>Adunstan: /* Open Issues */ extra email thread</p>
<hr />
<div>== Open Issues ==<br />
<br />
'''NOTE''': Please place new open items at the end of the list.<br />
<br />
* [https://www.postgresql.org/message-id/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com Performance degradation of REFRESH MATERIALIZED VIEW]<br />
** Owner: Tomas Vondra<br />
<br />
* [https://www.postgresql.org/message-id/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de Replication slot stats misgivings]<br />
** Owner: Amit Kapila<br />
<br />
* [https://www.postgresql.org/message-id/20210409213155.GA23912%40alvherre.pgsql autoanalyze for partitioned tables should handle ATTACH/DETACH/DROP]<br />
** Owner: Alvaro Herrera<br />
<br />
* [https://www.postgresql.org/message-id/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de Questions about pg_stat_wal] also [https://www.postgresql.org/message-id/E3774ACD-7894-451E-9F13-71E097D10595@oss.nttdata.com]<br />
** Owner: Fujii Masao<br />
<br />
* [https://www.postgresql.org/message-id/3564817.1618420687@sss.pgh.pa.us Bogus collation version recording in recordMultipleDependencies]<br />
** Owner: Thomas Munro<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkCYR0U7zXqXo0CgFaFwUDz1WbKq8ngjzKi4+AQ5f-mYQ@mail.gmail.com Generalize INDEX_CLEANUP to allow the user to disable the optimization that has VACUUM skip indexes in marginal cases with very few LP_DEAD items/deletable TIDs.]<br />
** Owner: Peter Geoghegan<br />
<br />
* [https://www.postgresql.org/message-id/92408.1618772924%40sss.pgh.pa.us SQL-standard function body: pg_dump should handle circular dependencies]<br />
** Owner: Peter Eisentraut<br />
<br />
* [https://www.postgresql.org/message-id/773932.1619022622@sss.pgh.pa.us Corruption issues with WAL prefetch?]<br />
** Owner: Thomas Munro<br />
<br />
* [https://www.postgresql.org/message-id/YIetoZGq31L84v5d@paquier.xyz Small issues with CREATE TABLE COMPRESSION]<br />
** Owner: Robert Haas<br />
<br />
* [https://www.postgresql.org/message-id/35457b09-36f8-add3-1d07-6034fa585ca8@oss.nttdata.com compute_query_id and pg_stat_statements]<br />
** Owner: Bruce Momjian<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611340CBD300A7C4FD6B6101FB5F9@OS0PR01MB6113.jpnprd01.prod.outlook.com FailedAssertion reported in lazy_scan_heap() when running logical replication]<br />
** Owner: Peter Geoghegan<br />
<br />
== Older bugs affecting stable branches ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com REINDEX on a system catalog can leave index with two index tuples whose heap TIDs match]<br />
** In other words, there is a rare case where the HOT invariant is violated: the same HOT chain is indexed twice due to confusion about which precise heap tuple should be indexed.<br />
** Unclear what the user impact is.<br />
** Affects all stable branches.<br />
<br />
* [https://www.postgresql.org/message-id/20201016135230.GA23633%40alvherre.pgsql CREATE TABLE .. PARTITION OF fails to preserve tgenabled for inherited row triggers]<br />
** tgenabled is lost on CREATE TABLE .. PARTITION OF and during pg_dump; comments on child triggers are also lost during pg_dump.<br />
<br />
* [https://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com memory leak with JIT inlining]<br />
** [https://www.postgresql.org/message-id/flat/20210331040751.GU4431%40telsasoft.com#cc34872765add8e483e05009212d9d39 Another report of (same?) issue and reproducer]<br />
** [https://www.postgresql.org/message-id/flat/9f73e655-14b8-feaf-bd66-c0f506224b9e%40stephans-server.de Another report]<br />
** [https://www.postgresql.org/message-id/flat/16707-f5df308978a55bf8%40postgresql.org Another report]<br />
<br />
* [https://www.postgresql.org/message-id/1884374.1617898865%40sss.pgh.pa.us Buildfarm does not test pg_stat_statements]<br />
<br />
* [https://www.postgresql.org/message-id/CAEudQAoR5e7=uMZ0otzuCVb25zTC8QQBe+2Dt1JRsa3u+XuwJg@mail.gmail.com could not rename temporary statistics file on Windows]<br />
** See {{PgCommitURL|909b449e00fc2f71e1a38569bbddbb6457d28485}}, which fixed a similar symptom for WAL segments. Most reporters of the WAL segment problem also complained about this renaming issue.<br />
<br />
* [https://www.postgresql.org/message-id/20210422203603.fdnh3fu2mmfp2iov@alap3.anarazel.de Incorrect snapshot calculation when 2PC is in use]<br />
** Seems to be an old problem.<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633%403c-app-gmx-bap78 hashagg broken by failing to spill grouping columns]<br />
** Fixed at: {{PgCommitURL|0ff865fbe50e82f17df8a9280fa01faf270b7f3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com PITR promote bug: Checkpointer writes to older timeline]<br />
** Fixed at: {{PgCommitURL|595b9cba2ab0cdd057e02d3c23f34a8bcfd90a2d}}<br />
<br />
* [https://www.postgresql.org/message-id/YFBcRbnBiPdGZvfW%40paquier.xyz Permission failures with WAL files in 13~ on Windows]<br />
** Fixed at: {{PgCommitURL|78c24e97dd189f62187a99ef84016d0eb35a7978}}<br />
<br />
* [https://www.postgresql.org/message-id/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com CLOBBER_CACHE Server crashed with segfault 11 while executing clusterdb]<br />
** Fixed at: {{PgCommitURL|9d523119fd38fd205cb9c8ea8e7cceeb54355818}}<br />
<br />
* [https://www.postgresql.org/message-id/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com Reference leak with tupledescs in plpgsql simple expressions]<br />
** Fixed at: {{PgCommitURL|c2db458c1036efae503ce5e451f8369e64c99541}}<br />
<br />
* [https://www.postgresql.org/message-id/a3be61d9-f44b-7fce-3dc8-d700fdfb6f48%402ndquadrant.com extract(julian) is undocumented and gives wrong result]<br />
** Fixed by documentation change at: {{PgCommitURL|79a5928ebcb726b7061bf265b5c6990e835e8c4f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com lwlock dtrace probes do unnecessary work if dtrace is compiled in but disabled]<br />
** Fixed at: {{PgCommitURL|b94409a02f6122d77b5154e481c0819fed6b4c95}}<br />
<br />
=== Nothing to do ===<br />
<br />
== Non-bugs ==<br />
<br />
* [https://www.postgresql.org/message-id/20210216064214.GI28165%40telsasoft.com progress reporting for partitioned REINDEX]<br />
* [https://www.postgresql.org/message-id/YFnWBYinNf1s0Y6v@msg.df7cb.de pg_regress and tablespace removal]<br />
** [https://www.postgresql.org/message-id/YG/tf6HTZFj4hWlb@paquier.xyz Some patch]<br />
<br />
== Resolved Issues ==<br />
<br />
=== resolved before 14beta1 ===<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gCXDSmFs2c%3DR%2BVGgn7FiYcLCsEFEuDNNLGfoha%3DpBE_g%40mail.gmail.com Assertion fail with window function and nested partitioned tables]<br />
** [https://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu Older report]<br />
** Fixed at: {{PgCommitURL|fb2d645dd53ff571572d830e830fc8c368063802}}<br />
<br />
* [https://www.postgresql.org/message-id/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com Atomic initialization of waitStart done at backend startup]<br />
** Fixed at: {{PgCommitURL|f05ed5a5cfa55878baa77a1e39d68cb09793b477}}<br />
<br />
* [https://www.postgresql.org/message-id/20210117215940.GE8560%40telsasoft.com pg_collation_actual_version() ERROR: cache lookup failed for collation 123]<br />
** Fixed at: {{PgCommitURL|0fb0a0503bfc125764c8dba4f515058145dc7f8b}}<br />
<br />
* [https://www.postgresql.org/message-id/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com cryptohash: missing locking functions for OpenSSL <= 1.0.2?]<br />
** Fixed at: {{PgCommitURL|2c0cefcd18161549e9e8b103f46c0f65fca84d99}}<br />
<br />
* [https://www.postgresql.org/message-id/CAHut%2BPuPGGASnh2Dy37VYODKULVQo-5oE%3DShc6gwtRizDt%3D%3DcA%40mail.gmail.com pg_subscription - substream column?]<br />
** Fixed at: {{PgCommitURL|7efeb214ad832fa96ea950d0906b1d2b96316d15}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk%3DS3V8DOKT%3DLZC1L36Q%40mail.gmail.com TOAST compression method of index columns]<br />
** Fixed at: {{PgCommitURL|5db1fd7823a1a12e2bdad98abc8e102fd71ffbda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to Crash with encoding conversion functions]<br />
** Fixed at: {{PgCommitURL|c4c393b3ec83ceb4b4d7f37cdd5302126377d069}}<br />
<br />
* [https://www.postgresql.org/message-id/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ@mail.gmail.com Crash with extended stats on expressions]<br />
** Fixed at: {{PgCommitURL|518442c7f334f3b05ea28b7ef50f1b551cfcc23e}}<br />
<br />
* [https://postgr.es/m/CA+TgmobwnGawnxufvqLCrcTy4HRhMepFiXQLY8YpVD+PTuwagA@mail.gmail.com Update TOAST documentation for LZ4 compression]<br />
** Fixed at: {{PgCommitURL|e8c435a824e123f43067ce6f69d66f14cfb8815e}}<br />
<br />
* [https://www.postgresql.org/message-id/20210404220802.GA728316@rfd.leadboat.com Behavior of pg_dump --extension with schemas]<br />
** Fixed at: {{PgCommitURL|344487e2db03f3cec13685a839dbc8a0e2a36750}}<br />
<br />
* [https://www.postgresql.org/message-id/OSZPR01MB631017521EE6887ADC9492E8FD759@OSZPR01MB6310.jpnprd01.prod.outlook.com psql query cancellation is broken], as are [https://www.postgresql.org/message-id/2671235.1618154047%40sss.pgh.pa.us autocommit], and [https://www.postgresql.org/message-id/YHTYOFBHDuGaz2gy@paquier.xyz error reporting]<br />
** Reverted by: {{PgCommitURL|fae65629cec824738ee11bf60f757239906d64fa}}<br />
<br />
* On Windows, collation version lookup (sometimes?) fails for names like "English_United States.1252", but works for names like "en-US".<br />
** Fixed at: {{PgCommitURL|9f12a3b95dd56c897f1aa3d756d8fb419e84a187}} -- this commit tolerates failure so at least we don't raise an error, but unfortunately we have no version information<br />
** Fixed at: {{PgCommitURL|1bf946bd43e545b86e567588b791311fe4e36a8c}} -- this commit documents the limitation<br />
<br />
* [https://www.postgresql.org/message-id/1820954.1617860500@sss.pgh.pa.us Handling of querystring inconsistent for parallel execution of SQL function bodies]<br />
** Fixed at: {{PgCommitURL|1111b2668d89bfcb6f502789158b1233ab4217a6}}<br />
<br />
* [https://www.postgresql.org/message-id/YHPkU8hFi4no4NSw@paquier.xyz Problems around compute_query_id]<br />
** Fixed at: {{PgCommitURL|db01f797dd48f826c62e1b8eea70f11fe7ff3efc}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com Table reference leak in logical replication]<br />
** Fixed at: {{PgCommitURL|f3b141c482552a57866c72919007d6481cd59ee3}}<br />
<br />
* [https://www.postgresql.org/message-id/20210410184226.GY6592%40telsasoft.com DETACH PARTITION CONCURRENTLY: Avoid adding redundant constraint]<br />
** Fixed at: {{PgCommitURL|7b357cc6ae}}<br />
<br />
* [https://www.postgresql.org/message-id/CC3F964B-8FA1-4A23-9D3E-6EA00BBFF0EE@enterprisedb.com Issues in PostgresNode and older major versions with multi-install]<br />
** Fixed at {{PgCommitURL|95c3a1956ec9eac686c1b69b033dd79211b72343}} and {{PgCommitURL|4c4eaf3d19201c5e2d9efebc590903dfaba0d3e5}}<br />
<br />
* [https://www.postgresql.org/message-id/3269784.1617215412%40sss.pgh.pa.us DETACH PARTITION CONCURRENTLY tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Fixed at: {{PgCommitURL|8aba9322511f}}<br />
<br />
* [https://www.postgresql.org/message-id/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com TRUNCATE on foreign tables and ONLY clause]<br />
** Fixed at: {{PgCommitURL|8e9ea08bae93a754d5075b7bc9c0b2bc71958bfd}}<br />
<br />
* [https://www.postgresql.org/message-id/CAMkU=1zKGWEJdBbYKw7Tn7cJmYR_UjgdcXTPDqJj=dNwCETBCQ@mail.gmail.com handling of character continuation in psql broken by sql body patch]<br />
** Fixed at: {{PgCommitURL|d9a9f4b4b92ad39e3c4e6600dc61d5603ddd6e24}}<br />
<br />
== Won't Fix ==<br />
<br />
== Important Dates ==<br />
<br />
Current schedule:<br />
<br />
* Feature Freeze: April 7, 2021 ('''Last Day to Commit Features''')<br />
* Beta 1: May 20, 2021<br />
* Beta 2: <br />
* Beta 3: <br />
* RC 1: <br />
* GA: <br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&diff=35943PostgreSQL 14 Open Items2021-04-23T12:30:21Z<p>Adunstan: </p>
<hr />
<div>== Open Issues ==<br />
<br />
'''NOTE''': Please place new open items at the end of the list.<br />
<br />
* [https://www.postgresql.org/message-id/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com Performance degradation of REFRESH MATERIALIZED VIEW]<br />
** Owner: Tomas Vondra<br />
<br />
* [https://www.postgresql.org/message-id/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de Replication slot stats misgivings]<br />
** Owner: Amit Kapila<br />
<br />
* [https://www.postgresql.org/message-id/20210409213155.GA23912%40alvherre.pgsql autoanalyze for partitioned tables should handle ATTACH/DETACH/DROP]<br />
** Owner: Alvaro Herrera<br />
<br />
* [https://www.postgresql.org/message-id/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com TRUNCATE on foreign tables and ONLY clause]<br />
** Owner: Fujii Masao<br />
<br />
* [https://www.postgresql.org/message-id/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de Questions about pg_stat_wal]<br />
** Owner: Fujii Masao<br />
<br />
* [https://www.postgresql.org/message-id/3564817.1618420687@sss.pgh.pa.us Bogus collation version recording in recordMultipleDependencies]<br />
** Owner: Thomas Munro<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkCYR0U7zXqXo0CgFaFwUDz1WbKq8ngjzKi4+AQ5f-mYQ@mail.gmail.com Generalize INDEX_CLEANUP to allow the user to disable the optimization that has VACUUM skip indexes in marginal cases with very few LP_DEAD items/deletable TIDs.]<br />
** Owner: Peter Geoghegan<br />
<br />
* [https://www.postgresql.org/message-id/92408.1618772924%40sss.pgh.pa.us SQL-standard function body: pg_dump should handle circular dependencies]<br />
** Owner: Peter Eisentraut<br />
<br />
* [https://www.postgresql.org/message-id/773932.1619022622@sss.pgh.pa.us Corruption issues with WAL prefetch?]<br />
** Owner: Thomas Munro<br />
<br />
* [https://www.postgresql.org/message-id/CAMkU=1zKGWEJdBbYKw7Tn7cJmYR_UjgdcXTPDqJj=dNwCETBCQ@mail.gmail.com handling of character continuation in psql broken by sql body patch]<br />
** Owner: Peter Eisentraut<br />
<br />
== Older bugs affecting stable branches ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com REINDEX on a system catalog can leave index with two index tuples whose heap TIDs match]<br />
** In other words, there is a rare case where the HOT invariant is violated: the same HOT chain is indexed twice because of confusion about which precise heap tuple should be indexed.<br />
** Unclear what the user impact is.<br />
** Affects all stable branches.<br />
<br />
* [https://www.postgresql.org/message-id/20201016135230.GA23633%40alvherre.pgsql CREATE TABLE .. PARTITION OF fails to preserve tgenabled for inherited row triggers]<br />
** tgenabled is lost on CREATE TABLE .. PARTITION OF and on pg_dump, and comments on child triggers are lost during pg_dump.<br />
<br />
* [https://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com memory leak with JIT inlining]<br />
** [https://www.postgresql.org/message-id/flat/20210331040751.GU4431%40telsasoft.com#cc34872765add8e483e05009212d9d39 Another report of (same?) issue and reproducer]<br />
** [https://www.postgresql.org/message-id/flat/9f73e655-14b8-feaf-bd66-c0f506224b9e%40stephans-server.de Another report]<br />
** [https://www.postgresql.org/message-id/flat/16707-f5df308978a55bf8%40postgresql.org Another report]<br />
<br />
* [https://www.postgresql.org/message-id/a3be61d9-f44b-7fce-3dc8-d700fdfb6f48%402ndquadrant.com extract(julian) is undocumented and gives wrong result]<br />
** With the reimplementation of extract() to return numeric, this might be an opportune time to fix this one way or the other.<br />
** Proposed doc patch at [https://www.postgresql.org/message-id/1197050.1619123213%40sss.pgh.pa.us]<br />
<br />
* [https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com lwlock dtrace probes do unnecessary work if dtrace is compiled in but disabled]<br />
** since PG13<br />
<br />
* [https://www.postgresql.org/message-id/1884374.1617898865%40sss.pgh.pa.us Buildfarm does not test pg_stat_statements]<br />
<br />
* [https://www.postgresql.org/message-id/CAEudQAoR5e7=uMZ0otzuCVb25zTC8QQBe+2Dt1JRsa3u+XuwJg@mail.gmail.com could not rename temporary statistics file on Windows]<br />
** See {{PgCommitURL|909b449e00fc2f71e1a38569bbddbb6457d28485}}, which fixed a similar symptom for WAL segments. Most reporters of the WAL segment problem complained about this renaming issue as well.<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633%403c-app-gmx-bap78 hashagg broken by failing to spill grouping columns]<br />
** Fixed at: {{PgCommitURL|0ff865fbe50e82f17df8a9280fa01faf270b7f3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com PITR promote bug: Checkpointer writes to older timeline]<br />
** Fixed at: {{PgCommitURL|595b9cba2ab0cdd057e02d3c23f34a8bcfd90a2d}}<br />
<br />
* [https://www.postgresql.org/message-id/YFBcRbnBiPdGZvfW%40paquier.xyz Permission failures with WAL files in 13~ on Windows]<br />
** Fixed at: {{PgCommitURL|78c24e97dd189f62187a99ef84016d0eb35a7978}}<br />
<br />
* [https://www.postgresql.org/message-id/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com CLOBBER_CACHE Server crashed with segfault 11 while executing clusterdb]<br />
** Fixed at: {{PgCommitURL|9d523119fd38fd205cb9c8ea8e7cceeb54355818}}<br />
<br />
* [https://www.postgresql.org/message-id/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com Reference leak with tupledescs in plpgsql simple expressions]<br />
** Fixed at: {{PgCommitURL|c2db458c1036efae503ce5e451f8369e64c99541}}<br />
<br />
=== Nothing to do ===<br />
<br />
== Non-bugs ==<br />
<br />
* [https://www.postgresql.org/message-id/20210216064214.GI28165%40telsasoft.com progress reporting for partitioned REINDEX]<br />
* [https://www.postgresql.org/message-id/YFnWBYinNf1s0Y6v@msg.df7cb.de pg_regress and tablespace removal]<br />
** [https://www.postgresql.org/message-id/YG/tf6HTZFj4hWlb@paquier.xyz Some patch]<br />
<br />
== Resolved Issues ==<br />
<br />
=== Resolved before 14beta1 ===<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gCXDSmFs2c%3DR%2BVGgn7FiYcLCsEFEuDNNLGfoha%3DpBE_g%40mail.gmail.com Assertion fail with window function and nested partitioned tables]<br />
** [https://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu Older report]<br />
** Fixed at: {{PgCommitURL|fb2d645dd53ff571572d830e830fc8c368063802}}<br />
<br />
* [https://www.postgresql.org/message-id/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com Atomic initialization of waitStart done at backend startup]<br />
** Fixed at: {{PgCommitURL|f05ed5a5cfa55878baa77a1e39d68cb09793b477}}<br />
<br />
* [https://www.postgresql.org/message-id/20210117215940.GE8560%40telsasoft.com pg_collation_actual_version() ERROR: cache lookup failed for collation 123]<br />
** Fixed at: {{PgCommitURL|0fb0a0503bfc125764c8dba4f515058145dc7f8b}}<br />
<br />
* [https://www.postgresql.org/message-id/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com cryptohash: missing locking functions for OpenSSL <= 1.0.2?]<br />
** Fixed at: {{PgCommitURL|2c0cefcd18161549e9e8b103f46c0f65fca84d99}}<br />
<br />
* [https://www.postgresql.org/message-id/CAHut%2BPuPGGASnh2Dy37VYODKULVQo-5oE%3DShc6gwtRizDt%3D%3DcA%40mail.gmail.com pg_subscription - substream column?]<br />
** Fixed at: {{PgCommitURL|7efeb214ad832fa96ea950d0906b1d2b96316d15}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk%3DS3V8DOKT%3DLZC1L36Q%40mail.gmail.com TOAST compression method of index columns]<br />
** Fixed at: {{PgCommitURL|5db1fd7823a1a12e2bdad98abc8e102fd71ffbda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to Crash with encoding conversion functions]<br />
** Fixed at: {{PgCommitURL|c4c393b3ec83ceb4b4d7f37cdd5302126377d069}}<br />
<br />
* [https://www.postgresql.org/message-id/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ@mail.gmail.com Crash with extended stats on expressions]<br />
** Fixed at: {{PgCommitURL|518442c7f334f3b05ea28b7ef50f1b551cfcc23e}}<br />
<br />
* [https://postgr.es/m/CA+TgmobwnGawnxufvqLCrcTy4HRhMepFiXQLY8YpVD+PTuwagA@mail.gmail.com Update TOAST documentation for LZ4 compression]<br />
** Fixed at: {{PgCommitURL|e8c435a824e123f43067ce6f69d66f14cfb8815e}}<br />
<br />
* [https://www.postgresql.org/message-id/20210404220802.GA728316@rfd.leadboat.com Behavior of pg_dump --extension with schemas]<br />
** Fixed at: {{PgCommitURL|344487e2db03f3cec13685a839dbc8a0e2a36750}}<br />
<br />
* [https://www.postgresql.org/message-id/OSZPR01MB631017521EE6887ADC9492E8FD759@OSZPR01MB6310.jpnprd01.prod.outlook.com psql query cancellation is broken], as are [https://www.postgresql.org/message-id/2671235.1618154047%40sss.pgh.pa.us autocommit], and [https://www.postgresql.org/message-id/YHTYOFBHDuGaz2gy@paquier.xyz error reporting]<br />
** Reverted by: {{PgCommitURL|fae65629cec824738ee11bf60f757239906d64fa}}<br />
<br />
* On Windows, collation version lookup (sometimes?) fails for names like "English_United States.1252", but works for names like "en-US".<br />
** Fixed at: {{PgCommitURL|9f12a3b95dd56c897f1aa3d756d8fb419e84a187}} -- this commit tolerates failure so at least we don't raise an error, but unfortunately we have no version information<br />
** Fixed at: {{PgCommitURL|1bf946bd43e545b86e567588b791311fe4e36a8c}} -- this commit documents the limitation<br />
<br />
* [https://www.postgresql.org/message-id/1820954.1617860500@sss.pgh.pa.us Handling of querystring inconsistent for parallel execution of SQL function bodies]<br />
** Fixed at: {{PgCommitURL|1111b2668d89bfcb6f502789158b1233ab4217a6}}<br />
<br />
* [https://www.postgresql.org/message-id/YHPkU8hFi4no4NSw@paquier.xyz Problems around compute_query_id]<br />
** Fixed at: {{PgCommitURL|db01f797dd48f826c62e1b8eea70f11fe7ff3efc}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com Table reference leak in logical replication]<br />
** Fixed at: {{PgCommitURL|f3b141c482552a57866c72919007d6481cd59ee3}}<br />
<br />
* [https://www.postgresql.org/message-id/20210410184226.GY6592%40telsasoft.com DETACH PARTITION CONCURRENTLY: Avoid adding redundant constraint]<br />
** Fixed at: {{PgCommitURL|7b357cc6ae}}<br />
<br />
* [https://www.postgresql.org/message-id/CC3F964B-8FA1-4A23-9D3E-6EA00BBFF0EE@enterprisedb.com Issues in PostgresNode and older major versions with multi-install]<br />
** Fixed at {{PgCommitURL|95c3a1956ec9eac686c1b69b033dd79211b72343}} and {{PgCommitURL|4c4eaf3d19201c5e2d9efebc590903dfaba0d3e5}}<br />
<br />
* [https://www.postgresql.org/message-id/3269784.1617215412%40sss.pgh.pa.us DETACH PARTITION CONCURRENTLY tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Fixed at: {{PgCommitURL|8aba9322511f}}<br />
<br />
== Won't Fix ==<br />
<br />
== Important Dates ==<br />
<br />
Current schedule:<br />
<br />
* Feature Freeze: April 7, 2021 ('''Last Day to Commit Features''')<br />
* Beta 1: <br />
* Beta 2: <br />
* Beta 3: <br />
* RC 1: <br />
* GA: <br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&diff=35936PostgreSQL 14 Open Items2021-04-22T19:51:39Z<p>Adunstan: </p>
<hr />
<div>== Open Issues ==<br />
<br />
'''NOTE''': Please place new open items at the end of the list.<br />
<br />
* [https://www.postgresql.org/message-id/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com Performance degradation of REFRESH MATERIALIZED VIEW]<br />
<br />
* [https://www.postgresql.org/message-id/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de Replication slot stats misgivings]<br />
<br />
* [https://www.postgresql.org/message-id/3269784.1617215412%40sss.pgh.pa.us DETACH PARTITION CONCURRENTLY tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Owner: Alvaro Herrera<br />
<br />
* [https://www.postgresql.org/message-id/20210409213155.GA23912%40alvherre.pgsql autoanalyze for partitioned tables should handle ATTACH/DETACH/DROP]<br />
** Owner: Alvaro Herrera<br />
<br />
* [https://www.postgresql.org/message-id/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com TRUNCATE on foreign tables and ONLY clause]<br />
** Owner: Fujii Masao<br />
<br />
* [https://www.postgresql.org/message-id/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de Questions about pg_stat_wal]<br />
** Owner: Fujii Masao<br />
<br />
* [https://www.postgresql.org/message-id/3564817.1618420687@sss.pgh.pa.us Bogus collation version recording in recordMultipleDependencies]<br />
** Owner: Thomas Munro<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkCYR0U7zXqXo0CgFaFwUDz1WbKq8ngjzKi4+AQ5f-mYQ@mail.gmail.com Generalize INDEX_CLEANUP to allow the user to disable the optimization that has VACUUM skip indexes in marginal cases with very few LP_DEAD items/deletable TIDs.]<br />
** Owner: Peter Geoghegan<br />
<br />
* [https://www.postgresql.org/message-id/92408.1618772924%40sss.pgh.pa.us SQL-standard function body: pg_dump should handle circular dependencies]<br />
** Owner: Peter Eisentraut<br />
<br />
* [https://www.postgresql.org/message-id/773932.1619022622@sss.pgh.pa.us Corruption issues with WAL prefetch?]<br />
** Owner: Thomas Munro<br />
<br />
== Older bugs affecting stable branches ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com REINDEX on a system catalog can leave index with two index tuples whose heap TIDs match]<br />
** In other words, there is a rare case where the HOT invariant is violated: the same HOT chain is indexed twice because of confusion about which precise heap tuple should be indexed.<br />
** Unclear what the user impact is.<br />
** Affects all stable branches.<br />
<br />
* [https://www.postgresql.org/message-id/20201016135230.GA23633%40alvherre.pgsql CREATE TABLE .. PARTITION OF fails to preserve tgenabled for inherited row triggers]<br />
** tgenabled is lost on CREATE TABLE .. PARTITION OF and on pg_dump, and comments on child triggers are lost during pg_dump.<br />
<br />
* [https://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com memory leak with JIT inlining]<br />
** [https://www.postgresql.org/message-id/flat/20210331040751.GU4431%40telsasoft.com#cc34872765add8e483e05009212d9d39 Another report of (same?) issue and reproducer]<br />
** [https://www.postgresql.org/message-id/flat/9f73e655-14b8-feaf-bd66-c0f506224b9e%40stephans-server.de Another report]<br />
** [https://www.postgresql.org/message-id/flat/16707-f5df308978a55bf8%40postgresql.org Another report]<br />
<br />
* [https://www.postgresql.org/message-id/a3be61d9-f44b-7fce-3dc8-d700fdfb6f48%402ndquadrant.com extract(julian) is undocumented and gives wrong result]<br />
** With the reimplementation of extract() to return numeric, this might be an opportune time to fix this one way or the other.<br />
<br />
* [https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com lwlock dtrace probes do unnecessary work if dtrace is compiled in but disabled]<br />
** since PG13<br />
<br />
* [https://www.postgresql.org/message-id/1884374.1617898865%40sss.pgh.pa.us Buildfarm does not test pg_stat_statements]<br />
<br />
* [https://www.postgresql.org/message-id/CAEudQAoR5e7=uMZ0otzuCVb25zTC8QQBe+2Dt1JRsa3u+XuwJg@mail.gmail.com could not rename temporary statistics file on Windows]<br />
** See {{PgCommitURL|909b449e00fc2f71e1a38569bbddbb6457d28485}}, which fixed a similar symptom for WAL segments. Most reporters of the WAL segment problem complained about this renaming issue as well.<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633%403c-app-gmx-bap78 hashagg broken by failing to spill grouping columns]<br />
** Fixed at: {{PgCommitURL|0ff865fbe50e82f17df8a9280fa01faf270b7f3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com PITR promote bug: Checkpointer writes to older timeline]<br />
** Fixed at: {{PgCommitURL|595b9cba2ab0cdd057e02d3c23f34a8bcfd90a2d}}<br />
<br />
* [https://www.postgresql.org/message-id/YFBcRbnBiPdGZvfW%40paquier.xyz Permission failures with WAL files in 13~ on Windows]<br />
** Fixed at: {{PgCommitURL|78c24e97dd189f62187a99ef84016d0eb35a7978}}<br />
<br />
* [https://www.postgresql.org/message-id/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com CLOBBER_CACHE Server crashed with segfault 11 while executing clusterdb]<br />
** Fixed at: {{PgCommitURL|9d523119fd38fd205cb9c8ea8e7cceeb54355818}}<br />
<br />
* [https://www.postgresql.org/message-id/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com Reference leak with tupledescs in plpgsql simple expressions]<br />
** Fixed at: {{PgCommitURL|c2db458c1036efae503ce5e451f8369e64c99541}}<br />
<br />
=== Nothing to do ===<br />
<br />
== Non-bugs ==<br />
<br />
* [https://www.postgresql.org/message-id/20210216064214.GI28165%40telsasoft.com progress reporting for partitioned REINDEX]<br />
* [https://www.postgresql.org/message-id/YFnWBYinNf1s0Y6v@msg.df7cb.de pg_regress and tablespace removal]<br />
** [https://www.postgresql.org/message-id/YG/tf6HTZFj4hWlb@paquier.xyz Some patch]<br />
<br />
== Resolved Issues ==<br />
<br />
=== Resolved before 14beta1 ===<br />
<br />
* [https://www.postgresql.org/message-id/CC3F964B-8FA1-4A23-9D3E-6EA00BBFF0EE@enterprisedb.com Issues in PostgresNode and older major versions with multi-install]<br />
** Fixed at {{PgCommitURL|95c3a1956ec9eac686c1b69b033dd79211b72343}} and {{PgCommitURL|4c4eaf3d19201c5e2d9efebc590903dfaba0d3e5}}<br />
<br />
* [https://www.postgresql.org/message-id/20210410184226.GY6592%40telsasoft.com DETACH PARTITION CONCURRENTLY: Avoid adding redundant constraint]<br />
** Owner: Alvaro Herrera<br />
** Fixed at: {{PgCommitURL|7b357cc6ae}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gCXDSmFs2c%3DR%2BVGgn7FiYcLCsEFEuDNNLGfoha%3DpBE_g%40mail.gmail.com Assertion fail with window function and nested partitioned tables]<br />
** [https://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu Older report]<br />
** Fixed at: {{PgCommitURL|fb2d645dd53ff571572d830e830fc8c368063802}}<br />
<br />
* [https://www.postgresql.org/message-id/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com Atomic initialization of waitStart done at backend startup]<br />
** Fixed at: {{PgCommitURL|f05ed5a5cfa55878baa77a1e39d68cb09793b477}}<br />
<br />
* [https://www.postgresql.org/message-id/20210117215940.GE8560%40telsasoft.com pg_collation_actual_version() ERROR: cache lookup failed for collation 123]<br />
** Fixed at: {{PgCommitURL|0fb0a0503bfc125764c8dba4f515058145dc7f8b}}<br />
<br />
* [https://www.postgresql.org/message-id/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com cryptohash: missing locking functions for OpenSSL <= 1.0.2?]<br />
** Fixed at: {{PgCommitURL|2c0cefcd18161549e9e8b103f46c0f65fca84d99}}<br />
<br />
* [https://www.postgresql.org/message-id/CAHut%2BPuPGGASnh2Dy37VYODKULVQo-5oE%3DShc6gwtRizDt%3D%3DcA%40mail.gmail.com pg_subscription - substream column?]<br />
** Fixed at: {{PgCommitURL|7efeb214ad832fa96ea950d0906b1d2b96316d15}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk%3DS3V8DOKT%3DLZC1L36Q%40mail.gmail.com TOAST compression method of index columns]<br />
** Fixed at: {{PgCommitURL|5db1fd7823a1a12e2bdad98abc8e102fd71ffbda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to Crash with encoding conversion functions]<br />
** Fixed at: {{PgCommitURL|c4c393b3ec83ceb4b4d7f37cdd5302126377d069}}<br />
<br />
* [https://www.postgresql.org/message-id/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ@mail.gmail.com Crash with extended stats on expressions]<br />
** Fixed at: {{PgCommitURL|518442c7f334f3b05ea28b7ef50f1b551cfcc23e}}<br />
<br />
* [https://postgr.es/m/CA+TgmobwnGawnxufvqLCrcTy4HRhMepFiXQLY8YpVD+PTuwagA@mail.gmail.com Update TOAST documentation for LZ4 compression]<br />
** Fixed at: {{PgCommitURL|e8c435a824e123f43067ce6f69d66f14cfb8815e}}<br />
<br />
* [https://www.postgresql.org/message-id/20210404220802.GA728316@rfd.leadboat.com Behavior of pg_dump --extension with schemas]<br />
** Fixed at: {{PgCommitURL|344487e2db03f3cec13685a839dbc8a0e2a36750}}<br />
<br />
* [https://www.postgresql.org/message-id/OSZPR01MB631017521EE6887ADC9492E8FD759@OSZPR01MB6310.jpnprd01.prod.outlook.com psql query cancellation is broken], as are [https://www.postgresql.org/message-id/2671235.1618154047%40sss.pgh.pa.us autocommit], and [https://www.postgresql.org/message-id/YHTYOFBHDuGaz2gy@paquier.xyz error reporting]<br />
** Reverted by: {{PgCommitURL|fae65629cec824738ee11bf60f757239906d64fa}}<br />
<br />
* On Windows, collation version lookup (sometimes?) fails for names like "English_United States.1252", but works for names like "en-US".<br />
** Fixed at: {{PgCommitURL|9f12a3b95dd56c897f1aa3d756d8fb419e84a187}} -- this commit tolerates failure so at least we don't raise an error, but unfortunately we have no version information<br />
** Fixed at: {{PgCommitURL|1bf946bd43e545b86e567588b791311fe4e36a8c}} -- this commit documents the limitation<br />
<br />
* [https://www.postgresql.org/message-id/1820954.1617860500@sss.pgh.pa.us Handling of querystring inconsistent for parallel execution of SQL function bodies]<br />
** Fixed at: {{PgCommitURL|1111b2668d89bfcb6f502789158b1233ab4217a6}}<br />
<br />
* [https://www.postgresql.org/message-id/YHPkU8hFi4no4NSw@paquier.xyz Problems around compute_query_id]<br />
** Fixed at: {{PgCommitURL|db01f797dd48f826c62e1b8eea70f11fe7ff3efc}}<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com Table reference leak in logical replication]<br />
** Fixed at: {{PgCommitURL|f3b141c482552a57866c72919007d6481cd59ee3}}<br />
<br />
== Won't Fix ==<br />
<br />
== Important Dates ==<br />
<br />
Current schedule:<br />
<br />
* Feature Freeze: April 7, 2021 ('''Last Day to Commit Features''')<br />
* Beta 1: <br />
* Beta 2: <br />
* Beta 3: <br />
* RC 1: <br />
* GA: <br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&diff=35931PostgreSQL 14 Open Items2021-04-21T14:44:26Z<p>Adunstan: /* Open Issues */</p>
<hr />
<div>== Open Issues ==<br />
<br />
'''NOTE''': Please place new open items at the end of the list.<br />
<br />
* [https://www.postgresql.org/message-id/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com Performance degradation of REFRESH MATERIALIZED VIEW]<br />
<br />
* [https://www.postgresql.org/message-id/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de Replication slot stats misgivings]<br />
<br />
* [https://www.postgresql.org/message-id/CC3F964B-8FA1-4A23-9D3E-6EA00BBFF0EE@enterprisedb.com Issues in PostgresNode and older major versions with multi-install]<br />
** Owner: Andrew Dunstan<br />
** patch pending<br />
<br />
* [https://www.postgresql.org/message-id/3269784.1617215412%40sss.pgh.pa.us DETACH PARTITION CONCURRENTLY tests fail under CLOBBER_CACHE_ALWAYS]<br />
** Owner: Alvaro Herrera<br />
<br />
* [https://www.postgresql.org/message-id/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com Table reference leak in logical replication]<br />
** Owner: Heikki Linnakangas<br />
** One patch [https://www.postgresql.org/message-id/YHktsjqjM89fyAIt@paquier.xyz here], after some review.<br />
<br />
* [https://www.postgresql.org/message-id/20210409213155.GA23912%40alvherre.pgsql autoanalyze for partitioned tables should handle ATTACH/DETACH/DROP]<br />
** Owner: Alvaro Herrera<br />
<br />
* [https://www.postgresql.org/message-id/20210410184226.GY6592%40telsasoft.com DETACH PARTITION CONCURRENTLY: Avoid adding redundant constraint]<br />
** Owner: Alvaro Herrera<br />
<br />
* [https://www.postgresql.org/message-id/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com TRUNCATE on foreign tables and ONLY clause]<br />
** Owner: Fujii Masao<br />
<br />
* [https://www.postgresql.org/message-id/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de Questions about pg_stat_wal]<br />
** Owner: Fujii Masao<br />
<br />
* [https://www.postgresql.org/message-id/3564817.1618420687@sss.pgh.pa.us Bogus collation version recording in recordMultipleDependencies]<br />
** Owner: Thomas Munro<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkCYR0U7zXqXo0CgFaFwUDz1WbKq8ngjzKi4+AQ5f-mYQ@mail.gmail.com Generalize INDEX_CLEANUP to allow the user to disable the optimization that has VACUUM skip indexes in marginal cases with very few LP_DEAD items/deletable TIDs.]<br />
** Owner: Peter Geoghegan<br />
<br />
* [https://www.postgresql.org/message-id/92408.1618772924%40sss.pgh.pa.us SQL-standard function body: pg_dump should handle circular dependencies]<br />
** Owner: Peter Eisentraut<br />
<br />
== Older bugs affecting stable branches ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com REINDEX on a system catalog can leave index with two index tuples whose heap TIDs match]<br />
** In other words, there is a rare case where the HOT invariant is violated: the same HOT chain is indexed twice because of confusion about which precise heap tuple should be indexed.<br />
** Unclear what the user impact is.<br />
** Affects all stable branches.<br />
<br />
* [https://www.postgresql.org/message-id/20201016135230.GA23633%40alvherre.pgsql CREATE TABLE .. PARTITION OF fails to preserve tgenabled for inherited row triggers]<br />
** tgenabled is lost on CREATE TABLE .. PARTITION OF and on pg_dump, and comments on child triggers are lost during pg_dump.<br />
<br />
* [https://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com memory leak with JIT inlining]<br />
** [https://www.postgresql.org/message-id/flat/20210331040751.GU4431%40telsasoft.com#cc34872765add8e483e05009212d9d39 Another report of (same?) issue and reproducer]<br />
** [https://www.postgresql.org/message-id/flat/9f73e655-14b8-feaf-bd66-c0f506224b9e%40stephans-server.de Another report]<br />
** [https://www.postgresql.org/message-id/flat/16707-f5df308978a55bf8%40postgresql.org Another report]<br />
<br />
* [https://www.postgresql.org/message-id/a3be61d9-f44b-7fce-3dc8-d700fdfb6f48%402ndquadrant.com extract(julian) is undocumented and gives wrong result]<br />
** With the reimplementation of extract() to return numeric, this might be an opportune time to fix this one way or the other.<br />
<br />
* [https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com lwlock dtrace probes do unnecessary work if dtrace is compiled in but disabled]<br />
** since PG13<br />
<br />
* [https://www.postgresql.org/message-id/1884374.1617898865%40sss.pgh.pa.us Buildfarm does not test pg_stat_statements]<br />
<br />
* [https://www.postgresql.org/message-id/CAEudQAoR5e7=uMZ0otzuCVb25zTC8QQBe+2Dt1JRsa3u+XuwJg@mail.gmail.com could not rename temporary statistics file on Windows]<br />
** See {{PgCommitURL|909b449e00fc2f71e1a38569bbddbb6457d28485}}, which fixed a similar symptom for WAL segments. Most reporters of the WAL segment problem complained about this renaming issue as well.<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/flat/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633%403c-app-gmx-bap78 hashagg broken by failing to spill grouping columns]<br />
** Fixed at: {{PgCommitURL|0ff865fbe50e82f17df8a9280fa01faf270b7f3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com PITR promote bug: Checkpointer writes to older timeline]<br />
** Fixed at: {{PgCommitURL|595b9cba2ab0cdd057e02d3c23f34a8bcfd90a2d}}<br />
<br />
* [https://www.postgresql.org/message-id/YFBcRbnBiPdGZvfW%40paquier.xyz Permission failures with WAL files in 13~ on Windows]<br />
** Fixed at: {{PgCommitURL|78c24e97dd189f62187a99ef84016d0eb35a7978}}<br />
<br />
* [https://www.postgresql.org/message-id/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com CLOBBER_CACHE Server crashed with segfault 11 while executing clusterdb]<br />
** Fixed at: {{PgCommitURL|9d523119fd38fd205cb9c8ea8e7cceeb54355818}}<br />
<br />
* [https://www.postgresql.org/message-id/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com Reference leak with tupledescs in plpgsql simple expressions]<br />
** Fixed at: {{PgCommitURL|c2db458c1036efae503ce5e451f8369e64c99541}}<br />
<br />
=== Nothing to do ===<br />
<br />
== Non-bugs ==<br />
<br />
* [https://www.postgresql.org/message-id/20210216064214.GI28165%40telsasoft.com progress reporting for partitioned REINDEX]<br />
* [https://www.postgresql.org/message-id/YFnWBYinNf1s0Y6v@msg.df7cb.de pg_regress and tablespace removal]<br />
** [https://www.postgresql.org/message-id/YG/tf6HTZFj4hWlb@paquier.xyz Some patch]<br />
<br />
== Resolved Issues ==<br />
<br />
=== Resolved before 14beta1 ===<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gCXDSmFs2c%3DR%2BVGgn7FiYcLCsEFEuDNNLGfoha%3DpBE_g%40mail.gmail.com Assertion fail with window function and nested partitioned tables]<br />
** [https://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu Older report]<br />
** Fixed at: {{PgCommitURL|fb2d645dd53ff571572d830e830fc8c368063802}}<br />
<br />
* [https://www.postgresql.org/message-id/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com Atomic initialization of waitStart done at backend startup]<br />
** Fixed at: {{PgCommitURL|f05ed5a5cfa55878baa77a1e39d68cb09793b477}}<br />
<br />
* [https://www.postgresql.org/message-id/20210117215940.GE8560%40telsasoft.com pg_collation_actual_version() ERROR: cache lookup failed for collation 123]<br />
** Fixed at: {{PgCommitURL|0fb0a0503bfc125764c8dba4f515058145dc7f8b}}<br />
<br />
* [https://www.postgresql.org/message-id/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com cryptohash: missing locking functions for OpenSSL <= 1.0.2?]<br />
** Fixed at: {{PgCommitURL|2c0cefcd18161549e9e8b103f46c0f65fca84d99}}<br />
<br />
* [https://www.postgresql.org/message-id/CAHut%2BPuPGGASnh2Dy37VYODKULVQo-5oE%3DShc6gwtRizDt%3D%3DcA%40mail.gmail.com pg_subscription - substream column?]<br />
** Fixed at: {{PgCommitURL|7efeb214ad832fa96ea950d0906b1d2b96316d15}}<br />
<br />
* [https://www.postgresql.org/message-id/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk%3DS3V8DOKT%3DLZC1L36Q%40mail.gmail.com TOAST compression method of index columns]<br />
** Fixed at: {{PgCommitURL|5db1fd7823a1a12e2bdad98abc8e102fd71ffbda}}<br />
<br />
* [https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to Crash with encoding conversion functions]<br />
** Fixed at: {{PgCommitURL|c4c393b3ec83ceb4b4d7f37cdd5302126377d069}}<br />
<br />
* [https://www.postgresql.org/message-id/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ@mail.gmail.com Crash with extended stats on expressions]<br />
** Fixed at: {{PgCommitURL|518442c7f334f3b05ea28b7ef50f1b551cfcc23e}}<br />
<br />
* [https://postgr.es/m/CA+TgmobwnGawnxufvqLCrcTy4HRhMepFiXQLY8YpVD+PTuwagA@mail.gmail.com Update TOAST documentation for LZ4 compression]<br />
** Fixed at: {{PgCommitURL|e8c435a824e123f43067ce6f69d66f14cfb8815e}}<br />
<br />
* [https://www.postgresql.org/message-id/20210404220802.GA728316@rfd.leadboat.com Behavior of pg_dump --extension with schemas]<br />
** Fixed at: {{PgCommitURL|344487e2db03f3cec13685a839dbc8a0e2a36750}}<br />
<br />
* [https://www.postgresql.org/message-id/OSZPR01MB631017521EE6887ADC9492E8FD759@OSZPR01MB6310.jpnprd01.prod.outlook.com psql query cancellation is broken], as are [https://www.postgresql.org/message-id/2671235.1618154047%40sss.pgh.pa.us autocommit], and [https://www.postgresql.org/message-id/YHTYOFBHDuGaz2gy@paquier.xyz error reporting]<br />
** Reverted by: {{PgCommitURL|fae65629cec824738ee11bf60f757239906d64fa}}<br />
<br />
* On Windows, collation version lookup (sometimes?) fails for names like "English_United States.1252", but works for names like "en-US".<br />
** Fixed at: {{PgCommitURL|9f12a3b95dd56c897f1aa3d756d8fb419e84a187}} -- this commit tolerates failure so at least we don't raise an error, but unfortunately we have no version information<br />
** Fixed at: {{PgCommitURL|1bf946bd43e545b86e567588b791311fe4e36a8c}} -- this commit documents the limitation<br />
<br />
* [https://www.postgresql.org/message-id/1820954.1617860500@sss.pgh.pa.us Handling of querystring inconsistent for parallel execution of SQL function bodies]<br />
** Fixed at: {{PgCommitURL|1111b2668d89bfcb6f502789158b1233ab4217a6}}<br />
<br />
* [https://www.postgresql.org/message-id/YHPkU8hFi4no4NSw@paquier.xyz Problems around compute_query_id]<br />
** Fixed at: {{PgCommitURL|db01f797dd48f826c62e1b8eea70f11fe7ff3efc}}<br />
<br />
== Won't Fix ==<br />
<br />
== Important Dates ==<br />
<br />
Current schedule:<br />
<br />
* Feature Freeze: April 7, 2021 ('''Last Day to Commit Features''')<br />
* Beta 1: <br />
* Beta 2: <br />
* Beta 3: <br />
* RC 1: <br />
* GA: <br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Developer_FAQ&diff=35815Developer FAQ2021-03-28T13:44:23Z<p>Adunstan: /* What tools are available for developers? */ update for unused_oids best practice</p>
<hr />
<div>{{Languages}}<br />
<br />
== Getting Involved ==<br />
<br />
=== How do I get involved in PostgreSQL development? ===<br />
<br />
Download the code and have a look around. See [[#How_do_I_download.2Fupdate_the_current_source_tree.3F|downloading the source tree]].<br />
<br />
Subscribe to and read the [http://archives.postgresql.org/pgsql-hackers/ pgsql-hackers mailing list] (often termed "hackers"). This is where the major contributors and core members of the project discuss development.<br />
<br />
=== How do I download/update the current source tree? ===<br />
<br />
There are several ways to obtain the source tree. Occasional developers can just get the most recent source tree snapshot from ftp://ftp.postgresql.org/pub/snapshot/.<br />
<br />
Regular developers might want to take advantage of anonymous access to our source code management system. The source tree is currently hosted in git. For details of how to obtain the source from git see http://developer.postgresql.org/pgdocs/postgres/git.html and [[Working with Git]].<br />
<br />
=== What development environment is required to develop code? ===<br />
<br />
PostgreSQL is developed mostly in the C programming language. The source code is targeted at most of the popular Unix platforms and the Windows environment (Windows 2000, XP, and later).<br />
<br />
Most developers run a Unix-like operating system and use an open source tool chain with [http://gcc.gnu.org GCC], [http://www.gnu.org/software/make/make.html GNU Make], [http://www.gnu.org/software/gdb/gdb.html GDB], [http://www.gnu.org/software/autoconf/ Autoconf], and so on. If you have contributed to open source software before, you will probably be familiar with these tools. Developers using this tool chain on Windows make use of [http://www.mingw.org/ MinGW], though most development on Windows is currently done with the Microsoft Visual Studio 2005 (version 8) development environment and associated tools.<br />
<br />
The complete list of required software to build PostgreSQL can be found in the [http://developer.postgresql.org/pgdocs/postgres/install-requirements.html installation instructions].<br />
<br />
Developers who regularly rebuild the source often pass the --enable-depend flag to configure. The result is that if you modify a C header file, all files that depend on that file are also rebuilt.<br />
<br />
src/Makefile.custom can be used to set environment variables, like CUSTOM_COPT, that are used for every compile.<br />
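Putting those flags together, a development build might be configured like this (an illustrative sketch; the prefix path is a placeholder, and the flag choice is a common convention rather than a requirement):<br />

```shell
# Illustrative development configure invocation, run from the source tree:
# --enable-depend   rebuild files whose included headers changed
# --enable-cassert  enable internal assertions (useful for debugging)
./configure --prefix=$HOME/pgsql-dev \
            --enable-depend \
            --enable-cassert
make
```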
<br />
=== What areas need work? ===<br />
Outstanding features are detailed in [[Todo]].<br />
<br />
You can learn more about these features by consulting the [http://archives.postgresql.org/ archives], the SQL standards and the recommended texts (see [[#What_books_are_good_for_developers.3F|books for developers]]).<br />
<br />
=== How do I get involved in PostgreSQL web site development? ===<br />
<br />
PostgreSQL website development is discussed on the [http://archives.postgresql.org/pgsql-www/ pgsql-www mailing list] and organized by the [[Infrastructure team]]. Source code for the postgresql.org web site is stored in a [http://git.postgresql.org/gitweb/?p=pgweb.git;a=summary Git repository].<br />
<br />
== Development Tools and Help ==<br />
<br />
=== How is the source code organized? ===<br />
<br />
If you point your browser at [http://www.postgresql.org/developer/backend/ Backend Flowchart], you will see a few paragraphs describing the data flow, the backend components in a flow chart, and a description of the shared memory area. You can click on any flowchart box to see a description. If you then click on the directory name, you will be taken to the source directory, to browse the actual source code behind it. We also have several README files in some source directories to describe the function of the module. The browser will also display these README files when you enter the directory.<br />
<br />
=== What information is available to learn PostgreSQL internals? ===<br />
<br />
* Overview of PostgreSQL Internals https://www.postgresql.org/docs/devel/static/overview.html<br />
* Coding https://www.postgresql.org/developer/coding/<br />
* Introduction to Hacking PostgreSQL - With lots of code review! https://www.cse.iitb.ac.in/infolab/Data/Courses/CS631/PostgreSQL-Resources/hacking_intro.pdf<br />
* Introduction to Hacking PostgreSQL http://www.neilconway.org/talks/hacking/<br />
* Postgres Internals Presentations http://momjian.us/main/presentations/internals.html<br />
* The Internals of PostgreSQL http://www.interdb.jp/pg/<br />
* PostgreSQL source code analysis (in Japanese) http://ikubo.x0.com/PostgreSQL/pg_source.htm<br />
* PostgreSQL Memorandum (in Japanese) http://www.nminoru.jp/~nminoru/postgresql/<br />
<br />
=== What tools are available to learn about/inspect the PostgreSQL on-disk format? ===<br />
<br />
* [https://www.postgresql.org/docs/current/static/pageinspect.html contrib/pageinspect]<br />
* [https://github.com/petergeoghegan/pg_hexedit pg_hexedit] - hex editor toolkit<br />
* [https://github.com/hlinnaka/pg-internals-explorer pg-internals-explorer] - ncurses interface to explore on-disk format (unmaintained)<br />
<br />
=== What tools are available for developers? ===<br />
<br />
First, all the files in the src/tools directory are designed for developers.<br />
<br />
RELEASE_CHANGES changes we have to make for each release<br />
ccsym find standard defines made by your compiler<br />
copyright fixes copyright notices<br />
<br />
entab converts spaces to tabs, used by pgindent<br />
find_static finds functions that could be made static<br />
find_typedef finds typedefs in the source code<br />
find_badmacros finds macros that use braces incorrectly<br />
fsync a script to provide information about the cost of cache<br />
syncing system calls<br />
make_ctags make vi 'tags' file in each directory<br />
make_diff make *.orig and diffs of source<br />
make_etags make emacs 'etags' files<br />
make_keywords make comparison of our keywords and SQL'92<br />
make_mkid make mkid ID files<br />
git_changelog used to generate a list of changes for each release<br />
pginclude scripts for adding/removing include files<br />
pgindent indents source files<br />
pgtest a semi-automated build system<br />
thread a thread testing script<br />
<br />
In src/include/catalog:<br />
<br />
unused_oids a script that finds unused OIDs for use in system catalogs<br />
duplicate_oids finds duplicate OIDs in system catalog definitions<br />
<br />
tools/backend was already described in the question-and-answer above.<br />
<br />
Second, you really should have an editor that can handle tags, so you can tag a function call to see the function definition, and then tag inside that function to see an even lower-level function, and then back out twice to return to the original function. Most editors support this via tags or etags files.<br />
<br />
Third, you need to get id-utils from ftp://ftp.gnu.org/gnu/idutils/<br />
<br />
By running tools/make_mkid, an archive of source symbols can be created that can be rapidly queried.<br />
<br />
Some developers make use of cscope, which can be found at http://cscope.sf.net/. Others use glimpse, which can be found at http://webglimpse.net/.<br />
<br />
tools/make_diff has tools to create patch diff files that can be applied to the distribution. This produces diffs for easier readability.<br />
<br />
pgindent is used to fix the source code style to conform to our standards, and is normally run at the end of each development cycle; see [[#What.27s_the_formatting_style_used_in_PostgreSQL_source_code.3F|this question]] for more information on our style.<br />
<br />
pginclude contains scripts used to add needed #include's to include files, and to remove unneeded #include's.<br />
<br />
When adding built-in objects such as types or functions, you will need to assign OIDs to them. Our convention is that all hand-assigned OIDs are distinct values in the range 1-9999. (It would work mechanically for them to be unique within individual system catalogs, but for clarity we require them to be unique across the whole system.) There is a script called unused_oids in src/include/catalog that shows the currently unused OIDs. To assign a new OID, pick one that is free according to unused_oids. The script will recommend a range to you, looking like this:<br />
<br />
Patches should use a more-or-less consecutive range of OIDs.<br />
Best practice is to start with a random choice in the range 8000-9999.<br />
Suggested random unused OID: 9209 (46 consecutive OID(s) available starting here)<br />
<br />
and it's normally best to take its recommendation. See also the duplicate_oids script, which will complain if you made a mistake.<br />
<br />
=== What's the formatting style used in PostgreSQL source code? ===<br />
<br />
Our standard format is BSD style, with each level of code indented one tab, where each tab is four spaces. You will need to set your editor or file viewer to display tabs as four spaces.<br />
<br />
The [http://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src/tools/editors;hb=HEAD src/tools/editors directory of the latest sources] contains sample settings that can be used with the '''emacs''' and '''xemacs''' editors, to assist in keeping to PostgreSQL coding standards.<br />
<br />
'''Vim''' users will also find tips in the article [[Configuring vim for postgres development]].<br />
<br />
For '''less''' or '''more''', specify <code>-x4</code> to get the correct indentation.<br />
<br />
<tt>pgindent</tt> formats code by passing flags to your operating system's <tt>indent</tt> utility. pgindent is run on all source files just before each beta test period. It auto-formats all source files to make them consistent. Comment blocks that need specific line breaks should be formatted as block comments, where the comment starts as /*------. These comments will not be reformatted in any way.<br />
<br />
See also [http://developer.postgresql.org/pgdocs/postgres/source-format.html the Formatting section] in the documentation. [http://archives.postgresql.org/message-id/1221125165.5637.12.camel@abbas-laptop This posting] talks about our naming of variable and function names.<br />
<br />
If you're wondering why we bother with this, [http://en.wikipedia.org/wiki/Coding_conventions this article] describes the value of a consistent coding style.<br />
<br />
=== Is there a diagram of the system catalogs available? ===<br />
<br />
Yes, we have [http://dalibo.org/_media/articles/catalog.png at least one for v8.3] ([http://svn.postgresql.fr/repos/materials/advocacy/trunk/posters/catalogs83.svg SVG version]), and [https://www.postgrescompare.com/2017/06/11/pg_catalog_constraints.html several for v10].<br />
<br />
=== What books are good for developers? ===<br />
<br />
There are five good books:<br />
<br />
* An Introduction to Database Systems, by C.J. Date, Addison-Wesley<br />
* A Guide to the SQL Standard, by C.J. Date et al., Addison-Wesley<br />
* Fundamentals of Database Systems, by Elmasri and Navathe<br />
* Transaction Processing, by Jim Gray and Andreas Reuter, Morgan Kaufmann<br />
* Transactional Information Systems, by Gerhard Weikum and Gottfried Vossen, Morgan Kaufmann<br />
<br />
=== What is configure all about? ===<br />
<br />
The files configure and configure.in are part of the GNU autoconf package. Configure allows us to test for various capabilities of the OS, and to set variables that can then be tested in C programs and Makefiles. Autoconf is installed on the PostgreSQL main server. To add options to configure, edit configure.in, and then run autoconf to generate configure.<br />
<br />
When configure is run by the user, it tests various OS capabilities, stores those in config.status and config.cache, and modifies a list of *.in files. For example, if there exists a Makefile.in, configure generates a Makefile that contains substitutions for all @var@ parameters found by configure.<br />
<br />
When you need to edit files, make sure you don't waste time modifying files generated by configure. Edit the *.in file, and re-run configure to recreate the needed file. If you run make distclean from the top-level source directory, all files derived by configure are removed, so you see only the files contained in the source distribution.<br />
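As a toy illustration of that substitution step (the real work is done by the generated config.status script; plain sed is used here only to show the effect, and the gcc/-O2 values are placeholders for whatever configure detects):<br />

```shell
# A template as it might appear in the source tree:
cat > Makefile.in <<'EOF'
CC = @CC@
CFLAGS = @CFLAGS@
EOF

# configure (via config.status) fills in the values it detected;
# sed stands in for that mechanism here:
sed -e 's/@CC@/gcc/' -e 's/@CFLAGS@/-O2/' Makefile.in > Makefile
cat Makefile
```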
=== How do I add a new port? ===<br />
<br />
There are a variety of places that need to be modified to add a new port. First, start in the src/template directory. Add an appropriate entry for your OS. Also, use src/config.guess to add your OS to src/template/.similar. You shouldn't match the OS version exactly. The configure test will look for an exact OS version number, and if not found, find a match without version number. Edit src/configure.in to add your new OS. (See configure item above.) You will need to run autoconf, or patch src/configure too.<br />
<br />
Then, check src/include/port and add your new OS file, with appropriate values. Hopefully, there is already locking code in src/include/storage/s_lock.h for your CPU. There is also a src/makefiles directory for port-specific Makefile handling. There is a backend/port directory if you need special files for your OS.<br />
=== Why don't you use raw devices, async-I/O, <insert your favorite wizz-bang feature here>? ===<br />
<br />
There is always a temptation to use the newest operating system features as soon as they arrive. We resist that temptation.<br />
<br />
First, we support 15+ operating systems, so any new feature has to be well established before we will consider it. Second, most new wizz-bang features don't provide dramatic improvements. Third, they usually have some downside, such as decreased reliability or additional code required. Therefore, we don't rush to use new features but rather wait for the feature to be established, then ask for testing to show that a measurable improvement is possible.<br />
<br />
As an example, threads are not yet used instead of multiple processes for backends because:<br />
<br />
* Historically, threads were poorly supported and buggy.<br />
* An error in one backend can corrupt other backends if they're threads within a single process.<br />
* Speed improvements using threads are small compared to the remaining backend startup time.<br />
* The backend code would be more complex.<br />
* Terminating backend processes allows the OS to cleanly and quickly free all resources, protecting against memory and file descriptor leaks and making backend shutdown cheaper and faster.<br />
* Debugging threaded programs is much harder than debugging worker processes, and core dumps are much less useful.<br />
* Sharing of read-only executable mappings and the use of shared_buffers means processes, like threads, are very memory efficient.<br />
* Regular creation and destruction of processes helps protect against memory fragmentation, which can be hard to manage in long-running processes.<br />
<br />
(Whether individual backend processes should use multiple threads to make use of multiple cores for single queries is a separate question not covered here).<br />
<br />
So, we are not ignorant of new features. It is just that we are cautious about their adoption. The TODO list often contains links to discussions showing our reasoning in these areas.<br />
<br />
Even some modern platforms have surprising problems with widely used functionality. For example, Linux's AIO layer offers no reliable asynchronous way to do fsync() and get completion notification.<br />
<br />
=== How are branches managed? ===<br />
<br />
See [[Working_with_Git#Using_Back_Branches|Using Back Branches]] and [[Committing with Git]] for information about how branches and backporting are handled.<br />
<br />
=== Where can I get a copy of the SQL standards? ===<br />
You are supposed to buy them from [https://www.iso.org/committee/45342.html ISO/IEC JTC 1/SC 32] or [http://www.ansi.org ANSI]. Search for ISO/ANSI 9075. ANSI's offer is less expensive, but the contents of the documents are the same between the two organizations.<br />
<br />
Since buying an official copy of the standard is quite expensive, most developers rely on one of the various draft versions available on the Internet. Some of these are:<br />
* SQL-92 http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt<br />
* SQL:1999 http://web.cs.ualberta.ca/~yuan/courses/db_readings/ansi-iso-9075-2-1999.pdf<br />
* SQL:2003 http://www.wiscorp.com/sql_2003_standard.zip<br />
* SQL:2011 (preliminary) http://www.wiscorp.com/sql20nn.zip<br />
* No free copy of SQL:2016 appears to exist. (If you know one, please add a link here.)<br />
<br />
The PostgreSQL documentation contains information about PostgreSQL and [http://developer.postgresql.org/pgdocs/postgres/features.html SQL conformance].<br />
<br />
Some further web pages about the SQL standard are:<br />
* http://troels.arvin.dk/db/rdbms/links/#standards<br />
* http://www.wiscorp.com/SQLStandards.html<br />
* http://www.contrib.andrew.cmu.edu/~shadow/sql.html#syntax (SQL-92)<br />
* http://dbs.uni-leipzig.de/en/lokal/standards.pdf (paper)<br />
<br />
Note that having access to a copy of the SQL standard is not necessary to become a useful contributor to PostgreSQL development. Interpreting the standard is difficult and requires years of experience. As the standard is silent on many useful features like indexing, there is a good bit of development happening outside its bounds.<br />
<br />
=== Are there known deviations from the SQL Standard in PostgreSQL? ===<br />
<br />
Certainly. We list them [[PostgreSQL_vs_SQL_Standard|here]].<br />
<br />
=== Where can I get technical assistance? ===<br />
<br />
Many technical questions held by those new to the code have been answered on the pgsql-hackers mailing list - the archives of which can be found at http://archives.postgresql.org/pgsql-hackers/.<br />
<br />
If you cannot find discussion of your particular question, feel free to put it to the list.<br />
<br />
Major contributors also answer technical questions, including questions about development of new features, on IRC at irc.freenode.net in the #postgresql channel.<br />
<br />
== Development Process ==<br />
<br />
=== What do I do after choosing an item to work on? ===<br />
<br />
Send an email to pgsql-hackers with a proposal for what you want to do (assuming your contribution is not trivial). Working in isolation is not advisable because others might be working on the same TODO item, or you might have misunderstood the TODO item. In the email, discuss both the internal implementation method you plan to use, and any user-visible changes (new syntax, etc). For complex patches, it is important to get community feedback on your proposal before starting work. Failure to do so might mean your patch is rejected. If your work is being sponsored by a company, read [http://momjian.us/main/writings/pgsql/company_contributions.html this article] for tips on being more effective.<br />
<br />
Our queue of patches to be reviewed is maintained via a custom [[CommitFest]] web application at http://commitfest.postgresql.org.<br />
<br />
=== How do I test my changes? ===<br />
<br />
==== Basic system testing ====<br />
<br />
The easiest way to test your code is to ensure that it builds against the latest version of the code and that it does not generate compiler warnings.<br />
<br />
It is advisable to pass --enable-cassert to configure. This will turn on assertions within the source, which often make bugs visible earlier, before they surface as data corruption or segmentation violations. This generally makes debugging much easier.<br />
<br />
Then, perform run time testing via psql.<br />
<br />
==== Runtime environment ====<br />
<br />
To test your modified version of PostgreSQL, it's convenient to install it into a local directory (in your home directory, for instance) to avoid conflicting with a system-wide installation. Use the ''--prefix='' option to configure to specify an installation location; ''--with-pgport'' to specify a non-standard default port is helpful as well. To run this instance, you will need to make sure that the correct binaries are used; depending on your operating system, environment variables like PATH and LD_LIBRARY_PATH (on most Linux/Unix-like systems) need to be set. Setting PGDATA will also be useful.<br />
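For example, a session using such a private installation might be set up like this (a sketch; the $HOME/pgsql-dev prefix and the port are arbitrary choices):<br />

```shell
# Assuming the tree was built and installed with (not run here):
#   ./configure --prefix=$HOME/pgsql-dev --with-pgport=5555
#   make && make install

# Point the shell at the private installation:
PGPREFIX="$HOME/pgsql-dev"
export PATH="$PGPREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$PGPREFIX/lib:${LD_LIBRARY_PATH:-}"
export PGDATA="$PGPREFIX/data"

# Then (not run here): initdb && pg_ctl -l logfile start
```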
<br />
To avoid having to set this environment up manually, you may want to use Greg Smith's [https://github.com/gregs1104/peg peg] scripts, or the [https://github.com/PGBuildFarm/client-code scripts] that are used on the buildfarm.<br />
<br />
==== Regression test suite ====<br />
<br />
The next step is to test your changes against the existing regression test suite. To do this, issue "make check" in the root directory of the source tree. If any tests fail, investigate.<br />
<br />
The regression tests and control program are in <tt>src/test/regress</tt>. <br />
<br />
The control program is <tt>pg_regress</tt>, but you usually run it via make rather than directly.<br />
<br />
You may find it useful to use <tt>PG_REGRESS_DIFF_OPTS=-ud make check</tt> to get unified diffs, rather than the default context diffs that <tt>pg_regress</tt> produces.<br />
<br />
If you've deliberately changed existing behavior, this change might cause a regression test failure without representing an actual regression. If so, you should update the regression test suite as well.<br />
<br />
To change the options PostgreSQL runs with for a given regression test execution you can use the <tt>PGOPTIONS</tt> environment variable, e.g.<br />
<br />
PGOPTIONS="-c log_error_verbosity=verbose -c log_min_messages=debug2" make check<br />
<br />
==== Isolation tests ==== <br />
<br />
For concurrency issues, PostgreSQL includes an "isolation tester" in <tt>src/test/isolation</tt>. This tool supports multiple connections and is useful if you are trying to reproduce concurrency-related bugs or test new functionality.<br />
<br />
==== Valgrind ====<br />
<br />
To use Valgrind, edit <tt>src/include/pg_config_manual.h</tt> to set <tt>#define USE_VALGRIND</tt>, then run the postmaster under Valgrind with the supplied suppressions.<br />
<br />
See [[Valgrind]].<br />
<br />
==== Other run time testing ====<br />
<br />
Some developers make use of tools such as [[Profiling with Perf|perf]] (from the Linux kernel), gprof (which comes with the GNU binutils suite), ftrace, dtrace and [[Profiling_with_OProfile|oprofile]] (http://oprofile.sourceforge.net/) for profiling, as well as other related tools.<br />
<br />
==== What about unit testing, static analysis, model checking...? ====<br />
<br />
There have been a number of discussions about other testing frameworks and some developers are exploring these ideas.<br />
<br />
Keep in mind that the Makefiles do not have the proper dependencies for include files, so you have to do a make clean and then another make. If you are using GCC, you can use the --enable-depend option of configure to have the compiler compute the dependencies automatically.<br />
<br />
=== I have developed a patch, what next? ===<br />
<br />
You will need to submit the patch to pgsql-hackers@postgresql.org. To help ensure your patch is reviewed and committed in a timely fashion, please try to follow the guidelines at [[Submitting a Patch]].<br />
<br />
=== What happens to my patch once it is submitted? ===<br />
<br />
It will be reviewed by other contributors to the project and will be either accepted or sent back for further work. The process is explained in more detail at [[Submitting a Patch#Patch review and commit|Submitting a Patch]].<br />
<br />
=== How do I help with reviewing patches? ===<br />
<br />
If you would like to contribute by reviewing a patch in the [http://commitfest.postgresql.org CommitFest] queue, you are most welcome to do so. Please read the guide at [[Reviewing a Patch]] for more information.<br />
<br />
=== Do I need to sign a copyright assignment? ===<br />
<br />
No, contributors keep their copyright (as is the case for most European countries anyway). They simply consider themselves to be part of the Postgres Global Development Group. (It's not even possible to assign copyright to PGDG, as it's not a legal entity.) This is the same way that the Linux kernel and many other open source projects work.<br />
<br />
=== May I add my own copyright notice where appropriate? ===<br />
<br />
No, please don't. We like to keep the legal information short and crisp. Additionally, we've heard that it could pose problems for corporate users.<br />
<br />
=== Doesn't the PostgreSQL license itself require keeping the copyright notice intact? ===<br />
<br />
Yes, it does, and it is kept intact, because the PostgreSQL Global Development Group covers all copyright holders. Also note that US law, like most European laws, doesn't require a copyright notice for copyright to be granted.<br />
<br />
== Technical Questions ==<br />
=== How do I efficiently access information in system catalogs from the backend code? ===<br />
<br />
You first need to find the tuples (rows) you are interested in. There are two ways. First, SearchSysCache() and related functions allow you to query the system catalogs using predefined indexes on the catalogs. This is the preferred way to access system tables, because the first call to the cache loads the needed rows, and future requests can return the results without accessing the base table. A list of available caches is located in src/backend/utils/cache/syscache.c. src/backend/utils/cache/lsyscache.c contains many column-specific cache lookup functions.<br />
<br />
The rows returned are cache-owned versions of the heap rows. Therefore, you must not modify or delete the tuple returned by SearchSysCache(). What you should do is release it with ReleaseSysCache() when you are done using it; this informs the cache that it can discard that tuple if necessary. If you neglect to call ReleaseSysCache(), then the cache entry will remain locked in the cache until end of transaction, which is tolerable during development but not considered acceptable for release-worthy code.<br />
<br />
If you can't use the system cache, you will need to retrieve the data directly from the heap table, using the buffer cache that is shared by all backends. The backend automatically takes care of loading the rows into the buffer cache. To do this, open the table with heap_open(). You can then start a table scan with heap_beginscan(), then use heap_getnext() and continue as long as HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be assigned to the scan. No indexes are used, so all rows are going to be compared to the keys, and only the valid rows returned.<br />
<br />
You can also use heap_fetch() to fetch rows by block number/offset. While scans automatically lock/unlock rows from the buffer cache, with heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it when completed.<br />
<br />
Once you have the row, you can get data that is common to all tuples, like t_self and t_oid, by merely accessing the HeapTuple structure entries. If you need a table-specific column, you should take the HeapTuple pointer, and use the GETSTRUCT() macro to access the table-specific start of the tuple. You then cast the pointer, for example as a Form_pg_proc pointer if you are accessing the pg_proc table, or Form_pg_type if you are accessing pg_type. You can then access fields of the tuple by using the structure pointer:<br />
<br />
((Form_pg_class) GETSTRUCT(tuple))->relnatts<br />
<br />
Note however that this only works for columns that are fixed-width and never null, and only when all earlier columns are likewise fixed-width and<br />
never null. Otherwise the column's location is variable and you must use heap_getattr() or related functions to extract it from the tuple.<br />
<br />
Also, avoid storing directly into struct fields as a means of changing live tuples. The best way is to use heap_modifytuple() and pass it your original tuple, plus the values you want changed. It returns a palloc'ed tuple, which you pass to heap_update(). You can delete tuples by passing the tuple's t_self to heap_delete(). You use t_self for heap_update() too. Remember, tuples can be either system cache copies, which might go away after you call ReleaseSysCache(), or read directly from disk buffers, which go away when you call heap_getnext(), heap_endscan(), or ReleaseBuffer() (in the heap_fetch() case). Or it may be a palloc'ed tuple that you must pfree() when finished.<br />
=== Why are table, column, type, function, view names sometimes referenced as Name or NameData, and sometimes as char *? ===<br />
<br />
Table, column, type, function, and view names are stored in system tables in columns of type Name. Name is a fixed-length, null-terminated type of NAMEDATALEN bytes. (The default value for NAMEDATALEN is 64 bytes.)<br />
<br />
typedef struct nameData<br />
{<br />
char data[NAMEDATALEN];<br />
} NameData;<br />
typedef NameData *Name;<br />
<br />
Table, column, type, function, and view names that come into the backend via user queries are stored as variable-length, null-terminated character strings.<br />
<br />
Many functions are called with both types of names, e.g. heap_open(). Because the Name type is null-terminated, it is safe to pass it to a function expecting a char *. Because there are many cases where on-disk names (Name) are compared to user-supplied names (char *), there are many cases where Name and char * are used interchangeably.<br />
<br />
=== Why do we use Node and List to make data structures? ===<br />
<br />
We do this because it allows data to be passed around inside the backend in a consistent yet flexible way. Every node has a NodeTag which specifies what type of data is inside the Node. Lists are groups of Nodes chained together as a forward-linked list. The ordering of the list elements might or might not be significant, depending on the usage of the particular list.<br />
<br />
Here are some of the List manipulation commands:<br />
<br />
;lfirst(i)<br />
;lfirst_int(i)<br />
;lfirst_oid(i)<br />
:return the data (a pointer, integer or OID respectively) of list cell i.<br />
<br />
;lnext(i)<br />
:return the next list cell after i.<br />
<br />
;foreach(i, list)<br />
:loop through list, assigning each list cell to i.<br />
<br />
It is important to note that i is a <code>ListCell *</code>, not the data in the List cell. You need to use one of the lfirst variants to get at the cell's data.<br />
<br />
Here is a typical code snippet that loops through a List containing <code>Var *</code> cells and processes each one:<br />
<br />
List *list;<br />
ListCell *i;<br />
...<br />
foreach(i, list)<br />
{<br />
Var *var = (Var *) lfirst(i);<br />
...<br />
/* process var here */<br />
}<br />
<br />
;lcons(node, list)<br />
:add node to the front of list, or create a new list with node if list is NIL.<br />
<br />
;lappend(list, node)<br />
:add node to the end of list.<br />
<br />
;list_concat(list1, list2)<br />
:Concatenate list2 on to the end of list1.<br />
<br />
;list_length(list)<br />
:return the length of the list.<br />
<br />
;list_nth(list, i)<br />
:return the i'th element in list, counting from zero.<br />
<br />
;lcons_int, ...<br />
:There are integer versions of these: lcons_int, lappend_int, etc. Also versions for OID lists: lcons_oid, lappend_oid, etc.<br />
<br />
You can print nodes easily inside gdb. First, to disable output truncation when you use the gdb print command:<br />
<br />
(gdb) set print elements 0<br />
<br />
Instead of printing values in gdb format, you can use the next two commands to print out List, Node, and structure contents in a verbose format that is easier to understand. Lists are unrolled into nodes, and nodes are printed in detail. The first prints in a short format, and the second in a long format:<br />
<br />
(gdb) call print(any_pointer)<br />
(gdb) call pprint(any_pointer)<br />
<br />
The output appears in the server log file, or on your screen if you are running a backend directly without a postmaster.<br />
<br />
=== I just added a field to a structure. What else should I do? ===<br />
<br />
The structures passed around in the parser, rewriter, optimizer, and executor require quite a bit of support. Most structures have support routines in src/backend/nodes used to create, copy, read, and output those structures -- in particular, most node types need support in the files copyfuncs.c and equalfuncs.c, and some need support in outfuncs.c and possibly readfuncs.c. Make sure you add support for your new field to these files. Find any other places the structure might need code for your new field -- searching for references to existing fields of the struct is a good way to do that. mkid is helpful with this (see [[#What_tools_are_available_for_developers.3F|available tools]]).<br />
<br />
=== Why do we use palloc() and pfree() to allocate memory? ===<br />
<br />
palloc() and pfree() are used in place of malloc() and free() because we find it easier to automatically free all memory allocated when a query completes. This assures us that all memory that was allocated gets freed even if we have lost track of where we allocated it. There are special non-query contexts that memory can be allocated in. These affect when the allocated memory is freed by the backend.<br />
<br />
You can dump information about these memory contexts, which can be useful when hunting leaks. See [[#Examining backend memory use]].<br />
<br />
=== What is ereport()? ===<br />
<br />
ereport() is used to send messages to the front-end, and optionally terminate the current query being processed. See [http://developer.postgresql.org/pgdocs/postgres/error-message-reporting.html here] for more details on how to use it.<br />
<br />
=== What is CommandCounterIncrement()? ===<br />
<br />
Normally, statements cannot see the rows they modify. This allows UPDATE foo SET x = x + 1 to work correctly.<br />
<br />
However, there are cases where a transaction needs to see rows affected in previous parts of the transaction. This is accomplished using a Command Counter. Incrementing the counter allows transactions to be broken into pieces so each piece can see rows modified by previous pieces. CommandCounterIncrement() increments the Command Counter, creating a new part of the transaction.<br />
<br />
=== I need to do some changes to query parsing. Can you succinctly explain the parser files? ===<br />
<br />
The parser files live in the 'src/backend/parser' directory.<br />
<br />
scan.l defines the lexer, i.e. the algorithm that splits a string (containing an SQL statement) into a stream of tokens. A token is usually a single word (i.e., doesn't contain spaces but is delimited by spaces), but can also be a whole single or double-quoted string for example. The lexer is basically defined in terms of regular expressions which describe the different token types. <br />
<br />
gram.y defines the grammar (the syntactical structure) of SQL statements, using the tokens generated by the lexer as basic building blocks. The grammar is defined in BNF notation. BNF resembles regular expressions but works on the level of tokens, not characters. Also, patterns (called rules or productions in BNF) are named, and may be recursive, i.e. use themselves as sub-patterns.<br />
<br />
The actual lexer is generated from scan.l by a tool called flex. You can find the manual at http://flex.sourceforge.net/manual/<br />
<br />
The actual parser is generated from gram.y by a tool called bison. You can find the manual at http://www.gnu.org/s/bison/.<br />
<br />
Beware, though, that you'll have a rather steep learning curve ahead of you if you've never used flex or bison before.<br />
<br />
=== I get a shift/reduce conflict I don't know how to deal with ===<br />
<br />
See [[Fixing_shift/reduce_conflicts_in_Bison]]<br />
<br />
=== How do I look at a query plan or parsed query? ===<br />
<br />
It's often desirable to examine the structure of a parsed query or a query plan. PostgreSQL stores these as hierarchical trees, which it can print out in a custom format.<br />
<br />
The <tt>pprint</tt> function dumps these trees to the backend's stderr, from where you can capture them in the logs. You usually invoke it by attaching a debugger like gdb or MSVC to the backend of interest before you run a query, setting a breakpoint at the position in the parser/rewriter/optimizer/executor where you want to see the query state, and then running the query. When the breakpoint triggers, just run:<br />
<br />
call pprint(theQueryVariable)<br />
<br />
where theQueryVariable is any <tt>Node*</tt> of a type that <tt>pprint</tt> understands. Usually you'll call it on a <tt>Query*</tt> but it's also common to dump various sub-parts of a query, like a target-list, etc.<br />
<br />
This feature can be very useful in conjunction with gdb or MSVC tracepoints.<br />
<br />
=== What debugging features are available? ===<br />
<br />
==== Compile-time ====<br />
<br />
First, if you are developing new C code you should ALWAYS work in a build configured with the <tt>--enable-cassert</tt> and <tt>--enable-debug</tt> options. Enabling asserts turns on many sanity checking options. Enabling debug symbols supports use of debuggers (such as gdb) to trace through misbehaving code. When compiling on <tt>gcc</tt>, the additional cflags <tt>-ggdb -Og -g3 -fno-omit-frame-pointer</tt> are also useful, as they insert a lot of debugging info detail. You can pass them to <tt>configure</tt> with something like:<br />
<br />
./configure --enable-cassert --enable-debug CFLAGS="-ggdb -Og -g3 -fno-omit-frame-pointer"<br />
<br />
Using <tt>-O0</tt> instead of <tt>-Og</tt> will disable most compiler optimisation, including inlining, but <tt>-Og</tt> performs almost as well as the usual optimiser flags like <tt>-O2</tt> or <tt>-Os</tt> while providing much more debug info. You'll see many fewer <tt><value optimised out></tt> variables and much less of the confusing, hard-to-follow re-ordering of execution, while performance remains quite usable. <tt>-ggdb -g3</tt> tells <tt>gcc</tt> to also include the maximum amount of debug information in the produced binaries, including things like macro definitions.<br />
<br />
<tt>-fno-omit-frame-pointer</tt> is useful when using tracing and profiling tools like <tt>perf</tt>, as frame pointers allow these tools to capture the call stack, not just the top function on the stack.<br />
<br />
==== Run-time ====<br />
<br />
The postgres server has a <tt>-d</tt> option that allows detailed information to be logged (elog or ereport DEBUGn printouts). The -d option takes a number that specifies the debug level. Be warned that high debug level values generate large log files. This option isn't available when starting the server via <tt>pg_ctl</tt>, but you can use <tt>-o log_min_messages=debug4</tt> or similar instead.<br />
<br />
When adding print statements for debugging keep in mind that <tt>logging_collector = on</tt> must be set in your postgresql.conf (the default is <tt>off</tt>) for stdout/stderr to be captured and logged to a file. Consider using either <tt>elog()</tt> or <tt>fprintf(stderr, "Log\n")</tt> instead of <tt>printf("Log\n")</tt> since usually stdout is fully buffered while stderr is only line-buffered. If you print to stdout you'll need to use <tt>fflush</tt> frequently to keep the output in sync with error/log messages (which go through stderr).<br />
<br />
==== gdb ====<br />
<br />
If the postmaster is running, start psql in one window, then find the PID of the postgres process used by psql using <tt>SELECT pg_backend_pid()</tt>. Use a debugger to attach to the postgres PID - <tt>gdb -p 1234</tt> or, within a running gdb, <tt>attach 1234</tt>. You might also find the [[gdblive script]] useful. You can set breakpoints in the debugger and then issue queries from the psql session.<br />
<br />
If you are looking to find the location that is generating an error or log message, set a breakpoint at <tt>errfinish</tt>. This will trap on all <tt>elog</tt> and <tt>ereport</tt> calls for enabled log levels, so it may be triggered a lot. If you're only interested in ERROR/FATAL/PANIC, use a [http://blog.vinceliu.com/2009/07/gdbs-conditional-breakpoints.html gdb conditional breakpoint] for <tt>errordata[errordata_stack_depth].elevel >= 20</tt>, or set a source-line breakpoint within the cases for PANIC, FATAL, and ERROR in <tt>errfinish</tt>. Note that not all errors go through <tt>errfinish</tt>; in particular, permissions checks are thrown separately. If your breakpoint doesn't trigger, <tt>git grep</tt> for the error text and see where it's thrown from.<br />
<br />
If you are debugging something that happens during session startup, you can set <tt>PGOPTIONS="-W n"</tt>, then start psql. This will cause startup to delay for n seconds so you can attach to the process with the debugger, set appropriate breakpoints, then continue through the startup sequence.<br />
<br />
Alternatively, you can sometimes identify the target process for debugging by looking at <tt>pg_stat_activity</tt>, the logs, <tt>pg_locks</tt>, <tt>pg_stat_replication</tt>, etc.<br />
<br />
===== Tools =====<br />
<br />
There are some sets of gdb macros and Python scripts that help with PostgreSQL debugging, such as:<br />
<br />
* [https://github.com/tvondra/gdbpg gdbpg] ([http://blog.pgaddict.com/posts/making-debugging-with-gdb-a-bit-easier blog])<br />
<br />
You can also [[Developer_FAQ#Why_do_we_use_Node_and_List_to_make_data_structures.3F|call PostgreSQL functions like <tt>pprint</tt>]] from within <tt>gdb</tt> to inspect data structures.<br />
<br />
All these tools and techniques work within [https://sourceware.org/gdb/wiki/GDB%20Front%20Ends <tt>gdb</tt> wrappers] like [https://wiki.eclipse.org/CDT/StandaloneDebugger the Eclipse CDT standalone graphical debugger].<br />
<br />
===== core dumps =====<br />
<br />
If it's too hard to predict which process will be the problem but you can reliably get it to crash (perhaps by adding an appropriate <tt>Assert(...)</tt> and compiling with <tt>--enable-cassert</tt>), you can debug a core dump instead. On Linux you'll want to make sure <tt>/proc/sys/kernel/core_pattern</tt> has a sensible value like <tt>core.%e.%p.SIG%s.%t</tt> and, in the shell you launch PostgreSQL from, run:<br />
<br />
<pre><br />
ulimit -c unlimited<br />
</pre><br />
<br />
Unless you're working with a large <tt>shared_buffers</tt> you probably also want to set core dumps (and <tt>gdb</tt>'s <tt>gcore</tt>) to include shared memory, using:<br />
<br />
<pre><br />
echo 63 > /proc/self/coredump_filter<br />
</pre><br />
<br />
Core dumps will be output in the PostgreSQL data directory unless your kernel's <tt>core_pattern</tt> says otherwise.<br />
<br />
==== rr record and replay debugger ====<br />
<br />
PostgreSQL 13 can be debugged using [https://rr-project.org the rr debugging recorder]. You can think of rr as a powerful framework for using GDB with replayable "recordings" of a program's execution. See the [[Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Recording_Postgres_using_rr_Record_and_Replay_Framework|guide to using rr to debug Postgres]] for further details.<br />
<br />
==== Standalone backend ====<br />
<br />
If the postmaster is not running, you can actually run the postgres backend from the command line, and type your SQL statement directly. This is almost always a bad way to do things, however, since the usage environment isn't nearly as friendly as psql (no command history for instance) and there's no chance to study concurrent behavior. You might have to use this method if you broke initdb, but otherwise it has nothing to recommend it.<br />
<br />
=== I broke <tt>initdb</tt>, how do I debug it? ===<br />
<br />
Sometimes a patch will cause <tt>initdb</tt> failures. These are rarely in <tt>initdb</tt> itself; more often the failure occurs in a <tt>postgres</tt> backend launched by <tt>initdb</tt> to do some setup work.<br />
<br />
If one of these is crashing or triggering an assertion, attaching <tt>gdb</tt> to <tt>initdb</tt> isn't going to do much by itself: <tt>initdb</tt> itself isn't crashing, so <tt>gdb</tt> won't break.<br />
<br />
What you need to do is run <tt>initdb</tt> under <tt>gdb</tt>, set a breakpoint on <tt>fork</tt>, then continue execution. When you trigger the breakpoint, <tt><b>f</b>inish</tt> the function. <tt>gdb</tt> will report that a child process was created, but this is <i>not</i> what you want, it's the shell that launched the real <tt>postgres</tt> instance.<br />
<br />
While <tt>initdb</tt> is paused, use <tt>ps</tt> to find the <tt>postgres</tt> instance it started. <tt>pstree -p</tt> can be useful for this. When you've found it, attach a separate <tt>gdb</tt> session to it with <tt>gdb -p $the_postgres_pid</tt>. At this point you can safely detach <tt>gdb</tt> from <tt>initdb</tt> and debug the <tt>postgres</tt> instance that's failing.<br />
<br />
See also [[Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Tracing_problems_when_creating_a_cluster|Tracing_problems_when_creating_a_cluster]]<br />
<br />
=== Profiling to analyse performance, CPU use ===<br />
<br />
There are many options for profiling PostgreSQL, but one of the most popular now is <tt>perf</tt>, the Linux kernel profiling tool. See [[Profiling with perf]].<br />
<br />
<tt>perf</tt> is extremely powerful and not limited to CPU profiling; it's a useful tracing tool too.<br />
<br />
You can also compile PostgreSQL with profiling enabled to see what functions are taking execution time. Configuring with <tt>--enable-profiling</tt> is the recommended way to set this up. Profile files from server processes will be deposited in the <tt>pgsql/data</tt> directory. Profile files from clients such as <tt>psql</tt> will be put in the client's current directory.<br />
<br />
You usually shouldn't use <tt>--enable-cassert</tt> or any user-defined <tt>-O</tt> flags like <tt>-Og</tt> / <tt>-O0</tt> when studying performance issues. The checks cassert enables are not always cheap, so they'll distort your profile data. Compiler optimisations are important to make sure you're profiling the same thing you'll actually be running.<br />
<br />
<tt>--enable-debug</tt> is fine when profiling with <tt>gcc</tt>; for other compilers, it should be avoided.<br />
<br />
<tt>perf</tt> is a less intrusive alternative to <tt>--enable-profiling</tt> on modern Linux systems.<br />
<br />
=== Examining backend memory use ===<br />
<br />
PostgreSQL's <tt>palloc</tt> is a hierarchical memory allocator that wraps the platform allocator. See [[#Why do we use palloc() and pfree() to allocate memory?]].<br />
<br />
Memory allocated with <tt>palloc</tt> is assigned to a ''memory context'' that's part of a hierarchy rooted at <tt>TopMemoryContext</tt>. Each context has a name.<br />
<br />
You can dump stats about a memory context and its children using the <tt> MemoryContextStats(MemoryContext*)</tt> function. In the most common usage, that's:<br />
<br />
gdb -p $the_backend_pid<br />
(gdb) p MemoryContextStats(TopMemoryContext)<br />
<br />
The output is written to stderr.<br />
<br />
This may appear in the main server log file, in a secondary log used by the init system before PostgreSQL's logging collector starts, in journald, or on your screen if you are running a backend directly without a postmaster.<br />
<br />
=== gdb/MSVC tracepoints ===<br />
<br />
Sometimes you want to trace execution and capture information without having to constantly switch to gdb every time you hit a breakpoint.<br />
<br />
Both MSVC and <tt>gdb</tt> offer tracepoints for this. They're much more powerful than those offered by tools like <tt>perf</tt> - with the tradeoff that they're much more intrusive and require a debugger. For gdb, see [https://sourceware.org/gdb/onlinedocs/gdb/Tracepoint-Actions.html gdb tracepoints]. You can use debugger tracepoints to do things like fire a memory context dump every time a tracepoint is hit, or print a query parse tree, etc.<br />
<br />
A viable alternative for some simpler cases is now to use <tt>perf</tt> to capture function calls, local variables, etc. See [[Profiling with perf]].<br />
<br />
=== Why are my variables full of 0x7f bytes? ===<br />
<br />
In a debugger or a crash dump you may see memory full of 0x7f bytes - 0x7f7f words, 0x7f7f7f7f7f longs, etc.<br />
<br />
This is because builds with <tt>CLOBBER_FREED_MEMORY</tt> defined will overwrite memory when it, or its containing memory context, is freed. This isn't necessarily associated with an explicit <tt>pfree</tt> - it can happen as a result of a <tt>MemoryContextReset</tt> or similar, possibly on memory you implicitly allocated to the current memory context by calling <tt>palloc</tt>, or allocated indirectly via a call to another function.<br />
<br />
<tt>CLOBBER_FREED_MEMORY</tt> is enabled by passing <tt>--enable-cassert</tt>.<br />
<br />
See <tt>src/backend/utils/mmgr/aset.c</tt> for details.<br />
<br />
[[Category:FAQ]]<br />
<br />
<br />
=== How do I stop gdb getting interrupted by SIGUSR1 all the time? ===<br />
<br />
PostgreSQL uses SIGUSR1 for latch setting on backends, for SetLatch / WaitLatch / WaitLatchOrSocket etc.<br />
<br />
gdb breaks on SIGUSR1 by default, making debugging hard.<br />
<br />
Just run<br />
<br />
handle SIGUSR1 nostop noprint pass<br />
<br />
inside gdb to make it silently pass SIGUSR1 to the program without pausing. Or start gdb like:<br />
<br />
gdb -ex 'handle SIGUSR1 nostop noprint pass'<br />
<br />
=== How do I attach gdb and set a breakpoint in a background worker / helper proc? ===<br />
<br />
If you're trying to debug autovacuum, some arbitrary background worker, etc, it can be hard to get gdb attached when you want. Especially if the proc is short-lived.<br />
<br />
A handy trick here is to inject an infinite loop that prints the pid until you attach gdb and change the loop variable to allow execution to continue (marking the flag <tt>volatile</tt> ensures an optimising compiler re-reads it on each iteration). For example, add this just before the call to the function you want to debug:<br />
<br />
/* You may need to #include "miscadmin.h" and <unistd.h> */<br />
<br />
volatile bool continue_sleep = true;<br />
do {<br />
sleep(1);<br />
elog(LOG, "zzzzz %d", MyProcPid);<br />
} while (continue_sleep);<br />
<br />
func_to_debug();<br />
<br />
You can grep the logs for "zzzzz" until it appears, then attach to the pid of interest, set a breakpoint, and continue execution.<br />
<br />
$ gdb -p $the-pid<br />
(gdb) break func_to_debug<br />
(gdb) p continue_sleep=0<br />
(gdb) cont<br />
<br />
Note that it's bad practice to use <tt>sleep</tt> in PostgreSQL backends; use <tt>WaitLatch</tt> with a timeout instead. This is OK for debugging though.<br />
<br />
Another option can be to have PostgreSQL delay all processes on start with the <tt>postgres -W <seconds></tt> option, but this works poorly when you're debugging an issue in complex groups of bgworkers or something that only happens after extended runtime.</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Speaker_Bureau&diff=35463Speaker Bureau2020-10-14T13:07:47Z<p>Adunstan: </p>
<hr />
<div>In order to help with the challenge of finding speakers for meetups, please add your name to this page if you are willing to speak at a meetup. Currently (2020) the meetups are virtual. At a minimum, add your name, topic(s) and timezone. Feel free to add anything else you feel is relevant.<br />
<br />
I'd invite anyone who wants to mentor new speakers to add their name as mentor as well.<br />
<br />
* Dave Cramer: Java and Postgresql, Logical Decoding, mentor<br />
* Jonathan Katz: SCRAM, PostgreSQL + Kubernetes, PostgreSQL 13, Range Types + Applications, Building an App with a bunch of Postgres features (Logical decoding, CTEs, functions, range types, etc.), Data Types<br />
* Stephen Frost: Security, PostgreSQL, other stuff<br />
* Keith Fiske: Partitioning, Extensions, Administration, PG History & Features, Monitoring<br />
* David Christensen: Replication, Bucardo, CTEs.<br />
* David Fetter: PostgreSQL as a control plane, Fun with Foreign Data Wrappers, Hacking for Beginners<br />
* Jennifer Scheuerell: Migrations, PostgreSQL and Django, mentor (Pacific time zone)<br />
* Harry Arroyo: PostgreSQL with Laravel, Django, Java, IT Security Expert, TI Mentor, Sysadmin, FullStack Developer, Hacker, App Developer (Android and iOS), DBA and other Stuff.<br />
* Jimmy Angelakos: PostgreSQL, performance, Full-Text Search, ETL with Python, Django (UK time zone)<br />
* Tomas Vondra: PostgreSQL, performance, various extensions, hacking, community stuff<br />
* Martín Marqués: PostgreSQL, replication, backups, autovacuum<br />
* Andrew Dunstan: (EST) PostgreSQL, vacuuming and freezing, Data Types, Foreign Data Wrappers, SSL, Pgbouncer</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=34110PostgreSQL Buildfarm Howto2019-09-23T17:19:51Z<p>Adunstan: use official URL</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download from [http://buildfarm.postgresql.org/downloads the buildfarm server]<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of config file, default build-farm.conf<br />
* --nosend says don't send the results to the server<br />
* --nostatus says don't update the status files<br />
* --force says run the build even if it's not needed<br />
* --verbose[=n] says display information. Verbosity level 1 (the default if --verbose is specified) shows a line for each step as it starts. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test is short for --nosend --nostatus --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef analysis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is simply to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
<br />
If you are using an https URL for the buildfarm server (which you should be!), make<br />
sure that LWP::Protocol::https and Mozilla::CA are installed as well; the above test<br />
does not catch these requirements.<br />
<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the commandline,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
The branches that are built are controlled by the <code>branches_to_build</code> setting in the <code>global</code> section of the config file. The sample config file's setting is 'ALL'.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n, or a fixed list of branches. In the last case you will probably need to adjust the list whenever the PostgreSQL developers start a new branch or declare an old branch to be at End Of Life.<br />
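For illustration, the alternatives might be written like so (a sketch only - the exact spelling of a fixed branch list should be checked against the sample config file):<br />

```perl
# in the global section of build-farm.conf -- choose one:
branches_to_build => 'ALL',               # every live branch (the sample default)
# branches_to_build => 'HEAD_PLUS_LATEST',  # HEAD plus the newest stable branch
# branches_to_build => 'HEAD_PLUS_LATEST2', # HEAD plus the two newest stable branches
# branches_to_build => ['HEAD', 'REL9_4_STABLE'],  # fixed list, maintained by hand
```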
<br />
=== Register your new buildfarm member and subscribe to the mailing list. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at https://buildfarm.postgresql.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the buildfarm-members mailing list at<br />
https://lists.postgresql.org<br />
This is a low-traffic list for owners of buildfarm members, and every buildfarm owner should be subscribed.<br />
<br />
=== Status Mailing Lists ===<br />
<br />
There are also two mailing lists that report status from all builds, not just your own animals. This is useful for developers who want to be notified of events rather than having to monitor the server's dashboard.<br />
<br />
* <b><code>buildfarm-status-failures</code></b>, which gets an email any time a buildfarm animal reports a failed run.<br />
* <b><code>buildfarm-status-green-chgs</code></b>, which gets an email any time the status of a buildfarm animal changes to or from green (i.e. success). This is the status list most people find useful.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm client (but not Postgres itself)<br />
on the buildfarm members mailing list.<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
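Those two settings would land in the config file roughly as follows (a sketch of the build_env stanza; align it with your sample config file):<br />

```perl
build_env => {
    CYGWIN          => 'server',   # make the server processes talk to cygserver
    MAX_CONNECTIONS => 3,          # limit concurrent regression test connections
    # plus PATH and any other environment settings you already use
},
```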
<br />
==== MinGW/Msys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl installed, and a native Windows perl - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using the native Perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as. Set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever in the config will be different from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
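A minimal sketch of cloning a working member's config for a second member (the file names, animal names, and the one-line stand-in config are hypothetical, purely to keep the example self-contained):<br />

```shell
# Stand-in for an existing, working config file.
echo "animal => 'animal_a'," > build-farm.conf

# Copy it for the second member and give the copy its own identity;
# in a real config you would also change 'secret' and any build
# options that should differ between the two members.
cp build-farm.conf build-farm-b.conf
sed -i "s/'animal_a'/'animal_b'/" build-farm-b.conf
cat build-farm-b.conf   # prints: animal => 'animal_b',
```

The second member then gets its own cron entry using the --config=build-farm-b.conf option.<br />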
<br />
=== Running in Parallel ===<br />
<br />
If you run a single animal, you can run all the branches in parallel just by changing <code>run_branches.pl</code>'s <code>--run-all</code> to <code>--run-parallel</code>. This will launch each branch's run, spaced out by 60 seconds from launch to launch. <br />
<br />
The long story: parallelism is controlled by a number of configuration parameters in the <code>global</code> section of the config file. The first is <code>parallel_lockdir</code>. By default this is the <code>global_lock_dir</code> which in turn defaults to the <code>build_root</code>. This directory is where <code>run_branches.pl</code> puts a lock file for each running branch. The second is <code>max_parallel</code>. The script will launch a new branch as long as the number of live locks is less than this number. The default is 10. Lastly the setting <code>parallel_stagger</code> determines how long the script will wait before starting a new branch, unless one finishes in the meantime. The default is 60 seconds.<br />
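Collected in one place, the three knobs and their defaults would look roughly like this (the lock directory path shown is a hypothetical example):<br />

```perl
# global section -- all three settings are optional
parallel_lockdir => '/home/bf/buildroot', # default: global_lock_dir, itself defaulting to build_root
max_parallel     => 10,                   # start new branches while live locks < this
parallel_stagger => 60,                   # seconds between branch launches
```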
<br />
If you want to run multiple animals and use parallelism between them the best way is to use a separate <code>build_root</code> for each animal. Then don't set the <code>global_lock_dir</code> for each animal, but do set the <code>parallel_lockdir</code> for each animal to point to the same directory, probably the <code>build_root</code> of one of the animals. Then you could have a crontab something like this:<br />
<br />
2-59/15 * * * * cd curly && run_branches.pl --run-parallel --config=curly.conf<br />
7-59/15 * * * * cd larry && run_branches.pl --run-parallel --config=larry.conf<br />
12-59/15 * * * * cd moe && run_branches.pl --run-parallel --config=moe.conf<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PgCon_2019_Developer_Meeting&diff=33289PgCon 2019 Developer Meeting2019-04-10T15:16:41Z<p>Adunstan: /* RSVPs */</p>
<hr />
<div>A meeting of interested PostgreSQL developers is being planned for Tuesday 28 May, 2019 at the University of Ottawa, prior to pgCon 2019. In order to keep the numbers manageable, this meeting is by '''invitation only'''.<br />
<br />
The invitation list for the meeting has changed this year to include representatives from various project sub-teams, for example, packagers, the release team, Code of Conduct committee and more.<br />
<br />
As at last year's event, an Unconference will be held on Wednesday for in-depth discussion of technical topics.<br />
<br />
This is a PostgreSQL Community event.<br />
<br />
== Meeting Goals ==<br />
<br />
* Define the schedule for the 13.0 release cycle<br />
* Address any proposed timing, policy, or procedure issues<br />
* Receive updates from project sub-teams on their activities and discuss any resulting issues or concerns.<br />
* Address any proposed [http://en.wikipedia.org/wiki/Wicked_problem Wicked problems]<br />
<br />
== Time & Location ==<br />
<br />
The meeting will be:<br />
<br />
* 9:00AM to 12PM<br />
* DMS TBC<br />
* University of Ottawa.<br />
<br />
Coffee, tea and snacks will be served starting at 8:45am. Lunch will be after the meeting.<br />
<br />
== RSVPs ==<br />
<br />
The following people have RSVPed to the meeting (in alphabetical order, by surname). Note that we can accommodate a '''maximum of 30'''!<br />
<br />
# Joe Conway<br />
# Andrew Dunstan<br />
# Peter Eisentraut<br />
# Andres Freund<br />
# Peter Geoghegan<br />
# Devrim Gündüz<br />
# Magnus Hagander<br />
# Álvaro Herrera<br />
# Amit Kapila<br />
# Jonathan Katz<br />
# Alexander Korotkov<br />
# Tom Lane<br />
# Heikki Linnakangas<br />
# Bruce Momjian<br />
# Dave Page<br />
# Tomas Vondra<br />
# Robert Haas<br />
<br />
<br />
The following people will not be in Ottawa, and do not plan to attend:<br />
<br />
* Christoph Berg<br />
* Andreas Scherbaum<br />
<br />
== Agenda Items ==<br />
<br />
* 13.0 release and commitfest schedule (Dave)<br />
* Contributor Recognition (Andres, happy to share / pass, but should be discussed)<br />
** [https://www.postgresql.org/community/contributors/ contributors] page update - how well is it working?<br />
** should the developer meeting serve as recognition? <br />
* ''Please add suggestions for agenda items here. (with your name)''<br />
<br />
==Agenda==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Time<br />
!Item<br />
!Presenter<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|09:00 - 09:30<br />
|Welcome and introductions<br />
|Dave Page<br />
<br />
|- <br />
|09:30 - 09:45<br />
|13.0 release and commitfest schedule<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|10:30 - 11:00<br />
|Coffee break<br />
|All<br />
<br />
|- <br />
|11:50 - 12:00<br />
|Any other business<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|12:00<br />
|Lunch<br />
|<br />
<br />
|}<br />
<br />
Note: This timetable is a rough guide only. Items will start as soon as the previous discussion is complete (breaks will not move however). Any remaining time before lunch may be used for Commitfest item triage or other activities.</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=32719PostgreSQL Buildfarm Howto2018-11-13T13:27:33Z<p>Adunstan: /* Using run_branches.pl */</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download from [http://buildfarm.postgresql.org/downloads the buildfarm server]<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of config file, default build-farm.conf<br />
* --nosend says don't send the results to the server<br />
* --nostatus says don;t update the status files<br />
* --force says run the build even of it's not needed<br />
* --verbose[=n] says display information. verbosity level 1 (default if --verbose is specified) shows a line for each step as it start. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test is short for --nosend --no-status --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef anaylsis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is simply to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the commandline,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
The branches that are built are controlled by the <code>branches_to_build</code> setting in the <code>global</code> section of the config file. The sample config file's setting is 'ALL'.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n, or a fixed list of branches. In the last case you will probably need to adjust the list whenever the PostgreSQL developers start a new branch or declare an old branch to be at End Of Life.<br />
<br />
=== Register your new buildfarm member and subscribe to the mailing list. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at http://www.pgbuildfarm.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the buildfarm-members mailing list at<br />
https://lists.postgresql.org<br />
This is a low-traffic list for owners of buildfarm members, and every buildfarm owner should be subscribed.<br />
<br />
=== Status Mailing Lists ===<br />
<br />
There are also two mailing lists that report status from all builds, not just your own animals. This is useful for developers who want to be notified of events rather than having to monitor the server's dashboard.<br />
<br />
* <b><code>buildfarm-status-failures</code></b>, which gets an email any time a buildfarm animal reports a failed run.<br />
* <b><code>buildfarm-status-green-chgs</code></b>, which gets an email any time the status of a buildfarm animal changes to or from green (i.e. success). This is the status list most people find useful.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm client (but not Postgres itself)<br />
on the buildfarm members mailing list.<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
<br />
==== MinGW/Msys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl installed, and a native Windows perl - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSYS DTK perl, and the web transaction script using native Perl. that mean you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the Native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this jobs as. set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes. Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever in the config will be different from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
<br />
=== Running in Parallel ===<br />
<br />
If you run a single animal, you can run all the branches in parallel just by changing <code>run_branches.pl</code>'s <code>--run-all</code> to <code>--run-parallel</code>. This will launch each branch's run, spaced out by 60 seconds from launch to launch. <br />
<br />
The long story: parallelism is controlled by a number of configuration parameters in the <code>global</code> section of the config file. The first is <code>parallel_lockdir</code>. By default this is the <code>global_lock_dir</code> which in turn defaults to the <code>build_root</code>. This directory is where <code>run_branches.pl</code> puts a lock file for each running branch. The second is <code>max_parallel</code>. The script will launch a new branch as long as the number of live locks is less than this number. The default is 10. Lastly the setting <code>parallel_stagger</code> determines how long the script will wait before starting a new branch, unless one finishes in the meantime. The default is 60 seconds.<br />
<br />
If you want to run multiple animals and use parallelism between them the best way is to use a separate <code>build_root</code> for each animal. Then don't set the <code>global_lock_dir</code> for each animal, but do set the <code>parallel_lockdir</code> for each animal to point to the same directory, probably the <code>build_root</code> of one of the animals. Then you could have a crontab something like this:<br />
<br />
2-59/15 * * * * cd curly && run_branches.pl --run-parallel --config=curly.conf<br />
7-59/15 * * * * cd larry && run_branches.pl --run-parallel --config=larry.conf<br />
12-59/15 * * * * cd moe && run_branches.pl --run-parallel --config=moe.conf<br />
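The effect of sharing <code>parallel_lockdir</code> is that a single lock count governs all three animals. A minimal sketch of the counting rule (the lock file names and this shell rendering are illustrative; <code>run_branches.pl</code> implements the real check in Perl):<br />

```shell
# Two branches are already running somewhere, so two lock files exist.
lockdir=$(mktemp -d)
touch "$lockdir/HEAD.lck" "$lockdir/REL_16_STABLE.lck"

max_parallel=2
live=$(ls "$lockdir" | wc -l)   # count live locks across all animals

# A new branch is launched only while live locks < max_parallel.
if [ "$live" -lt "$max_parallel" ]; then
    echo "launch next branch"
else
    echo "wait"                 # prints "wait" here: 2 locks, limit 2
fi
```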
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following forces a build on all the stable branches of my animal crake:<br />
 cd root   # my buildroot<br />
 for f in REL* ; do<br />
     touch $f/crake.force-one-run<br />
 done<br />
When the run is done, this file is removed automatically. <br />
<br />
[[Category:Howto]]</div>Adunstan
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download from [http://buildfarm.postgresql.org/downloads the buildfarm server]<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of config file, default build-farm.conf<br />
* --nosend says don't send the results to the server<br />
* --nostatus says don;t update the status files<br />
* --force says run the build even of it's not needed<br />
* --verbose[=n] says display information. verbosity level 1 (default if --verbose is specified) shows a line for each step as it start. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test is short for --nosend --no-status --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef anaylsis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is simply to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the commandline,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, uncomment this line in the config file:<br />
# $conf{branches_to_build} = 'ALL'; # or [qw( HEAD RELx_y_STABLE etc )]<br />
and instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the<br />
options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n,<br />
in the $conf{branches_to_build} setting.<br />
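For reference, the possible settings look like this in the config file (pick exactly one assignment; the branch names shown are only examples):<br />

```perl
# branches_to_build accepts a keyword or an explicit list of branch names
$conf{branches_to_build} = 'ALL';                # every branch the project maintains
$conf{branches_to_build} = 'HEAD_PLUS_LATEST';   # HEAD plus the newest stable branch
$conf{branches_to_build} = 'HEAD_PLUS_LATEST2';  # HEAD plus the two newest stable branches
$conf{branches_to_build} = [qw( HEAD REL9_4_STABLE )];  # an explicit list
```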
<br />
=== Register your new buildfarm member and subscribe to the mailing list. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at http://www.pgbuildfarm.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the buildfarm-members mailing list at<br />
https://lists.postgresql.org<br />
This is a low-traffic list for owners of buildfarm members, and every buildfarm owner should be subscribed.<br />
<br />
=== Status Mailing Lists ===<br />
<br />
There are also two mailing lists that report status from all builds, not just your own animals. This is useful for developers who want to be notified of events rather than having to monitor the server's dashboard.<br />
<br />
* <b><code>buildfarm-status-failures</code></b>, which gets an email any time a buildfarm animal reports a failed run.<br />
* <b><code>buildfarm-status-green-chgs</code></b>, which gets an email any time the status of a buildfarm animal changes to or from green (i.e. success). This is the status list most people find useful.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm client (but not Postgres itself)<br />
on the buildfarm members mailing list.<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
<br />
==== MinGW/MSys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl and a native Windows perl installed - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using native perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as. Set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever in the config will be different from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
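Concretely, the second member's copy of the config usually differs in only a few lines; a sketch (the animal name, password, and compiler here are placeholders, not values from the project):<br />

```perl
# build-farm-second.conf, copied from the working member's config and then edited
animal => 'secondanimal',                # the second member's registered name
secret => 'its-own-password',            # and its own password
config_env => { CC => 'ccache clang' },  # e.g. a different compiler from the first member
# build_root and the git mirror can stay shared between members; ccache
# directories must not be, which recent client releases handle by default
# by putting the animal name in the ccache directory name
```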
<br />
=== Running in Parallel ===<br />
<br />
If you run a single animal, you can run all the branches in parallel just by changing <code>run_branches.pl</code>'s <code>--run-all</code> to <code>--run-parallel</code>. This will launch each branch's run, spaced out by 60 seconds from launch to launch. <br />
<br />
The long story: parallelism is controlled by a number of configuration parameters in the <code>global</code> section of the config file. The first is <code>parallel_lockdir</code>. By default this is the <code>global_lock_dir</code>, which in turn defaults to the <code>build_root</code>. This directory is where <code>run_branches.pl</code> puts a lock file for each running branch. The second is <code>max_parallel</code>. The script will launch a new branch as long as the number of live locks is less than this number. The default is 10. Lastly, the setting <code>parallel_stagger</code> determines how long the script will wait before starting a new branch, unless one finishes in the meantime. The default is 60 seconds.<br />
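Put together, the three settings just described look something like this in the config file (the lock directory path is a placeholder; the numeric values are the defaults described above):<br />

```perl
parallel_lockdir => '/path/to/lockdir', # where run_branches.pl puts per-branch lock files;
                                        # defaults to global_lock_dir, then to build_root
max_parallel     => 10,                 # launch new branches while live locks < this number
parallel_stagger => 60,                 # seconds to wait between branch launches
```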
<br />
If you want to run multiple animals and use parallelism between them the best way is to use a separate <code>build_root</code> for each animal. Then don't set the <code>global_lock_dir</code> for each animal, but do set the <code>parallel_lockdir</code> for each animal to point to the same directory, probably the <code>build_root</code> of one of the animals. Then you could have a crontab something like this:<br />
<br />
2-59/15 * * * * cd curly && run_branches.pl --run-parallel --config=curly.conf<br />
7-59/15 * * * * cd larry && run_branches.pl --run-parallel --config=larry.conf<br />
12-59/15 * * * * cd moe && run_branches.pl --run-parallel --config=moe.conf<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=32690PostgreSQL Buildfarm Howto2018-10-26T13:29:16Z<p>Adunstan: add info re status lists, update members list info</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download the software from [http://buildfarm.postgresql.org/downloads the buildfarm server].<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of config file, default build-farm.conf<br />
* --nosend says don't send the results to the server<br />
* --nostatus says don't update the status files<br />
* --force says run the build even if it's not needed<br />
* --verbose[=n] says display information. Verbosity level 1 (the default if --verbose is specified) shows a line for each step as it starts. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test is short for --nosend --nostatus --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef analysis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
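For illustration, those variables might look something like the following (the values are examples, not recommendations; check the sample config shipped with the client for the full set):<br />

```perl
make        => 'gmake',                 # plain 'make' where GNU make is the default
config_opts => [qw( --enable-cassert --enable-debug --with-perl )],
config_env  => { CC => 'ccache gcc' },  # omit if you don't use ccache
```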
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
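The mail_events section of the config file mirrors those four sorts of notification; a sketch, assuming the key names used in the sample config shipped with the client (each value is a list of recipient addresses; the address shown is a placeholder):<br />

```perl
mail_events => {
    all    => [],                  # every build
    fail   => [],                  # every failed build
    change => ['me@example.com'],  # every status change
    green  => [],                  # changes to or from OK (green)
},
```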
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the command line,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, uncomment this line in the config file:<br />
# $conf{branches_to_build} = 'ALL'; # or [qw( HEAD RELx_y_STABLE etc )]<br />
and instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the<br />
options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n,<br />
in the $conf{branches_to_build} setting.<br />
<br />
=== Register your new buildfarm member and subscribe to the mailing list. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at http://www.pgbuildfarm.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the buildfarm-members mailing list at<br />
https://lists.postgresql.org<br />
This is a low-traffic list for owners of buildfarm members, and every buildfarm owner should be subscribed.<br />
<br />
=== Status Mailing Lists ===<br />
<br />
There are also two mailing lists that report status from all builds, not just your own animals. This is useful for developers who want to be notified of events rather than having to monitor the server's dashboard.<br />
<br />
* <b><code>buildfarm-status-failures</code></b>, which gets an email any time a buildfarm animal reports a failed run.<br />
* <b><code>buildfarm-status-green-chgs</code></b>, which gets an email any time the status of a buildfarm animal changes to or from green (i.e. success). This is the status list most people find useful.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm client (but not Postgres itself)<br />
on the buildfarm members mailing list.<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
<br />
==== MinGW/MSys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl and a native Windows perl installed - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using native perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as. Set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever in the config will be different from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=32687PostgreSQL Buildfarm Howto2018-10-25T15:12:33Z<p>Adunstan: </p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download the software from [http://buildfarm.postgresql.org/downloads the buildfarm server].<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of config file, default build-farm.conf<br />
* --nosend says don't send the results to the server<br />
* --nostatus says don't update the status files<br />
* --force says run the build even if it's not needed<br />
* --verbose[=n] says display information. Verbosity level 1 (the default if --verbose is specified) shows a line for each step as it starts. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test is short for --nosend --nostatus --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef analysis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
<br />
You can verify that you haven't broken anything by running "perl -cw build-farm.conf". This checks that the configuration is still syntactically valid Perl.<br />
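As a rough illustration, the entries discussed above sit inside the %conf hash and might look something like the following sketch (every value here is a placeholder - check the comments in the sample config file for the exact keys and formats your version expects):<br />

```perl
# Hypothetical excerpt from build-farm.conf -- adjust every value.
scmrepo     => '/path/to/pgsql-base.git',
animal      => 'CHANGEME',          # assigned by the buildfarm coordinators
secret      => 'CHANGEME',          # likewise, after registration
make        => 'gmake',
config_opts => [qw(--enable-cassert --enable-debug)],
```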
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts are sent when we haven't heard from your buildfarm member for a while, suggesting that something might be wrong. Status notifications are sent when we have heard from your buildfarm member, to tell you what happened. Both arrive via email. Alerts go to the owner's registered email address; by default none are sent, but you can configure when and how often they are sent in the alerts section of the config file. Status notifications go to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
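A mail_events stanza covering those four cases might look something like this sketch (the key names and the address are illustrative - check the sample config file shipped with your version of the scripts for the exact spelling it uses):<br />

```perl
# Hypothetical mail_events stanza; each list holds recipient addresses.
mail_events => {
    all    => [],                      # every build
    fail   => ['me@example.com'],      # every build that fails
    change => [],                      # every status change
    green  => [],                      # changes to or from OK (green)
},
```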
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
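One hedged way to do that edit in bulk, assuming a GNU sed and a perl living at /usr/local/bin/perl (both of which are assumptions - adjust for your system):<br />

```shell
# Sketch: rewrite the shebang of each buildfarm script in place.
# NEW_PERL is an example path, not a recommendation.
NEW_PERL=/usr/local/bin/perl
for f in run_build.pl run_web_txn.pl run_branches.pl; do
    if [ -f "$f" ]; then
        sed -i "1s|^#!.*|#!$NEW_PERL|" "$f"
    fi
done
```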
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time adding a setting to "build_env", something <br />
like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
When diagnosing problems, useful summary information can be found in the file '''web-txn.data''', in a build-specific directory of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the command line,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, uncomment this line in the config file:<br />
# $conf{branches_to_build} = 'ALL'; # or [qw( HEAD RELx_y_STABLE etc )]<br />
and instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the<br />
options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n,<br />
in the $conf{branches_to_build} setting.<br />
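For illustration, the alternatives might be written like this (only one line should be active; the explicit-list form follows the comment in the sample config file):<br />

```perl
# Pick exactly one of these for $conf{branches_to_build}:
$conf{branches_to_build} = 'ALL';                  # every maintained branch
# $conf{branches_to_build} = 'HEAD_PLUS_LATEST';   # HEAD plus the newest stable branch
# $conf{branches_to_build} = 'HEAD_PLUS_LATEST2';  # HEAD plus the two newest
# $conf{branches_to_build} = [qw( HEAD REL9_4_STABLE )];  # an explicit list
```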
<br />
=== Register your new buildfarm member. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at http://www.pgbuildfarm.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the buildfarm-members mailing list at<br />
https://www.postgresql.org/community/lists/subscribe/<br />
This is a low-traffic list for owners of buildfarm members.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm script (but not Postgres itself)<br />
on the buildfarm members mailing list.<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
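In config-file terms, the Cygwin-specific settings above would sit in the build_env stanza, roughly like this sketch:<br />

```perl
# Cygwin-specific environment, as described above.
build_env => {
    MAX_CONNECTIONS => '3',
    CYGWIN          => 'server',
},
```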
<br />
==== MinGW/MSys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl and a native Windows perl installed - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using the native perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specifically for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to one named for the other member, and change the animal name, the password, and whatever else in the config will differ from the first member. The members can share a git mirror and build root; there are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default in recent releases of the buildfarm scripts).<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=List_of_drivers&diff=32603List of drivers2018-10-05T12:19:19Z<p>Adunstan: /* Drivers */</p>
<hr />
<div>= Drivers =<br />
<br />
<br />
{| border="1" <br />
!Driver<br />
!Language<br />
!uses libpq?<br />
!Supports SCRAM?<br />
|-<br />
|[http://www.postgresql.org/docs/current/static/libpq.html libpq]<br />
|C<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://pqxx.org/development/libpqxx/ libpqxx]<br />
|C++<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://initd.org/psycopg/ psycopg2]<br />
|Python (CPython only)<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/chtd/psycopg2cffi psycopg2cffi]<br />
|Python (PyPy)<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://metacpan.org/release/DBD-Pg DBD::Pg]<br />
|Perl<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://rubyforge.org/projects/ruby-pg ruby-pg]<br />
|Ruby<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/hdbc/hdbc-postgresql/wiki HDBC]<br />
|Haskell<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://jdbc.postgresql.org/ JDBC]<br />
|Java<br />
|No<br />
|Yes, from version 42.2.0.<br />
|-<br />
|[http://odbc.postgresql.org ODBC]<br />
|C<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://glozer.net/src/epgsql/ epgsql]<br />
|Erlang<br />
|<br />
|<br />
|-<br />
|[http://frihjul.net/pgsql pgsql]<br />
|Erlang<br />
|<br />
|<br />
|-<br />
|[http://code.google.com/p/erlang-psql-driver/ erlang-psql-driver]<br />
|Erlang<br />
|<br />
|<br />
|-<br />
|[https://github.com/brianc/node-postgres node-postgres]<br />
|JavaScript<br />
|Optional<br />
|<br />
|-<br />
|[http://www.npgsql.org npgsql]<br />
|C#<br />
|No<br />
|Yes, from version 3.2.7.<br />
|-<br />
|[https://github.com/anse1/emacs-libpq emacs-libpq]<br />
|Emacs Lisp<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/lib/pq github.com/lib/pq]<br />
|Go<br />
|No<br />
|No<br />
|-<br />
|[https://github.com/sfackler/rust-postgres rust-postgres]<br />
|Rust<br />
|No<br />
|No<br />
|-<br />
|[http://sourceforge.net/projects/pgtclng/ pgtclng]<br />
|Tcl <br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/will/crystal-pg crystal-pg]<br />
|Crystal <br />
|No<br />
|No<br />
|-<br />
|[https://github.com/MagicStack/asyncpg asyncpg]<br />
|Python <br />
|No<br />
|No<br />
|}<br />
<br />
Note that drivers which have SCRAM support via libpq will need a very recent libpq, released with PostgreSQL v10 or later.<br />
<br />
= See Also =<br />
<br />
* [http://www.postgresql.org/download/products/2 Software catalog list]<br />
* [[Client Libraries]]<br />
<br />
[[Category: Language interface|!]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_11_Open_Items&diff=32110PostgreSQL 11 Open Items2018-06-23T13:04:29Z<p>Adunstan: /* Open Issues */</p>
<hr />
<div>== Open Issues ==<br />
<br />
* [https://www.postgresql.org/message-id/d8ja7ubjnyp.fsf@dalvik.ping.uio.no JSONB PL/Perl transform bugs]<br />
<br />
* [http://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info Explain buffers wrong counter with parallel plans]<br />
** reported as a possible defect in commit 01edb5c7fc3bcf6aea15f2b3be36189b52ad9d1a<br />
<br />
* [https://www.postgresql.org/message-id/2840048a-1184-417a-9da8-3299d207a1d7@postgrespro.ru pg_replication_slot_advance may cause assertion failures with incorrect LSN values]<br />
** Fix committed in [https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f731cfa94c00164814625d5753d376a4a7c43fff f731cfa94c00164814625d5753d376a4a7c43fff], awaiting testing<br />
<br />
* [https://www.postgresql.org/message-id/d01b31f5-0b3e-b69a-1504-a79649d81f46@iki.fi SCRAM with channel binding downgrade attack]<br />
** [https://www.postgresql.org/message-id/20180517140525.GC546@momjian.us Other thread]<br />
** [https://www.postgresql.org/message-id/20180522082218.GA12027%40paquier.xyz patch exists]<br />
<br />
* [https://www.postgresql.org/message-id/CA+TgmoYQD1xSM7=XrY6rv2a-W43gKpcTH76F3nSp5o2SGWeCkA@mail.gmail.com Deal with BEFORE ROW triggers in partitioned tables in some way]<br />
<br />
* [https://www.postgresql.org/message-id/87r2m10zm2.fsf%40news-spur.riddles.org.uk Portability concerns over pq_sendbyte?]<br />
<br />
* [https://www.postgresql.org/message-id/20180529210212.GE6632%40paquier.xyz Supporting tls-server-end-point as SCRAM channel binding for OpenSSL 1.0.0 and 1.0.1]<br />
<br />
* [https://www.postgresql.org/message-id/94dd7a4b-5e50-0712-911d-2278e055c622@dalibo.com Performance regression with PostgreSQL 11 and partitioning]<br />
<br />
* [https://www.postgresql.org/message-id/HE1PR03MB17068BB27404C90B5B788BCABA7B0@HE1PR03MB1706.eurprd03.prod.outlook.com Runtime partition pruning does not handle UNION ALL parents correctly]<br />
<br />
* [https://www.postgresql.org/message-id/eb59ce0b-4c95-98a1-1237-a9b300d1a9fe@joeconway.com assert in nested SQL procedure call in current HEAD] [https://www.postgresql.org/message-id/29608.1518533639@sss.pgh.pa.us see also]<br />
<br />
* [https://www.postgresql.org/message-id/CAJ3gD9fRbEzDqdeDq1jxqZUb47kJn%2BtQ7%3DBcgjc8quqKsDViKQ%40mail.gmail.com Concurrency bug in UPDATE of partition-key]<br />
<br />
* [https://www.postgresql.org/message-id/CAKcux6ktu-8tefLWtQuuZBYFaZA83vUzuRd7c1YHC-yEWyYFpg%40mail.gmail.com Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.]<br />
<br />
* [https://www.postgresql.org/message-id/CAKcux6kmzWmur5HhA_aU6gYVFu0RLQdgJJ%2BaC9SLdcOvBSrpfA%40mail.gmail.com Server crashed with dense_rank on partition table]<br />
** David Rowley reports that the first bad commit is reported {{PgCommitURL|4b9094eb6e14dfdbed61278ea8e51cc846e43579}} (Adapt to LLVM 7+ Orc API changes.)<br />
** Amit Langote believes based on the core dump that the problem may be in {{PgCommitURL|bf6c614a2f2c58312b3be34a47e7fb7362e07bcb}}) (Do execGrouping.c via expression eval machinery, take two.)<br />
<br />
* [https://www.postgresql.org/message-id/19987.1529420110@sss.pgh.pa.us Fast default feature fails in pg_upgrade]<br />
** Introduced by {{PgCommitURL|16828d5c0273b4fe5f10f42588005f16b415b2d8}} (Fast ALTER TABLE ADD COLUMN with a non-NULL default)<br />
** Was this fixed by {{PgCommitURL|2448adf29c543befbac59f1ecfbb3ef4b0d808ce}}? (Allow for pg_upgrade of attributes with missing values) and {{PgCommitURL|123efbccea694626b36ad952086d883fa7469aa9}}? (Mark binary_upgrade_set_missing_value as parallel_unsafe)<br />
<br />
== Decisions to Recheck Mid-Beta ==<br />
<br />
* [https://www.postgresql.org/message-id/20180328212751.eskdxpljte6ga6wu@alap3.anarazel.de reconsider jit=on default shortly before release]<br />
<br />
== Older Bugs ==<br />
<br />
=== Live issues ===<br />
<br />
* [https://www.postgresql.org/message-id/20153.1523471686%40sss.pgh.pa.us IsInParallelMode() check in set_config_option is wrong (was: WARNING in parallel index creation)]<br />
* [https://www.postgresql.org/message-id/20180309075538.GD9376@paquier.xyz Fixes for missing schema qualifications]<br />
* [https://www.postgresql.org/message-id/CAFiTN-u4BA8KXcQUWDPNgaKAjDXC=C2whnzBM8TAcv=stckYUw@mail.gmail.com Allocation done in critical section when initializing WAL]<br />
* [https://www.postgresql.org/message-id/AD7252BEFBCA3846A8D34ABCDA258D080120F025C6@EXMBX05.mailcloud.dk pg_dump misses public role on schema public]<br />
<br />
* [https://www.postgresql.org/message-id/87lgdyz1wj.fsf@ars-thinkpad Fix slot's xmin advancement and subxact's lost snapshots in decoding]<br />
* [https://www.postgresql.org/message-id/CAD21AoB2ZbCCqOx%3DbgKMcLrAvs1V0ZMqzs7wBTuDySezTGtMZA%40mail.gmail.com Replication status in logical replication]<br />
* [https://www.postgresql.org/message-id/152746742177.1291.9847032632907407358%40wrigleys.postgresql.org Default values in partition tables don't work as expected and allow NOT NULL violation]<br />
* [https://www.postgresql.org/message-id/CABOikdPOewjNL%3D05K5CbNMxnNtXnQjhTx2F--4p4ruorCjukbA%40mail.gmail.com PANIC during crash recovery of a recently promoted standby]<br />
<br />
=== Fixed issues ===<br />
<br />
* [https://www.postgresql.org/message-id/1519917758.6586.8.camel@cybertec.at SHOW ALL does not honor pg_read_all_settings membership]<br />
** Fixed in: {{PgCommitURL|0c8910a0cab7c1e439bf5f5850122c36359e6799}}<br />
* [https://www.postgresql.org/message-id/5AF43E02.30000@lab.ntt.co.jp postgres_fdw: Oddity in pushing down inherited UPDATE/DELETE joins to remote servers]<br />
** Fixed in: {{PgCommitURL|7fc7dac1a711d0dbd01d2daf6dc97d27d6c6409c}}<br />
* [https://www.postgresql.org/message-id/20170117.193645.160386781.horiguchi.kyotaro@lab.ntt.co.jp Continued WAL record can prevent standby from startup]<br />
** Fixed in: {{PgCommitURL|0668719801838aa6a8bda330ff9b3d20097ea844}}<br />
* [https://www.postgresql.org/message-id/3AD85097-A3F3-4EBA-99BD-C38EDF8D2949@postgrespro.ru FinishPreparedTransaction missing HOLD_INTERRUPTS section]<br />
** Fixed in: {{PgCommitURL|8f9be261f43772ccee2eae94d971bac6557cbe6a}}<br />
<br />
== Non-bugs ==<br />
<br />
== Resolved Issues ==<br />
<br />
=== resolved before 11beta2 ===<br />
* [https://www.postgresql.org/message-id/CAKJS1f8w8+awsxgea8wt7_UX8qzOQ=Tm1LD+U1fHqBAkXxkW2w@mail.gmail.com Needless additional partition check in INSERT]<br />
** Fixed in: {{PgCommitURL|5b0c7e2f75}}<br />
<br />
* [https://postgr.es/m/aeb9c3a7-3c3f-a57f-1a18-c8d4fcdc2a1f@enterprisedb.com pg_resetwal fails with relative path to data dir]<br />
** Bug fix: {{PgCommitURL|1d96c1b91a4b7da6288ee63671a234b557ff5ccf}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/152802081668.26724.16985037679312485972%40wrigleys.postgresql.org Parallel Hash: invalid DSA memory alloc request size 1073741824]<br />
** Bug fix: {{PgCommitURL|86a2218eb00eb6f97898945967c5f9c95c72b4c6}}<br />
<br />
* [https://www.postgresql.org/message-id/20180525052805.GA15634%40paquier.xyz pg_replication_slot_advance to return NULL instead of 0/0 if slot not advanced]<br />
** Bug fix: {{PgCommitURL|f731cfa94c00164814625d5753d376a4a7c43fff}}<br />
<br />
* [https://www.postgresql.org/message-id/20180529211559.GF6632%40paquier.xyz pg_config.h.win32 missing a set of flags from pg_config.h.in added in v11 development]<br />
** Fixed in: {{PgCommitURL|bde64eb6107622e8438dd61b93afd4d1adf178b3}}<br />
<br />
* [https://www.postgresql.org/message-id/CAKJS1f94Ojk0og9GMkRHGt8wHTW%3Dijq5KzJKuoBoqWLwSVwGmw%40mail.gmail.com Partitioning with temp tables is broken]<br />
** Fixed in: {{PgCommitURL|1c7c317cd9d1e5647454deed11b55dae764c83bf}}<br />
<br />
* [https://www.postgresql.org/message-id/CAKcux6%3DtPJ6nJ08r__nU_pmLQiC0xY15Fn0HvG1Cprsjdd9s_Q%40mail.gmail.com Server crash during parallel append path generation]<br />
** Fixed in: {{PgCommitURL|403318b71f7058ecbfb65bcc7de1eec96cd35d3f}}<br />
<br />
* [https://www.postgresql.org/message-id/CAKcux6=q4+Mw8gOOX16ef6ZMFp9Cve7KWFstUsrDa4GiFaXGUQ@mail.gmail.com Partition-wise aggregation asserts out]<br />
** Fixed in: {{PgCommitURL|c6f28af5d7af87d7370e5f169251d91437f100a2}}<br />
<br />
=== resolved before 11beta1 ===<br />
<br />
* [http://postgr.es/m/87sh71cakz.fsf@ars-thinkpad Indexes on partitioned tables and foreign partitions]<br />
** appears to be a bug in 8b08f7d4 (Local partitioned indexes)<br />
** Fixed in: {{PgCommitURL|4eaa53727542c39cca71b80e8ff3e1f742d64452}}<br />
<br />
* [https://www.postgresql.org/message-id/a66879e5-636f-d4dd-b4a4-92bdca5a828f%40lab.ntt.co.jp \d doesn't show partitioned tables' foreign key]<br />
** Fixed in: {{PgCommitURL|93316299d6a185bed0a4be5376508fe2f7e6b2d6}}<br />
<br />
* [https://www.postgresql.org/message-id/2018041911380869070310%40i-soft.com.cn Memory leaks with _SPI_stack handling in TopMemoryContext]<br />
** Regression caused by commit 8561e48.<br />
** Fixed in: {{PgCommitURL|30c66e77be1d890c3cca766259c0bec80bcac1b5}}<br />
<br />
* [https://www.postgresql.org/message-id/5AD4882B.10002%40lab.ntt.co.jp Oddity in tuple routing for foreign partitions]<br />
** Fixed in: {{PgCommitURL|37a3058bc7c8224d4c0d8b36176d821636a1f90e}}<br />
<br />
* [https://www.postgresql.org/message-id/20180419052436.GA16000%40paquier.xyz Corrupted btree index on HEAD because of covering indexes]<br />
** Fixed in: {{PgCommitURL|6db4b49986be3fe59a1f6ba6fabf9852864efc3e}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAFjFpRcwq7G16J_w%2Byy_xiE7daD0Bm6iYTnhz81f79yrSOn4DA%40mail.gmail.com#CAFjFpRcwq7G16J_w+yy_xiE7daD0Bm6iYTnhz81f79yrSOn4DA@mail.gmail.com Decide if we want a GUC to disable partition pruning]<br />
** Fixed in: {{PgCommitURL|055fb8d33da6ff9003e3da4b9944bdcd2e2b2a49}}<br />
<br />
* [https://www.postgresql.org/message-id/2b02f1e9-9812-9c41-972d-517bdc0f815d%40lab.ntt.co.jp Fix partition pruning for the cases where partition key is of array, enum, record, or range type]<br />
** [https://www.postgresql.org/message-id/69879396-3a63-8fa9-2fa7-4fd1035b9623%40lab.ntt.co.jp Patch exists]<br />
** Bug fix: {{PgCommitURL|e5dcbb88a15d445e0ccb3db3194f4a122b792df6}}<br />
<br />
* [https://www.postgresql.org/message-id/CAKJS1f-tux=KdUz6ENJ9GHM_V2qgxysadYiOyQS9Ko9PTteVhQ@mail.gmail.com Run-time pruning and Parallel Append don't work properly together]<br />
** [https://www.postgresql.org/message-id/CAKJS1f-tux=KdUz6ENJ9GHM_V2qgxysadYiOyQS9Ko9PTteVhQ@mail.gmail.com Patch exists]<br />
** Bug fixes: {{PgCommitURL|47c91b55991883322fdbc4495ce7fe6b2166e8fe}} {{PgCommitURL|4d0f6d3f207d}} {{PgCommitURL|b47a86f5008f2674af20dd00bc233e7b74e01bba}}<br />
<br />
* [https://www.postgresql.org/message-id/87woxi24uw.fsf@ansel.ydns.eu expand_tuple segfaults]<br />
** coverage report shows it's completely untested, too<br />
** Bug fix: {{PgCommitURL|7c44c46deb495a2f3861f402d7f2109263e3d50a}}<br />
** Add coverage: {{PgCommitURL|b39fd897e0398a6bdc6552daa7cacdf9c0e46d7e}}<br />
<br />
* [https://www.postgresql.org/message-id/96cf4a6c-49ad-fa92-0d41-e4b911086dab%40lab.ntt.co.jp Handling of whole-row vars in ON CONFLICT on partitioned tables]<br />
** Bug fix: {{PgCommitURL|158b7bc6d77948d2f474dc9f2777c87f81d1365a}}<br />
<br />
* [https://www.postgresql.org/message-id/12085bc4-0bc6-0f3a-4c43-57fe0681772b@lab.ntt.co.jp relispartition for index partitions]<br />
** Bug fix: {{PgCommitURL|9e9befac4a2228ae8a5309900645ecd8ead69f53}}<br />
<br />
* [https://www.postgresql.org/message-id/CAGPqQf0W%2Bv-Ci_qNV_5R3A%3DZ9LsK4%2BjO7LzgddRncpp_rrnJqQ%40mail.gmail.com failure to validate default partition's constraint when attaching after 4dba331cb3]<br />
** [https://www.postgresql.org/message-id/487870f2-d538-9d07-13e8-4ca390e27d18%40lab.ntt.co.jp Patch exists]<br />
** Bug fix: {{PgCommitURL|72cf7f310c0729a331f321fad39835ac886603dc}}<br />
<br />
* [https://www.postgresql.org/message-id/87in923lyw.fsf@ansel.ydns.eu Failed assertion on pfree() via perform_pruning_combine_step]<br />
** Original commit: {{PgCommitURL|9fdb675fc5d2de825414e05939727de8b120ae81}}<br />
** Bug fix: {{PgCommitURL|7ba6ee815dc90d4fab7226d343bf72aa28c9aa5c}}<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkryAPcQOHBJKuDKfni-HGFny31yjcbM-yp5HO-71iCdw@mail.gmail.com Parallel index workers don't have activity set]<br />
** Original commit: {{PgCommitURL|9da0cc35284bdbe8d442d732963303ff0e0a40bc}}<br />
** Bug fix: {{PgCommitURL|7de4a1bcc56f494acbd0d6e70781df877dc8ecb5}}<br />
<br />
* [https://www.postgresql.org/message-id/20180402065149.GC1908%40paquier.xyz check_ssl_key_file_permissions should be in be-secure-common.c]<br />
** Original commit: {{PgCommitURL|8a3d9425290ff5f6434990349886afae9e1c6008}}<br />
** [https://www.postgresql.org/message-id/20180402065149.GC1908%40paquier.xyz Patch exists]<br />
** Bug fix: {{PgCommitURL|2764d5dcfa84d240c901c20ec6e194f72d82b78a}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAFjFpRcOTHZSFfHNwhAe4DmS%2BqvWmqK_UW3QF3wG8e0pAKW0tA%40mail.gmail.com#CAFjFpRcOTHZSFfHNwhAe4DmS+qvWmqK_UW3QF3wG8e0pAKW0tA@mail.gmail.com Missing break statement after transformCallStmt in transformStmt]<br />
** Original commit: {{PgCommitURL|76b6aa41f41db66004b1c430f17a546d4102fbe7}}<br />
** Bug fix: {{PgCommitURL|13c7c65ec900a30bcddcb27f5fd138dcdbc2ca2e}}<br />
<br />
* [https://www.postgresql.org/message-id/CAKJS1f91kq1wfYR8rnRRfKtxyhU2woEA+=whd640UxMyU+O0EQ@mail.gmail.com Parallel index creation does not properly clean up after error]<br />
** Original commit: {{PgCommitURL|29d58fd3adae9057c3fd502393b2f131bc96eaf9}}<br />
** Bug fix: {{PgCommitURL|47cb9ca49a611fa518e1a0fe46526507c96a5612}}<br />
<br />
* [https://www.postgresql.org/message-id/30721.1519750231@sss.pgh.pa.us pg_proc.prokind change means we need server-version-dependent tab completion in psql]<br />
** [https://www.postgresql.org/message-id/24314.1520190408@sss.pgh.pa.us Proposed patch]<br />
<br />
* [https://www.postgresql.org/message-id/20180409010031.GA11599%40paquier.xyz "make -j 4 install" broken after running configure]<br />
** Bug fix: {{PgCommitURL|3b8f6e75f3c8c6d192621f21624cc8cee04ec3cb}}<br />
<br />
* [https://www.postgresql.org/message-id/152056616579.4966.583293218357089052@wrigleys.postgresql.org OpenTransientFile() should be paired with CloseTransientFile() rather than close()]<br />
** Bug fix: {{PgCommitURL|231bcd0803eb91c526d4e7522c993fa5ed71bd45}}<br />
<br />
* [https://www.postgresql.org/message-id/20180409051112.GC1740%40paquier.xyz Fix pg_rewind which can be run as root user]<br />
** Bug fix: {{PgCommitURL|5d5aeddabfe0b6b21f556c72a71e0454833d63e5}}<br />
<br />
* [https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW+5jBaf_m=_-77uRNyKC9r=A@mail.gmail.com Fix for pg_stat_activity putting client hostaddr into appname field]<br />
** Bug fix: {{PgCommitURL|a820b4c32946c499a2d19846123840a0dad071b5}} and {{PgCommitURL|811969b218ac2e8030dfbbb05873344967461618}}<br />
<br />
* [https://www.postgresql.org/message-id/CAFj8pRCgQ5_O4YL4ZKC5=6Oi7DW_q4xB7==_iN2yRKq7+1Tv9Q@mail.gmail.com Missing support of named convention for procedures]<br />
** Bug fix: {{PgCommitURL|a8677e3ff6bb8ef78a9ba676faa647bba237b1c4}}<br />
<br />
* [https://www.postgresql.org/message-id/20180410042147.GB1552%40paquier.xyz Gotchas about pg_verify_checksums]<br />
<br />
* [https://www.postgresql.org/message-id/20180411001058.GJ26769%40paquier.xyz pg_verify_checksums does not check after all-zero'd pages]<br />
<br />
* [https://www.postgresql.org/message-id/20180411075223.GB19732%40paquier.xyz Typos from the original patch]<br />
<br />
* [https://www.postgresql.org/message-id/20180411082020.GD19732%40paquier.xyz Fixes for the documentation]<br />
<br />
* [https://www.postgresql.org/message-id/5767.1523995174@sss.pgh.pa.us Repeated crashes in GENERATED ... AS IDENTITY tests]<br />
** Bug fix: {{PgCommitURL|b1b71f16581fb5385fa9f9a663ffee271cdfaba5}}<br />
** Bug fix: {{PgCommitURL|676858bcb4c4d9d2d5ee63a87dbff01085984ee0}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/CAKJS1f-BL%2Br5FXSejDu%3D%2BMAvzRARaawRnQ_ZFtbv_o6tha9NJw%40mail.gmail.com Partitions with bool partition keys]<br />
<br />
* [https://www.postgresql.org/message-id/3041e853-b1dd-a0c6-ff21-7cc5633bffd0%40lab.ntt.co.jp wrong memory context used in FmgrInfo's contained in PartitionKey]<br />
** Bug fix (HEAD): {{PgCommitURL|a4d56f583e7cff052c2699e62d867ae1c8fda4f3}}<br />
** Bug fix (PG 10): {{PgCommitURL|5f11c6ec61a579d60347a5d13af7e42b17fadc56}}<br />
<br />
* [https://www.postgresql.org/message-id/20180422111100.GA1393%40paquier.xyz BGWORKER_BYPASS_ALLOWCONN used nowhere (infra part of on-line checksum switcher)]<br />
** Bug fix: {{PgCommitURL|9cad926eb876a30d58a5b39789098da83a6ef91c}}<br />
** Bug fix: {{PgCommitURL|43cc4ee6340779f2a17fb5bab27355c2cb2e23a6}}<br />
<br />
* [https://www.postgresql.org/message-id/87po3a3n46.fsf@ansel.ydns.eu Failed assertion in create_gather_path]<br />
** Bug fix: {{PgCommitURL|dc1057fcd878d5c062c5c4c2b548af2be513b6ab}}<br />
<br />
* [https://www.postgresql.org/message-id/20180428073935.GB1736%40paquier.xyz Cold welcoming message when installing anything because of LLVM bitcode stuff]<br />
<br />
* [https://www.postgresql.org/message-id/CCAJrrPGedKiFE2fqntSauUfhapCksOJzam+QtHfSgx86LhXLeOQ@mail.gmail.com jitflags in _outPlannedStmt and _readPlannedStmt treated as bool type]<br />
** Bug fix: {{PgCommitURL|cfffe83ba82021a1819a656e7ec5c28fb3a99152}}<br />
<br />
* [https://www.postgresql.org/message-id/flat/20180413030828.GD1552%40paquier.xyz#20180413030828.GD1552@paquier.xyz wal_consistency_checking reports an inconsistency on master branch]<br />
** Bug fix: {{PgCommitURL|1667148a4dd98cea28b8b53d57dbc1eece1b0b5c}}<br />
<br />
* [https://www.postgresql.org/message-id/20180507001811.GA27389%40paquier.xyz Refreshing findoidjoins for v11]<br />
** Bug fix: {{PgCommitURL|fbb99e5883d88687de4dbd832c2843f600ab3dd8}}<br />
<br />
* [https://www.postgresql.org/message-id/ff8f9bfa485ff961d6bb43e54120485b@postgrespro.ru Crash with partition pruning with handling of ArrayCoerceExpr]<br />
** Bug fix: {{PgCommitURL|d758d9702e2f64f08565e18eb6cb7991efa2dc16}}<br />
<br />
* [https://www.postgresql.org/message-id/CAH2-WzkOKptQiE51Bh4_xeEHhaBwHkZkGtKizrFMgEkfUuRRQg%40mail.gmail.com Local partitioned indexes and pageinspect]<br />
** Bug fix: {{PgCommitURL|bef5fcc36be3d08ec123889a0c82f5e07a63ff88}}<br />
<br />
* [http://postgr.es/m/877eovbjc3.fsf@news-spur.riddles.org.uk breakage calling a procedure with a toasted parameter]<br />
** Bug fix: {{PgCommitURL|2efc924180f096070d684a712d6c162b6ae0a5e7}}<br />
<br />
== Important Dates ==<br />
<br />
Current schedule:<br />
* feature freeze: 8th of April 2018<br />
* beta1: 24th of May 2018<br />
* beta2: 28th of June 2018<br />
<br />
[[Category:Open_Items]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Running_pgindent_on_non-core_code_or_development_code&diff=32100Running pgindent on non-core code or development code2018-06-21T11:37:42Z<p>Adunstan: Created page with "Note: Copied from http://adpgtech.blogspot.com/2015/05/running-pgindent-on-non-core-code-or.html Running pgindent is not nearly as hard as some people seem to think it is. Th..."</p>
<hr />
<div>Note: Copied from http://adpgtech.blogspot.com/2015/05/running-pgindent-on-non-core-code-or.html<br />
<br />
Running pgindent is not nearly as hard as some people seem to think it is. The hardest part of getting a workable set of typedefs to use. That's why the buildfarm now constructs these lists automatically for each live branch.<br />
<br />
But that doesn't help if you're working on non-core code. Here's what I did to get a working typedefs list for the Redis FDW code:<br />
<br />
objdump -W redis_fdw.so |\<br />
egrep -A3 DW_TAG_typedef |\<br />
perl -e 'while (<>) { chomp; @flds = split; next unless (1 < @flds);<br />
next if $flds[0] ne "DW_AT_name" && $flds[1] ne "DW_AT_name";<br />
next if $flds[-1] =~ /^DW_FORM_str/;<br />
print $flds[-1], "\n"; }' |\<br />
sort | uniq > redis_fdw.typedefs<br />
<br />
<br />
This is a slight adaptation of what the buildfarm code does on Linux to get a typedefs list.<br />
<br />
After that, indenting the code was a matter of just doing this:<br />
<br />
pgindent --typedefs=redis_fdw.typedefs redis_fdw.c<br />
<br />
<br />
What if you're developing a piece of core code and you'd like to run pgindent on it, but you've introduced some new typedefs, so pgindent mucks up the indentation by adding extraneous spaces. You have a couple of options. Let's assume that what you're working on is backend code. Then you could run the above extraction on the built backend - it doesn't have to be installed, just run it against src/backend/postgres. Then use that to run pgindent against each of the files you're working on. You don't have to run it separately for each file - you can name as many files to indent as you like on the command line.<br />
<br />
If you do that, look at the results carefully. It's possible that the absence of some platform-dependent typedef has mucked up your file. So a safer procedure is to grab the latest typedefs list from the buildfarm server and combine it with the typedefs list you just constructed, like this:<br />
<pre><br />
wget -q -O - "http://www.pgbuildfarm.org/cgi-bin/typedefs.pl?branch=HEAD" |\<br />
cat - mytypedefs | sort | uniq > mynewtypedefs<br />
</pre><br />
and then use that file to pgindent your code.<br />
<br />
None of this is as easy as it might be. But none of it is very hard either.<br />
<br />
Hint:<br />
<br />
If you only have a handful of new typedefs, you can pass them on the command line to pgindent, like this:<br />
<br />
pgindent --typedefs=mytypedefs --list-of-typedefs="typedef1 typedef2" myfile1.c myfile2.c</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Development_information&diff=32099Development information2018-06-21T11:30:05Z<p>Adunstan: /* Developer Resources add link for pgindent tips */</p>
<hr />
<div>__NOTOC__<br />
This area includes developer-targeted documentation regarding aspects of PostgreSQL development. Please visit the [http://www.postgresql.org/developer developer area] of the PostgreSQL website for more general information about the development of PostgreSQL. You can find most developers in [irc://irc.freenode.net/postgresql #postgresql on freenode]. A list of IRC nick names with their respective real world names can be found [[IRC2RWNames | here]].<br />
<br />
==PostgreSQL - Active Development==<br />
<br />
==Development Process==<br />
* [[Todo|Todo list]]<br />
* [[Todo:Contents|Unofficial Todo Detail]]<br />
* [[Submitting a Patch]]<br />
* [[Reviewing a Patch]]<br />
* [[RRReviewers|Round-robin Patch Review]]<br />
* [[Running a CommitFest]]<br />
* [[Committing with Git]]<br />
<br />
'''New Contributors''' should start by reading "[[So, you want to be a developer?]]".<br />
<br />
== Developer Resources ==<br />
* [[Developer FAQ]]<br />
* [[Regression test authoring]]<br />
* [[HowToBetaTest|HOWTO Alpha and Beta Test PostgreSQL]]<br />
* [[Working with Git]]<br />
* [[Running pgindent on non-core code or development code]]<br />
* [[Working with Eclipse]]<br />
* [[Fixing shift/reduce conflicts in Bison]]<br />
* [[PL Matrix|Procedural Language Matrix]]<br />
* [http://www.postgresql.org/about/featurematrix Feature Matrix]<br />
* [http://www.postgresql.org/developer/coding PostgreSQL Coding]<br />
* [http://developer.postgresql.org/pgdocs/postgres/index.html Development docs] (updated every 5 minutes)<br />
* [[Project Hosting]]<br />
* [http://www.pgcon.org/2010/schedule/attachments/142_HackingWithUDFs.pdf Exposing PostgreSQL Internals with UDFs (2010)]<br />
<br />
== CommitFests ==<br />
* [https://commitfest.postgresql.org/ CommitFest Site] - Lists all past, in progress, and open/future commitfests<br />
* [https://commitfest.postgresql.org/action/commitfest_view/open Open CommitFest] - New patch submissions go here<br />
* [https://commitfest.postgresql.org/action/commitfest_view/inprogress In Progress CommitFest] - Patches to review are here<br />
<br />
== Roadmaps and Projects ==<br />
* [[PostgreSQL11 Roadmap]]<br />
* [[PostgreSQL10 Roadmap]]<br />
* [[Development projects]] - links to individual projects<br />
<br />
== Past Developer Meeting Notes ==<br />
* [[FOSDEM/PGDay 2017 Developer Meeting]]<br />
* [[PgConf.Asia 2016 Developer Meeting]]<br />
** [[PGConf.ASIA2016 Developer Unconference]]<br />
* [[PgCon 2016 Developer Meeting]]<br />
** [[PgCon 2016 Developer Unconference]]<br />
* [[FOSDEM/PGDay 2016 Developer Meeting]]<br />
* [[PgCon 2015 Developer Meeting]]<br />
** [[PgCon 2015 Developer Unconference]]<br />
* [[PgCon 2014 Developer Meeting]]<br />
* [[PgCon 2013 Developer Meeting]]<br />
* [[PgCon 2012 Developer Meeting]]<br />
* [[PgCon 2011 Developer Meeting]]<br />
* [[PgCon 2010 Developer Meeting]]<br />
* [[PgCon 2009 Developer Meeting]]<br />
* [[PgCon 2008 Developer Meeting]]<br />
<br />
==PostgreSQL Past Development==<br />
* [[PostgreSQL 9.3 Open Items]]<br />
* [[PostgreSQL 9.3 Development Plan]]<br />
* [[PostgreSQL 9.2 Open Items]]<br />
* [[PostgreSQL 9.2 Development Plan]]<br />
* [[PostgreSQL 9.1 Open Items]]<br />
* [[PostgreSQL 9.1 Development Plan]]<br />
* [[PostgreSQL 9.0 Open Items]]<br />
* [[85AlphaFeatures|PostgreSQL 9.0 Alpha Release Feature List]]<br />
* [[PostgreSQL 8.4]]<br />
<br />
[[Category:CommitFest]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Talk:PostgreSQL_vs_SQL_Standard&diff=32042Talk:PostgreSQL vs SQL Standard2018-06-10T02:30:51Z<p>Adunstan: Created page with "63 bytes is more than enough for 18 utf-8 characters from the BMP, which covers just about all characters that are likely to be used. Yes it's not enough if they are all 4-byt..."</p>
<hr />
<div>63 bytes is more than enough for 18 utf-8 characters from the BMP, which covers just about all characters that are likely to be used. Yes it's not enough if they are all 4-byte+ characters, but that's extremely unlikely.</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Working_with_Git&diff=31975Working with Git2018-05-23T20:31:10Z<p>Adunstan: remove context diff stuff</p>
<hr />
<div>This page collects various wisdom on working with the [http://git.postgresql.org/ PostgreSQL Git repository]. There are also [[Other Git Repositories]] you might work with, most notably the official [http://github.com/postgres Github mirror] which you might fork on that site.<br />
<br />
==Getting Started==<br />
<br />
A simple way to get started might look like this:<br />
<br />
git clone git://git.postgresql.org/git/postgresql.git<br />
cd postgresql<br />
git checkout -b my-cool-feature<br />
$EDITOR<br />
git commit -a<br />
git diff --patience master my-cool-feature > ../my-cool-feature.patch<br />
<br />
Note that <code>git checkout -b my-cool-feature</code> creates a new branch and checks it out at the same time. Typically, you would develop each feature in a separate branch.<br />
<br />
See the documentation and tutorials at http://git.or.cz/ for a more detailed Git introduction. For a more detailed lesson, check out http://progit.org and maybe get a hardcopy to help support the site.<br />
<br />
You may wish to put the entries listed at [[GitExclude]] into your <code>.git/info/exclude</code> file.<br />
Now that the master repository has been converted to git, the standard<br />
.gitignore files should cover all build products, so you don't need<br />
most of what is listed in that reference. You might still want to<br />
exclude *~, tags, and /cscope.out, though.<br />
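For example, a minimal <code>.git/info/exclude</code> covering just those leftovers might look like this (a sketch; add whatever editor or tool droppings you generate locally):<br />

```
*~
tags
/cscope.out
```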
<br />
=== Keeping your local master branch synchronized ===<br />
<br />
First, add the origin as a remote. You only need to do this once (a plain git clone will normally have set it up for you already):<br />
<br />
git remote add origin git://git.postgresql.org/git/postgresql.git<br />
<br />
Next, fetch the latest changes from the master branch:<br />
<br />
git fetch origin master<br />
<br />
Then merge them into your local master branch:<br />
<br />
git merge FETCH_HEAD<br />
<br />
Now check that it still compiles, passes regression, etc. Make sure you've<br />
invoked ./configure, and then:<br />
<br />
make check<br />
make maintainer-clean<br />
<br />
Assuming all that's good, do a dry run.<br />
<br />
git push --dry-run origin master<br />
<br />
If that's happy, push it out to your public repository.<br />
<br />
git push origin master<br />
<br />
If not, fix any merge failures, do another dry run, and push.<br />
<br />
=== Tracking Other Branches ===<br />
<br />
Let's say you're happy tracking master, but you'd really like to track one of the other branches at git.postgresql.org:<br />
<br />
git remote add super-fun-branch git://git.postgresql.org/super-fun-branch.git<br />
git fetch super-fun-branch<br />
git checkout super-fun-branch # this will stage your remote branch for a local checkout<br />
git checkout -b super-fun-branch-name # the name can be whatever you choose<br />
<br />
Now you have a local branch within your local git repo tracking a different branches history. Most importantly, you can now push to that repo if you have to without making an explicit clone to track the history. It's pretty much impossible to not share some common history with the master branch.<br />
<br />
=== Using Back Branches ===<br />
<br />
Since the git repository contains branches for each of the major versions of PostgreSQL, it's easy to work on the latest code from an older version instead of the current one. Here's how you might list the possibilities and checkout an older version:<br />
<br />
git branch -r<br />
git checkout -b REL8_3_STABLE origin/REL8_3_STABLE<br />
<br />
Note that if you've already checked out and used a later version, you might need to clean up some of the files left behind by it. It's suggested to run:<br />
<br />
make maintainer-clean<br />
<br />
To get rid of as many of those as possible. You might need to delete some files left behind after that anyway before git will allow you to do the checkout (src/interfaces/ecpg/preproc/preproc.y can be a problem with the specific example above).<br />
<br />
=== Testing a patch ===<br />
<br />
Here is a typical setup for reviewing a patch file, as normally sent by e-mail:<br />
<br />
git checkout -b feature-to-review<br />
patch -p1 < feature.patch<br />
<br />
If the patch fails to apply, there will be file.rej files left behind showing the part that didn't apply. If your directory tree is clean of build information, you can easily find these later using:<br />
<br />
git status<br />
<br />
=== Patch cleanup ===<br />
<br />
Patch diff submission works best when the author does a round of self-review of the actual patch--not just the code, but the physical diff file produced. [[Creating Clean Patches]] covers practices commonly used to produce better patch diff output.<br />
<br />
==Publishing Your Work==<br />
<br />
If you develop a feature over a longer period of time, you want to allow for intermediate review. The traditional approach to that has been emailing huge patches around. The more advanced approach that we want to try (see also Peter Eisentraut's [http://petereisentraut.blogspot.com/2008/02/on-patch-review.html blog entry]) is that you push your Git branches to a private area on <code>git.postgresql.org</code>, where others can pull your work, operate on it using the familiar Git tools, and perhaps even send you improvements as Git-formatted patches. See [http://git.postgresql.org/adm/help the git.postgresql.org site] for instructions on how to sign up, and how to use the repository. You may need to eventually create a patch via e-mail as part of officially [[Submitting a Patch]].<br />
<br />
==Pushing New Branches==<br />
<br />
If you create a new branch, generally for a new feature test, you'll need to push it to git.postgresql.org. <br />
<br />
git push origin new_feature_branch<br />
<br />
Note that, if you have a completely blank repository (such as a new repo for a pgfoundry project) then not even the branch "master" will exist and will need to be pushed.<br />
<br />
If you ''are'' working with the postgresql core code, do NOT casually make up your own branches and push them, without clearing it on the pgsql-hackers list first. Generally, you want to use your private repo area instead.<br />
<br />
==Removing a Branch==<br />
<br />
Once your feature has been committed to the PostgreSQL repository, you can usually remove your local feature branch. This works as follows:<br />
<br />
# switch to a different branch<br />
git checkout master<br />
git branch -D my-cool-feature<br />
<br />
==Working with the users/foo/postgres.git==<br />
<br />
One option while requesting a project at git.postgresql.org is to have a clone of the main postgresql repository.<br />
<br />
That is a very nice feature, but how do you sync with the upstream code?<br />
<br />
One method is to create a git clone in your own repository and add a new remote to handle the syncing :<br />
<br />
# clone your repos<br />
git clone ssh://git@git.postgresql.org/users/foo/postgres.git my_postgres<br />
<br />
# add a new remote<br />
git remote add pgmaster git://git.postgresql.org/git/postgresql.git<br />
<br />
# track some old versions<br />
git checkout -b REL8_3_STABLE origin/REL8_3_STABLE<br />
git checkout -b REL8_4_STABLE origin/REL8_4_STABLE<br />
<br />
# change the remote of master and our old versions tracked<br />
git config branch.REL8_3_STABLE.remote pgmaster<br />
git config branch.REL8_4_STABLE.remote pgmaster<br />
git config branch.master.remote pgmaster<br />
<br />
# pull from postgres official git for each branch<br />
# and finally push to origin<br />
git checkout master<br />
git pull<br />
git push origin<br />
git checkout REL8_3_STABLE<br />
git pull<br />
git push origin<br />
git checkout REL8_4_STABLE<br />
git pull<br />
git push origin<br />
<br />
<br />
This way PostgreSQL is easy to sync for each branch: pull from the official repository and push to your own.<br />
<br />
Create your own branch and work as usual. Users who have a local clone of the postgresql.git can add your branch in their repository and happily merge, just as you do.<br />
<br />
==Using the Web Interface==<br />
<br />
Try the web interface at http://git.postgresql.org/. It offers browsing, "blame" functionality, snapshots, and other advanced features, and it is much faster than CVSweb. Even if you don't care for Git or version control systems, you will probably enjoy the web interface.<br />
<br />
==RSS Feeds==<br />
<br />
The Git service provides RSS feeds that report about commits to the repositories. Some people may find this to be an alternative to subscribing to the pgsql-committers mailing list. The URL for the RSS feed from the PostgreSQL repository is http://git.postgresql.org/gitweb/?p=postgresql.git;a=rss. Other options are available; they can be found via the [http://git.postgresql.org/ home page] of the web interface.<br />
<br />
==PostgreSQL Style==<br />
<br />
The PostgreSQL source uses 4-character tabs, making the output from <code>git diff</code> look odd. You can fix that by putting this into your <code>.git/config</code> file:<br />
<br />
[core]<br />
pager = less -x4<br />
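If you prefer, the same setting can be made with a <code>git config</code> one-liner instead of editing the file by hand (a sketch; run it inside your PostgreSQL checkout, since without --global it applies to the current repository only):<br />

```shell
# Equivalent to adding "pager = less -x4" under [core] in .git/config.
git config core.pager 'less -x4'
git config core.pager   # prints: less -x4
```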
<br />
==Continuing the "rsync the CVSROOT" workflow==<br />
<br />
Aidan van Dyk {{messageLink|20090602162347.GF23972@yugib.highrise.ca|published a nice tutorial}} on how to keep several branches using a single copy of historical objects. This is roughly equivalent to keeping several checkouts of a rsync'ed copy of CVSROOT, which is what some committers were used to doing with CVS.<br />
<br />
<br />
[[Category:Git]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PgCon_2018_Developer_Meeting&diff=31542PgCon 2018 Developer Meeting2018-02-28T22:52:29Z<p>Adunstan: /* RSVPs */</p>
<hr />
<div>A meeting of the interested PostgreSQL developers is being planned for Tuesday 29 May, 2018 at the University of Ottawa, prior to pgCon 2018. In order to keep the numbers manageable, this meeting is by '''invitation only'''. Unfortunately it is quite possible that we've overlooked important individuals during the planning of the event - if you feel you fall into this category and would like to attend, please contact Dave Page (dpage@pgadmin.org).<br />
<br />
Please note that the attendee numbers have been kept low in order to keep the meeting more productive. Invitations have been sent only to developers that have been highly active on the database server over the 11/10 release cycles. We have not invited any contributors based on their contributions to related projects, or seniority in regional user groups or sponsoring companies.<br />
<br />
As at last year's event, an Unconference will be held on the Wednesday for in-depth discussion of technical topics.<br />
<br />
This is a PostgreSQL Community event.<br />
<br />
== Meeting Goals ==<br />
<br />
* Define the schedule for the 12.0 release cycle<br />
* Address any proposed timing, policy, or procedure issues<br />
* Address any proposed [http://en.wikipedia.org/wiki/Wicked_problem Wicked problems]<br />
<br />
== Time & Location ==<br />
<br />
The meeting will be:<br />
<br />
* 9:00AM to 12PM<br />
* TBD<br />
* University of Ottawa.<br />
<br />
Coffee, tea and snacks will be served starting at 8:45am. Lunch will be after the meeting.<br />
<br />
== RSVPs ==<br />
<br />
The following people have RSVPed to the meeting (in alphabetical order, by surname):<br />
<br />
* Joe Conway<br />
* Andrew Dunstan<br />
* Peter Eisentraut<br />
* Stephen Frost<br />
* Magnus Hagander<br />
* Tatsuo Ishii<br />
* Amit Kapila<br />
* Tom Lane<br />
* Noah Misch<br />
* Bruce Momjian<br />
* Thomas Munro<br />
* Michael Paquier<br />
* Teodor Sigaev<br />
* David Steele<br />
* Tomas Vondra<br />
<br />
== Agenda Items ==<br />
<br />
* 12.0 release and commitfest schedule (Dave)<br />
<br />
* ''Please add suggestions for agenda items here. (with your name)''<br />
<br />
==Agenda==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Time<br />
!Item<br />
!Presenter<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|09:00 - 09:30<br />
|Welcome and introductions<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|10:30 - 10:45<br />
|Coffee break<br />
|All<br />
<br />
|- <br />
|11:50 - 12:00<br />
|Any other business<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|12:00<br />
|Lunch<br />
|<br />
<br />
|}<br />
<br />
== Minutes ==<br />
<br />
=== Welcome and introductions ===<br />
<br />
Attendees:<br />
<br />
=== 12.0 release and commitfest schedule ===<br />
<br />
=== Any other business ===</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=29919PostgreSQL Buildfarm Howto2017-04-19T18:52:29Z<p>Adunstan: Adding command line option section, partially done</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download from [http://www.pgbuildfarm.org/downloads the buildfarm server]<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== All the run_build.pl command line options ===<br />
<br />
This list is complete as of release 4.19 of the client<br />
<br />
* --config=/pathto/file - location of the config file; the default is build-farm.conf<br />
* --nosend - don't send the results to the server<br />
* --nostatus - don't update the status files<br />
* --force - run the build even if it's not needed<br />
* --verbose[=n] - display progress information. Verbosity level 1 (the default if --verbose is given) shows a line for each step as it starts. Any higher number causes the logs from the various stages to be sent to the standard output<br />
* --quiet - suppress error output<br />
* --test - short for --nosend --nostatus --force --verbose<br />
* --find-typedefs - obsolete way to trigger typedef analysis. This should now be done via the config file<br />
* --help - print help text<br />
* --keepall - keep build and installation directories if there is a failure<br />
* [ to be continued ]<br />
<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
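As a concrete illustration, the mail_events section of the config file is a Perl hash with one list of recipient addresses per notification type. This is only a sketch (the key names below reflect the sample build-farm.conf as best recalled; check the sample file shipped with your client version):<br />

```perl
mail_events =>
  {
      all    => [],                    # every build
      fail   => [],                    # every build that fails
      change => ['me@example.com'],    # every status change
      green  => [],                    # changes to or from OK
  },
```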
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying its name on the command line,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, uncomment this line in the config file:<br />
# $conf{branches_to_build} = 'ALL'; # or [qw( HEAD RELx_y_STABLE etc )]<br />
and instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the<br />
options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n,<br />
in the $conf{branches_to_build} setting.<br />
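For instance, to build HEAD plus the two most recent stable branches, the setting might look like this (a sketch; confirm the exact spelling against the commented examples in your own config file):<br />

```perl
$conf{branches_to_build} = 'HEAD_PLUS_LATEST2';
```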
<br />
=== Register your new buildfarm member. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at http://www.pgbuildfarm.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
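The edited lines would then look something like this (both values are placeholders for whatever the approval email assigns you):<br />

```perl
animal => 'myanimal',   # placeholder: the name assigned at registration
secret => 'Xyzzy123',   # placeholder: the password from the approval email
```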
<br />
Please also join the buildfarm-members mailing list at<br />
https://www.postgresql.org/community/lists/subscribe/<br />
This is a low-traffic list for owners of buildfarm members.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm script (but not Postgres itself)<br />
on the tracker at [http://pgfoundry.org/tracker/?atid=238&group_id=1000040&func=browse pgFoundry]<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
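In the config file, those Cygwin settings might look like this (a sketch; the build_env stanza name and values are as described above - verify against your build-farm.conf):<br />

```perl
build_env => {
    MAX_CONNECTIONS => '3',        # limit concurrent regression test connections
    CYGWIN          => 'server',   # tell Cygwin to use cygserver
},
```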
<br />
==== MinGW/Msys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl installed, and a native Windows perl - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using the native perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name, and change the animal name and password, plus whatever else in the config will differ from the first member. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
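If you ever need to set the cache location explicitly, a hypothetical sketch would be to export ccache's standard CCACHE_DIR variable from the member's build_env stanza (the stanza placement and path shown here are assumptions, not the scripts' documented mechanism):<br />

```perl
build_env => {
    CCACHE_DIR => '/home/andrew/.ccache/myanimal',   # placeholder path, one per member
},
```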
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PgCon_2017_Developer_Meeting&diff=29616PgCon 2017 Developer Meeting2017-03-22T14:27:48Z<p>Adunstan: /* RSVPs */</p>
<hr />
<div>A meeting of the interested PostgreSQL developers is being planned for Tuesday 23 May, 2017 at the University of Ottawa, prior to pgCon 2017. In order to keep the numbers manageable, this meeting is by '''invitation only'''. Unfortunately it is quite possible that we've overlooked important individuals during the planning of the event - if you feel you fall into this category and would like to attend, please contact Dave Page (dpage@pgadmin.org).<br />
<br />
Please note that the attendee numbers have been kept low in order to keep the meeting more productive. Invitations have been sent only to developers that have been highly active on the database server over the 10.0/9.6 release cycles. We have not invited any contributors based on their contributions to related projects, or seniority in regional user groups or sponsoring companies.<br />
<br />
As at last year's event, an Unconference will be held on Wednesday for in-depth discussion of technical topics.<br />
<br />
This is a PostgreSQL Community event.<br />
<br />
== Meeting Goals ==<br />
<br />
* Define the schedule for the 11.0 release cycle<br />
* Address any proposed timing, policy, or procedure issues<br />
* Address any proposed [http://en.wikipedia.org/wiki/Wicked_problem Wicked problems]<br />
<br />
== Time & Location ==<br />
<br />
The meeting will be:<br />
<br />
* 9:00AM to 12PM<br />
* TBD...<br />
* University of Ottawa.<br />
<br />
Coffee, tea and snacks will be served starting at 8:45am. Lunch will be after the meeting.<br />
<br />
== RSVPs ==<br />
<br />
The following people have RSVPed to the meeting (in alphabetical order, by surname):<br />
<br />
* Oleg Bartunov<br />
* Joe Conway<br />
* Andrew Dunstan<br />
* Peter Eisentraut<br />
* Andres Freund<br />
* Stephen Frost<br />
* Peter Geoghegan<br />
* Robert Haas<br />
* Kyotaro Horiguchi<br />
* Amit Kapila<br />
* Haribabu Kommi<br />
* Alexander Korotkov<br />
* Tom Lane<br />
* Noah Misch<br />
* Bruce Momjian<br />
* Thomas Munro<br />
* Dave Page<br />
* Michael Paquier<br />
* Teodor Sigaev<br />
<br />
== Agenda Items ==<br />
<br />
* Upgrading PostgreSQL without a downtime. (Alexander)<br />
* Commit fest management (Michael)<br />
* Stable branch naming post-10 (Michael)<br />
* ''Please add suggestions for agenda items here. (with your name)''<br />
<br />
==Agenda==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Time<br />
!Item<br />
!Presenter<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|09:00 - 09:10<br />
|Welcome and introductions<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|10:30 - 10:45<br />
|Coffee break<br />
|All<br />
<br />
|- <br />
|11:30 - 12:00<br />
|Any other business<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|12:00<br />
|Lunch<br />
|<br />
|}<br />
<br />
== Minutes ==</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PgCon_2016_Developer_Meeting&diff=27332PgCon 2016 Developer Meeting2016-04-05T20:53:04Z<p>Adunstan: /* RSVPs */</p>
<hr />
<div>'''DRAFT'''<br />
<br />
A meeting of the interested PostgreSQL developers is being planned for Tuesday 17 May, 2016 at the University of Ottawa, prior to pgCon 2016. In order to keep the numbers manageable, this meeting is by '''invitation only'''. Unfortunately it is quite possible that we've overlooked important individuals during the planning of the event - if you feel you fall into this category and would like to attend, please contact Dave Page (dpage@pgadmin.org).<br />
<br />
Please note that the attendee numbers have been kept low in order to keep the meeting more productive. Invitations have been sent only to developers that have been highly active on the database server over the 9.6 release cycle. We have not invited any contributors based on their contributions to related projects, or seniority in regional user groups or sponsoring companies.<br />
<br />
As at last year's event, a Developer/Hacker Unconference will be held on Wednesday for in-depth discussion of technical topics.<br />
<br />
This is a PostgreSQL Community event.<br />
<br />
== Meeting Goals ==<br />
<br />
* Define the schedule for the 9.7 release cycle<br />
* Address any proposed timing, policy, or procedure issues<br />
* Address any proposed [http://en.wikipedia.org/wiki/Wicked_problem Wicked problems]<br />
<br />
== Time & Location ==<br />
<br />
The meeting will be:<br />
<br />
* 9:00AM to 12PM<br />
* Location TBC<br />
* University of Ottawa.<br />
<br />
Coffee, tea and snacks will be served starting at 8:45am. Lunch will be after the meeting.<br />
<br />
== RSVPs ==<br />
<br />
The following people have RSVPed to the meeting (in alphabetical order, by surname):<br />
<br />
* Oleg Bartunov<br />
* Joe Conway<br />
* Jeff Davis<br />
* Andrew Dunstan<br />
* Stephen Frost<br />
* Etsuro Fujita<br />
* Robert Haas<br />
* Magnus Hagander<br />
* Amit Kapila<br />
* Alexander Korotkov<br />
* Tom Lane<br />
* Noah Misch<br />
* Dave Page<br />
* Michael Paquier<br />
* Teodor Sigaev<br />
<br />
== Agenda Items ==<br />
<br />
Please add suggestions for agenda items here.<br />
<br />
* (Major) Contributors. The lists of contributors and major contributors on the web site are not always up to speed with who is currently contributing. I think these lists should be updated (both as to adds and removals) more aggressively. The list of invitees to the developer meeting tends to stagnate somewhat. Can we come up with a better way to keep this information up to date? [Robert Haas]<br />
<br />
* Core Team. Should core team members continue to hold indefinite tenure and be chosen solely by the existing core team? Is the current membership of the core team the best set of people for its current purposes? It started as a release group, but that function has somewhat been taken over by pgsql-release. [Robert Haas]<br />
<br />
==Agenda==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Time<br />
!Item<br />
!Presenter<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|09:00 - 09:10<br />
|Welcome and introductions<br />
|Dave Page<br />
<br />
|- <br />
|TBD<br />
|TBD<br />
|TBD<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|10:30 - 10:45<br />
|Coffee break<br />
|All<br />
<br />
|- <br />
|TBD<br />
|TBD<br />
|TBD<br />
<br />
|- <br />
|11:45 - 12:00<br />
|Any other business<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|12:00<br />
|Finish<br />
|<br />
|}</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=26743PostgreSQL Buildfarm Howto2015-12-18T00:18:01Z<p>Adunstan: This and previous edit added Alerts and Notifications section</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download the software from [http://www.pgbuildfarm.org/downloads the buildfarm server], <br />
then unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700Mb during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1Gb by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100Mb per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
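As a sketch of what such adjustments can look like (the flags are ordinary Postgres configure options; check the commented examples in the sample config for the exact syntax it expects):<br />

```perl
config_opts => [
    '--enable-cassert',   # extra internal consistency checking
    '--enable-debug',     # include debugging symbols
    '--with-perl',        # build the PL/Perl procedural language
],
```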
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we are telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
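A corresponding mail_events stanza might look like this (a sketch only - the key names shown follow the sample config file, so verify them against your own build-farm.conf):<br />

```perl
mail_events => {
    all    => [],                    # notify for every build
    fail   => [],                    # notify for every failed build
    change => ['me@example.com'],    # every status change (address is a placeholder)
    green  => [],                    # changes to or from OK (green)
},
```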
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you no longer get any errors, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use it (see below).<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time adding to the "build_env" stanza a <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
 43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== What if you can't use git for some reason? ===<br />
You can still run in CVS mode against a git-cvs gateway. There is one available for the master (aka HEAD) and REL9_0_STABLE branches.<br />
<br />
You will need to be on release 4.2 or later of the buildfarm client, and have the following settings in your config file:<br />
<br />
 scm => 'cvs',<br />
 scmrepo => ':pserver:anonymous@git.postgresql.org:/postgresql.git',<br />
 use_git_cvsserver => 'true',<br />
<br />
You will also need to do this, once:<br />
<br />
 cvs -d :pserver:anonymous@git.postgresql.org:/postgresql.git login<br />
<br />
An empty password will do.<br />
<br />
Using this mode of running is a fall-back. If you can use git, you should.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying its name on the command line,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, uncomment this line in the config file:<br />
# $conf{branches_to_build} = 'ALL'; # or [qw( HEAD RELx_y_STABLE etc )]<br />
and instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the<br />
options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n,<br />
in the $conf{branches_to_build} setting.<br />
<br />
=== Register your new buildfarm member. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at http://www.pgbuildfarm.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the pgbuildfarm-members mailing list at<br />
http://pgfoundry.org/mail/?group_id=1000040<br />
This is a low-traffic list for owners of buildfarm members.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm script (but not Postgres itself)<br />
on the tracker at [http://pgfoundry.org/tracker/?atid=238&group_id=1000040&func=browse pgFoundry]<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
<br />
==== MinGW/Msys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl installed, and a native Windows perl - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using the native perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specially for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name, and change the animal name and password, plus whatever else in the config will differ from the first member. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example, the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=26742PostgreSQL Buildfarm Howto2015-12-18T00:09:51Z<p>Adunstan: </p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download from [http://www.pgbuildfarm.org/downloads the buildfarm server].<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is simply to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
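Whichever way the mirror is maintained, the mechanics are the same: a bare --mirror clone picks up new upstream commits on each fetch. The sketch below demonstrates this with a throwaway local repository standing in for the real one (all paths are temporary, created just for the demonstration):<br />

```shell
set -e
tmp=$(mktemp -d)

# A tiny stand-in for git://git.postgresql.org/git/postgresql.git:
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=bf@example.com -c user.name=bf \
    commit -q --allow-empty -m "first commit"

# Create the bare mirror, as the howto does for the real repository:
git clone -q --mirror "$tmp/upstream" "$tmp/pgsql-base.git"

# New upstream commits appear in the mirror only after a fetch -- which is
# exactly what the crontab entry provides:
git -C "$tmp/upstream" -c user.email=bf@example.com -c user.name=bf \
    commit -q --allow-empty -m "second commit"
git -C "$tmp/pgsql-base.git" fetch -q

latest=$(git -C "$tmp/pgsql-base.git" log --format=%s -1)
echo "$latest"   # "second commit"
rm -rf "$tmp"
```

The same sequence applies unchanged to a real mirror; only the repository URL and paths differ.<br />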
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700MB during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1GB by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100MB per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Alerts and Status Notifications ===<br />
<br />
Alerts happen when we haven't heard from your buildfarm member for a while, and suggest that maybe something is wrong. Status notifications happen when we have heard from your buildfarm member, and we're telling you what happened. Both of them happen via email. Alerts are sent to the owner's registered email address. By default, none are sent. You can configure when and how often they are sent in the alerts section of the config file. Status notifications are sent to the addresses configured in the mail_events section of the config file. You can choose four different sorts of notification:<br />
* for every build<br />
* for every build that fails<br />
* for every build that changes the status<br />
* for every build that changes the status if the change is to or from OK (green) <br />
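These choices correspond to four recipient lists in the mail_events stanza of the config file. A sketch, with a placeholder address (the layout follows the sample config shipped with recent clients; check the one that came with yours):<br />

```perl
mail_events => {
    all    => [],                      # every build
    fail   => ['me@example.com'],      # every build that fails
    change => [],                      # every build that changes the status
    green  => [],                      # status changes to or from OK
},
```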
<br />
=== Change the shebang line in the scripts. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in perl scripts so it is correct. <br />
This is the ONLY line in those files you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
 43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== What if you can't use git for some reason? ===<br />
You can still run in CVS mode against a git-cvs gateway. There is one available for the master (aka HEAD) and REL9_0_STABLE branches.<br />
<br />
You will need to be on release 4.2 or later of the buildfarm client, and have the following settings in your config file:<br />
<br />
 scm => 'cvs',<br />
 scmrepo => ':pserver:anonymous@git.postgresql.org:/postgresql.git',<br />
 use_git_cvsserver => 'true',<br />
<br />
You will also need to do this, once:<br />
<br />
 cvs -d :pserver:anonymous@git.postgresql.org:/postgresql.git login<br />
<br />
An empty password will do.<br />
<br />
Using this mode of running is a fall-back. If you can use git you should.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the commandline,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, uncomment this line in the config file:<br />
# $conf{branches_to_build} = 'ALL'; # or [qw( HEAD RELx_y_STABLE etc )]<br />
and instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the<br />
options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n,<br />
in the $conf{branches_to_build} setting.<br />
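For instance, to build HEAD plus the two latest stable branches, the line would become:<br />

```perl
$conf{branches_to_build} = 'HEAD_PLUS_LATEST2';
```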
<br />
=== Register your new buildfarm member. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at http://www.pgbuildfarm.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the pgbuildfarm-members mailing list at<br />
http://pgfoundry.org/mail/?group_id=1000040<br />
This is a low-traffic list for owners of buildfarm members.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm script (but not Postgres itself)<br />
on the tracker at [http://pgfoundry.org/tracker/?atid=238&group_id=1000040&func=browse pgFoundry]<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
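Expressed as a config fragment, those two settings go into the build_env stanza (any other entries you already have there are omitted from this sketch):<br />

```perl
build_env => {
    CYGWIN          => 'server',
    MAX_CONNECTIONS => '3',
},
```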
<br />
==== MinGW/MSys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl and a native Windows perl installed - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using native Perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the Native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specifically for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever in the config will be different from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_Buildfarm_Howto&diff=26740PostgreSQL Buildfarm Howto2015-12-17T23:52:05Z<p>Adunstan: remove obsolete section on how to use CVS</p>
<hr />
<div>PostgreSQL BuildFarm is a distributed build system designed to detect <br />
build failures on a large collection of platforms and configurations. <br />
This software is written in Perl. If you're not comfortable with Perl<br />
then you possibly don't want to run this, even though the only adjustment<br />
you should ever need is to the config file (which is also Perl).<br />
<br />
=== Get the Software === <br />
Download from [http://www.pgbuildfarm.org/downloads the buildfarm server].<br />
Unpack it and put it somewhere. You can put the config file in a different <br />
place from the run_build.pl script if you want to, but the <br />
simplest thing is to put it in the same place. Decide which user you will run <br />
the script as - it must be a user who can run the PostgreSQL server programs (on Unix<br />
that means it must *not* run as root). Do everything else here as that user.<br />
<br />
=== Other Prerequisites ===<br />
<br />
; Git<br />
: Must be version 1.6 or later.<br />
<br />
; All tools required for building Postgres from a Git checkout<br />
: GNU make, bison, flex, etc<br />
: See [http://www.postgresql.org/docs/devel/static/install-requirements.html the Postgres documentation]<br />
<br />
; ccache<br />
: This isn't ''absolutely'' necessary, but it greatly reduces the amount of CPU your buildfarm member will consume ... at the price of more disk space usage<br />
<br />
=== Choose a setup for a base git mirror that all your branches will pull from. ===<br />
Most buildfarm members run on more than one branch, and if you do it's good practice to set up<br />
a mirror on the buildfarm machine and then just clone that for each branch. The official publicly available git repository is at<br />
* git://git.postgresql.org/git/postgresql.git<br />
and there is a mirror at <br />
* git://github.com/postgres/postgres.git<br />
Either should be suitable for cloning.<br />
<br />
The simplest way to set up a mirror is simply to have the buildfarm script create and maintain it for you. <br />
If you do that, the mirror will be updated at the start of a run when it checks to see if any changes have occurred that might<br />
require a new build. To do that, all you need to do is set the following two options in your config file:<br />
git_keep_mirror => 'true',<br />
git_ignore_mirror_failure => 'true',<br />
<br />
If you would rather clone the github mirror for your local mirror instead of the authoritative community repo (doing so can keep load off the community server, which is a good thing), then set the config variable to point to it like this:<br />
scmrepo => 'git://github.com/postgres/postgres.git',<br />
<br />
The mirror will be placed in your build root, above the branch directories.<br />
<br />
You can also opt to create and maintain a git mirror yourself, something like this:<br />
git clone --mirror git://git.postgresql.org/git/postgresql.git pgsql-base.git<br />
When that is done, add an entry to your crontab to keep it up to date, something like:<br />
20,50 * * * * cd /path/to/pgsql-base.git && git fetch -q<br />
<br />
One downside of doing this is that your mirror will only be as up to date as the last time you ran the cron update.<br />
<br />
To have your buildfarm installation use a local mirror you maintain yourself, set the config variable:<br />
scmrepo => '/path/to/pgsql-base.git',<br />
Of course, in this case you don't set the git_keep_mirror option.<br />
<br />
=== Create a directory where builds will run. === <br />
This should be dedicated to<br />
the use of the build farm. Make sure there's plenty of space - on my<br />
machine each branch can use up to about 700MB during a build. You can use the<br />
directory where the script lives, or a subdirectory of it, or a completely <br />
different directory.<br />
<br />
If you're using ccache, the cache directory can use up to 1GB by default.<br />
You can reduce that if you like (see the ccache documentation), but it's<br />
good to allow at least 100MB per active branch.<br />
<br />
=== Edit the build-farm.conf file ===<br />
<br />
Notable things you probably need to set include the following:<br />
<br />
==== %conf ====<br />
<br />
; scmrepo<br />
: Set this to indicate the path to your Git mirror<br />
; scm_url<br />
: If you are not using the Community git repository, or want to point the changesets at a different server, set this URL to indicate where to find a given Git commit on the web. For instance, for the github mirror, this value should be: <i>&#x68;ttp://github.com/postgres/postgres.git/commit/</i> - don't forget the trailing "/".<br />
<br />
Once you have registered your Buildfarm animal you will need to set these, but for initial testing just leave them as-is:<br />
<br />
; animal<br />
: This will need to be set to the animal name you were given by the Buildfarm coordinators<br />
; secret<br />
: This must be the password indicated by the Buildfarm coordinators<br />
<br />
Adjust other config variables "make", "config_opts", and (if you don't use ccache) "config_env" to suit your environment, and to choose which optional Postgres configuration options you want to build with. <br />
<br />
You should not need to adjust other variables.<br />
<br />
You may verify that you didn't screw things up too badly by running "perl -cw build-farm.conf". That verifies that the configuration is still legitimate Perl.<br />
<br />
=== Change the shebang line in the run_build script. ===<br />
If the path to your perl <br />
installation isn't "/usr/bin/perl", edit the #! line in run_build.pl so it is correct. <br />
This is the ONLY line in that file you should ever need to edit. <br />
<br />
=== Check that required perl modules are present. ===<br />
Run "perl -cw run_build.pl". <br />
If you get errors about missing perl modules you will need to install them. <br />
Most of the required modules are standard modules in any perl<br />
distribution. The rest are all standard CPAN modules, and available either from there<br />
or from your OS distribution. When you don't get an error any more, run the same test on<br />
run_web_txn.pl, and also on run_branches.pl if you plan to use that (see below).<br />
When all is clear you are ready to start testing.<br />
<br />
=== Run in test mode. ===<br />
With a PATH that matches what you will have when running from cron, run<br />
the script in no-send, no-status, verbose mode. Something like this:<br />
./run_build.pl --nosend --nostatus --verbose<br />
and watch the fun begin. If this results in failures because it can't<br />
find some executables (especially gmake and git), you might need to change <br />
the config file again, this time changing the "build_env" with another <br />
setting something like:<br />
PATH => "/usr/local/bin:$ENV{PATH}",<br />
Also, if you put the config file somewhere else, you will need to use <br />
the --config=/path/to/build-farm.conf option.<br />
<br />
If trying to diagnose problems, interesting summary information may be found in the file '''web-txn.data''', which is found in a build-specific directory, of the form $build_root/$CURRENT_BRANCH/$animal.lastrun-logs/web-txn.data<br />
<br />
If particular steps of a build failed, logs for those steps may be found in that same directory.<br />
<br />
=== Test running from cron === <br />
When you have that running, it's time to try with cron. <br />
Put a line in your crontab that looks something like this:<br />
43 * * * * cd /location/of/run_build.pl/ && ./run_build.pl --nosend --verbose<br />
Again, add the --config option if needed. Notice that this time we didn't <br />
specify --nostatus. That means that (after the first run) the script won't <br />
do any build work unless the Git repo has changed. Check that your cron <br />
job runs (it should email you the results, unless you tell it to send them<br />
elsewhere).<br />
<br />
You can, and probably should, drop the --verbose option once things are<br />
working.<br />
<br />
The frequency with which the cron job is launched is up to you, though we do<br />
suggest that active branches get built at least once a day. The build script will<br />
automatically exit if it finds a previous invocation still running, so you do not<br />
need to worry about scheduling jobs too close together. Think of the cron<br />
frequency as how often the buildfarm animal will wake up to see if there have<br />
been changes in the Git repo.<br />
<br />
=== Choose which branches you want to build === <br />
By default run_build.pl builds the HEAD branch. If you want to<br />
build some other branch, you can do so by specifying the name on the commandline,<br />
e.g. <br />
run_build.pl REL9_4_STABLE<br />
<br />
The old way to build multiple branches was to create a cron job for each<br />
active branch, along the lines of:<br />
<br />
6 * * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend<br />
30 4 * * * cd /home/andrew/buildfarm && ./run_build.pl --nosend REL9_4_STABLE<br />
<br />
But there's a better way ...<br />
<br />
=== Using run_branches.pl ===<br />
There is a wrapper script that makes running multiple branches much easier. To build all the branches that are currently being maintained by the project, uncomment this line in the config file:<br />
# $conf{branches_to_build} = 'ALL'; # or [qw( HEAD RELx_y_STABLE etc )]<br />
and instead of running run_build.pl, use run_branches.pl with the --run-all option. This script accepts all the<br />
options that run_build.pl does, and passes them through. So now your crontab could just look like this:<br />
6 * * * * cd /home/andrew/buildfarm && ./run_branches.pl --run-all<br />
One of the advantages of this approach is that you don't need to manually retire a branch when the Postgres project ends support for it, nor to add one when there's a new stable branch. The script contacts the server to get a list of branches that we're currently interested in, and then builds them. This is now the recommended method of running a buildfarm member.<br />
<br />
If you don't want to build every one of the back branches, you can also use HEAD_PLUS_LATEST, or HEAD_PLUS_LATESTn for any n,<br />
in the $conf{branches_to_build} setting.<br />
<br />
=== Register your new buildfarm member. === <br />
Once this is all running happily, you can register to upload your<br />
results to the central server. Registration can be done on the buildfarm server <br />
at http://www.pgbuildfarm.org/cgi-bin/register-form.pl. When you receive your approval by <br />
email, you will edit the "animal" and "secret" lines in your config file, <br />
remove the --nosend flags, and you are done.<br />
<br />
Please also join the pgbuildfarm-members mailing list at<br />
http://pgfoundry.org/mail/?group_id=1000040<br />
This is a low-traffic list for owners of buildfarm members.<br />
<br />
=== Bugs === <br />
Please file bug reports concerning the buildfarm script (but not Postgres itself)<br />
on the tracker at [http://pgfoundry.org/tracker/?atid=238&group_id=1000040&func=browse pgFoundry]<br />
<br />
=== Running on Windows ===<br />
There are three build environments for Windows: Cygwin, MinGW/MSys, and Microsoft Visual C++. The buildfarm can run with each of these environments. This section discusses requirements for the buildfarm, rather than requirements for building on Windows, which are covered elsewhere.<br />
<br />
==== Cygwin ==== <br />
There is almost nothing extra to be done for Cygwin. You need to make sure that cygserver is running, and you should set MAX_CONNECTIONS=>3 and CYGWIN=>'server' in the build_env stanza of the buildfarm config. Other than that it should be just like running on Unix.<br />
<br />
==== MinGW/MSys ====<br />
For MinGW/MSys, you need both the MSys DTK version of perl and a native Windows perl installed - I have only tested with ActiveState perl, which I have found to be rock solid. You need to run the main buildfarm script using the MSys DTK perl, and the web transaction script using native Perl. That means you need to change the first line of the run_web_txn.pl script so it reads something like:<br />
#!/c/perl/bin/perl<br />
You should make sure that the PATH is set in your config file to put the Native perl ahead of the MSys DTK perl.<br />
It's a good idea to have a runbf.bat file that you can call from the Windows scheduler. Mine looks like this:<br />
@echo off<br />
setlocal<br />
c:<br />
cd \msys\1.0\bin<br />
c:\msys\1.0\bin\sh.exe --login -c "cd bf && ./run_build.pl --verbose %1 >> bftask.out 2>&1"<br />
Set up a non-privileged Windows user to run this job as, and set up the buildfarm as above as that user. Then create scheduler jobs that call runbf.bat with an optional branch name argument.<br />
<br />
==== Microsoft Visual C++ ====<br />
For MSVC you need to edit the config file more extensively. Make sure the 'using_msvc' setting is on. Also, there is a section of the file specifically for MSVC builds. As with MinGW, you need a native Windows perl installed. It appears that Windows Git does not like to clone local repositories specified with forward slashes (this is pretty horrible - almost all Windows programs are quite happy with forward slashes). Make sure you specify the repository using backslashes or weird things will happen. Again, you will need a runbf.bat file for the Windows scheduler. Mine looks like this:<br />
@echo off<br />
c:<br />
cd \prog\bf<br />
c:\perl\bin\perl run_build.pl --verbose %1 %2 %3 %4 >> bfout.txt<br />
You will also need a tar command capable of bundling up the logs to send to the server. The best one I have found for use on Windows is bsdtar, part of the libarchive collection at http://sourceforge.net/projects/gnuwin32/files/. This is also a good place to get many of the libraries you need for optional pieces of MSVC and MinGW builds.<br />
<br />
=== Running multiple buildfarm members on a single machine ===<br />
<br />
Sometimes you might want to run more than one buildfarm member on a single machine. Possible reasons for doing this include testing different compilers, and running with different build options. For example, on one FreeBSD machine I have two members; one does a normal build and the other does a build with -DCLOBBER_CACHE_ALWAYS set. Or on a Windows machine one might want to test both the 32 bit and 64 bit mingw-w64 compilers.<br />
<br />
The simplest way to do this is to do it all in the same location. Get one member working, then copy the config file to something with the other member's name and change the animal name and password, and whatever in the config will be different from the first one. The members can share a git mirror and build root. There are locking provisions that prevent instances of the buildfarm scripts from tripping over each other. If you are using ccache, you should ensure that each member gets a separate ccache location. The best way to do that is to put the member name into the ccache directory name (which is the default as of recent releases of the buildfarm scripts).<br />
<br />
=== Tips and Tricks ===<br />
<br />
You can force a single run of your animal by putting a file called <animal>.force-one-run in the <buildroot>/<branch> directory. For example the following will force a build on all the stable branches of my animal crake:<br />
cd root<br />
for f in REL* ; do<br />
touch $f/crake.force-one-run<br />
done<br />
When the run is done this file will be removed automatically. <br />
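<br />
The steps above can be simulated end to end like this (the buildroot and branch names are stand-ins; in practice you would run the loop inside your actual buildroot):<br />

```shell
# Simulated buildroot with two stable branches, showing where the
# force-one-run trigger files go. "root" and the branch names are
# placeholders for your real buildroot layout; "crake" is the animal.
mkdir -p root/REL9_4_STABLE root/REL9_5_STABLE
for f in root/REL* ; do
  touch "$f/crake.force-one-run"
done
ls root/REL*/crake.force-one-run
```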
<br />
[[Category:Howto]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Building_With_MinGW&diff=25922Building With MinGW2015-09-29T01:54:11Z<p>Adunstan: /* Set up */</p>
<hr />
<div>Most of the people who do development work on the PostgreSQL system do so in a Unix-like environment, so that is where most of the testing is also done. Committed patches are automatically tested on Microsoft Windows by the Build Farm, but the turnaround time on that testing is too slow to be convenient for people who do not work in a Windows environment but are touching parts of the code which have cross-platform implications, or for issues which are not explored by the standard build and regression tests but require custom testing. Also, during commitfests patches that relate to Windows often suffer from a shortage of peer reviewers: we hope that these instructions for building on Windows with MinGW can help with that.<br />
<br />
Production releases of PostgreSQL for Windows are generally built using Microsoft's commercial compilers, but these are often not cost-free and can be very hard to use for people more accustomed to a Linux environment. These instructions are intended to help such developers test their code on Windows without much cost and without having to turn themselves into Windows developers.<br />
<br />
== Set up == <br />
<br />
If you do not have access to a Microsoft Windows environment, you can rent one from Amazon Web Services. You will need some kind of graphical environment, such as X11, to connect to Windows over RDP. Most modern Linux systems have such an environment readily available, along with the "rdesktop" program for connecting to Windows.<br />
<br />
A t1.micro spot instance has a current price of $0.006 / hour (2013/02/23) and may be free if you have been an AWS customer for less than a year. But it will be slow! Also consider a t2.micro, which may be faster if it's available. For a little more, m3.medium is a good choice. As of 2015/09/28 the spot price for this is just under $0.06 per hour.<br />
<br />
If you have a Windows system of your own and are willing to install MinGW on it, then the steps of creating and connecting to an Amazon instance can be skipped. If you do run this locally and are not logged on as the administrator, then you can also skip the steps where you create an unprivileged user and run as that user.<br />
<br />
* Create an Amazon instance of Windows_Server-2008-SP2-English-64Bit-Base-2012.12.12 (ami-554ac83c), <br />
or Windows_Server-2012-R2_RTM-English-64Bit-Base-2015.09.09 (ami-c9cea0ac).<br />
* make sure you have enabled the RDP port (3389) for the security group in which you launch the instance.<br />
* get the credentials and log in using<br />
o rdesktop -g 80% -u Administrator -p 'password' amazon-hostname<br />
* turn off annoying IE security enhancements, and fire up IE<br />
* go to http://sourceforge.net/projects/mingw/files/Installer and download the latest mingw-get-setup.exe<br />
* run this - make sure to select the Msys and the developer toolkit in addition to the Mingw base.<br />
* navigate in explorer or a command window to C:\Mingw\msys\1.0 and run msys.bat<br />
* run "df" to make sure that the windows Mingw directory is mounted on the virtual /mingw directory. If it's not, edit /etc/fstab with vim and add this line:<br />
c:/mingw /mingw<br />
* run this command to install some useful packages:<br />
o mingw-get install msys-wget msys-rxvt msys-unzip<br />
* close that window<br />
* open a normal command window and run the following to create an unprivileged user and open an msys window as that user:<br />
o net user pgrunner SomePw1234 /add<br />
o runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
* if you want to do 64-bit builds, you will need the compiler from the mingw-w64 project (a separate project from the mingw project). Go to <br />
"http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/installer" <br />
and download and run the latest installer (mingw-w64-install.exe). Run it and choose the following options:<br />
o architecture: x86_64<br />
o threads: win32<br />
o location: something like "C:\mingw-w64\x86_64-5.2.0-win32-seh-rt_v4-rev0"<br />
Then run msys.bat again as the Administrator and edit /etc/fstab and add a line like this:<br />
c:/mingw-w64/x86_64-5.2.0-win32-seh-rt_v4-rev0/mingw64 /mingw64<br />
<br />
The above steps can take a while and download several hundred MB of data, so if you are using an Amazon instance it may be worthwhile to arrange to store this set up for future work so that it does not need to be repeated. (Could someone add instructions on how to do that?)<br />
<br />
== Build from a tarball ==<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o wget http://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.gz<br />
o tar -z -xf postgresql-snapshot.tar.gz<br />
o cd ~/postgresql-9.4devel<br />
<br />
* For a 64-bit build, do:<br />
o export PATH=/mingw64/bin:$PATH<br />
o ./configure --host=x86_64-w64-mingw32 --without-zlib && make && make check<br />
<br />
* For a 32-bit build do instead:<br />
o ./configure --without-zlib && make && make check<br />
<br />
Make some coffee and do the crossword or read War and Peace - this can take a while.<br />
<br />
== Installing Git ==<br />
<br />
If you want to build from the git repo instead of a tarball snapshot, which you will need to do if you're doing development, you need to install a git client.<br />
<br />
Open https://git-for-windows.github.io/ and grab the latest version. As of the time of writing this is called Git-2.5.3-32-bit.exe or Git-2.5.3-64-bit.exe. Run this installer. Choose an install path that's easy to manage rather than the default, such as "c:\prog\git". You might get permissions errors. If so, try running again, or else try running as the Administrator.<br />
Uncheck all the options unless you think you will need them - you won't need them for command line use from Msys. Don't set up a Start Menu folder, unless you want one - Msys won't need that either. Select "Run git from Windows command prompt" and "Checkout as-is, commit as-is."<br />
<br />
After this git should be in your path on Msys, and just work. Verify by running "git --help" in an Msys window started after you installed git.<br />
<br />
== Build from a git repo ==<br />
<br />
We'll use github's mirror here to check out postgres, to avoid overloading the master repo.<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o git clone https://github.com/postgres/postgres.git<br />
o cd postgres<br />
<br />
Then follow the same 64-bit or 32-bit build instructions as for building from a tarball.<br />
<br />
== Installing ==<br />
<br />
After you have built, you can install by running<br />
<br />
make install<br />
<br />
Following this you need to copy the libpq dll from the installation lib directory to the installation bin directory. This lets pg_ctl and psql and other client programs work.<br />
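<br />
The copy step looks like this, sketched against a mocked-up install tree (substitute the prefix your "make install" actually used; the empty file stands in for the real DLL):<br />

```shell
# Mock install tree standing in for a real MinGW installation prefix.
PREFIX=./pg-install-demo
mkdir -p "$PREFIX/lib" "$PREFIX/bin"
: > "$PREFIX/lib/libpq.dll"      # stand-in for the real libpq DLL
# The actual fix: put the DLL where the client executables can load it.
cp "$PREFIX/lib/libpq.dll" "$PREFIX/bin/"
ls "$PREFIX/bin"
```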
<br />
== Using psql interactively ==<br />
<br />
The psql client doesn't work well in the rxvt terminal emulator, and appears to hang. Instead you can open a non-rxvt shell by omitting the "--rxvt" flag when opening a session, and psql works as expected.<br />
It also works in the normal Windows command window, and in the Windows Power Shell window. None of these builds have readline installed, so you don't get psql history, command completion and so on. If you want to do lots of work <br />
with psql on Windows, the best way might be to build and run psql under Cygwin, where readline is fully supported. That's what I do.<br />
<br />
== Alternatives ==<br />
<br />
=== Cross Compiling ===<br />
<br />
An alternative to needing a Windows box at all is to cross-compile postgres from a Linux box, e.g. on Ubuntu:<br />
<br />
# skip all the above steps, just do this:<br />
$ sudo apt-get install mingw-w64<br />
# download the source, cd into it, same instructions as above<br />
$ ./configure --host=i686-w64-mingw32 --without-zlib --prefix=... # 32 bit<br />
$ ./configure --host=x86_64-w64-mingw32 --without-zlib --prefix=... # 64 bit<br />
<br />
Then you can test it using Wine (or copy it to a Windows box and run it natively, of course).<br />
<br />
$ sudo apt-get install wine<br />
$ wine /full/path/to/psql.exe # etc. You can follow [[First_steps]] after it's installed (actually, for any of the build mechanisms).<br />
<br />
=== Cygwin ===<br />
<br />
You can install Cygwin and build "just like you would in Linux" using Cygwin's packages (gcc etc.) from inside a Windows box.<br />
<br />
=== Virtualbox ===<br />
<br />
You can also run Windows inside a VirtualBox VM inside your Linux box.</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Building_With_MinGW&diff=25917Building With MinGW2015-09-28T18:40:46Z<p>Adunstan: /* Set up */</p>
<hr />
<div>Most of the people who do development work on the PostgreSQL system do so in a Unix-like environment, so that is where most of the testing is also done. Committed patches are automatically tested on Microsoft Windows by the Build Farm, but the turnaround time on that testing is too slow to be convenient for people who do not work in a Windows environment but are touching parts of the code which have cross-platform implications, or for issues which are not explored by the standard build and regression tests but require custom testing. Also, during commitfests patches that relate to Windows often suffer from a shortage of peer reviewers: we hope that these instructions for building on Windows with MinGW can help with that.<br />
<br />
Production releases of PostgreSQL for Windows are generally built using Microsoft's commercial compilers, but these are often not cost-free and can be very hard to use for people more accustomed to a Linux environment. These instructions are intended to help such developers test their code on Windows without much cost and without having to turn themselves into Windows developers.<br />
<br />
== Set up == <br />
<br />
If you do not have access to a Microsoft Windows environment, you can rent one from Amazon Web Services. You will need some kind of graphical environment, such as X11, to connect to Windows over RDP. Most modern Linux systems have such an environment readily available, along with the "rdesktop" program for connecting to Windows.<br />
<br />
A t1.micro spot instance has a current price of $0.006 / hour (2013/02/23) and may be free if you have been an AWS customer for less than a year. But it will be slow! Also consider a t2.micro, which may be faster.<br />
<br />
If you have a Windows system of your own and are willing to install MinGW on it, then the steps of creating and connecting to an Amazon instance can be skipped. If you do run this locally and are not logged on as the administrator, then you can also skip the steps where you create an unprivileged user and run as that user.<br />
<br />
* Create an Amazon instance of Windows_Server-2008-SP2-English-64Bit-Base-2012.12.12 (ami-554ac83c), or Windows_Server-2012-R2_RTM-English-64Bit-Base-2015.09.09 (ami-c9cea0ac).<br />
* make sure you have enabled the RDP port (3389) for the security group in which you launch the instance.<br />
* get the credentials and log in using<br />
o rdesktop -g 80% -u Administrator -p 'password' amazon-hostname<br />
* turn off annoying IE security enhancements, and fire up IE<br />
* go to http://sourceforge.net/projects/mingw/files/Installer and download latest mingw-get-setup.exe<br />
* run this - make sure to select the Msys and the developer toolkit in addition to the Mingw base.<br />
* navigate in explorer or a command window to C:\Mingw\msys\1.0 and run msys.bat<br />
* run "df" to make sure that the windows Mingw directory is mounted on the virtual /mingw directory. If it's not, edit /etc/fstab with vim and add this line:<br />
c:/mingw /mingw<br />
* run this command to install some useful packages:<br />
o mingw-get install msys-wget msys-rxvt msys-unzip<br />
* close that window<br />
* open a normal command window and run the following to create an unprivileged user and open an msys window as that user:<br />
o net user pgrunner SomePw1234 /add<br />
o runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
* if you want to do 64-bit builds, you will need the compiler from the mingw-w64 project (a separate project from the mingw project). Go to <br />
"http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/installer" and download and run the latest installer (mingw-w64-install.exe). Run it and choose the following options:<br />
o architecture: x86_64<br />
o threads: win32<br />
o location: something like "C:\mingw-w64\x86_64-5.2.0-win32-seh-rt_v4-rev0"<br />
Then run msys.bat again as the Administrator and edit /etc/fstab and add a line like this:<br />
c:/mingw-w64/x86_64-5.2.0-win32-seh-rt_v4-rev0/mingw64 /mingw64<br />
<br />
The above steps can take a while and download several hundred MB of data, so if you are using an Amazon instance it may be worthwhile to arrange to store this set up for future work so that it does not need to be repeated. (Could someone add instructions on how to do that?)<br />
<br />
== Build from a tarball ==<br />
<br />
* Again open rxvt window (if not already open) runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o wget http://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.gz<br />
o tar -z -xf postgresql-snapshot.tar.gz<br />
o cd ~/postgresql-9.4devel<br />
<br />
* For a 64-bit build then do:<br />
o export PATH=/mingw64/bin:$PATH<br />
o ./configure --host=x86_64-w64-mingw32 --without-zlib && make && make check<br />
<br />
* For a 32-bit build do instead:<br />
o ./configure --without-zlib && make && make check<br />
<br />
Make some coffee and do the crossword or read War and Peace - this can take a while.<br />
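To avoid mistyping the flags, the two configure invocations above can be wrapped in a tiny helper. This is just a sketch: the pg_configure_cmd function is a convenience invented here, not part of the PostgreSQL build system, and for the 64-bit case you still need /mingw64/bin on your PATH first, as shown above.<br />

```shell
# Echo the configure command line used on this page for a 32- or
# 64-bit MinGW build; pass "32" or "64" as the only argument.
pg_configure_cmd() {
    case "$1" in
        64) echo "./configure --host=x86_64-w64-mingw32 --without-zlib" ;;
        32) echo "./configure --without-zlib" ;;
        *)  echo "usage: pg_configure_cmd 32|64" >&2; return 1 ;;
    esac
}

# Example: eval "$(pg_configure_cmd 64)" && make && make check
```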
<br />
== Installing Git ==<br />
<br />
If you want to build from the git repo instead of a tarball snapshot, which you will need to do if you're doing development, you need to install a git client.<br />
<br />
Open https://git-for-windows.github.io/ and grab the latest version. As of the time of writing this is called Git-2.5.3-32-bit.exe or Git-2.5.3-64-bit.exe. Run this installer. Choose an install path that's easy to manage rather than the default, such as "c:\prog\git". You might get permissions errors. If so, try running again, or else try running as the Administrator.<br />
Uncheck all the options unless you think you will need them - you won't need them for command line use from Msys. Don't set up a Start Menu folder, unless you want one - Msys won't need that either. Select "Run git from Windows command prompt" and "Checkout as-is, commit as-is."<br />
<br />
After this git should be in your path on Msys, and just work. Verify by running "git --help" in an Msys window started after you installed git.<br />
<br />
== Build from a git repo ==<br />
<br />
We'll use github's mirror here to check out postgres, to avoid overloading the master repo.<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o git clone https://github.com/postgres/postgres.git<br />
o cd postgres<br />
<br />
Then follow the same 64-bit or 32-bit build instructions as for building from a tarball.<br />
<br />
== Installing ==<br />
<br />
After you have built you can install by running <br />
<br />
make install<br />
<br />
Following this you need to copy the libpq dll from the installation lib directory to the installation bin directory. This lets pg_ctl and psql and other client programs work.<br />
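The copy step above can be scripted. A minimal sketch follows; the copy_libpq helper name is invented here for illustration, and it assumes the standard "make install" layout of lib/ and bin/ under the install prefix.<br />

```shell
# Hypothetical helper: copy libpq's DLL from the install prefix's lib
# directory into its bin directory so pg_ctl, psql, and other client
# programs can find it at run time.
copy_libpq() {
    prefix="$1"
    # libpq.dll is placed in $prefix/lib by "make install"
    cp "$prefix/lib/libpq.dll" "$prefix/bin/" || return 1
}

# Example: copy_libpq /usr/local/pgsql
```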
<br />
== Using psql interactively ==<br />
<br />
The psql client doesn't work well in the rxvt terminal emulator, and appears to hang. Instead you can open a non-rxvt shell by omitting the "--rxvt" flag when opening a session, and psql works as expected.<br />
It also works in the normal Windows command window, and in the Windows Power Shell window. None of these builds have readline installed, so you don't get psql history, command completion and so on. If you want to do lots of work <br />
with psql on Windows, the best way might be to build and run psql under Cygwin, where readline is fully supported. That's what I do.<br />
<br />
== Alternatives ==<br />
<br />
=== Cross Compiling ===<br />
<br />
An alternative to needing/using a Windows box is to cross-compile PostgreSQL from a Linux box, e.g. Ubuntu:<br />
<br />
# skip all the above steps, just do this:<br />
$ sudo apt-get install mingw-w64<br />
# download the source, cd into it, same instructions as above<br />
$ ./configure --host=i686-w64-mingw32 --without-zlib --prefix=... # 32 bit<br />
$ ./configure --host=x86_64-w64-mingw32 --without-zlib --prefix=... # 64 bit<br />
<br />
Then you can test it using Wine [or copy it to a Windows box and run it natively, of course].<br />
<br />
$ sudo apt-get install wine<br />
$ wine /full/path/to/psql.exe # etc. You can follow [[First_steps]] after it's installed (actually, for any of the build mechanisms).<br />
<br />
=== Cygwin ===<br />
<br />
You can install Cygwin and build PostgreSQL "just like you would on Linux" using Cygwin's packages (gcc etc.), from inside a Windows box.<br />
<br />
=== Virtualbox ===<br />
<br />
You can also run windows inside a virtualbox VM inside your Linux box.</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Building_With_MinGW&diff=25916Building With MinGW2015-09-28T18:37:19Z<p>Adunstan: /* Installing Git */</p>
<hr />
<div>Most of the people who do development work on the PostgreSQL system do so in a Unix-like environment, so that is where most of the testing is also done. Committed patches are automatically tested on Microsoft Windows by the Build Farm, but the turnaround time on that testing is too slow to be convenient for people who do not work in a Windows environment but are touching parts of the code which have cross-platform implications, or for issues which are not explored by the standard build and regression tests but require custom testing. Also, during commitfests, patches that relate to Windows often suffer from a shortage of peer reviewers; hopefully, instructions on using MinGW to build on Windows can help with that.<br />
<br />
Production releases of PostgreSQL for Windows are generally built using Microsoft's commercial compilers, but these are often not cost-free and can be very hard to use for people more accustomed to a Linux environment. These instructions are intended to help such developers test their code on Windows without much cost and without having to turn themselves into Windows developers. <br />
<br />
== Set up == <br />
<br />
If you do not have access to a Microsoft Windows environment, you can rent one from Amazon Web Services. You will need some kind of graphical environment, such as the X Window System, to connect to Windows over RDP. Most modern Linux systems have such a graphical environment readily available, along with the "rdesktop" program for connecting to Windows.<br />
<br />
A t1.micro spot instance has a current price of $0.006 / hour (2013/02/23) and may be free if you have been an AWS customer for less than a year. But it will be slow! Also consider a t2.micro, which may be faster.<br />
<br />
If you have a Windows system of your own and are willing to install MinGW on it, then the steps of creating and connecting to an Amazon instance can be skipped. If you do run this locally and are not logged on as the administrator, then you can also skip the steps where you create an unprivileged user and runas that user.<br />
<br />
* Create an Amazon instance of Windows_Server-2008-SP2-English-64Bit-Base-2012.12.12 (ami-554ac83c), or Windows_Server-2012-R2_RTM-English-64Bit-Base-2015.09.09 (ami-c9cea0ac).<br />
* make sure you have enabled the RDP port (3389) for the security group in which you launch the instance.<br />
* get the credentials and log in using<br />
o rdesktop -g 80% -u Administrator -p 'password' amazon-hostname<br />
* turn off annoying IE security enhancements, and fire up IE<br />
* go to http://sourceforge.net/projects/mingw/files/Installer and download latest mingw-get-setup.exe<br />
* run this - make sure to select the Msys and the developer toolkit in addition to the Mingw base.<br />
* navigate in explorer or a command window to C:\Mingw\msys\1.0 and run msys.bat<br />
* run "df" to make sure that the windows Mingw directory is mounted on the virtual /mingw directory. If it's not, edit /etc/fstab with vim and add this line:<br />
c:/mingw /mingw<br />
* run this command to install some useful packages:<br />
o mingw-get install msys-wget msys-rxvt msys-unzip<br />
* close that window<br />
* open a normal command window and run the following to create an unprivileged user and open an msys window as that user:<br />
o net user pgrunner SomePw1234 /add<br />
o runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
* if you want to do 64-bit builds, you will need the compiler from the mingw-w64 project (a separate project from the mingw project). Go to <br />
"http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/installer" and download the latest installer (mingw-w64-install.exe). Run it and choose the following options:<br />
o architecture: x86_64<br />
o threads: win32<br />
o location: something like "C:\mingw-w64\x86_64-5.2.0-win32-seh-rt_v4-rev0"<br />
Then run msys.bat again as the Administrator and edit /etc/fstab and add a line like this:<br />
c:/mingw-w64/x86_64-5.2.0-win32-seh-rt_v4-rev0/mingw64 /mingw64<br />
<br />
The above steps can take a while and download several hundred MB of data, so if you are using an Amazon instance it may be worthwhile to arrange to store this setup for future work so that it does not need to be repeated. (Could someone add instructions on how to do that?)<br />
<br />
== Build from a tarball ==<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o wget http://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.gz<br />
o tar -z -xf postgresql-snapshot.tar.gz<br />
o cd ~/postgresql-9.4devel<br />
<br />
* For a 64-bit build then do:<br />
o export PATH=/mingw64/bin:$PATH<br />
o ./configure --host=x86_64-w64-mingw32 --without-zlib && make && make check<br />
<br />
* For a 32-bit build do instead:<br />
o ./configure --without-zlib && make && make check<br />
<br />
Make some coffee and do the crossword or read War and Peace - this can take a while.<br />
<br />
== Installing Git ==<br />
<br />
If you want to build from the git repo instead of a tarball snapshot, which you will need to do if you're doing development, you need to install a git client.<br />
<br />
Open https://git-for-windows.github.io/ and grab the latest version. As of the time of writing this is called Git-2.5.3-32-bit.exe or Git-2.5.3-64-bit.exe. Run this installer. Choose an install path that's easy to manage rather than the default, such as "c:\prog\git". You might get permissions errors. If so, try running again, or else try running as the Administrator.<br />
Uncheck all the options unless you think you will need them - you won't need them for command line use from Msys. Don't set up a Start Menu folder, unless you want one - Msys won't need that either. Select "Run git from Windows command prompt" and "Checkout as-is, commit as-is."<br />
<br />
After this git should be in your path on Msys, and just work. Verify by running "git --help" in an Msys window started after you installed git.<br />
<br />
== Build from a git repo ==<br />
<br />
We'll use github's mirror here to check out postgres, to avoid overloading the master repo.<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o git clone https://github.com/postgres/postgres.git<br />
o cd postgres<br />
<br />
Then follow the same 64-bit or 32-bit build instructions as for building from a tarball.<br />
<br />
== Installing ==<br />
<br />
After you have built you can install by running <br />
<br />
make install<br />
<br />
Following this you need to copy the libpq dll from the installation lib directory to the installation bin directory. This lets pg_ctl and psql and other client programs work.<br />
<br />
== Using psql interactively ==<br />
<br />
The psql client doesn't work well in the rxvt terminal emulator, and appears to hang. Instead you can open a non-rxvt shell by omitting the "--rxvt" flag when opening a session, and psql works as expected.<br />
It also works in the normal Windows command window, and in the Windows Power Shell window. None of these builds have readline installed, so you don't get psql history, command completion and so on. If you want to do lots of work <br />
with psql on Windows, the best way might be to build and run psql under Cygwin, where readline is fully supported. That's what I do.<br />
<br />
== Alternatives ==<br />
<br />
=== Cross Compiling ===<br />
<br />
An alternative to needing/using a Windows box is to cross-compile PostgreSQL from a Linux box, e.g. Ubuntu:<br />
<br />
# skip all the above steps, just do this:<br />
$ sudo apt-get install mingw-w64<br />
# download the source, cd into it, same instructions as above<br />
$ ./configure --host=i686-w64-mingw32 --without-zlib --prefix=... # 32 bit<br />
$ ./configure --host=x86_64-w64-mingw32 --without-zlib --prefix=... # 64 bit<br />
<br />
Then you can test it using Wine [or copy it to a Windows box and run it natively, of course].<br />
<br />
$ sudo apt-get install wine<br />
$ wine /full/path/to/psql.exe # etc. You can follow [[First_steps]] after it's installed (actually, for any of the build mechanisms).<br />
<br />
=== Cygwin ===<br />
<br />
You can install Cygwin and build PostgreSQL "just like you would on Linux" using Cygwin's packages (gcc etc.), from inside a Windows box.<br />
<br />
=== Virtualbox ===<br />
<br />
You can also run windows inside a virtualbox VM inside your Linux box.</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Building_With_MinGW&diff=25915Building With MinGW2015-09-28T18:29:13Z<p>Adunstan: /* Set up */</p>
<hr />
<div>Most of the people who do development work on the PostgreSQL system do so in a Unix-like environment, so that is where most of the testing is also done. Committed patches are automatically tested on Microsoft Windows by the Build Farm, but the turnaround time on that testing is too slow to be convenient for people who do not work in a Windows environment but are touching parts of the code which have cross-platform implications, or for issues which are not explored by the standard build and regression tests but require custom testing. Also, during commitfests, patches that relate to Windows often suffer from a shortage of peer reviewers; hopefully, instructions on using MinGW to build on Windows can help with that.<br />
<br />
Production releases of PostgreSQL for Windows are generally built using Microsoft's commercial compilers, but these are often not cost-free and can be very hard to use for people more accustomed to a Linux environment. These instructions are intended to help such developers test their code on Windows without much cost and without having to turn themselves into Windows developers. <br />
<br />
== Set up == <br />
<br />
If you do not have access to a Microsoft Windows environment, you can rent one from Amazon Web Services. You will need some kind of graphical environment, such as the X Window System, to connect to Windows over RDP. Most modern Linux systems have such a graphical environment readily available, along with the "rdesktop" program for connecting to Windows.<br />
<br />
A t1.micro spot instance has a current price of $0.006 / hour (2013/02/23) and may be free if you have been an AWS customer for less than a year. But it will be slow! Also consider a t2.micro, which may be faster.<br />
<br />
If you have a Windows system of your own and are willing to install MinGW on it, then the steps of creating and connecting to an Amazon instance can be skipped. If you do run this locally and are not logged on as the administrator, then you can also skip the steps where you create an unprivileged user and runas that user.<br />
<br />
* Create an Amazon instance of Windows_Server-2008-SP2-English-64Bit-Base-2012.12.12 (ami-554ac83c), or Windows_Server-2012-R2_RTM-English-64Bit-Base-2015.09.09 (ami-c9cea0ac).<br />
* make sure you have enabled the RDP port (3389) for the security group in which you launch the instance.<br />
* get the credentials and log in using<br />
o rdesktop -g 80% -u Administrator -p 'password' amazon-hostname<br />
* turn off annoying IE security enhancements, and fire up IE<br />
* go to http://sourceforge.net/projects/mingw/files/Installer and download latest mingw-get-setup.exe<br />
* run this - make sure to select the Msys and the developer toolkit in addition to the Mingw base.<br />
* navigate in explorer or a command window to C:\Mingw\msys\1.0 and run msys.bat<br />
* run "df" to make sure that the windows Mingw directory is mounted on the virtual /mingw directory. If it's not, edit /etc/fstab with vim and add this line:<br />
c:/mingw /mingw<br />
* run this command to install some useful packages:<br />
o mingw-get install msys-wget msys-rxvt msys-unzip<br />
* close that window<br />
* open a normal command window and run the following to create an unprivileged user and open an msys window as that user:<br />
o net user pgrunner SomePw1234 /add<br />
o runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
* if you want to do 64-bit builds, you will need the compiler from the mingw-w64 project (a separate project from the mingw project). Go to <br />
"http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/installer" and download the latest installer. Run it and choose the following options:<br />
o architecture: x86_64<br />
o threads: win32<br />
o location: something like "C:\mingw-w64\x86_64-5.2.0-win32-seh-rt_v4-rev0"<br />
Then run msys.bat again as the Administrator and edit /etc/fstab and add a line like this:<br />
c:/mingw-w64/x86_64-5.2.0-win32-seh-rt_v4-rev0/mingw64 /mingw64<br />
<br />
The above steps can take a while and download several hundred MB of data, so if you are using an Amazon instance it may be worthwhile to arrange to store this setup for future work so that it does not need to be repeated. (Could someone add instructions on how to do that?)<br />
<br />
== Build from a tarball ==<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o wget http://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.gz<br />
o tar -z -xf postgresql-snapshot.tar.gz<br />
o cd ~/postgresql-9.4devel<br />
<br />
* For a 64-bit build then do:<br />
o export PATH=/mingw64/bin:$PATH<br />
o ./configure --host=x86_64-w64-mingw32 --without-zlib && make && make check<br />
<br />
* For a 32-bit build do instead:<br />
o ./configure --without-zlib && make && make check<br />
<br />
Make some coffee and do the crossword or read War and Peace - this can take a while.<br />
<br />
== Installing Git ==<br />
<br />
If you want to build from the git repo instead of a tarball snapshot, which you will need to do if you're doing development, you need to install a git client.<br />
<br />
Open https://git-for-windows.github.io/ and grab the latest version. As of the time of writing this is called Git-1.8.3-preview20130601.exe. Run this installer. Choose an install path that's easy to manage rather than the default, such as "c:\prog\git".<br />
Uncheck all the options unless you think you will need them - you won't need them for command line use from Msys. Don't set up a Start Menu folder, unless you want one - Msys won't need that either. Select "Run git from Windows command prompt" and "Checkout as-is, commit as-is."<br />
<br />
After this git should be in your path on Msys, and just work. Verify by running "git --help".<br />
<br />
== Build from a git repo ==<br />
<br />
We'll use github's mirror here to check out postgres, to avoid overloading the master repo.<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o git clone https://github.com/postgres/postgres.git<br />
o cd postgres<br />
<br />
Then follow the same 64-bit or 32-bit build instructions as for building from a tarball.<br />
<br />
== Installing ==<br />
<br />
After you have built you can install by running <br />
<br />
make install<br />
<br />
Following this you need to copy the libpq dll from the installation lib directory to the installation bin directory. This lets pg_ctl and psql and other client programs work.<br />
<br />
== Using psql interactively ==<br />
<br />
The psql client doesn't work well in the rxvt terminal emulator, and appears to hang. Instead you can open a non-rxvt shell by omitting the "--rxvt" flag when opening a session, and psql works as expected.<br />
It also works in the normal Windows command window, and in the Windows Power Shell window. None of these builds have readline installed, so you don't get psql history, command completion and so on. If you want to do lots of work <br />
with psql on Windows, the best way might be to build and run psql under Cygwin, where readline is fully supported. That's what I do.<br />
<br />
== Alternatives ==<br />
<br />
=== Cross Compiling ===<br />
<br />
An alternative to needing/using a Windows box is to cross-compile PostgreSQL from a Linux box, e.g. Ubuntu:<br />
<br />
# skip all the above steps, just do this:<br />
$ sudo apt-get install mingw-w64<br />
# download the source, cd into it, same instructions as above<br />
$ ./configure --host=i686-w64-mingw32 --without-zlib --prefix=... # 32 bit<br />
$ ./configure --host=x86_64-w64-mingw32 --without-zlib --prefix=... # 64 bit<br />
<br />
Then you can test it using Wine [or copy it to a Windows box and run it natively, of course].<br />
<br />
$ sudo apt-get install wine<br />
$ wine /full/path/to/psql.exe # etc. You can follow [[First_steps]] after it's installed (actually, for any of the build mechanisms).<br />
<br />
=== Cygwin ===<br />
<br />
You can install Cygwin and build PostgreSQL "just like you would on Linux" using Cygwin's packages (gcc etc.), from inside a Windows box.<br />
<br />
=== Virtualbox ===<br />
<br />
You can also run windows inside a virtualbox VM inside your Linux box.</div>Adunstanhttps://wiki.postgresql.org/index.php?title=Building_With_MinGW&diff=25912Building With MinGW2015-09-28T16:15:24Z<p>Adunstan: /* Set up */</p>
<hr />
<div>Most of the people who do development work on the PostgreSQL system do so in a Unix-like environment, so that is where most of the testing is also done. Committed patches are automatically tested on Microsoft Windows by the Build Farm, but the turnaround time on that testing is too slow to be convenient for people who do not work in a Windows environment but are touching parts of the code which have cross-platform implications, or for issues which are not explored by the standard build and regression tests but require custom testing. Also, during commitfests, patches that relate to Windows often suffer from a shortage of peer reviewers; hopefully, instructions on using MinGW to build on Windows can help with that.<br />
<br />
Production releases of PostgreSQL for Windows are generally built using Microsoft's commercial compilers, but these are often not cost-free and can be very hard to use for people more accustomed to a Linux environment. These instructions are intended to help such developers test their code on Windows without much cost and without having to turn themselves into Windows developers. <br />
<br />
== Set up == <br />
<br />
If you do not have access to a Microsoft Windows environment, you can rent one from Amazon Web Services. You will need some kind of graphical environment, such as the X Window System, to connect to Windows over RDP. Most modern Linux systems have such a graphical environment readily available, along with the "rdesktop" program for connecting to Windows.<br />
<br />
A t1.micro spot instance has a current price of $0.006 / hour (2013/02/23) and may be free if you have been an AWS customer for less than a year. But it will be slow! Also consider a t2.micro, which may be faster.<br />
<br />
If you have a Windows system of your own and are willing to install MinGW on it, then the steps of creating and connecting to an Amazon instance can be skipped. If you do run this locally and are not logged on as the administrator, then you can also skip the steps where you create an unprivileged user and runas that user.<br />
<br />
* Create an Amazon instance of Windows_Server-2008-SP2-English-64Bit-Base-2012.12.12 (ami-554ac83c), or Windows_Server-2012-R2_RTM-English-64Bit-Base-2015.09.09 (ami-c9cea0ac).<br />
* make sure you have enabled the RDP port (3389) for the security group in which you launch the instance.<br />
* get the credentials and log in using<br />
o rdesktop -g 80% -u Administrator -p 'password' amazon-hostname<br />
* turn off annoying IE security enhancements, and fire up IE<br />
* go to http://sourceforge.net/projects/mingw/files/Installer and download latest mingw-get-setup.exe<br />
* run this - make sure to select the Msys and the developer toolkit in addition to the Mingw base.<br />
* navigate in explorer or a command window to C:\Mingw\msys\1.0 and run msys.bat<br />
* run "df" to make sure that the windows Mingw directory is mounted on the virtual /mingw directory. If it's not, edit /etc/fstab with vim and add this line:<br />
c:/mingw /mingw<br />
* run this command to install some useful packages:<br />
o mingw-get install msys-wget msys-rxvt msys-unzip<br />
* close that window<br />
* open a normal command window and run the following to create an unprivileged user and open an msys window as that user:<br />
o net user pgrunner SomePw1234 /add<br />
o runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
* if you want to do 64-bit builds, in the rxvt window install the extra compiler:<br />
o wget "http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Automated%20Builds/mingw-w64-bin_i686-mingw_20111220.zip/download"<br />
o mkdir /mingw64<br />
o cd /mingw64<br />
o unzip ~/mingw-w64-bin_i686-mingw_20111220.zip<br />
o cd<br />
<br />
The above steps can take a while and download several hundred MB of data, so if you are using an Amazon instance it may be worthwhile to arrange to store this setup for future work so that it does not need to be repeated. (Could someone add instructions on how to do that?)<br />
<br />
== Build from a tarball ==<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o wget http://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.gz<br />
o tar -z -xf postgresql-snapshot.tar.gz<br />
o cd ~/postgresql-9.4devel<br />
<br />
* For a 64-bit build then do:<br />
o export PATH=/mingw64/bin:$PATH<br />
o ./configure --host=x86_64-w64-mingw32 --without-zlib && make && make check<br />
<br />
* For a 32-bit build do instead:<br />
o ./configure --without-zlib && make && make check<br />
<br />
Make some coffee and do the crossword or read War and Peace - this can take a while.<br />
<br />
== Installing Git ==<br />
<br />
If you want to build from the git repo instead of a tarball snapshot, which you will need to do if you're doing development, you need to install a git client.<br />
<br />
Open https://git-for-windows.github.io/ and grab the latest version. As of the time of writing this is called Git-1.8.3-preview20130601.exe. Run this installer. Choose an install path that's easy to manage rather than the default, such as "c:\prog\git".<br />
Uncheck all the options unless you think you will need them - you won't need them for command line use from Msys. Don't set up a Start Menu folder, unless you want one - Msys won't need that either. Select "Run git from Windows command prompt" and "Checkout as-is, commit as-is."<br />
<br />
After this git should be in your path on Msys, and just work. Verify by running "git --help".<br />
<br />
== Build from a git repo ==<br />
<br />
We'll use github's mirror here to check out postgres, to avoid overloading the master repo.<br />
<br />
* Again open an rxvt window (if not already open): runas /user:pgrunner "cmd /c \mingw\msys\1.0\msys.bat --rxvt"<br />
o git clone https://github.com/postgres/postgres.git<br />
o cd postgres<br />
<br />
Then follow the same 64-bit or 32-bit build instructions as for building from a tarball.<br />
<br />
== Installing ==<br />
<br />
After you have built you can install by running <br />
<br />
make install<br />
<br />
Following this you need to copy the libpq dll from the installation lib directory to the installation bin directory. This lets pg_ctl and psql and other client programs work.<br />
<br />
== Using psql interactively ==<br />
<br />
The psql client doesn't work well in the rxvt terminal emulator, and appears to hang. Instead you can open a non-rxvt shell by omitting the "--rxvt" flag when opening a session, and psql works as expected.<br />
It also works in the normal Windows command window, and in the Windows Power Shell window. None of these builds have readline installed, so you don't get psql history, command completion and so on. If you want to do lots of work <br />
with psql on Windows, the best way might be to build and run psql under Cygwin, where readline is fully supported. That's what I do.<br />
<br />
== Alternatives ==<br />
<br />
=== Cross Compiling ===<br />
<br />
An alternative to needing/using a Windows box is to cross-compile PostgreSQL from a Linux box, e.g. Ubuntu:<br />
<br />
# skip all the above steps, just do this:<br />
$ sudo apt-get install mingw-w64<br />
# download the source, cd into it, same instructions as above<br />
$ ./configure --host=i686-w64-mingw32 --without-zlib --prefix=... # 32 bit<br />
$ ./configure --host=x86_64-w64-mingw32 --without-zlib --prefix=... # 64 bit<br />
<br />
Then you can test it using Wine [or copy it to a Windows box and run it natively, of course].<br />
<br />
$ sudo apt-get install wine<br />
$ wine /full/path/to/psql.exe # etc. You can follow [[First_steps]] after it's installed (actually, for any of the build mechanisms).<br />
<br />
=== Cygwin ===<br />
<br />
You can install Cygwin and build PostgreSQL "just like you would on Linux" using Cygwin's packages (gcc etc.), from inside a Windows box.<br />
<br />
=== Virtualbox ===<br />
<br />
You can also run windows inside a virtualbox VM inside your Linux box.</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PostgreSQL_9.5_Open_Items&diff=25464PostgreSQL 9.5 Open Items2015-07-18T01:19:02Z<p>Adunstan: json issue closed</p>
<hr />
<div>== Open Issues ==<br />
<br />
=== Open Row-Level Security Issues ===<br />
<br />
* [http://www.postgresql.org/message-id/CAHGQGwEqWD=yNQE+ZojbpoxyWT3xLK52-V_q9S+XOfCKJd5egA@mail.gmail.com CREATE POLICY and RETURNING]<br />
** lots of discussion of what the behavior should be, but no patches yet.<br />
* [http://www.postgresql.org/message-id/CAM3SWZScG+S17vwT+E82o=aNrjqar6=kCoAnGf+vw=n4PaAgCw@mail.gmail.com Arguable RLS security bug, EvalPlanQual() paranoia]<br />
** no responses to Peter's original post<br />
* [http://www.postgresql.org/message-id/CAM3SWZRvgL3Ti87etps1L38eba=jNFS9e1vLS7xN6p7vhwOeHg@mail.gmail.com RLS fails to work with UPDATE ... WHERE CURRENT OF]<br />
** Dean agrees this is a bug and [http://www.postgresql.org/message-id/CAEZATCXibt_DtzkeHRTS5z-64XfkStKybp=tHMb+TX8n-KOCXg@mail.gmail.com suggests how to fix it] -- his patch fixing the issue needs to be reviewed + committed<br />
* [http://www.postgresql.org/message-id/CAEZATCVE7hdtfZGCJN-oevVaWBtBGG8-fBCh9VhDBHuZrsWY5w@mail.gmail.com Dean's latest round of RLS refactoring. Includes notable bugfix.]<br />
** DML queries with additional non-target (FROM/USING) relations cared about UPDATE/DELETE applicable policies, not SELECT applicable policies. This is clearly a bug.<br />
** Dean [http://www.postgresql.org/message-id/CAEZATCVE7hdtfZGCJN-oevVaWBtBGG8-fBCh9VhDBHuZrsWY5w@mail.gmail.com posted a patch] on June 1st; Stephen indicated he would review it, but no followups on the thread yet<br />
* [http://www.postgresql.org/message-id/flat/20150703070721.GA844443@tornado.leadboat.com copy.c handling for RLS is insecure]<br />
* [http://www.postgresql.org/message-id/20150703170308.GB844443@tornado.leadboat.com more RLS oversights]<br />
* [http://www.postgresql.org/message-id/1436691547878-5857659.post@n5.nabble.com pg_stats leaks information on relations with RLS enabled]<br />
<br />
=== Open INSERT .. ON CONFLICT Issues ===<br />
<br />
* [http://www.postgresql.org/message-id/CAM3SWZRY92akby8LuibtA=A9-QY5yFrQ+_+m2QvsbdQkbVce5g@mail.gmail.com 9.5 release notes may need ON CONFLICT DO NOTHING compatibility notice for FDW authors]<br />
** Patch for release notes [http://www.postgresql.org/message-id/CAM3SWZTZ7kJu0fgkxb-FON2tFeGZaeB4=ydGAMP6k7uwkKcS7w@mail.gmail.com posted]<br />
* [http://www.postgresql.org/message-id/flat/CAM3SWZS8RPvA=KFxADZWw3wAHnnbxMxDzkEC6fNaFc7zSm411w@mail.gmail.com#CAM3SWZS8RPvA=KFxADZWw3wAHnnbxMxDzkEC6fNaFc7zSm411w@mail.gmail.com Various fairly minor bugfixes should be committed - 3 in all]<br />
** These fix all known UPSERT bugs as of June 6th.<br />
* [http://www.postgresql.org/message-id/CAM3SWZTpWo-guh7bZ3xXU9W=QuUHmhLGE2_GO7anGhCOaYg=7A@mail.gmail.com Refactoring speculative insertion with unique indexes a little]<br />
** It feels like the contract that the executor has with speculative insertion + amcanunique AMs should be made explicit and documented under [http://www.postgresql.org/docs/devel/static/index-unique-checks.html "51.5. Index Uniqueness Checks"].<br />
* [http://www.postgresql.org/message-id/CAHGQGwFUCWwSU7dtc2aRdRk73ztyr_jY5cPOyts+K8xKJ92X4Q@mail.gmail.com UPSERT on partition]<br />
** The consensus is to treat the problem as a limitation and document it.<br />
<br />
=== Open Issues Related to Various Write-Ahead Logging Changes in 9.5 ===<br />
<br />
* [http://www.postgresql.org/message-id/55269915.1000309@iki.fi FPW compression leaks information] Make wal_compression SUSET and document potential security risks?<br />
** The parameter has been switched to SUSET (post 9.5 alpha1).<br />
<br />
=== Open pg_rewind Issues === <br />
<br />
* pg_rewind fails when pg_xlog is defined as a soft link in PGDATA.<br />
** Heikki has mentioned one solution: ignore the contents of pg_xlog. We could also consider later a new option that lets the user set up the soft link again after a rewind.<br />
** Michael has mentioned another solution: use an implementation of pg_readlink and copy the link's target from source to target. This would also require modifying pg_stat_file so that lstat() is used instead of stat() to detect whether a path is a soft link (or junction point on Windows). Perhaps this solution is not worth the backward-incompatibility issues, since stat() reports information about the linked target when it encounters a soft link/junction point.<br />
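The lstat()-versus-stat() distinction above can be sketched in a few lines (illustrative only, not pg_rewind's actual code; the helper name is hypothetical). Only lstat() reports on the link itself, which is why a stat()-based pg_stat_file cannot tell a link from its target:<br />
```python
import os
import stat
import tempfile

def is_soft_link(path):
    # lstat() examines the path itself; stat() follows symlinks,
    # so only lstat() can reveal that the path is a link.
    return stat.S_ISLNK(os.lstat(path).st_mode)

with tempfile.TemporaryDirectory() as pgdata:
    real_dir = os.path.join(pgdata, "pg_xlog_real")
    link = os.path.join(pgdata, "pg_xlog")
    os.mkdir(real_dir)
    os.symlink(real_dir, link)
    assert is_soft_link(link)                   # lstat() sees the link itself
    assert stat.S_ISDIR(os.stat(link).st_mode)  # stat() follows it to the directory
```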
<br />
=== Open TABLESAMPLE Issues ===<br />
* [http://www.postgresql.org/message-id/12048.1436646520@sss.pgh.pa.us TABLESAMPLE feature needs a lot of work]<br />
* [http://www.postgresql.org/message-id/9871.1436716927@sss.pgh.pa.us TABLESAMPLE doesn't actually satisfy the SQL spec, does it?]<br />
<br />
=== Other Open Issues ===<br />
<br />
* The DDL deparsing testing module should have detected that transforms were not supported, but it failed to notice that.<br />
** Whack it until it does.<br />
** [http://www.postgresql.org/message-id/CABP8UDS4jf4t6hjNF_g=4e=X=BA_EC2P+AL+Q4y5ESU2E4-Uag@mail.gmail.com Missing reference to TransformRelationId and OCLASS_TRANSFORM in object_classes]. A patch has been sent and should be applied directly. Apparently this was an oversight in previous fixes in this area.<br />
* [http://www.postgresql.org/message-id/20150120152819.GC24381@alap3.anarazel.de basebackups during ALTER DATABASE ... SET TABLESPACE ... not safe]<br />
** this is not a 9.5 regression, although it is a bug<br />
* [http://www.postgresql.org/message-id/20150622151138.GA6415@localhost PGXS "check" target forcing an install]<br />
** alternative patches from Michael Paquier and Robert Haas, need to pick one (or something else)<br />
* [http://www.postgresql.org/message-id/558A18B3.9050201@lab.ntt.co.jp Foreign join pushdown vs EvalPlanQual]<br />
** server crash; no patch yet<br />
* [http://www.postgresql.org/message-id/flat/20150520192157.GE5885@postgresql.org atomics code has portability issues]<br />
** buildfarm member anole, at least, is still not happy as of 2015-06-28<br />
* [http://www.postgresql.org/message-id/20150624144148.GQ4797@alap3.anarazel.de Removal of SSL renegotiation code], perhaps not directly a 9.5 issue, but we may want to get a good outcome here now instead of waiting an extra year for 9.6.<br />
* [http://www.postgresql.org/message-id/5592DB35.2060401@iki.fi Deadlock in LWLock]<br />
** interaction between LWLockWaitForVar and the introduction of atomic locking.<br />
* [http://www.postgresql.org/message-id/20150707165212.1188.60819@wrigleys.postgresql.org Crash in planner code with 9.5 alpha 1]<br />
** The report mentions that a query on pg_stat_activity when using pghero leads to a server crash.<br />
** Some tests and analysis (playing with the extended query protocol, pghero itself, and analysis of the planner code by Tom) show that the backtrace and information provided are not enough to build a reproducible test case yet, and attempts to reproduce the failure with pgbench have failed so far.<br />
* [http://www.postgresql.org/message-id/CAA4eK1JNhY6UhH5VQXDWvGYHd3VGyBDT+wqbyvH6BrXEZufotg@mail.gmail.com Ignore tablespace_map file when backup_label is not present]<br />
** Whether the error level should be LOG or WARNING is under discussion.<br />
* [http://www.postgresql.org/message-id/CAEzk6fdVan-rUr5Le2BfNfKncniMdyk4vyVZYnKX_TBJu34Zdw@mail.gmail.com crash with plpgsql caused by CAST]. Test case available.<br />
<br />
== Resolved Issues ==<br />
<br />
=== resolved after 9.5alpha1 ===<br />
<br />
* [http://www.postgresql.org/message-id/CAMkU=1xUSStjv+YYiFRBpr6p7C-Brngxm8-OMpkDqvLVa3qkKw@mail.gmail.com PANIC in GIN code] (the second issue, with metapage-update record)<br />
* [http://www.postgresql.org/message-id/CAM3SWZQgLSAYP1wYUaGfFvFd2HXOes7sLsjw0gjOKKCexZsHZw@mail.gmail.com Trivial bug in bttext_abbrev_convert()]<br />
* [http://www.postgresql.org/message-id/CAHGQGwGxMKnVHGgTfiig2Bt_2djec0in3-DLJmtg7+nEiidFdQ@mail.gmail.com WAL-related tools and .partial WAL file]<br />
** WAL-related tools, i.e., pg_archivecleanup, pg_resetxlog and pg_xlogdump, don't seem to properly handle the .partial WAL file.<br />
* [http://www.postgresql.org/message-id/flat/20150704003636.GA856928@tornado.leadboat.com Revoke support for strxfrm() implementations that write past the specified array length.]<br />
* [http://www.postgresql.org/message-id/20150704224041.GA898636@tornado.leadboat.com Finish XLC atomics implementation.]<br />
* [http://www.postgresql.org/message-id/CAM3SWZSyWA+g9ygnRrYkvgmnu82fP1b=2wxLBPOWoOgZG83pPA@mail.gmail.com Final jsonb semantics patch, concerning adding negative subscripting everywhere]<br />
** Patch also concerns adding additional minor input sanitization<br />
<br />
=== resolved before 9.5alpha1 ===<br />
<br />
* [http://www.postgresql.org/message-id/546A16EF.9070005@vmware.com BRIN page type identifier] BRIN special space needs reshuffling<br />
* [http://www.postgresql.org/message-id/CAEZATCXHb+tv8YYo4=XRoBzCOywTrM4cncqR57D4ZM7WdFomiQ@mail.gmail.com proposal: searching in array function - array_position] array_offset(s) do not consider arrays not starting from 1<br />
* [http://www.postgresql.org/message-id/CAB7nPqQSdx7coHk0D6G=mkJntGYjXPDw+PWisKKSsAeZFTskvg@mail.gmail.com Assertion failure when streaming logical changes] (crash in walsender replaying from a logical decoding slot)<br />
* [http://www.postgresql.org/message-id/20141128205453.GA1737@alvh.no-ip.org no test programs in contrib] fix src/test/modules to work on MSVC<br />
* [http://www.postgresql.org/message-id/CAG6W84JA8bhrEzDvv6UaTOyZGBPwDnQb7ZqJRm6wtJdn+mBY9Q@mail.gmail.com Improve GB18030 <-> UTF8 encoding conversions]<br />
* [http://www.postgresql.org/message-id/20150312.213812.115476889.horiguchi.kyotaro@lab.ntt.co.jp alter user/role CURRENT_USER] CURRENT_USER needs some fixes<br />
* [http://www.postgresql.org/message-id/55427924.9090806@dunslane.net transforms vs CLOBBER_CACHE_ALWAYS]<br />
* [http://www.postgresql.org/message-id/87d24y7xwa.fsf@news-spur.riddles.org.uk Re: collations in shared catalogs?]<br />
* [http://www.postgresql.org/message-id/CAHGQGwE0XfGJPL6NUjaPcO6sZyiXEE4eCBR96XYkzL-N0mD8uA@mail.gmail.com CREATE EXTENSION pg_audit can fail] (pg_audit has been reverted)<br />
* [http://www.postgresql.org/message-id/7758.1433610350@sss.pgh.pa.us intermittent "cache lookup failed for access method 403" failure at session start]<br />
* [http://www.postgresql.org/message-id/9A28C8860F777E439AA12E8AEA7694F8010DC708@BPXM15GP.gisp.nec.co.jp custom-join has no way to construct Plan nodes of child Path nodes] ([http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5ca611841bcd37c7ee8448c46c8398ef8d8edcc4 commit])<br />
* [http://www.postgresql.org/message-id/flat/555673D0.5090406@dunslane.net brin regression test intermittent failures]<br />
** This is probably fixed as of 4-June, but it would be a good idea to watch chipmunk for a week or two before declaring the issue closed.<br />
** No more failures, so far anyway. -rhaas, 2015-06-26<br />
* [http://www.postgresql.org/message-id/CAFj8pRAfUx2C7tYAwzeUewFj=AgQOjFHTw4bypfC_e5gjFBAyA@mail.gmail.com less log level for success dynamic background workers for 9.5] ([http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=91118f1a59f2038f072552fdbb98e01363e30b59 commit])<br />
* [http://www.postgresql.org/message-id/CAB7nPqRSe8GTDJy74Yp3cVONx5Xx9H6Xr82sTDHbNa_b1q8zCw@mail.gmail.com Memory leak with XLogFileCopy since de768844 (WAL file with .partial)] ([http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7abc68597436da1475b4d9b08f4fa9f3c5ed6185 commit])<br />
** [http://www.postgresql.org/message-id/CAHGQGwFv-LUQGcwHs3j33io3CXvNRO2CXn19hqR8rzJHsC0moQ@mail.gmail.com committed by Fujii Masao]<br />
* [http://www.postgresql.org/message-id/CAA4eK1KEFoTJ8kRxsTid=ZRx8Rd593B+86-GCDDey5s2Mqqw_g@mail.gmail.com Remove symlinks in pg_tblspc during archive recovery and error for non-symlink paths] ([http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8f15f74a44f68f9cb3a644786d3c732a5eeb237a commit])<br />
* DDL deparsing does not support CREATE/ALTER TRANSFORM<br />
** [http://www.postgresql.org/message-id/CAB7nPqT2SZ39N_wH+WK8JGPKO3LCyWQiLoxgcgq_UyPJNc8hSg@mail.gmail.com Patch for support of CREATE/DROP TRANSFORM in DDL deparsing, one bug found with DROP TRANSFORM]<br />
** Alvaro [http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ad89a5d115b3b4025f3c135f95f722e7e4becf13 committed] part of this and, as of 2015-06-22, [http://www.postgresql.org/message-id/20150621192520.GG133018@postgresql.org says he will look at the rest next]<br />
** [http://git.postgresql.org/pg/commitdiff/7d60b2af34842ae89b1abdd31fb5d303bd43c514 second commit]<br />
* [http://www.postgresql.org/message-id/CAMkU=1xyoT4Dz9t6ijsodjOgJaGD-rLad0WW7Vynw4-Zjqyogw@mail.gmail.com PANIC in GIN code]<br />
* [http://www.postgresql.org/message-id/CAMkU=1x-djpi6Cjq_xbFCzVgEpnAO1J-=fzePhcfq2UwGnoSng@mail.gmail.com max_wal_size and restartpoints]<br />
* [http://www.postgresql.org/message-id/28927.1435335457@sss.pgh.pa.us pg_file_settings patch needs some rework]<br />
** current implementation blocks a fix for a 9.4.1 regression concerning unwanted complaints about multiple entries for PGC_POSTMASTER variables<br />
* [http://www.postgresql.org/message-id/29550.1435422769@sss.pgh.pa.us pg_file_settings view does not work properly on Windows]<br />
* [http://www.postgresql.org/message-id/CAHGQGwEdsNgeNZo+GyrzZtjW_TkC=XC6XxrjuAZ7=X_cj1aHHg@mail.gmail.com pg_rewind failure by file deletion in source server]<br />
** [http://www.postgresql.org/message-id/CAB7nPqT=nPzXseCyrJ-yvKvE-Q+vC42Cc1VvGcdsEiWb0AZL1w@mail.gmail.com Similar issue with xlogtemp files], can be fixed by ignoring them in process_remote_files().<br />
** The window triggering the failure cannot be reduced to zero, but it can be significantly reduced by scanning files still present in the source server with pg_stat_file and an if_not_exists mode (just an idea). Fixed by generalizing the missing_ok logic in the system file functions present in core.<br />
* [http://www.postgresql.org/message-id/CAB7nPqTL0YYPgGt00gV8mw+23U4ki8yXUKV0mfji3YVpAqR8sA@mail.gmail.com Potential log(0) and division by 0 in ANALYZE and TABLESAMPLE]<br />
** partial patch from Michael Paquier is attached to the thread, but Michael says it doesn't cover everything<br />
** [http://www.postgresql.org/message-id/5592CE94.1000208@2ndquadrant.com Correct fix by Petr Jelinek]<br />
** Fixed by making the sampler call pg_erand48 again when it returns 0.0. Idea by Tom.<br />
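The retry idea behind this fix can be sketched as follows (an illustrative sketch only; the actual fix lives in PostgreSQL's sampling code and uses pg_erand48, not these hypothetical names). Drawing again on 0.0 guarantees that log() is never handed a zero:<br />
```python
import math

def nonzero_random(rand):
    # Draw until the generator returns a value in (0, 1]; passing an
    # exact 0.0 to log() below would yield -infinity (a math error).
    u = rand()
    while u <= 0.0:
        u = rand()
    return u

def log_of_draw(rand):
    # Safe now: log() never sees 0.0.
    return math.log(nonzero_random(rand))
```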
<br />
== Issues That Don't Need To Be Fixed ==<br />
* [http://www.postgresql.org/message-id/CAB7nPqReR+MUupGA5wd9tywdhhgHkREnY9OEJemMxkd2zzrvQw@mail.gmail.com All information of pg_stat_ssl visible to every users]<br />
** seems like this is OK, unless more people weigh in and say it isn't.<br />
* [http://www.postgresql.org/message-id/20150315132707.GB19792@alap3.anarazel.de recovery_target_action = pause & hot_standby = off]<br />
** per [http://www.postgresql.org/message-id/20150605155120.GA30287@alap3.anarazel.de this post from Andres], the remaining issue here is not 9.5 material<br />
* [http://www.postgresql.org/message-id/CAB7nPqQYVuG=1npOi8cpbKrOr+Uj2JNeOBJrVqGTJ30kanH1Dg@mail.gmail.com pg_rewind failure when target path contains non-writable files]. Heikki and Robert have agreed that pg_rewind should fail in this case. Hence users should remove such files from PGDATA before performing a rewind.<br />
<br />
[[Category:PostgreSQL_9.5]]</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PgCon_2015_Developer_Unconference&diff=25034PgCon 2015 Developer Unconference2015-06-14T20:41:38Z<p>Adunstan: /* Topics */</p>
<hr />
<div>An Unconference-style multi-track (three tracks are currently planned) event for active PostgreSQL developers will be held from the afternoon of Tuesday 16 June, 2015 through Wednesday 17 June 2015 at the University of Ottawa, as part of PGCon 2015. This Unconference will be focused on technical PostgreSQL development discussions ranging from Clustering and replication to the infrastructure which runs postgresql.org.<br />
<br />
'''Please add your name under RSVPs if you plan to attend.'''<br />
<br />
== Topics ==<br />
<br />
Developers are asked to propose topics which they wish to either present on or which they would like another individual to present on. All topics should be clearly related to PostgreSQL development. The topic should be added to the table below and any required attendees (presumably at least the presenter, and the requester if different) listed. Other attendees of the Unconference who are interested should list themselves as Optional. Note that non-technical topics related to PostgreSQL development will be addressed during the invite-only Developer meeting, being held in advance of the Unconference. Further, the Developer Unconference is for developers of PostgreSQL and user-oriented topics are not appropriate for this venue.<br />
<br />
== Slot assignment ==<br />
<br />
Slots will be assigned based on the topic's interest among the attendees of the Unconference (the number of individuals who listed themselves as attendees). Final determination on any particular topic will be made by the Unconference organizers. Please only participate if you are confident of your attendance at the Unconference.<br />
<br />
== Venue ==<br />
<br />
These meetings will be held at the University of Ottawa. The topics selected, the schedule and the specific room assignments will be published closer to the event and will be based on the information provided here. Please direct any questions to Dave Page (dpage@pgadmin.org).<br />
<br />
== Sponsorship ==<br />
<br />
The Developer Unconference will be sponsored by Salesforce.com, and by NTT Open Source for the Clustering Track.<br />
<br />
== Attendees ==<br />
<br />
While the Unconference is open to all attendees of PGCon, formal invitations will be sent to specific PostgreSQL developers, including the Core team, Major Contributors, Committers, and other developers who have been involved in the 9.4 release. These invitations are intended to encourage developers to attend the Unconference but we are unable to guarantee every invitee a speaking slot.<br />
<br />
== RSVPs ==<br />
<br />
The following people have RSVPed to the meeting (in alphabetical order, by surname):<br />
<br />
* Ashutosh Bapat<br />
* Oleg Bartunov<br />
* Josh Berkus<br />
* Christopher Browne<br />
* Joe Conway<br />
* Jeff Davis<br />
* Andrew Dunstan<br />
* Ozgun Erdogan<br />
* Andres Freund<br />
* Stephen Frost<br />
* Masao Fujii<br />
* Etsuro Fujita<br />
* Peter Geoghegan<br />
* Kevin Grittner<br />
* Robert Haas<br />
* Ahsan Hadi<br />
* Magnus Hagander<br />
* Shigeru Hanada<br />
* Álvaro Herrera<br />
* Kyotaro Horiguchi<br />
* Thierry Husson (Wednesday @ 11am)<br />
* Ayumi Ishii<br />
* Tatsuo Ishii<br />
* Stefan Kaltenbrunner<br />
* Amit Kapila<br />
* Konstantin Knizhnik<br />
* KaiGai Kohei (arrive tuesday evening)<br />
* Alexander Korotkov<br />
* Ilya Kosmodemiansky<br />
* Tom Lane<br />
* Amit Langote<br />
* Grant McAlister<br />
* Noah Misch<br />
* Bruce Momjian<br />
* Yugo Nagata<br />
* Satoshi Nagayasu<br />
* Jim Nasby<br />
* Dave Page<br />
* Christophe Pettus<br />
* Paul Ramsey<br />
* Kumar Rajeev Rastogi<br />
* Simon Riggs<br />
* Tetsuo Sakata<br />
* Masahiko Sawada<br />
* Dilip Kumar<br />
* Marco Slot (Wednesday)<br />
* Greg Smith<br />
* Steve Singer (arrive tuesday mid-afternoon)<br />
* Rod Taylor<br />
* Tomas Vondra<br />
* Jan Wieck (arrive tuesday evening)<br />
* Chris Winters<br />
* Nat Wyatt<br />
* Naoya Anzai (arrive tuesday evening)<br />
* David Steele (arrive tuesday evening)<br />
* Ingmar Alting<br />
* Mehmet Emin KARAKAŞ<br />
* Yasin TATAR<br />
* Fabrízio de Royes Mello<br />
* Euler Taveira<br />
* Fabio Telles<br />
* Dan Shuster<br />
<br />
=Topics=<br />
<br />
'''Please add any topics you wish covered to the table.'''<br />
<br />
'''For any topics you are requesting or presenting on, please add your name in the Required column.'''<br />
<br />
'''For any topics you would like to attend, please add your name in the Interested column.'''<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Topic<br />
!Policy<br />
!Taker of Notes<br />
!Required Attendees<br />
!Interested Attendees<br />
<br />
|- style="background-color:lightgray;"<br />
|Picture!<br />
|Open<br />
|<br />
|All!<br />
|All!<br />
<br />
|- style="background-color:lightgray;"<br />
|pgAdmin4<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost<br />
|Magnus Hagander, Joe Conway, David Steele, Fabrízio de Royes Mello, Satoshi Nagayasu<br />
<br />
|- style="background-color:lightgray;"<br />
|Infrastructure Q&A<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost, Stefan Kaltenbrunner, Magnus Hagander, Joe Conway<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|WWW Team Meeting<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost, Stefan Kaltenbrunner, Magnus Hagander<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|Advocacy Team Meeting<br />
|Open<br />
|<br />
|Stephen Frost<br />
|Magnus Hagander, Greg Smith, Jim Nasby, Josh Berkus, Joe Conway<br />
<br />
|- style="background-color:lightgray;"<br />
|Vertical Scalability w.r.t Writes<br />
|Open<br />
|Amit Kapila<br />
|Amit Kapila<br />
|Greg Smith, Hannu Valtonen, Ilya Kosmodemiansky, Tomas Vondra, Grant McAlister, Joe Conway, Kyotaro Horiguchi, Simon Riggs, Amit Langote, Andres Freund, Robert Haas, David Steele, Rod Taylor, Jim Nasby, Chris Winters, Nat Wyatt, Noah Misch, Masao Fujii, Mehmet Emin KARAKAŞ, Christophe Pettus, Fabrízio de Royes Mello, Euler Taveira, Fabio Telles, Andrew Dunstan<br />
<br />
|- style="background-color:lightgray;"<br />
|Security Team Meeting<br />
|Closed<br />
|<br />
|Heikki Linnakangas, Stephen Frost, Magnus Hagander<br />
|Noah Misch, Álvaro Herrera, Andres Freund, Robert Haas, Tom Lane, Andrew Dunstan<br />
<br />
|- style="background-color:lightgray;"<br />
|Native Compilation + LLVM<br />
|Open<br />
|<br />
|Kumar Rajeev Rastogi<br />
|Jeff Davis, Ozgun Erdogan, Tomas Vondra, Robert Haas, Chris Browne, Josh Berkus, Ingmar Alting, Masao Fujii, Christophe Pettus<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Horizontal Scalability / Sharding in PostgreSQL]] - ground covered so far and remaining to be covered. <br />
|Open<br />
|<br />
|Ahsan Hadi, Ashutosh Bapat, Etsuro Fujita<br />
|Hannu Valtonen, Jeff Davis, Amit Langote, Kyotaro Horiguchi, Tetsuo Sakata, Simon Riggs, Robert Haas, David Steele, Rod Taylor, Chris Browne, Jim Nasby, Josh Berkus, Chris Winters, Masao Fujii, Mehmet Emin KARAKAŞ, Fabrízio de Royes Mello, Euler Taveira, Fabio Telles, Satoshi Nagayasu, Andrew Dunstan<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PGCAC Board Meeting 2015]]<br />
|Open*<br />
|Josh Berkus<br />
|Josh Berkus, Chris Browne, Steve Singer, Dan Langille, Dave Page<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|pgPool2 towards version 3.5]]<br />
|Open<br />
|<br />
|Tatsuo Ishii<br />
|Ashutosh Bapat, Ahsan Hadi<br />
<br />
|- style="background-color:lightgray;"<br />
|Partitioning<br />
|Open<br />
|<br />
|Amit Langote<br />
|Hannu Valtonen, Ashutosh Bapat, Jeff Davis, Kyotaro Horiguchi, KaiGai Kohei, Noah Misch, Tetsuo Sakata, Álvaro Herrera, Thierry Husson, Joe Conway, Naoya Anzai, Robert Haas, David Steele, Chris Browne, Jim Nasby, Josh Berkus, Masao Fujii, Mehmet Emin KARAKAŞ, Fabrízio de Royes Mello, Euler Taveira, Fabio Telles, Andrew Dunstan<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Foreign Data Wrapper enhancements]]<br />
|Open<br />
|<br />
|Shigeru Hanada, Etsuro Fujita<br />
|KaiGai Kohei, Hannu Valtonen, Ashutosh Bapat, Jeff Davis, Amit Langote, Kyotaro Horiguchi, Noah Misch, Tetsuo Sakata, Naoya Anzai, Robert Haas, Jim Nasby, Josh Berkus, Chris Winters, Ingmar Alting, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|Utilization of modern semiconductor - GPU, SSD, NVRAM, FPGA, PMEM...<br />
|Open<br />
|<br />
|KaiGai Kohei<br />
|Matthew Wilcox, Josh Berkus, Satoshi Nagayasu<br />
<br />
|- style="background-color:lightgray;"<br />
|Native Columnar Storage<br />
|Open<br />
|<br />
|Álvaro Herrera<br />
|Ozgun Erdogan, Tomas Vondra, KaiGai Kohei, Amit Kapila, Josh Berkus, Naoya Anzai, Amit Langote, Robert Haas, David Steele, Rod Taylor, Chris Browne, Jim Nasby, Chris Winters, Nat Wyatt, Masao Fujii, Fabrízio de Royes Mello, Euler Taveira, Satoshi Nagayasu<br />
<br />
|- style="background-color:lightgray;"<br />
|Future of PostgreSQL shared-nothing cluster<br />
|Open<br />
|<br />
|Konstantin Knizhnik, Alexander Korotkov, Oleg Bartunov<br />
|Jeff Davis, Amit Langote, Kumar Rajeev Rastogi, Josh Berkus, Simon Riggs, Robert Haas, Jim Nasby, Masao Fujii, Christophe Pettus, Fabrízio de Royes Mello, Euler Taveira, Fabio Telles<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PostgreSQL and SMR Drives]] - the future of magnetic storage means very expensive random writes<br />
|Open<br />
|<br />
|Jeff Davis<br />
|Kumar Rajeev Rastogi, Noah Misch, Ilya Kosmodemiansky, Amit Kapila, Simon Riggs, Rod Taylor, Jim Nasby, Josh Berkus, Nat Wyatt, Christophe Pettus, Satoshi Nagayasu<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Slony Development]]<br />
|Open<br />
|<br />
| Steve Singer, Chris Browne, Jan Wieck<br />
| Josh Berkus, Rod Taylor, Jim Nasby, Satoshi Nagayasu<br />
<br />
|- style="background-color:lightgray;"<br />
|[[DockerizingPostgres|Dockerizing Postgres]]<br />
|Open<br />
|<br />
| Josh Berkus<br />
| Simon Riggs, Nat Wyatt, Christophe Pettus, Fabrízio de Royes Mello<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Bi-Directional Replication & Logical Decoding (BDR)]]<br />
|Open<br />
|<br />
| Simon Riggs<br />
| Andres Freund, Jim Nasby, Josh Berkus, Mehmet Emin KARAKAŞ, Christophe Pettus, Fabrízio de Royes Mello, Euler Taveira<br />
<br />
|- style="background-color:lightgray;"<br />
|Autonomous Transactions<br />
|Open<br />
|<br />
| Simon Riggs, Kumar Rajeev Rastogi<br />
| David Steele, Jim Nasby, Josh Berkus, Nat Wyatt, Masao Fujii, Euler Taveira, Andrew Dunstan<br />
<br />
|- style="background-color:lightgray;"<br />
|Audit Logging<br />
|Open<br />
|<br />
| David Steele<br />
| Josh Berkus, Nat Wyatt, Masao Fujii, Christophe Pettus, Fabio Telles, Satoshi Nagayasu<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|pg_shard v2.0 and Lessons Learned from NoSQL Databases ]]<br />
|Open<br />
|<br />
| Ozgun Erdogan, Marco Slot <br />
| Josh Berkus, Jim Nasby, Chris Winters, Mehmet Emin KARAKAŞ, Fabrízio de Royes Mello, Satoshi Nagayasu<br />
<br />
<br />
|- style="background-color:lightgray;"<br />
|Direction of json and jsonb<br />
|Open<br />
|<br />
| Andrew Dunstan<br />
| Josh Berkus, Christophe Pettus<br />
<br />
|- style="background-color:lightgray;"<br />
|Native Sparse Set Type <br />
|Open<br />
|<br />
| Andrew Dunstan<br />
| Josh Berkus<br />
<br />
|- style="background-color:lightgray;"<br />
|Testing Framework Adequacy<br />
|Open<br />
|<br />
| Andrew Dunstan<br />
| Josh Berkus, Christophe Pettus<br />
<br />
|}<br />
<br />
== pgAdmin4 ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Infrastructure Q&A ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== WWW Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Advocacy Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Vertical Scalability w.r.t Writes ==<br />
Purpose of this discussion:<br />
* Discuss the priority/importance of various performance and scalability problems<br />
* Solutions/ideas for the most important problem(s)<br />
* Is pgbench sufficient to capture the various kinds of real-world workloads?<br />
<br />
Some of the important performance problems I have in mind are:<br />
* Avoid/Reduce Vacuum Freeze<br />
* Bloat<br />
** Heap<br />
** Index<br />
* Instability in TPS due to checkpointer flushes<br />
* Tuple size<br />
** Heap Tuple Header<br />
** Alignment in indexes can lead to bigger index sizes for simple datatypes<br />
<br />
Scalability bottlenecks:<br />
* Locks<br />
** ProcArrayLock<br />
** WALWriteLock<br />
** CLOGControlLock<br />
** Lock for Relation Extension<br />
* Writes, especially when data doesn't fit in shared buffers<br />
** Write Performance<br />
** Double Buffering<br />
** In-memory tables/tablespaces<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Security Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* This will be, ahem, secure, so nothing will be written here<br />
<br />
== Partitioning ==<br />
A proposal to enhance partitioning support in PostgreSQL was posted to -hackers last year and resulted in discussion of some implementation ideas. Late in the discussion, a crude WIP patch was also posted with some experimental syntax, catalog changes, an idea for the internal representation, and a proof-of-concept INSERT tuple-routing function demonstrating the practicality of that representation. It would be nice to carry the discussion forward while implementing a patch to be proposed and reviewed early in the 9.6 development cycle. Points to discuss could be:<br />
<br />
* New features and old inheritance based implementation<br />
* Planner considerations for new partitioned table<br />
* Need for a new Append-like executor node for partitioned tables<br />
* DML/DDL restrictions on partitioned tables and partitions<br />
* Basically any considerations for partitioned tables and partitions that are explicitly defined so at a layer that's above the storage layer<br />
* Other points that come up<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Utilization of modern semiconductor ==<br />
The recent evolution of semiconductor devices makes us reconsider the assumptions we stand on, and utilizing their power is a key to innovation.<br />
We'd like to have a discussion to determine the future direction in the short and middle/long term.<br />
<br />
* GPU, FPGA - these have an advantage for simple but massive amounts of calculation, allowing the DBMS to act as a data processing platform that works near the data.<br />
<br />
* SSD, NVRAM - likely a game changer for the storage layer on both read and write workloads. The DBMS also has to pay attention to the characteristics of these devices.<br />
<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Future of PostgreSQL shared-nothing cluster ==<br />
<br />
=== Meeting Notes ===<br />
In 2015 the PostgreSQL Professional company started a project to migrate PostgreSQL-XL to the PostgreSQL 9.4 codebase and to increase its stability and usability. At this unconference session we'd like to discuss the current progress and further development. Generally, we'd like to find ways to reduce the difference between PostgreSQL and its shared-nothing cluster fork so that the maintenance burden becomes manageable.<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== PostgreSQL and SMR Drives ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Native Columnar Storage ==<br />
<br />
See Alvaro's [http://www.postgresql.org/message-id/20150611230316.GM133018@postgresql.org email to Hackers].<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Audit Logging ==<br />
<br />
Audit logging is an important part of a RDBMS for many users and applications. Discuss how best to incorporate audit logging into PostgreSQL and what must be included at a minimum to make the feature viable. <br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Direction of json and jsonb ==<br />
<br />
=== Meeting Notes ===<br />
What are the future needs of the JSON types? Recent suggestions have included an indexable "exists" operator, the JSON Pointer and JSON Patch standards, recursive merge, intersection, and being able to assign to a subdocument (json#>path as an lvalue). What are people using these types for, and what are the major gaps in functionality?<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Native Sparse Set Type ==<br />
<br />
Sets over small domains can be reasonably modeled by bitmaps, but sets over very large domains cannot.<br />
Is there a need for such sets? How would we implement them? Arrays? Balanced trees? Something else?<br />
What types of sets would we allow? Anything with Btree operators, or more restricted? What would the notation look like?<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Testing Framework Adequacy ==<br />
<br />
The buildfarm is more than 10 years old, and the testing needs of Postgres and its software ecosystem have changed radically in that time.<br />
What do we now need in the way of testing? How do we test complex arrangements, such as the various sorts of replication, in an automated way?<br />
Do we need a new framework, or can the existing framework be adapted to our needs?<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PgCon_2015_Developer_Unconference&diff=25011PgCon 2015 Developer Unconference2015-06-13T21:28:00Z<p>Adunstan: /* Topics */</p>
<hr />
<div>An Unconference-style multi-track (three tracks are currently planned) event for active PostgreSQL developers will be held from the afternoon of Tuesday 16 June 2015 through Wednesday 17 June 2015 at the University of Ottawa, as part of PGCon 2015. This Unconference will be focused on technical PostgreSQL development discussions ranging from clustering and replication to the infrastructure which runs postgresql.org.<br />
<br />
'''Please add your name under RSVPs if you plan to attend.'''<br />
<br />
== Topics ==<br />
<br />
Developers are asked to propose topics which they wish to either present on or which they would like another individual to present on. All topics should be clearly related to PostgreSQL development. The topic should be added to the table below and any required attendees (presumably at least the presenter, and the requester if different) listed. Other attendees of the Unconference who are interested should list themselves as Optional. Note that non-technical topics related to PostgreSQL development will be addressed during the invite-only Developer meeting, being held in advance of the Unconference. Further, the Developer Unconference is for developers of PostgreSQL and user-oriented topics are not appropriate for this venue.<br />
<br />
== Slot assignment ==<br />
<br />
Slots will be assigned based on the topic's interest among the attendees of the Unconference (the number of individuals who listed themselves as attendees). Final determination on any particular topic will be made by the Unconference organizers. Please only participate if you are confident of your attendance at the Unconference.<br />
<br />
== Venue ==<br />
<br />
These meetings will be held at the University of Ottawa. The topics selected, the schedule and the specific room assignments will be published closer to the event and will be based on the information provided here. Please direct any questions to Dave Page (dpage@pgadmin.org).<br />
<br />
== Sponsorship ==<br />
<br />
The Developer Unconference will be sponsored by Salesforce.com, and by NTT Open Source for the Clustering Track.<br />
<br />
== Attendees ==<br />
<br />
While the Unconference is open to all attendees of PGCon, formal invitations will be sent to specific PostgreSQL developers, including the Core team, Major Contributors, Committers, and other developers who have been involved in the 9.4 release. These invitations are intended to encourage developers to attend the Unconference but we are unable to guarantee every invitee a speaking slot.<br />
<br />
== RSVPs ==<br />
<br />
The following people have RSVPed to the meeting (in alphabetical order, by surname):<br />
<br />
* Ashutosh Bapat<br />
* Oleg Bartunov<br />
* Josh Berkus<br />
* Christopher Browne<br />
* Joe Conway<br />
* Jeff Davis<br />
* Andrew Dunstan<br />
* Ozgun Erdogan<br />
* Andres Freund<br />
* Stephen Frost<br />
* Masao Fujii<br />
* Etsuro Fujita<br />
* Peter Geoghegan<br />
* Kevin Grittner<br />
* Robert Haas<br />
* Ahsan Hadi<br />
* Magnus Hagander<br />
* Shigeru Hanada<br />
* Álvaro Herrera<br />
* Kyotaro Horiguchi<br />
* Thierry Husson (Wednesday @ 11am)<br />
* Ayumi Ishii<br />
* Tatsuo Ishii<br />
* Stefan Kaltenbrunner<br />
* Amit Kapila<br />
* Konstantin Knizhnik<br />
* KaiGai Kohei (arrive Tuesday evening)<br />
* Alexander Korotkov<br />
* Ilya Kosmodemiansky<br />
* Tom Lane<br />
* Amit Langote<br />
* Grant McAlister<br />
* Noah Misch<br />
* Bruce Momjian<br />
* Yugo Nagata<br />
* Jim Nasby<br />
* Dave Page<br />
* Paul Ramsey<br />
* Kumar Rajeev Rastogi<br />
* Simon Riggs<br />
* Tetsuo Sakata<br />
* Masahiko Sawada<br />
* Marco Slot (Wednesday)<br />
* Greg Smith<br />
* Steve Singer (arrive Tuesday mid-afternoon)<br />
* Rod Taylor<br />
* Tomas Vondra<br />
* Jan Wieck (arrive Tuesday evening)<br />
* Chris Winters<br />
* Nat Wyatt<br />
* Naoya Anzai (arrive Tuesday evening)<br />
* David Steele (arrive Tuesday evening)<br />
* Ingmar Alting<br />
* Mehmet Emin KARAKAŞ<br />
* Yasin TATAR<br />
<br />
=Topics=<br />
<br />
'''Please add any topics you wish covered to the table.'''<br />
<br />
'''For any topics you are requesting or presenting on, please add your name in the Required column.'''<br />
<br />
'''For any topics you would like to attend, please add your name in the Interested column.'''<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Topic<br />
!Policy<br />
!Taker of Notes<br />
!Required Attendees<br />
!Interested Attendees<br />
<br />
|- style="background-color:lightgray;"<br />
|Picture!<br />
|Open<br />
|<br />
|All!<br />
|All!<br />
<br />
|- style="background-color:lightgray;"<br />
|pgAdmin4<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost<br />
|Magnus Hagander, Joe Conway, David Steele<br />
<br />
|- style="background-color:lightgray;"<br />
|Infrastructure Q&A<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost, Stefan Kaltenbrunner, Magnus Hagander, Joe Conway<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|WWW Team Meeting<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost, Stefan Kaltenbrunner, Magnus Hagander<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|Advocacy Team Meeting<br />
|Open<br />
|<br />
|Stephen Frost<br />
|Magnus Hagander, Greg Smith, Jim Nasby, Josh Berkus, Joe Conway<br />
<br />
|- style="background-color:lightgray;"<br />
|Vertical Scalability w.r.t Writes<br />
|Open<br />
|Amit Kapila<br />
|Amit Kapila<br />
|Greg Smith, Hannu Valtonen, Ilya Kosmodemiansky, Tomas Vondra, Grant McAlister, Joe Conway, Kyotaro Horiguchi, Simon Riggs, Amit Langote, Andres Freund, Robert Haas, David Steele, Rod Taylor, Jim Nasby, Chris Winters, Nat Wyatt, Noah Misch, Masao Fujii, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|Security Team Meeting<br />
|Closed<br />
|<br />
|Heikki Linnakangas, Stephen Frost, Magnus Hagander<br />
|Noah Misch, Álvaro Herrera, Andres Freund, Robert Haas, Tom Lane<br />
<br />
|- style="background-color:lightgray;"<br />
|Native Compilation + LLVM<br />
|Open<br />
|<br />
|Kumar Rajeev Rastogi<br />
|Jeff Davis, Ozgun Erdogan, Tomas Vondra, Robert Haas, Chris Browne, Josh Berkus, Ingmar Alting, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Horizontal Scalability / Sharding in PostgreSQL]] - ground covered so far and remaining to be covered. <br />
|Open<br />
|<br />
|Ahsan Hadi, Ashutosh Bapat, Etsuro Fujita<br />
|Hannu Valtonen, Jeff Davis, Amit Langote, Kyotaro Horiguchi, Tetsuo Sakata, Simon Riggs, Robert Haas, David Steele, Rod Taylor, Chris Browne, Jim Nasby, Josh Berkus, Chris Winters, Masao Fujii, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PGCAC Board Meeting 2015]]<br />
|Open*<br />
|Josh Berkus<br />
|Josh Berkus, Chris Browne, Steve Singer, Dan Langille, Dave Page<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|pgPool2 towards version 3.5]]<br />
|Open<br />
|<br />
|Tatsuo Ishii<br />
|Ashutosh Bapat, Ahsan Hadi<br />
<br />
|- style="background-color:lightgray;"<br />
|Partitioning<br />
|Open<br />
|<br />
|Amit Langote<br />
|Hannu Valtonen, Ashutosh Bapat, Jeff Davis, Kyotaro Horiguchi, KaiGai Kohei, Noah Misch, Tetsuo Sakata, Álvaro Herrera, Thierry Husson, Joe Conway, Naoya Anzai, Robert Haas, David Steele, Chris Browne, Jim Nasby, Josh Berkus, Masao Fujii, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Foreign Data Wrapper enhancements]]<br />
|Open<br />
|<br />
|Shigeru Hanada, Etsuro Fujita<br />
|KaiGai Kohei, Hannu Valtonen, Ashutosh Bapat, Jeff Davis, Amit Langote, Kyotaro Horiguchi, Noah Misch, Tetsuo Sakata, Naoya Anzai, Robert Haas, Jim Nasby, Josh Berkus, Chris Winters, Ingmar Alting, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|Utilization of modern semiconductor - GPU, SSD, NVRAM, FPGA, ...<br />
|Open<br />
|<br />
|KaiGai Kohei<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|Native Columnar Storage<br />
|Open<br />
|<br />
|Álvaro Herrera<br />
|Ozgun Erdogan, Tomas Vondra, KaiGai Kohei, Amit Kapila, Josh Berkus, Naoya Anzai, Amit Langote, Robert Haas, David Steele, Rod Taylor, Chris Browne, Jim Nasby, Chris Winters, Nat Wyatt, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|Future of PostgreSQL shared-nothing cluster<br />
|Open<br />
|<br />
|Konstantin Knizhnik, Alexander Korotkov, Oleg Bartunov<br />
|Jeff Davis, Amit Langote, Kumar Rajeev Rastogi, Josh Berkus, Simon Riggs, Robert Haas, Jim Nasby, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PostgreSQL and SMR Drives]] - the future of magnetic storage means very expensive random writes<br />
|Open<br />
|<br />
|Jeff Davis<br />
|Kumar Rajeev Rastogi, Noah Misch, Ilya Kosmodemiansky, Amit Kapila, Simon Riggs, Rod Taylor, Jim Nasby, Josh Berkus, Nat Wyatt<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Slony Development]]<br />
|Open<br />
|<br />
| Steve Singer, Chris Browne, Jan Wieck<br />
| Josh Berkus, Rod Taylor, Jim Nasby<br />
<br />
|- style="background-color:lightgray;"<br />
|[[DockerizingPostgres|Dockerizing Postgres]]<br />
|Open<br />
|<br />
| Josh Berkus<br />
| Simon Riggs, Nat Wyatt<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Bi-Directional Replication & Logical Decoding (BDR)]]<br />
|Open<br />
|<br />
| Simon Riggs<br />
| Andres Freund, Jim Nasby, Josh Berkus, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|Autonomous Transactions<br />
|Open<br />
|<br />
| Simon Riggs, Kumar Rajeev Rastogi<br />
| David Steele, Jim Nasby, Josh Berkus, Nat Wyatt, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|Audit Logging<br />
|Open<br />
|<br />
| David Steele<br />
| Josh Berkus, Nat Wyatt, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|pg_shard v2.0 and Lessons Learned from NoSQL Databases ]]<br />
|Open<br />
|<br />
| Ozgun Erdogan, Marco Slot <br />
| Josh Berkus, Jim Nasby, Chris Winters, Mehmet Emin KARAKAŞ<br />
<br />
<br />
|- style="background-color:lightgray;"<br />
|Direction of json and jsonb<br />
|Open<br />
|<br />
| Andrew Dunstan<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|Native Sparse Set Type <br />
|Open<br />
|<br />
| Andrew Dunstan<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|Testing Framework Adequacy<br />
|Open<br />
|<br />
| Andrew Dunstan<br />
|<br />
<br />
|}<br />
<br />
== pgAdmin4 ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Infrastructure Q&A ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== WWW Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Advocacy Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Vertical Scalability w.r.t Writes ==<br />
Purpose of this discussion:<br />
* Discuss the priority/importance of various performance and scalability problems<br />
* Solutions/ideas for the most important problem(s)<br />
* Is pgbench sufficient to capture various kinds of real-world workloads?<br />
<br />
Some of the important performance problems I have in mind are:<br />
* Avoid/Reduce Vacuum Freeze<br />
* Bloat<br />
** Heap<br />
** Index<br />
* Instability in TPS due to checkpointer flush<br />
* Tuple size<br />
** Heap Tuple Header<br />
** Alignment in indexes can lead to bigger index size for simple datatypes<br />
<br />
Scalability bottlenecks:<br />
* Locks<br />
** ProcArrayLock<br />
** WALWriteLock<br />
** CLOGControlLock<br />
** Lock for Relation Extension<br />
* Writes, especially when data doesn't fit in shared buffers.<br />
** Write Performance<br />
** Double Buffering<br />
** In-memory tables/tablespaces<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Security Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* This will be, ahem, secure, so nothing will be written here<br />
<br />
== Partitioning ==<br />
A proposal to enhance partitioning support in PostgreSQL was posted to -hackers last year and resulted in discussion of some ideas regarding implementation. Late in the discussion, a crude WIP patch was also posted with some experimental syntax, catalog changes, an idea for internal representation, and a proof-of-concept INSERT tuple routing function demonstrating the practicality of the internal representation. It would be nice to carry the discussion forward while implementing a patch to be proposed and reviewed early in the 9.6 development cycle. Points to discuss could be: <br />
<br />
* New features and old inheritance based implementation<br />
* Planner considerations for new partitioned table<br />
* Need for a new Append-like executor node for partitioned tables<br />
* DML/DDL restrictions on partitioned tables and partitions<br />
* Basically, any considerations for partitioned tables and partitions that are explicitly defined as such at a layer above the storage layer<br />
* Other points that come up<br />
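The tuple-routing idea mentioned above can be illustrated with a tiny Python sketch. The actual WIP patch's internals are not shown here; `route_tuple`, the bounds, and the partition names are made up for illustration of range-based routing only.<br />

```python
import bisect

def route_tuple(bounds, partitions, key):
    """Pick the range partition for key: bounds[i] is the exclusive upper
    bound of partitions[i]; the last partition catches everything else."""
    i = bisect.bisect_right(bounds, key)
    return partitions[min(i, len(partitions) - 1)]

# Hypothetical layout: p0 holds key < 10, p1 holds 10 <= key < 20, p2 the rest.
bounds = [10, 20]
parts = ["p0", "p1", "p2"]
```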
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Utilization of modern semiconductor ==<br />
The recent evolution of semiconductor devices makes us reconsider the assumptions we stand on, and harnessing their power is a key to innovation.<br />
We'd like to have a discussion to work out the future direction in the short and medium/long term.<br />
<br />
* GPU, FPGA - have an advantage on simple but massive amounts of calculation, allowing the DBMS to serve as a data processing platform that works near the data.<br />
<br />
* SSD, NVRAM - likely game changers for the storage layer on both read and write workloads. The DBMS also has to pay attention to the characteristics of these devices.<br />
<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Future of PostgreSQL shared-nothing cluster ==<br />
<br />
=== Meeting Notes ===<br />
In 2015 the company PostgreSQL Professional started a project to migrate PostgreSQL-XL to the PostgreSQL 9.4 codebase and to improve its stability and usability. At this unconference session we'd like to discuss current progress and further development. Generally, we'd like to find ways to reduce the difference between PostgreSQL and its shared-nothing cluster fork so that the burden of maintenance becomes manageable. <br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== PostgreSQL and SMR Drives ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Native Columnar Storage ==<br />
<br />
See Alvaro's [http://www.postgresql.org/message-id/20150611230316.GM133018@postgresql.org email to Hackers].<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Audit Logging ==<br />
<br />
Audit logging is an important part of an RDBMS for many users and applications. Discuss how best to incorporate audit logging into PostgreSQL and what must be included at a minimum to make the feature viable.<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Direction of json and jsonb ==<br />
<br />
=== Meeting Notes ===<br />
What are the future needs of the JSON types? Recent suggestions have included an indexable "exists" operator, the JSON Pointer and JSON Patch standards,<br />
recursive merge, intersection, and being able to assign to a subdocument (json#>path as an lvalue). What are people using these types for, and what are<br />
the major gaps in functionality?<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Native Sparse Set Type ==<br />
<br />
Sets over small domains can be reasonably modeled by bitmaps, but sets over very large domains cannot.<br />
Is there a need for such sets? How would we implement them? Arrays? Balanced trees? Something else?<br />
What types of sets would we allow? Anything with B-tree operators, or something more restricted? What would the notation look like?<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Testing Framework Adequacy ==<br />
<br />
The buildfarm is more than 10 years old, and the testing needs of Postgres and its software ecosystem have changed radically in that time.<br />
What do we now need in the way of testing? How do we test complex arrangements such as the various sorts of replication in an automated way?<br />
Do we need a new framework, or can the existing framework be adapted to our needs?<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PgCon_2015_Developer_Unconference&diff=25010PgCon 2015 Developer Unconference2015-06-13T19:32:07Z<p>Adunstan: /* RSVPs */</p>
<hr />
<div>An Unconference-style multi-track (three tracks are currently planned) event for active PostgreSQL developers will be held from the afternoon of Tuesday 16 June 2015 through Wednesday 17 June 2015 at the University of Ottawa, as part of PGCon 2015. This Unconference will be focused on technical PostgreSQL development discussions ranging from clustering and replication to the infrastructure which runs postgresql.org.<br />
<br />
'''Please add your name under RSVPs if you plan to attend.'''<br />
<br />
== Topics ==<br />
<br />
Developers are asked to propose topics which they wish to either present on or which they would like another individual to present on. All topics should be clearly related to PostgreSQL development. The topic should be added to the table below and any required attendees (presumably at least the presenter, and the requester if different) listed. Other attendees of the Unconference who are interested should list themselves as Optional. Note that non-technical topics related to PostgreSQL development will be addressed during the invite-only Developer meeting, being held in advance of the Unconference. Further, the Developer Unconference is for developers of PostgreSQL and user-oriented topics are not appropriate for this venue.<br />
<br />
== Slot assignment ==<br />
<br />
Slots will be assigned based on the topic's interest among the attendees of the Unconference (the number of individuals who listed themselves as attendees). Final determination on any particular topic will be made by the Unconference organizers. Please only participate if you are confident of your attendance at the Unconference.<br />
<br />
== Venue ==<br />
<br />
These meetings will be held at the University of Ottawa. The topics selected, the schedule and the specific room assignments will be published closer to the event and will be based on the information provided here. Please direct any questions to Dave Page (dpage@pgadmin.org).<br />
<br />
== Sponsorship ==<br />
<br />
The Developer Unconference will be sponsored by Salesforce.com, and by NTT Open Source for the Clustering Track.<br />
<br />
== Attendees ==<br />
<br />
While the Unconference is open to all attendees of PGCon, formal invitations will be sent to specific PostgreSQL developers, including the Core team, Major Contributors, Committers, and other developers who have been involved in the 9.4 release. These invitations are intended to encourage developers to attend the Unconference but we are unable to guarantee every invitee a speaking slot.<br />
<br />
== RSVPs ==<br />
<br />
The following people have RSVPed to the meeting (in alphabetical order, by surname):<br />
<br />
* Ashutosh Bapat<br />
* Oleg Bartunov<br />
* Josh Berkus<br />
* Christopher Browne<br />
* Joe Conway<br />
* Jeff Davis<br />
* Andrew Dunstan<br />
* Ozgun Erdogan<br />
* Andres Freund<br />
* Stephen Frost<br />
* Masao Fujii<br />
* Etsuro Fujita<br />
* Peter Geoghegan<br />
* Kevin Grittner<br />
* Robert Haas<br />
* Ahsan Hadi<br />
* Magnus Hagander<br />
* Shigeru Hanada<br />
* Álvaro Herrera<br />
* Kyotaro Horiguchi<br />
* Thierry Husson (Wednesday @ 11am)<br />
* Ayumi Ishii<br />
* Tatsuo Ishii<br />
* Stefan Kaltenbrunner<br />
* Amit Kapila<br />
* Konstantin Knizhnik<br />
* KaiGai Kohei (arrive Tuesday evening)<br />
* Alexander Korotkov<br />
* Ilya Kosmodemiansky<br />
* Tom Lane<br />
* Amit Langote<br />
* Grant McAlister<br />
* Noah Misch<br />
* Bruce Momjian<br />
* Yugo Nagata<br />
* Jim Nasby<br />
* Dave Page<br />
* Paul Ramsey<br />
* Kumar Rajeev Rastogi<br />
* Simon Riggs<br />
* Tetsuo Sakata<br />
* Masahiko Sawada<br />
* Marco Slot (Wednesday)<br />
* Greg Smith<br />
* Steve Singer (arrive Tuesday mid-afternoon)<br />
* Rod Taylor<br />
* Tomas Vondra<br />
* Jan Wieck (arrive Tuesday evening)<br />
* Chris Winters<br />
* Nat Wyatt<br />
* Naoya Anzai (arrive Tuesday evening)<br />
* David Steele (arrive Tuesday evening)<br />
* Ingmar Alting<br />
* Mehmet Emin KARAKAŞ<br />
* Yasin TATAR<br />
<br />
=Topics=<br />
<br />
'''Please add any topics you wish covered to the table.'''<br />
<br />
'''For any topics you are requesting or presenting on, please add your name in the Required column.'''<br />
<br />
'''For any topics you would like to attend, please add your name in the Interested column.'''<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Topic<br />
!Policy<br />
!Taker of Notes<br />
!Required Attendees<br />
!Interested Attendees<br />
<br />
|- style="background-color:lightgray;"<br />
|Picture!<br />
|Open<br />
|<br />
|All!<br />
|All!<br />
<br />
|- style="background-color:lightgray;"<br />
|pgAdmin4<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost<br />
|Magnus Hagander, Joe Conway, David Steele<br />
<br />
|- style="background-color:lightgray;"<br />
|Infrastructure Q&A<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost, Stefan Kaltenbrunner, Magnus Hagander, Joe Conway<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|WWW Team Meeting<br />
|Open<br />
|<br />
|Dave Page, Stephen Frost, Stefan Kaltenbrunner, Magnus Hagander<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|Advocacy Team Meeting<br />
|Open<br />
|<br />
|Stephen Frost<br />
|Magnus Hagander, Greg Smith, Jim Nasby, Josh Berkus, Joe Conway<br />
<br />
|- style="background-color:lightgray;"<br />
|Vertical Scalability w.r.t Writes<br />
|Open<br />
|Amit Kapila<br />
|Amit Kapila<br />
|Greg Smith, Hannu Valtonen, Ilya Kosmodemiansky, Tomas Vondra, Grant McAlister, Joe Conway, Kyotaro Horiguchi, Simon Riggs, Amit Langote, Andres Freund, Robert Haas, David Steele, Rod Taylor, Jim Nasby, Chris Winters, Nat Wyatt, Noah Misch, Masao Fujii, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|Security Team Meeting<br />
|Closed<br />
|<br />
|Heikki Linnakangas, Stephen Frost, Magnus Hagander<br />
|Noah Misch, Álvaro Herrera, Andres Freund, Robert Haas, Tom Lane<br />
<br />
|- style="background-color:lightgray;"<br />
|Native Compilation + LLVM<br />
|Open<br />
|<br />
|Kumar Rajeev Rastogi<br />
|Jeff Davis, Ozgun Erdogan, Tomas Vondra, Robert Haas, Chris Browne, Josh Berkus, Ingmar Alting, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Horizontal Scalability / Sharding in PostgreSQL]] - ground covered so far and remaining to be covered. <br />
|Open<br />
|<br />
|Ahsan Hadi, Ashutosh Bapat, Etsuro Fujita<br />
|Hannu Valtonen, Jeff Davis, Amit Langote, Kyotaro Horiguchi, Tetsuo Sakata, Simon Riggs, Robert Haas, David Steele, Rod Taylor, Chris Browne, Jim Nasby, Josh Berkus, Chris Winters, Masao Fujii, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PGCAC Board Meeting 2015]]<br />
|Open*<br />
|Josh Berkus<br />
|Josh Berkus, Chris Browne, Steve Singer, Dan Langille, Dave Page<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|pgPool2 towards version 3.5]]<br />
|Open<br />
|<br />
|Tatsuo Ishii<br />
|Ashutosh Bapat, Ahsan Hadi<br />
<br />
|- style="background-color:lightgray;"<br />
|Partitioning<br />
|Open<br />
|<br />
|Amit Langote<br />
|Hannu Valtonen, Ashutosh Bapat, Jeff Davis, Kyotaro Horiguchi, KaiGai Kohei, Noah Misch, Tetsuo Sakata, Álvaro Herrera, Thierry Husson, Joe Conway, Naoya Anzai, Robert Haas, David Steele, Chris Browne, Jim Nasby, Josh Berkus, Masao Fujii, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Foreign Data Wrapper enhancements]]<br />
|Open<br />
|<br />
|Shigeru Hanada, Etsuro Fujita<br />
|KaiGai Kohei, Hannu Valtonen, Ashutosh Bapat, Jeff Davis, Amit Langote, Kyotaro Horiguchi, Noah Misch, Tetsuo Sakata, Naoya Anzai, Robert Haas, Jim Nasby, Josh Berkus, Chris Winters, Ingmar Alting, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|Utilization of modern semiconductor - GPU, SSD, NVRAM, FPGA, ...<br />
|Open<br />
|<br />
|KaiGai Kohei<br />
|<br />
<br />
|- style="background-color:lightgray;"<br />
|Native Columnar Storage<br />
|Open<br />
|<br />
|Álvaro Herrera<br />
|Ozgun Erdogan, Tomas Vondra, KaiGai Kohei, Amit Kapila, Josh Berkus, Naoya Anzai, Amit Langote, Robert Haas, David Steele, Rod Taylor, Chris Browne, Jim Nasby, Chris Winters, Nat Wyatt, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|Future of PostgreSQL shared-nothing cluster<br />
|Open<br />
|<br />
|Konstantin Knizhnik, Alexander Korotkov, Oleg Bartunov<br />
|Jeff Davis, Amit Langote, Kumar Rajeev Rastogi, Josh Berkus, Simon Riggs, Robert Haas, Jim Nasby, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PostgreSQL and SMR Drives]] - the future of magnetic storage means very expensive random writes<br />
|Open<br />
|<br />
|Jeff Davis<br />
|Kumar Rajeev Rastogi, Noah Misch, Ilya Kosmodemiansky, Amit Kapila, Simon Riggs, Rod Taylor, Jim Nasby, Josh Berkus, Nat Wyatt<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Slony Development]]<br />
|Open<br />
|<br />
| Steve Singer, Chris Browne, Jan Wieck<br />
| Josh Berkus, Rod Taylor, Jim Nasby<br />
<br />
|- style="background-color:lightgray;"<br />
|[[DockerizingPostgres|Dockerizing Postgres]]<br />
|Open<br />
|<br />
| Josh Berkus<br />
| Simon Riggs, Nat Wyatt<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|Bi-Directional Replication & Logical Decoding (BDR)]]<br />
|Open<br />
|<br />
| Simon Riggs<br />
| Andres Freund, Jim Nasby, Josh Berkus, Mehmet Emin KARAKAŞ<br />
<br />
|- style="background-color:lightgray;"<br />
|Autonomous Transactions<br />
|Open<br />
|<br />
| Simon Riggs, Kumar Rajeev Rastogi<br />
| David Steele, Jim Nasby, Josh Berkus, Nat Wyatt, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|Audit Logging<br />
|Open<br />
|<br />
| David Steele<br />
| Josh Berkus, Nat Wyatt, Masao Fujii<br />
<br />
|- style="background-color:lightgray;"<br />
|[[PgCon2015ClusterSummit|pg_shard v2.0 and Lessons Learned from NoSQL Databases ]]<br />
|Open<br />
|<br />
| Ozgun Erdogan, Marco Slot <br />
| Josh Berkus, Jim Nasby, Chris Winters, Mehmet Emin KARAKAŞ<br />
<br />
|}<br />
<br />
== pgAdmin4 ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Infrastructure Q&A ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== WWW Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Advocacy Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Vertical Scalability w.r.t Writes ==<br />
Purpose of this discussion:<br />
* Discuss the priority/importance of various performance and scalability problems<br />
* Solutions/ideas for the most important problem(s)<br />
* Is pgbench sufficient to capture various kinds of real-world workloads?<br />
<br />
Some of the important performance problems I have in mind are:<br />
* Avoid/Reduce Vacuum Freeze<br />
* Bloat<br />
** Heap<br />
** Index<br />
* Instability in TPS due to checkpointer flush<br />
* Tuple size<br />
** Heap Tuple Header<br />
** Alignment in indexes can lead to bigger index size for simple datatypes<br />
<br />
Scalability bottlenecks:<br />
* Locks<br />
** ProcArrayLock<br />
** WALWriteLock<br />
** CLOGControlLock<br />
** Lock for Relation Extension<br />
* Writes, especially when data doesn't fit in shared buffers.<br />
** Write Performance<br />
** Double Buffering<br />
** In-memory tables/tablespaces<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Security Team Meeting ==<br />
<br />
=== Meeting Notes ===<br />
* This will be, ahem, secure, so nothing will be written here<br />
<br />
== Partitioning ==<br />
A proposal to enhance partitioning support in PostgreSQL was posted to -hackers last year and resulted in discussion of some ideas regarding implementation. Late in the discussion, a crude WIP patch was also posted with some experimental syntax, catalog changes, an idea for internal representation, and a proof-of-concept INSERT tuple routing function demonstrating the practicality of the internal representation. It would be nice to carry the discussion forward while implementing a patch to be proposed and reviewed early in the 9.6 development cycle. Points to discuss could be: <br />
<br />
* New features and old inheritance based implementation<br />
* Planner considerations for new partitioned table<br />
* Need for a new Append-like executor node for partitioned tables<br />
* DML/DDL restrictions on partitioned tables and partitions<br />
* Basically, any considerations for partitioned tables and partitions that are explicitly defined as such at a layer above the storage layer<br />
* Other points that come up<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Utilization of modern semiconductor ==<br />
The recent evolution of semiconductor devices makes us reconsider the assumptions we stand on, and harnessing their power is a key to innovation.<br />
We'd like to have a discussion to work out the future direction in the short and medium/long term.<br />
<br />
* GPU, FPGA - have an advantage on simple but massive amounts of calculation, allowing the DBMS to serve as a data processing platform that works near the data.<br />
<br />
* SSD, NVRAM - likely game changers for the storage layer on both read and write workloads. The DBMS also has to pay attention to the characteristics of these devices.<br />
<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Future of PostgreSQL shared-nothing cluster ==<br />
<br />
=== Meeting Notes ===<br />
In 2015 the company PostgreSQL Professional started a project to migrate PostgreSQL-XL to the PostgreSQL 9.4 codebase and to improve its stability and usability. At this unconference session we'd like to discuss current progress and further development. Generally, we'd like to find ways to reduce the difference between PostgreSQL and its shared-nothing cluster fork so that the burden of maintenance becomes manageable. <br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== PostgreSQL and SMR Drives ==<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Native Columnar Storage ==<br />
<br />
See Alvaro's [http://www.postgresql.org/message-id/20150611230316.GM133018@postgresql.org email to Hackers].<br />
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in<br />
<br />
== Audit Logging ==<br />
<br />
Audit logging is an important part of an RDBMS for many users and applications. Discuss how best to incorporate audit logging into PostgreSQL, and what must be included at a minimum to make the feature viable. <br />
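As a baseline for the discussion, PostgreSQL's existing statement logging can serve as a crude audit trail; the settings below use real GUCs (available via ALTER SYSTEM since 9.4), but this approach lacks per-object granularity and structured audit classification, which is exactly the gap a dedicated feature would need to close:<br />

```sql
-- Crude audit baseline using existing settings (requires superuser);
-- a real audit feature would need finer-grained, per-object control.
ALTER SYSTEM SET log_statement = 'all';     -- log every SQL statement
ALTER SYSTEM SET log_connections = on;      -- log each connection attempt
ALTER SYSTEM SET log_disconnections = on;   -- log session end and duration
SELECT pg_reload_conf();                    -- apply without a restart
```

Note that log_statement records statement text only; it cannot distinguish reads from writes per object, nor attribute actions inside functions, which is part of why a purpose-built audit facility keeps coming up.<br />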
<br />
=== Meeting Notes ===<br />
* To be filled in<br />
<br />
=== Attendees ===<br />
* To be filled in</div>Adunstanhttps://wiki.postgresql.org/index.php?title=PgCon_2015_Developer_Meeting&diff=25009PgCon 2015 Developer Meeting2015-06-13T19:30:54Z<p>Adunstan: /* RSVPs */</p>
<hr />
<div>A meeting of interested PostgreSQL developers is being planned for Tuesday 16 June, 2015 at the University of Ottawa, prior to pgCon 2015. In order to keep the numbers manageable, this meeting is by '''invitation only'''. Unfortunately it is quite possible that we've overlooked important individuals during the planning of the event - if you feel you fall into this category and would like to attend, please contact Dave Page (dpage@pgadmin.org).<br />
<br />
Please note that the attendee numbers have been kept low in order to keep the meeting more productive. Invitations have been sent only to developers who have been highly active on the database server over the 9.5 release cycle. We have not invited any contributors based on their contributions to related projects, or on seniority in regional user groups or sponsoring companies.<br />
<br />
This is a PostgreSQL Community event.<br />
<br />
== Changes from Previous Developer Meetings ==<br />
<br />
Note that the goals for this year's "Developer Meeting" have shifted to account for the Unconference which is being held at pgCon immediately following the Developer meeting and lasting for 1.5 days (Tuesday afternoon and all day Wednesday). This year, the "Developer meeting" will be focused on non-technical issues such as timing/schedule, policies, procedures, and [http://en.wikipedia.org/wiki/Wicked_problem Wicked problems], be they technical or non-technical in nature. The nature of such Wicked problems is that they require a sufficient number of interested individuals to make progress and generally involve both technical and non-technical issues (trade-off decisions, no clear true or false answer, no way to test if a given solution is correct, etc). The Unconference will be focused on technical discussions and design. If you have any questions regarding the nature of the Developer meeting, please contact Dave Page (dpage@pgadmin.org).<br />
<br />
== Meeting Goals ==<br />
<br />
* Define the schedule for the 9.6 release cycle<br />
* Address any proposed timing, policy, or procedure issues<br />
* Address any proposed [http://en.wikipedia.org/wiki/Wicked_problem Wicked problems]<br />
<br />
== Time & Location ==<br />
<br />
The meeting will be from 9:00 AM to 12:00 PM at the University of Ottawa. We will update this wiki with the specific room information once we have it.<br />
<br />
Note that this meeting is intentionally shorter this year. This is due to the Unconference being held at pgCon.<br />
<br />
== RSVPs ==<br />
<br />
The following people have RSVPed to the meeting (in alphabetical order, by surname):<br />
<br />
* Josh Berkus<br />
* Jeff Davis<br />
* Andrew Dunstan<br />
* Stephen Frost<br />
* Masao Fujii<br />
* Peter Geoghegan<br />
* Kevin Grittner<br />
* Robert Haas<br />
* Magnus Hagander<br />
* Álvaro Herrera<br />
* Amit Kapila<br />
* Tom Lane<br />
* Heikki Linnakangas<br />
* Noah Misch<br />
* Bruce Momjian<br />
* Dave Page<br />
* Simon Riggs<br />
<br />
==Agenda==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Time<br />
!Item<br />
!Presenter<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|09:00 - 09:15<br />
|Welcome and introductions<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|10:45 - 11:00<br />
|Coffee break<br />
|<br />
<br />
|-<br />
|11:00 - 11:45<br />
|9.6 Schedule<br />
|All<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|11:45 - 12:00<br />
|Any other business<br />
|Dave Page<br />
<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|12:00<br />
|Finish<br />
|<br />
|}<br />
<br />
= Meeting Notes =<br />
<br />
== Attendees ==</div>Adunstan