PgCon 2009 Developer Meeting

A meeting of the most active PostgreSQL developers and senior figures from PostgreSQL-developer-sponsoring companies is being planned for Wednesday 20th May, 2009 near the University of Ottawa, prior to pgCon 2009. In order to keep the numbers manageable, this meeting is by invitation only. Unfortunately it is quite possible that we've overlooked important code developers during the planning of the event - if you feel you fall into this category and would like to attend, please contact Dave Page (dpage@pgadmin.org).

This is a PostgreSQL Community event, sponsored by EnterpriseDB.

Time & Location

The meeting will start at 9:30AM, and will finish at 5PM, or earlier if we run out of things to discuss! Tea and coffee will be available from around 9AM, and food and drink will also be served during morning and afternoon breaks and at lunchtime.

The meeting will be held in the Red Experience Room at the Novotel Hotel, which is located at:

 33 Nicholas Street
 Ottawa
 Ontario
 K1N 9M7
 Phone: (613) 230-3033

You can use Google Maps for directions if required.

Invitees

The following people have RSVPed to the meeting:

  • Oleg Bartunov
  • Josh Berkus
  • Joe Conway
  • Selena Deckelmann
  • Andrew Dunstan
  • Peter Eisentraut
  • David Fetter
  • Dimitri Fontaine
  • Stephen Frost
  • Robert Haas
  • Magnus Hagander
  • Jonah Harris
  • Zdenek Kotala
  • Marko Kreen
  • Tom Lane
  • Denis Lussier
  • Michael Meskes
  • Bruce Momjian
  • Dave Page
  • Teodor Sigaev
  • Greg Smith
  • Greg Stark
  • Joshua Tolley
  • Robert Treat

In addition, we will be joined via conference phone by Koichi Suzuki, Itagaki Takahiro and Toru Shimogaki from NTT who are not able to join us in person.

Agenda

The following agenda will be used for the meeting:

  • 09:30 - Introductions
  • 09:40 - Source code management (David)
  • 10:10 - Coffee
  • 10:25 - Release management
  • 11:05 - Synchronous + Hot Standby, completion plans
  • 11:35 - Ways to improve commitfests (Josh)
  • 12:05 - The remaining Big Adoption Issues for PostgreSQL (Josh)
  • 12:35 - Auto-configure (Greg, Josh)
  • 13:05 - Lunch
  • 13:45 - Modules/plugins packaging, upgrading (Dimitri)
  • 14:15 - Parallel Query Execution - spread queries on multiple CPUs
  • 14:45 - More comprehensive testing (Peter)
  • 15:15 - Tea
  • 15:30 - Participation in SQL standards committee (Peter)
  • 15:50 - State of PgFoundry (Peter)
  • 16:20 - Upgrade-in-place plans (short, we have a session on this in the main program)
  • 16:40 - Any other business

Minutes

PostgreSQL Developer Meeting May 20, 2009

Attending: Tom Lane, Michael Meskes, Dimitri Fontaine, Josh Tolley, Oleg Bartunov, Teodor Sigaev, Zdenek Kotala, Peter Eisentraut, Selena Deckelmann, Magnus Hagander, David Fetter, Stephen Frost, Greg Smith, Greg Stark, Dave Page, Robert Haas, Robert Treat, Joe Conway, Andrew Dunstan, Denis Lussier, Bruce Momjian, Jonah Harris, Marko Kreen

Attending via Skype: Koichi Suzuki, Shimogaki-san, Itagaki-san, Simon Riggs, Greg Sabino Mullane

Source Code Management

The system has worked for us so far, but David wants to change it. Most people at the developer meeting are using Git, and David thinks we should use Git for 8.5. What's wrong with using the Git mirror? Problems include: the CVS-to-Git conversion breaks; patching is harder for committers. Changing to Git will not increase the number of committers. Peter did a survey of committers, and 12 out of 15 didn't want to switch. We don't want to change the process of having patches vetted before they're applied; the SCM doesn't matter for this. Moving to Git might change the perception of working on branches.

Continuing issues with the Git mirror: it is currently broken due to an rsync issue. Translators could make use of Git, but there are other issues. Message translation is also a bottleneck, with weird scripts; the Git mirror would be good for this. We can fix the mirror script, but could have issues in the future.

Git for Windows works now. No longer an issue.

One problem is that Git does not produce context diffs without add-on scripting. Code review in a distributed workflow is different: you don't read the patch, you load it into a tree. But some people will expect context diffs. We might want to ask the Git project for integrated support. We need to solve that issue before moving over. Since most Git users use its browsing tools rather than reading the diff directly, maybe if we used Git we wouldn't want context diffs.
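
Context diffs can be reconstructed from a Git checkout with a small amount of scripting; as a minimal sketch (not an endorsed workflow; the revision names and file path are illustrative), Python's standard difflib can rebuild a context diff from two revisions of a file:

 # Minimal sketch: context diff between two Git revisions of one file.
 # Revisions and path below are illustrative, not a project convention.
 import difflib
 import subprocess

 def git_context_diff(rev_a, rev_b, path):
     def show(rev):
         out = subprocess.run(["git", "show", f"{rev}:{path}"],
                              capture_output=True, text=True, check=True)
         return out.stdout.splitlines(keepends=True)
     return "".join(difflib.context_diff(show(rev_a), show(rev_b),
                                         fromfile=f"a/{path}",
                                         tofile=f"b/{path}"))

 if __name__ == "__main__":
     print(git_context_diff("HEAD~1", "HEAD", "src/backend/tcop/postgres.c"))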

We can pull trees back to 7.4 and it works, but we need to test builds for this. Git has tags. Are we going to lose people who don't have Git? What about other platforms, including ones that might not have the full toolchain required to build Git?

How would the buildfarm work? We need to investigate whether it'll work and how hard it'll be to change. Greg Sabino Mullane wrote a patch already; the SCM could be chosen through a config option. If we used Git and GitHub, we could test experimental patches through the repository. OmniTI did this with the DTrace probes.

Using Git would reduce bitrot. With CVS you can't tell what modifications a committer made to a submitted patch. Git would also help get work done more quickly. Finding reviewers who will use CVS may be difficult in the future; young people won't use CVS. Greg Smith says that Truviso has switched to Git and it's much better, especially at reducing bitrot. Summer of Code students worked much better with distributed source code management.

Actions:

Incomplete item Check on context diffs & Git
Incomplete item Change buildfarm scripts, client and server side, to add git as an SCM (Andrew Dunstan)
Incomplete item Check whether all the buildfarm machines can be made to work directly with Git (Andrew Dunstan)
If not, some sort of CVS emulator (such as git-cvsserver) will need to be set up.
[D] Completed item Fix GitMirror script (Magnus)
Incomplete item Confirm past releases can be built identically from Git, using binary diff
Incomplete item Make decision on whether we'll change now.
Incomplete item Announce the decision to move early, to allow core/committers/contributors the opportunity to really learn the tool before the switch

Alpha Releases

The idea is to get back into the "release early, release often" mode: do release tarballs after each commitfest. Check that the buildfarm is green, wrap a tarball, make an announcement, package it. Packagers are already doing this in an unapproved way.

Where are we going to get release notes for this? (open item) If we want early testing, we need to let people know what's in the release. We could just take a list of the commits and edit it down. If we wrote release notes more frequently, it might be easier at the end, but that hasn't worked in the past. Bruce went over the process of how we generate the release notes. We could just have a big list of all the features on the wiki, or maybe just point people at the CommitFest page.

What about users who use alphas in production? We will get users who deploy alphas; that's their fault. Maybe we do a catalog version bump after every alpha. No, that makes it harder.

What would we name them? Date stamp vs. version number? Tie the alphas to the commitfest names? The vote was to do alpha1, alpha2, etc.

psql should say "alpha". Do we want people to really test this? Some features are known to be broken at the end of a commitfest. Some people may think it's alpha-beta-final, and it's not. What else would we call it though? Maybe we should call it "CF". Or "bikeshed". But RPM (and probably other packaging systems) needs ASCII/numeric N-V-R sorting, which means the name has to sort before "beta".

Conclusion: it's 8.5alpha1, 8.5alpha2, etc.
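
A quick illustration of the packaging constraint mentioned above, assuming plain ASCII ordering as an approximation of RPM's N-V-R comparison:

 # "alphaN" sorts before "betaN" under ASCII ordering, which is why
 # the alpha name works for RPM-style N-V-R version sorting.
 names = ["8.5beta1", "8.5alpha2", "8.5alpha1"]
 print(sorted(names))  # ['8.5alpha1', '8.5alpha2', '8.5beta1']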

We want everyone using the same snapshot for testing as a common reference point, and the alpha proposal does this. Alphas would be announced only on postgresql.org; other news feeds will grab them from there. And an explanation of the alpha process needs to be in the announcement.

Actions:


[D] Completed item Document alpha release process: Alpha release process

Synchronous & Hot Standby

Simon plans to complete Hot Standby for 8.5, but it won't be ready for the very first commitfest; Simon doesn't have time to work on the project immediately and won't get to it for at least 6 weeks. Synchronous replication is still in development. NTT is dealing with each review comment on Synch Rep; Koichi believes these will all get fixed, but the patch needs review and debugging.

Peter asked for debugging/review of Synch Replication. Can we get something in for the first alpha now? Can we break it up into smaller pieces so that we can test it more easily?

Doing both projects at once is problematic. The code is actually pretty separate, but trying to tackle both at once is too difficult, and choosing an order would help; maybe Synchronous Replication should be first. We need reviewers for each piece; it doesn't make sense to have target dates. Professional development schedules affect Simon's availability. Robert Haas is happy to read the patch. Heikki will stay on Hot Standby. Josh asked Bruce to be responsible for Synch Rep. Peter's not sure what his schedule is.

What about using fundraising to pay for review? That would be possible; some consultants are committers. Andrew doesn't feel competent. Ask Alvaro? Andrew G.?

We need to look at the features and prioritize what will be provided in the patch.

Actions

Incomplete item Get a firm responsible committer for synch replication
Incomplete item Confirm with Heikki
Incomplete item Determine expected CF for patches.

CommitFests

Commitfests have been good but not perfect, and need improvements in a number of areas. For one, the tools are horrible. Is it possible to extend existing tools like Request Tracker to do what we need? A MediaWiki extension for RT?

Robert H: the frustrating part was the bad patches (cleanup issues, doesn't apply, etc.). One way to improve things is to find a procedure whereby real reviewers review serious patches, with a lower tolerance for useless patches. Would like software which supports this; GitHub/buildfarm might help. Josh thinks triage using volunteers is the best way to do this. A lot comes down to tools, which Robert is working on. We could also use improvements in the archives.

Last commitfest should be for "previously seen" patches only. Late patches will get bounced to the next version. We should make exceptions. CM needs to have authority to bounce stuff without argument.

Simon: won't we just be transferring the pain from the last commitfest to the penultimate one? What about just accepting a long integration phase? We don't want a long integration phase because it halts development. We need to have the largest patches early, and smaller ones later.

Small patches are not a problem; they're easy to approve or reject. Tom made the mistake of committing the easy patches first during the November fest, leaving little for less-experienced people to do. He won't do that again.

Peter doesn't care about most patches; that's what the RRR (round-robin reviewers) handle. There are other things which are not that interesting; people need to explain them on -hackers. We're too quick to reject patches because the stuff isn't interesting (Robert H). We need more information about each patch. A web form? No, we need to discuss it on the mailing list. Add requests for test cases and justification to the guide for reviewers. We need a guide for submitters which includes all of this stuff: include documentation, a test case, etc., or the patch gets rejected.

The original purpose of the CF was to get more prompt feedback. Now we're trying to spread out patch submissions too, so that not all big patches land at the final fest.

Patches could also use short names as unique IDs. Or maybe not ... there's patch drift. Each new post should include a link to the original submission e-mail. We should look at ReviewBoard again; it has improved a lot in the last few months.

Actions:

Incomplete item Work on tools
Incomplete item Check ReviewBoard
Incomplete item Improve Submitting a Patch (Greg Smith, Stephen)
Incomplete item Fix archives?
Incomplete item Change the policy on accepting patches to the last commitfest.

Top Adoption Issues

  • Installers: pretty much fixed. Could use better documentation on Windows for troubleshooting, where the Windows installer puts files, and how to run tools via the command line: all common issues.
  • Simple low-overhead replication. WANT!
  • Upgrade in place (below)
  • Admin tools need love, both monitoring and managing large numbers of installations (needed to expand PostgreSQL use for big Shared Database Hosting providers)
  • Driver quality needs some love. Official list of drivers in the Software Catalogue
    • Perl & Ruby are good
    • MSFT and JDBC4 need *lots* of love.
    • JDBC performance needs *lots* of love.
    • Driver developers & maintainers need recognition.
    • ODBC needs love.
    • Python has no entries in the catalog...but there are multiple driver projects with no clear leader.
  • Into the future...
    • Module add-ons. We don't help yet (see "Modules" below).
    • Per-column locale/collation

Simon has also heard lots of requests for VLDB features. Regarding PostGIS, we mainly need better module support for installing it and things like it. The one-click installer takes care of some of this; there's also RPM/Deb packaging. Linux packagers would like a list of which things to build, but staying up to date is very hard.

Synchronous Replication & Hot Standby are major gating factors; if we don't have these, the PostgreSQL community may stop growing (Josh). Some discussion of existing replication features followed. If we put them in the first CF we would start the alpha program with a bang. Are we willing to commit stuff which is known to be incomplete? Why not just use Git and have a Hot Standby tree? That's what it's for. We need to find out what the status is. It needs to be in by the 1st or 2nd commitfest.

Actions:

Incomplete item Blog the Top Priorities

Auto-Tuning

Greg Smith: some progress towards autotuning. There is a simple config script right now to output the bare minimum config options. It works for some configuration settings and is fairly conservative; some bugs, but it works OK for 8.4. The initial spec included converting between the various ways we see this information. Some people want a heavily commented config file, some think we should have a minimal config file, and some think we should have something in between. No consensus is ever expected there, and trying to reach one is a waste of time.

A tool to convert into brief form, annotated form, etc. The current tool just comments stuff out and adds settings at the beginning. A dump of pg_settings for 8.4 is included with the pg_tune tool so that we can generate output without a database being up.
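
For flavor, a toy sketch of the kind of conservative heuristic such a script applies; the ratios here are illustrative only and are not pg_tune's actual rules:

 # Toy autotuning heuristic; ratios are invented for illustration.
 def suggest_settings(total_ram_mb, db_type="web"):
     settings = {
         # classic conservative starting point: ~25% of RAM, capped
         "shared_buffers": f"{min(total_ram_mb // 4, 2048)}MB",
         # planner hint: assume most of RAM is available as OS cache
         "effective_cache_size": f"{total_ram_mb * 3 // 4}MB",
     }
     if db_type == "dw":   # data warehouse: fewer, bigger sessions
         settings["work_mem"] = f"{total_ram_mb // 32}MB"
     else:                 # web: many small sessions
         settings["work_mem"] = f"{total_ram_mb // 256}MB"
     return settings

 for name, value in suggest_settings(4096).items():
     print(f"{name} = {value}")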

Two development priorities: (1) rewrite as a C tool; (2) a standard comment form for postgresql.conf, distinguishing which comments are always in versus user comments, perhaps with a different delimiter. We can't do this because there are no standards for comments. Also, it's better to have separate user and auto-generated .conf files. Many newbies hate the complex config file, but sysadmins like it.

Selena: it might be more effective to deliver recipes to config management systems. Josh: recipes are too complicated. Greg: people use bad/ancient recipes all the time. We should have two separate files; Apache is a good example, with its local.cf includes structure.

(3) Another issue: upgrading configuration files. That hurts upgrades. We also need a config-test utility. (4) We also need to generate a sysctl.conf; OS X is permanently broken.

The sample is commented out and is included in the main (automatic) configuration file. Or maybe use a directory: a subdirectory which we parse in alphabetical order. Directory scanning would also help module installs.
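
A sketch of the directory-scanning idea, assuming simple "key = value" lines and last-setting-wins semantics; the directory layout and ".conf" suffix are hypothetical:

 # Parse a config subdirectory in alphabetical order; later files
 # override earlier ones. The "conf.d" convention is hypothetical.
 import os

 def read_conf_dir(path):
     settings = {}
     for fname in sorted(os.listdir(path)):        # alphabetical order
         if not fname.endswith(".conf"):
             continue
         with open(os.path.join(path, fname)) as f:
             for line in f:
                 line = line.split("#", 1)[0].strip()  # drop comments
                 if "=" in line:
                     key, _, value = line.partition("=")
                     settings[key.strip()] = value.strip()
     return settings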

What about putting it in the database? Don't even bring it up!

Actions:

Incomplete item Implement Directories (Magnus, Greg Smith)
Incomplete item Finish pg_tune (Greg Smith)
Incomplete item Work out details on hackers list. (Greg/Magnus/Josh)

Lunch

We ate lunch. It was good.

Modules

The first part of module design is the user spec; work has started on it on the wiki. When modules are installed, they shouldn't get dumped, but re-installed for new versions. So we need to have objects belong to modules. Tables are a special problem. We need a lot of design work in terms of what we want: there are too many conflicting purposes right now.

There is nothing in the SQL standard about this. What do we call it? "Module" and "Package" are taken. How about "Extensions"? "Plug-ins"?

A lot of stuff hinges on ownership: you need a concept of objects belonging to a module, and then we can build a lot of other functionality.

The version upgrade problem is critical. PostGIS has special syntax for populating the PostGIS tables. We need a concept of objects which don't get pg_dumped, but module data in module tables is a special problem: PostGIS has special tables with auto-created stuff. Maybe we need to run pre- and post-install scripts that handle specific pg_dump/pg_restore requirements. We need a way to deal with it. PostGIS is one example of a hard case; tsearch2 is another.

Do we need special schemas? No, dependency tracking would solve this. But we could have a special schema. But the search path is broken.

Where do releases go so that people can install them? Maybe we should just have a file spec. Downloading stuff is completely separate; tools like Python's easy_install don't care how you got the bundle. We also want to support Linux packaging systems for this. We can be flexible about this as long as bundles match a directory spec. How do you make sure that compiled modules work with various platforms?

How would dependencies work? Lots of discussion.

Do we want to have module permissions? People seem to think no: filesystems don't do this. But PG is not a filesystem. Maybe we should just do this with schemas, or we just extend GRANT to support wildcards.

Actions:

Incomplete item Finish design spec for "modules"
Incomplete item Pick new name
Incomplete item Figure out how to deal with internal module data

Parallel Query Execution

Zdenek has a Master's student who is working on Parallel Query Execution; he is starting by trying to thread Postgres. But the Postgres backend isn't thread-safe, so this can't be done. Early POSTGRES had a prototype of parallel query that was multiprocess. Zdenek wants to use threads, but there are other ways. The big win cases for parallel execution are long-running queries, and there the difference in overhead doesn't matter.

Zdenek will discuss using a multi-process model with the student. Tom claims that a thread-based model will never run; Simon agrees.

There are two concepts of parallel query: executing query nodes in parallel, or splitting a single node (i.e. workers each handle a portion of a sequential scan). This is for parallel query on a single machine. You pretty much know how many workers you'll need: roughly one per core. What about overallocation of workers? Actually, that's an opportunity for resource management: rebalance workers whenever a new query is added. The system can know what resources it has available. Original POSTGRES had some code for parallel query, and Simon had done some work on this: "chunk out" the work.
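
As a toy illustration of the "split a single node" idea (not PostgreSQL code), dividing one scan across roughly one worker per core:

 # Toy illustration: split one sequential scan across workers.
 from multiprocessing import Pool, cpu_count

 def scan_chunk(chunk):
     # stand-in for "apply the qual to every tuple in this chunk"
     return [row for row in chunk if row % 7 == 0]

 def parallel_seqscan(rows, workers=None):
     workers = workers or cpu_count()      # about one worker per core
     size = (len(rows) + workers - 1) // workers
     chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
     with Pool(workers) as pool:
         parts = pool.map(scan_chunk, chunks)
     return [row for part in parts for row in part]

 if __name__ == "__main__":
     print(len(parallel_seqscan(list(range(1000000)))))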

Sharing snapshot clones is needed for parallel dump, too; Andrew is looking into it. The current clone code from Simon works with 8.1 and needs to be updated to 8.4. The code is written as an extension rather than a backend patch. The current way it works is not expected to be committed, though, and changes to snapshots since then allow us to do the same thing using shared memory.

Actions:

Incomplete item Zdenek to talk with grad student
Incomplete item Jonah to help Zdenek find the POSTGRES stuff.

Comprehensive Testing

Peter has done work on test coverage. We have a bunch of code which is not tested in any organized way. How do you do patch review without tests? We probably have bugs we don't know about. We need a lot more test cases.

One problem is that huge numbers of tests take too long to run, so divide them into chunks and run just one chunk. Maintenance is also an issue. Some testing is very difficult, like testing recovery, but simple tests should be simple. Greg Smith says frameworks/harnesses don't really help us that much. We don't have a way to test big issues like "does pg_restore work?".

The question is: do people know testing better than Peter? Greg is already running more complicated tests.

Performance regression testing: Greg has a tool for running pgbench automatically. It saves all results in a database and makes it possible to chart them over time; it just needs new reports added. The code is not public yet; Greg will post it. Are there benchmarks we can just use? Nothing is complete/easy/fast; we need to do development.
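
This is not Greg's tool, but a sketch of the general approach: run pgbench, scrape the tps figure, and archive it in a results table. The table, connection string, and pgbench options are hypothetical, and the target database is assumed to be initialized with pgbench -i:

 # Sketch of automating pgbench and archiving results over time.
 # The results table and connection string are hypothetical.
 import re
 import subprocess
 import psycopg2

 def run_pgbench(args=("-T", "60", "-c", "8")):
     out = subprocess.run(["pgbench", *args], capture_output=True,
                          text=True, check=True).stdout
     return float(re.search(r"tps = ([\d.]+)", out).group(1))

 def record(tps):
     with psycopg2.connect("dbname=results") as conn:
         with conn.cursor() as cur:
             cur.execute("INSERT INTO pgbench_runs (run_at, tps) "
                         "VALUES (now(), %s)", (tps,))

 record(run_pgbench())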

Simon: we don't have optimizer testing. Tom: we need something to test really complex cases.

We need a framework to save tests over time so that we're building up a suite of test cases instead of throwing them away.

We can write tests in Perl if we want, and then we can use Perl-based frameworks; we already have a build dependency on Perl. Performance tests may need to allow a margin of error.

We will need a test suite for in-place upgrade; Bruce is already using the regression test database for testing pg_migrator.

Performance tests may need to be separate from issue tests. Also, don't introduce tests that fail regularly; there's too much pressure to comment them out and forget about them.

We would also like to be able to run other stuff on the buildfarm, like specific unit tests for platforms: maybe not performance, but testing for portability issues. Flag stuff in the buildfarm for other tests. The first priority is to get the buildfarm working with Git; after that, maybe some of the performance regression tests.

We also need driver tests and the like. And Slony, and other stuff.

We are testing contrib.

We don't want to start a separate mailing list: do it on -hackers or the issue will die. Maybe move it later when the effort takes off.

Has anyone used Hudson? CMake lacks documentation.

Actions:

[D] Completed item Peter & Greg Smith to talk.
[D] Completed item Post pgbench-tools code. (Greg Smith)
Incomplete item Research existing OSS test frameworks. (Greg, Peter)

SQL Standards Committee

Peter, working at Sun, got the Sun/MySQL seat on the SQL Committee. You have to pay $1000 to sign up, then pay $400 a year, and you have to name an individual delegate and a backup delegate. There is a phone conference every other month; you have to attend one before you can vote, and if you miss too many you get dropped. There are in-person meetings twice a year, but you're not required to attend.

This would give us access to the drafts, and to whether various parts of the standard are still being worked on.

So, who's interested? Peter, Stephen Frost, David Fetter, Robert Treat, Greg Stark, Josh Berkus.

Josh talked some about the TPC, and TPC membership and what's involved. But TPC is probably too expensive right now. Discussion ensued.

Actions:

[D] Completed item Peter to get full details.
[D] Completed item Peter to propose to funds group.
Incomplete item Organize committee.

pgFoundry

We set up new hardware, and Gsmet was going to help us migrate to it, but he disappeared. There's a new open source version called FusionForge.

pgFoundry has some deviations from the mainstream version, like FreeBSD fixes. The infrastructure team has no idea how it works; it was done in a rush. The changes aren't that great.

Now we have the Git host too, which is yet more hosting.

Robert Treat thinks we should use a bunch of external hosting: SourceForge, Google, etc. But we have existing projects and community stuff on pgFoundry already.

We shouldn't kill it off before we have the design spec for Extensions. pgFoundry is also important for helping people find Postgres stuff, though most of our most popular projects don't use it. But what about smaller projects? SourceForge doesn't place any restrictions.

What about packagers using the FTP mirrors? StackBuilder is using them.

How much time are we spending on it? Some fixing because it breaks down about once a month, and we're spending about an hour a week administering it, but there's a lot of things we don't do. Josh went over the disaster of the original pgFoundry deployment.

There's also VHFFS as an alternative to pgFoundry, but all its docs are in French. (It's being improved, it seems: installation guides & FAQs are available in English; technical docs are in French only.)

What about putting together something as a replacement? We don't have the manpower. How about migrating to a new machine? What about killing off projects? Not relevant to this issue.

The real problem is that nobody wants to work on it.

What about moving it to Linux? Well the whole infrastructure is on FreeBSD. We don't have a way to manage it on Linux.

Plan: do a clean, good install on the new machine and just move the database, the mailing lists, and the CVS and HTML files.


Actions:

Incomplete item Peter to join GForge team.

[Update: Gsmet came back and is doing the upgrade to FusionForge.]

Upgrade-in-Place

Bruce is working on pg_migrator for 8.3 --> 8.4. There are two major issues we need to solve: storage upgrade and catalog upgrade. There are several methods for doing that; we need to make a decision on which to use.

It's a different kind of project from other features. Other features get "done", but UIP requires everyone who submits a patch which changes a catalog or disk structure to do something about UIP. It's how Illustra did it. We need standard code for that.

Should we ship pg_migrator with the core code? Not this release, maybe next one.

pg_migrator has several issues: (1) you need the old version of the server; (2) storage and files, and protecting TOAST; (3) pg_dump dumps the DDL commands and you lose information (that's being fixed). Zdenek has a catalog upgrade prototype which does not require the old version to be around: upgrade the pg_upgrade tablespace and most stuff into the new tablespace. What does this solve, though? Not having binaries around from the old version; it also preserves the tablespaces.

But there is maintenance overhead for Zdenek's version, because you need to add migrations for each system catalog change. At about 50 system catalog changes per version, over 5 years this would be 250 migrations to maintain. How about adding version numbers to catalog entries instead? That has some issues/limitations. Getting pg_migrator working is good, but there are still holes, and internally pg_migrator is fairly complex. But storing up delta changes is unreasonable for stuff like the ROLE support changes ... the transformations would be awful.

What's our commitment to pg_migrator? Will we support alphas? pg_migrator pg_dumps the catalogs and transforms them.

Let's look at the heap and the index. You have the heap format and the bitmaps. If you're looking at the page format, we'd need to read and change the bits. Currently pg_migrator links stuff over; we'd have to copy it for heap page changes.

If we're changing the pages, we'd have to look at the pages in copy mode or rewrite the pages. For datatype changes we create a new column in a schema, then copy stuff over into a new OID. Are datatype changes going to require the developer to make the change? For large databases, reindexing is not feasible: reindexing is often 80% of migration time on big systems (Josh).

Maybe we'll just have some versions we can't upgrade. Zdenek says that we should guarantee upgradability forever; other projects have done it. There will be fights about whether features are important enough to justify breaking upgrade-in-place; how are we going to make that decision?

What did we do for 8.2 -> 8.3? We changed the numeric format, var-varlena, HOT, and the phantom command ID.

Three methods for UIP: (1) convert-on-read (diagram): write the converted page back as you read it; the big problem there is TOAST data. (2) Read old, write new: Zdenek has a prototype of this. (3) Modularized AM: the whole relation is in one format or another and is read through a converter. For indexes we can just treat the old format as a separate index method, but we can convert heaps with update-on-read and do this for indexes too.
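
A toy sketch of the convert-on-read method; every structure here (page dicts, version numbers, converter table) is invented for illustration and bears no relation to PostgreSQL's actual page format:

 # Toy convert-on-read: check a page's version when it is read,
 # convert old layouts in memory, write the converted page back.
 CURRENT_VERSION = 5

 def convert_v4_to_v5(page):
     # stand-in for a real page-layout transformation
     return {**page, "version": 5}

 CONVERTERS = {4: convert_v4_to_v5}

 def read_page(storage, page_no):
     page = storage[page_no]
     while page["version"] < CURRENT_VERSION:
         page = CONVERTERS[page["version"]](page)
         storage[page_no] = page    # write converted page back
     return page

 storage = {0: {"version": 4, "data": b"..."}}
 print(read_page(storage, 0))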

Side issue ... we would like to be able to move tables in binary form between systems.

Tabled for tomorrow's session.

Other Business

Josh brought up the issue of negative experiences for new hackers: it's a bit of an ordeal for a lot of new people. How could we improve it? Ubuntu put up a code of conduct for community folks. On our lists, people give really honest, blunt feedback, but it's not good for new people, and it's not really good to get too emotional even with people who can take it. It was reported that students are afraid to post their patches.

Andrew says: take a breath and step back before getting emotional, or take it off-list. But it has to be a general change of behavior for everyone. And use reasons rather than just facts. Our FAQ also needs cleaning, and "search the archives" isn't very helpful. We need to be nice all the time so it's a habit.

Action:

Incomplete item Look at Ubuntu Code of Conduct
Incomplete item Be nice