PostgreSQL wiki: Binary Replication Tools (last edited by Greg, 2024-01-25)
<hr />
<div>= Disclaimer =<br />
<br />
This is a Work-In-Progress, started Dec 28, 2012.<br />
<br />
Additions welcome, and [[User:Selena|Selena]] reserves the right to edit. :)<br />
<br />
= Purpose =<br />
<br />
Compare binary replication tools for PostgreSQL by feature set and ease of use. This document classifies and differentiates binary replication tools to make it easier to select the right tool for a given purpose.<br />
<br />
== Comparison Matrix ==<br />
<br />
{|border=1<br />
!Tool!!Documentation!!License!!Makes base backups!!Makes base backups from replicas!!Manages backups!!Creates replicas!!Monitors replication delay!!Supports automated failover!!Transport used!!Source includes replication tests<br />
|- style="background-color:#ffffcc;"<br />
| [http://www.postgresql.org/docs/current/static/app-pgbasebackup.html pg_basebackup]*<br />
| [http://www.postgresql.org/docs/current/static/app-pgbasebackup.html Postgres docs]<br />
| PostgreSQL<br />
| Yes<br />
| Yes<br />
| No<br />
| Manual<br />
| No<br />
| No<br />
| PostgreSQL connection<br />
|<br />
|-<br />
| [http://www.pgbackrest.org/ pgBackRest]<br />
| [http://www.pgbackrest.org/user-guide.html Documentation]<br />
| MIT<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Yes<br />
| No<br />
| No<br />
| SSH / S3 / Azure / GCS<br />
| Yes<br />
|-<br />
| [http://www.pgbarman.org/ pgbarman]<br />
| [http://docs.pgbarman.org/ Documentation]<br />
| GPLv3<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| Yes<br />
| No<br />
| rsync/SSH, pg_basebackup<br />
|<br />
|-<br />
| [https://github.com/omniti-labs/omnipitr OmniPITR]<br />
| [https://github.com/omniti-labs/omnipitr/blob/master/doc/intro.pod Intro]<br />
| PostgreSQL<br />
| Yes<br />
| Yes<br />
| No<br />
| Manual<br />
| WAL archive delay<br />
| No<br />
| rsync / SSH<br />
|<br />
|-<br />
| [https://github.com/ohmu/pghoard pghoard]<br />
| [https://github.com/ohmu/pghoard/blob/master/README.rst Readme]<br />
| Apache<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Yes<br />
| No<br />
| No<br />
| S3, Azure, Ceph, GCS<br />
| Yes<br />
|-<br />
| [http://code.google.com/p/pg-rman/ pg-rman]<br />
| [http://code.google.com/p/pg-rman/wiki/readme Readme]<br />
| BSD<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| local / NFS mount<br />
|<br />
|-<br />
| [http://www.repmgr.org/ repmgr]<br />
| [https://github.com/2ndQuadrant/repmgr#readme Readme]<br />
| GPLv3<br />
| No<br />
| No<br />
| No<br />
| Yes<br />
| Yes<br />
| Yes<br />
| rsync / SSH<br />
|<br />
|-<br />
| [https://github.com/markokr/skytools Skytools]<br />
| [https://github.com/markokr/skytools/blob/master/doc/walmgr3.txt walmgr3]<br />
| BSD-ish<br />
| ?<br />
| ?<br />
| ?<br />
| Manual<br />
| Yes<br />
| Yes<br />
| ?<br />
| ?<br />
|-<br />
| [https://github.com/wal-e/WAL-E WAL-E]<br />
| [https://github.com/wal-e/WAL-E#readme Readme]<br />
| BSD<br />
| Yes<br />
| No<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| HTTPS/SSL<br />
|<br />
|-<br />
| [https://github.com/wal-g/wal-g WAL-G]<br />
| [https://github.com/wal-g/wal-g/blob/master/README.md README]<br />
| Apache 2.0<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| HTTPS/SSL<br />
| Yes<br />
|-<br />
| [https://github.com/postgrespro/pg_probackup pg_probackup]<br />
| [https://github.com/postgrespro/pg_probackup/blob/master/README.md README]<br />
| PostgreSQL<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| local / NFS mount<br />
| Yes<br />
|-<br />
| [https://github.com/dalibo/pitrery pitrery]<br />
| [http://dalibo.github.io/pitrery/documentation.html Documentation]<br />
| BSD-2<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| rsync / SSH<br />
| ongoing<br />
|}<br />
<br />
* pg_basebackup is included with a standard PostgreSQL installation.<br />
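
As a concrete illustration of the table's first row, a sketch of using pg_basebackup to clone a primary and set up a replica in one step (host name, user, and data directory are placeholders):

```shell
# Clone the primary over a PostgreSQL connection; -R writes the
# recovery settings so the clone starts as a streaming replica,
# and --wal-method=stream fetches the WAL needed for consistency.
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/postgresql/data \
    --wal-method=stream -R --progress
```

On versions before PostgreSQL 10 the equivalent of --wal-method=stream is -X stream.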
<br />
<br />
'''Tool''': name of the project that manages binary replication or WAL archiving<br />
<br />
'''Documentation''': Link to canonical documentation for the project. Several projects have broken links that show up as top results in Google.<br />
<br />
'''License''': The license the software is released under. So far, only open/free software projects are listed; commercial projects could be added.<br />
<br />
'''Makes base backups''': Yes if the project supports creating binary archives, including the necessary WAL to restore a backed-up instance.<br />
<br />
'''Makes base backups from replicas''': Yes if the project supports creating binary archives and WAL using the PGDATA directory from a replica rather than the master database.<br />
<br />
'''Manages backups''': Yes if the project adds, removes and lists binary archives.<br />
<br />
'''Creates replicas''': Yes if the project automatically adds a recovery.conf (sets up replication) as part of restoring a base backup.<br />
<br />
'''Monitors replication delay''': Yes if the project supports monitoring of replication delay (WAL shipping or streaming replication).<br />
<br />
'''Supports automated failover''': Yes if the project has an option for detecting master failure and promoting a replica to master.<br />
<br />
'''Transport used''': Supported methods for file transfer for making backups or replicas<br />
<br />
= Barman =<br />
<br />
[http://www.pgbarman.org/ pgbarman] [https://github.com/selenamarie/pg_replication_demo/tree/master/barman Demo setup for pgbarman]<br />
<br />
Summary of features:<br />
* Creates base backups<br />
* Uses SSH as transport for backup<br />
* Configuration stored in file or command-line<br />
* Restore is automatable for creating replicas, although this is not explicitly supported<br />
* GPLv3<br />
<br />
Install notes:<br />
* Had a dependency problem with 9.1 on Ubuntu Precise, so installed from source<br />
* Missing dep for argcomplete, noted in README in demo repo<br />
* written in Python<br />
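
A minimal per-server entry in barman.conf might look like the following sketch (host names, paths, and the retention policy are illustrative, not taken from the demo repo):

```ini
[main-db]
description = "Main production database"
ssh_command = ssh postgres@db1.example.com
conninfo = host=db1.example.com user=barman dbname=postgres
retention_policy = RECOVERY WINDOW OF 7 DAYS
```

With that in place, `barman backup main-db` takes a base backup and `barman list-backup main-db` shows what is available.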
<br />
= OmniPITR =<br />
<br />
[https://github.com/omniti-labs/omnipitr OmniPITR]<br />
[https://github.com/selenamarie/pg_replication_demo/tree/master/omnipitr Demo of a simple replication setup with OmniPITR]<br />
<br />
Summary of Features:<br />
* Creating PITR backups from Master or Slave<br />
* Restoring a PITR backup for DR<br />
* Creating replicas (by untarring backups)<br />
* Monitoring of replicas<br />
* Supports 'pause removal' of WAL during a backup (nice!)<br />
* PostgreSQL license<br />
<br />
Install notes:<br />
* No packaging; written in Perl<br />
* No documented support for streaming replication<br />
* Uses '^' instead of '%' in custom logfile naming<br />
* No configuration file option; configuration is done entirely via long command-line options<br />
* Supported on Linux and Solaris variants<br />
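
Since OmniPITR has no configuration file, it is typically wired into PostgreSQL via archive_command with long options; a hedged sketch (install path, archive directory, and log path are placeholders):

```ini
# postgresql.conf on the master
archive_mode = on
archive_command = '/opt/omnipitr/bin/omnipitr-archive --log /var/log/omnipitr/archive.log --dst-local gzip=/var/backup/walarchive "%p"'
```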
<br />
= pgBackRest =<br />
<br />
[http://www.pgbackrest.org/ pgBackRest]<br />
<br />
Summary of features:<br />
* Parallel Backup & Restore<br />
* Local or Remote Operation<br />
* Full, Incremental, & Differential Backups<br />
* Block-level Incremental Backups<br />
* Backup & Archive Expiration<br />
* Backup Resume<br />
* Streaming Compression & Checksums<br />
* Delta Restore<br />
* Parallel WAL Archiving<br />
* Tablespace & Link Support<br />
* Compatibility with PostgreSQL >= 8.3<br />
* Support for S3, Azure, and GCS<br />
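
A minimal configuration sketch, assuming a stanza named 'main' and placeholder paths (the user guide is authoritative):

```ini
# /etc/pgbackrest.conf
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2

[main]
pg1-path=/var/lib/postgresql/15/main
```

After `pgbackrest --stanza=main stanza-create`, a backup is taken with `pgbackrest --stanza=main --type=full backup`, and WAL archiving is enabled by setting archive_command = 'pgbackrest --stanza=main archive-push %p' in postgresql.conf.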
<br />
= pghoard =<br />
<br />
https://github.com/ohmu/pghoard<br />
<br />
Summary of features:<br />
* Stores basebackups and WAL to cloud object stores (AWS S3, Azure, Ceph, Google Cloud)<br />
* Restore existing basebackups and set up a new cluster with a recovery.conf pointing to another master database<br />
* Create scheduled basebackups<br />
* Can be used as archive_command to archive WALs in the object store<br />
* Can be used as restore_command to restore WALs from the object store<br />
* Basebackup and WAL compression using LZMA<br />
* Optionally encrypts backups<br />
* Supports PITR using timestamps, names and xids<br />
<br />
Install notes:<br />
* Written in Python, includes Debian and Fedora packaging scripts<br />
* 'pghoard' daemon manages basebackups and cleans up WALs<br />
* 'pghoard_archive_command' and 'pghoard_restore' access the locally running 'pghoard' daemon to store and restore files<br />
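
pghoard reads a JSON configuration file; a rough sketch of its shape (key names follow the README, but values and the exact schema here are illustrative, so consult the README before use):

```json
{
  "backup_location": "/var/lib/pghoard",
  "backup_sites": {
    "default": {
      "nodes": [{"host": "127.0.0.1", "port": 5432, "user": "backup"}],
      "object_storage": {"storage_type": "s3", "bucket_name": "my-pg-backups"}
    }
  }
}
```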
<br />
= pg-rman =<br />
<br />
[http://code.google.com/p/pg-rman/ pg-rman] [https://github.com/selenamarie/pg_replication_demo/tree/master/pgrman Simple demo for pg_rman setup]<br />
<br />
Summary of features:<br />
* Online backup and recovery, including backup from a replica<br />
* Archive management and restore<br />
* .ini configuration file<br />
* Simple command-line options<br />
<br />
Install notes:<br />
* Written in C, install with 'make USE_PGXS=1'<br />
* On Ubuntu/Debian, installs in non-default bin directory<br />
* Commands are simple, manages WAL archive as well as backups<br />
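
A sketch of the basic command cycle (backup catalog path and data directory are placeholders):

```shell
# One-time initialization of the backup catalog
pg_rman init -B /var/backup/pgrman -D /var/lib/postgresql/data

# Take a full online backup, then validate it so it becomes restorable
pg_rman backup -B /var/backup/pgrman --backup-mode=full
pg_rman validate -B /var/backup/pgrman

# List the backups in the catalog
pg_rman show -B /var/backup/pgrman
```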
<br />
<br />
= pitrery =<br />
<br />
[http://dalibo.github.io/pitrery/ pitrery]<br />
[https://github.com/dalibo/pitrery/ GitHub repository]<br />
<br />
Summary of features:<br />
* Local or Remote backups<br />
* Backup from standby<br />
* Backup modes: tar, or rsync in hard-link mode<br />
* Eases WAL archiving<br />
* Backups and WAL archive expiration<br />
* Uses SSH as transport for backup<br />
* GPG encryption (experimental)<br />
* PostgreSQL >= 8.3<br />
<br />
Install notes:<br />
* Written in bash; install with 'make install'<br />
* [https://apt.dalibo.org/labs/ deb] and [https://yum.dalibo.org/labs/ rpm] packages available<br />
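
A sketch of the basic command cycle; pitrery is configured via /etc/pitrery/pitrery.conf (variables such as PGDATA and BACKUP_DIR), so the commands below assume that file is already set up:

```shell
pitrery backup        # take a base backup (tar or rsync mode)
pitrery list          # show available backups
pitrery purge         # apply the retention policy
pitrery restore -d '2020-01-01 12:00:00'   # point-in-time restore
```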
<br />
= repmgr =<br />
<br />
[http://www.repmgr.org/ repmgr]<br />
[https://github.com/2ndQuadrant/repmgr GitHub repository]<br />
[https://github.com/selenamarie/pg_replication_demo/tree/master/repmgr Demo of a simple setup with repmgr]<br />
<br />
Supported features:<br />
* Setting up new replicas/hot_standby with streaming replication (makes recovery.conf itself)<br />
* Making base backups<br />
* Failover (automated, or not, including redirecting replicas to connect to a new master after failover)<br />
* Lag monitoring (repmgrd)<br />
* A "witness" DB server for monitoring (typically on a replica)<br />
* License: GPLv3<br />
<br />
Install notes:<br />
* Written in C<br />
* Developed on Debian systems, so support for package is present. [https://launchpad.net/repmgr Ubuntu packages] are available since Trusty (14.04).<br />
* Otherwise, installs like a typical UNIX utility out of postgresql/contrib source tree (make USE_PGXS=1; make USE_PGXS=1 install)<br />
* Detailed docs are in [https://github.com/2ndQuadrant/repmgr the README] for installing on many Linux platforms<br />
* Doesn't appear to be supported on Mac OS X<br />
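
A sketch of cloning a standby with repmgr (repmgr 4+ configuration keys; host names and paths are placeholders):

```ini
# /etc/repmgr.conf on the standby
node_id=2
node_name=standby1
conninfo='host=standby1.example.com user=repmgr dbname=repmgr'
data_directory='/var/lib/postgresql/data'
```

The standby is then created with `repmgr -h primary.example.com -U repmgr -d repmgr -f /etc/repmgr.conf standby clone`, started, and registered with `repmgr -f /etc/repmgr.conf standby register`.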
<br />
= Skytools / walmgr =<br />
<br />
[https://github.com/markokr/skytools Skytools]<br />
<br />
= WAL-E =<br />
<br />
[https://github.com/wal-e/wal-e WAL-E]<br />
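
The README's basic usage pattern, sketched with placeholder paths (credentials and the storage prefix are supplied through an envdir):

```shell
# Push a base backup of the data directory to the blob store
envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/data

# postgresql.conf: continuous WAL archiving
#   archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'

# Fetch the most recent base backup for a restore
envdir /etc/wal-e.d/env wal-e backup-fetch /var/lib/postgresql/data LATEST
```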
<br />
= WAL-G =<br />
<br />
[https://github.com/wal-g/wal-g WAL-G]<br />
<br />
Summary of features:<br />
* Archive management and restore<br />
* Inspired by WAL-E<br />
* High performance<br />
* Parallelized operation<br />
<br />
Installation notes:<br />
* Written in Go, install with the usual Go tools.<br />
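
Usage closely mirrors WAL-E; a sketch with a placeholder S3 prefix and data directory:

```shell
# Storage location and credentials come from the environment
export WALG_S3_PREFIX='s3://my-bucket/pg-backups'

wal-g backup-push /var/lib/postgresql/data   # take a base backup
wal-g backup-list                            # list stored backups

# postgresql.conf:
#   archive_command = 'wal-g wal-push %p'
#   restore_command = 'wal-g wal-fetch %f %p'

# Restore the latest base backup
wal-g backup-fetch /var/lib/postgresql/data LATEST
```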
<br />
= pg_probackup =<br />
<br />
[https://github.com/postgrespro/pg_probackup pg_probackup]<br />
<br />
[https://postgrespro.com/docs/postgrespro/10/app-pgprobackup Documentation]<br />
<br />
[https://github.com/postgrespro/pg_probackup#installation-and-setup Installation]<br />
<br />
Summary of features:<br />
* PostgreSQL >= 9.5<br />
* Simple command-line interface<br />
* Remote backup/restore<br />
* Parallel Backup, Restore and Validation operations<br />
* Block-level incremental backups<br />
* Two modes for incremental backups: PAGE and DELTA<br />
* Compression of datafiles and WAL files<br />
* Retention policies (time- and window-based)<br />
* Backup corruption detection<br />
* Block corruption detection (during backup)<br />
* Restore target validation<br />
* Backup from standby<br />
* Extended logging options<br />
* Windows support<br />
* Merge Full and Incremental backups</div>

PostgreSQL wiki: Ecosystem:Backup (last edited by Greg, 2024-01-25)
<hr />
<div>== Amanda ==<br />
<br />
* Provider -- Zmanda, Inc., University of Maryland at College Park<br />
* Website -- http://www.amanda.org/<br />
* License -- open source (BSD-style, GPLv2) for Community Edition, and proprietary for Enterprise Edition<br />
* Interoperability level -- explicitly supports PostgreSQL 8.x or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-03-11<br />
* Description -- AMANDA, the Advanced Maryland Automatic Network Disk Archiver, is a backup system that allows the administrator of a LAN to set up a single master backup server to back up multiple hosts to a single large capacity tape or disk drive. Amanda uses native tools (such as GNUtar, dump) for backup and can back up a large number of workstations running multiple versions of Unix/Mac OS X/Linux/Windows.<br />
* Additional info -- [[Ecosystem:Amanda|click here]]<br />
<br />
== Bacula ==<br />
<br />
* Provider -- Kern Sibbald, Bacula Systems S.A.<br />
* Website -- https://www.bacula.org/<br />
* License -- open source (AGPLv3) for Community Edition, and proprietary for Enterprise Edition<br />
* Interoperability level -- explicitly supports PostgreSQL 7.4 or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-03-11<br />
* Description -- Bacula is a set of Open Source computer programs that permit you (or the system administrator) to manage backup, recovery, and verification of computer data across a network of computers of different kinds. Bacula is relatively easy to use and very efficient, while offering many advanced storage management features that make it easy to find and recover lost or damaged files. In technical terms, it is an Open Source, network-based backup program. According to SourceForge statistics (rank and downloads), Bacula is by far the most popular Open Source backup program.<br />
* Additional info -- [[Ecosystem:Bacula|click here]]<br />
<br />
== Barman - Backup and Recovery Manager for PostgreSQL ==<br />
<br />
* Provider -- 2ndQuadrant<br />
* Website -- https://www.2ndquadrant.com/<br />
* License -- GPL v3<br />
* Interoperability level -- See: http://www.pgbarman.org/<br />
* Verified PostgreSQL versions -- https://www.2ndquadrant.com/en/resources/barman/<br />
* Last update (YYYY-MM-DD) -- 2020-07-09<br />
* Description -- Barman (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL databases with high business continuity requirements. Barman allows remote backups of multiple servers in business critical environments and helps DBAs during the recovery phase.<br />
* Additional info -- http://www.pgbarman.org/<br />
<br />
== Handy Backup ==<br />
<br />
* Provider -- Novosoft LLC<br />
* Website -- https://www.handybackup.net/<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL 9 or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-03-11<br />
* Description -- Handy Backup is famous for being a Swiss Army Knife of data backup tools. Its functionality covers everything from files-based copying and disk imaging to multi-server backup.<br />
* Additional info -- [[Ecosystem:Handy Backup|click here]]<br />
<br />
== Iperius Backup ==<br />
<br />
* Provider -- Enter Srl<br />
* Website -- https://www.iperiusbackup.com/<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL (no specific version is specified)<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-03-11<br />
* Description -- Iperius Backup is a backup software for Windows. With the introduction of PostgreSQL backup, Iperius is now one of the best and most complete database backup solutions in the world: with a single license, the Advanced DB, you can back up unlimited servers and databases for Oracle, SQL Server, PostgreSQL, MySQL and MariaDB. Moreover, you have other powerful features such as automatic compression, encryption, and automatic copying to any device (NAS, FTP server, Cloud, Google Drive, Amazon S3, etc.).<br />
* Additional info -- [[Ecosystem:Iperius Backup|click here]]<br />
<br />
== NetVault Backup ==<br />
<br />
* Provider -- Quest Software Inc.<br />
* Website -- https://www.quest.com/products/netvault-backup/<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL 8.x or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-03-11<br />
* Description -- Protect data in diverse IT environments - from one intuitive console -- in this scalable backup and recovery solution. NetVault Backup supports multiple server and application platforms in both physical and virtual environments. That means you can ensure availability of business-critical applications, including Oracle, Exchange, MySQL, SQL Server, DB2, and SAP. With NetVault Backup, you can safeguard information stored on network-attached storage (NAS) devices. It also allows you to back up to tape or disk, as well as leverage data deduplication to minimize your storage footprint.<br />
* Additional info -- [[Ecosystem:NetVault Backup|click here]]<br />
<br />
== pg_probackup ==<br />
<br />
* Provider -- Postgres Professional<br />
* Website -- https://github.com/postgrespro/pg_probackup<br />
* License -- PostgreSQL License<br />
* Interoperability level -- >= 9.5 <br />
* Verified PostgreSQL versions -- >= 9.5<br />
* Last update (YYYY-MM-DD) -- 2019-08-13<br />
* Current Version: 2.3.1<br />
* Description -- pg_probackup is a feature-rich and simple to use utility to manage backup and recovery of PostgreSQL database clusters. It is designed to perform periodic backups of the PostgreSQL instance that enable you to restore the server in case of a failure.<br />
* Documentation -- https://postgrespro.github.io/pg_probackup/<br />
<br />
== pgBackRest ==<br />
<br />
* Provider -- David Steele, Crunchy Data<br />
* Website -- https://pgbackrest.org/<br />
* License -- MIT License<br />
* Interoperability level -- >= 9.5 <br />
* Verified PostgreSQL versions -- >= 9.5<br />
* Last update (YYYY-MM-DD) -- 2024-01-22<br />
* Current Version: 2.50<br />
* Description -- pgBackRest is a reliable backup and restore solution for PostgreSQL that seamlessly scales up to the largest databases and workloads.<br />
* Documentation -- https://pgbackrest.org/<br />
<br />
== pgmoneta ==<br />
<br />
* Provider -- pgmoneta community<br />
* Website -- https://github.com/pgmoneta/pgmoneta<br />
* License -- BSD 3-Clause License<br />
* Interoperability level -- >= 10 <br />
* Verified PostgreSQL versions -- >= 10<br />
* Last update (YYYY-MM-DD) -- 2022-09-22<br />
* Current Version: 0.6.0<br />
* Description -- pgmoneta is a backup / restore solution that supports on-disk encryption, storage engines, monitoring and symlinks.<br />
* Documentation -- https://pgmoneta.github.io/gettingstarted.html<br />
<br />
== Simpana ==<br />
<br />
* Provider -- Commvault<br />
* Website -- https://www.commvault.com/<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL 8.x or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-03-11<br />
* Description -- Simpana software offers seamless and efficient backup and restore of data and information in your enterprise from any operating system, database, and application. Simpana software builds on this foundation by integrating application awareness with hardware snapshots, indexing, global deduplication, replication, search, and reporting, all within a single platform.<br />
* Additional info -- [[Ecosystem:Simpana|click here]]<br />
<br />
== Spectrum Protect ==<br />
<br />
* Provider -- IBM Corporation<br />
* Website -- https://www.ibm.com/us-en/marketplace/data-protection-and-recovery<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL (no specific version is specified)<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-03-11<br />
* Description -- IBM Spectrum Protect can simplify data protection where data is hosted in physical, virtual, software-defined or cloud environments. With IBM Spectrum Protect, you can choose the right software to manage and protect your data-while also simplifying backup administration, improving efficiencies, delivering scalable capacity and enabling advanced capabilities. With superior virtual machine (VM) protection, IBM Spectrum Protect integrates with IBM Spectrum Protect Plus for fast and easy VM protection with searchable catalog and role-based administration.<br />
* Additional info -- [[Ecosystem:Spectrum Protect|click here]]<br />
<br />
* Provider -- Spictera & IBM<br />
* Website -- https://www-356.ibm.com/partnerworld/gsd/solutiondetails.do?solution=56435&lc=en&stateCd=P&tab=2<br />
* Website -- https://www.suse.com/susePSC/viewVersionPage?versionID=20888<br />
* Website -- https://access.redhat.com/ecosystem/software/4167431<br />
* Website -- http://www.spictera.com<br />
* License -- proprietary<br />
* Interoperability -- generic data protection, supports any PostgreSQL version<br />
* Verified PostgreSQL versions -- PostgreSQL 8/9/10/11<br />
* Last update (YYYY-MM-DD) -- 2019-06-18<br />
* Description -- SPFS is a file system that makes it possible to mount IBM Spectrum Protect filespaces anywhere on your server. All file operations go directly via the IBM Spectrum Protect Client API to the IBM Spectrum Protect backup server. It is very easy to integrate WAL backups, and one can use the preferred backup methods (pg_dump, pg_basebackup) or any other combination the PostgreSQL administrator prefers. Very good data reduction (~75-85%) using de-duplication in combination with compression; data can be encrypted using private keys. Easy to use, requires almost no education.<br />
<br />
== Veritas NetBackup for PostgreSQL Agent ==<br />
<br />
* Provider -- Veritas<br />
* Website -- https://www.enterprisedb.com/blog/veritas-netbackup-and-edb-postgres<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL 9.x or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2019-06-28<br />
* Description -- This is a PostgreSQL-specific agent for NetBackup, the enterprise backup and recovery solution. It uses filesystem snapshot technology to take a cohesive backup of configured PG databases, rather than dumping to an external file and backing that up. This (at least in theory) should mean backup and recovery are both efficient. The NetBackup documentation shows the agent as supported for Windows and Linux (RHEL, SLES).<br />
* Documentation -- https://www.veritas.com/content/support/en_US/doc/129277259-137906533-0/v129276450-137906533<br />
<br />
[[Category:Ecosystem:Backup]]</div>
<hr />
<div>== Amanda ==<br />
<br />
* Provider -- Zmanda, Inc., University of Maryland at College Park<br />
* Website -- http://www.amanda.org/<br />
* License -- open source (BSD-style, GPLv2) for Community Edition, and proprietary for Enterprise Edition<br />
* Interoperability level -- explicitly supports PostgreSQL 8.x or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-3-11<br />
* Description -- AMANDA, the Advanced Maryland Automatic Network Disk Archiver, is a backup system that allows the administrator of a LAN to set up a single master backup server to back up multiple hosts to a single large capacity tape or disk drive. Amanda uses native tools (such as GNUtar, dump) for backup and can back up a large number of workstations running multiple versions of Unix/Mac OS X/Linux/Windows.<br />
* Additional info -- [[Ecosystem:Amanda|click here]]<br />
<br />
== Bacula ==<br />
<br />
* Provider -- Kern Sibbald, Bacula Systems S.A.<br />
* Website -- https://www.bacula.org/<br />
* License -- open source (AGPLv3) for Community Edition, and proprietary for Enterprise Edition<br />
* Interoperability level -- explicitly supports PostgreSQL 7.4 or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-3-11<br />
* Description -- Bacula is a set of Open Source, computer programs that permit you (or the system administrator) to manage backup, recovery, and verification of computer data across a network of computers of different kinds. Bacula is relatively easy to use and very efficient, while offering many advanced storage management features that make it easy to find and recover lost or damaged files. In technical terms, it is an Open Source, network based backup program. According to Source Forge statistics (rank and downloads), Bacula is by far the most popular Open Source backup program.<br />
* Additional info -- [[Ecosystem:Bacula|click here]]<br />
<br />
== Barman - Backup and Recovery Manager for PostgreSQL ==<br />
<br />
* Provider -- 2ndQuadrant<br />
* Website -- https://www.2ndquadrant.com/<br />
* License -- GPL v3<br />
* Interoperability level -- See: http://www.pgbarman.org/<br />
* Verified PostgreSQL versions -- https://www.2ndquadrant.com/en/resources/barman/<br />
* Last update (YYYY-MM-DD) -- 2020-07-09<br />
* Description -- Barman (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL databases with high business continuity requirements. Barman allows remote backups of multiple servers in business critical environments and helps DBAs during the recovery phase.<br />
* Additional info -- http://www.pgbarman.org/<br />
<br />
== Handy Backup ==<br />
<br />
* Provider -- Novosoft LLC<br />
* Website -- https://www.handybackup.net/<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL 9 or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-3-11<br />
* Description -- Handy Backup is famous for being a Swiss Army Knife of data backup tools. Its functionality covers everything from files-based copying and disk imaging to multi-server backup.<br />
* Additional info -- [[Ecosystem:Handy Backup|click here]]<br />
<br />
== Iperius Backup ==<br />
<br />
* Provider -- Enter Srl<br />
* Website -- https://www.iperiusbackup.com/<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL (no specific version is specified)<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-3-11<br />
* Description -- Iperius Backup is a backup software for Windows. With the introduction of PostgreSQL backup, Iperius is now one of the best and most complete database backup software in the world: with a single license, the Advanced DB, you can back up unlimited servers and databases Oracle, SQL Server, PostgreSQL, MySQL and MariaDB. Moreover, you’ve other powerful features such as automatic compression, encryption, and automatic copy to any device (NAS, FTP server, Cloud, Google Drive, Amazon S3, etc.).<br />
* Additional info -- [[Ecosystem:Iperius Backup|click here]]<br />
<br />
== NetVault Backup ==<br />
<br />
* Provider -- Quest Software Inc.<br />
* Website -- https://www.quest.com/products/netvault-backup/<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL 8.x or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-3-11<br />
* Description -- Protect data in diverse IT environments - from one intuitive console -- in this scalable backup and recovery solution. NetVault Backup supports multiple server and application platforms in both physical and virtual environments. That means you can ensure availability of business-critical applications, including Oracle, Exchange, MySQL, SQL Server, DB2, and SAP. With NetVault Backup, you can safeguard information stored on network-attached storage (NAS) devices. It also allows you to back up to tape or disk, as well as leverage data deduplication to minimize your storage footprint.<br />
* Additional info -- [[Ecosystem:NetVault Backup|click here]]<br />
<br />
== pg_probackup ==<br />
<br />
* Provider -- Postgres Professional<br />
* Website -- https://github.com/postgrespro/pg_probackup<br />
* License -- PostgreSQL License<br />
* Interoperability level -- >= 9.5 <br />
* Verified PostgreSQL versions -- >= 9.5<br />
* Last update (YYYY-MM-DD) -- 2019-08-13<br />
* Current Version: 2.3.1<br />
* Description -- pg_probackup is a feature-rich and simple to use utility to manage backup and recovery of PostgreSQL database clusters. It is designed to perform periodic backups of the PostgreSQL instance that enable you to restore the server in case of a failure.<br />
* Documentation -- https://postgrespro.github.io/pg_probackup/<br />
<br />
== pgBackRest ==<br />
<br />
* Provider -- David Steele, Crunchy Data<br />
* Website -- https://pgbackrest.org/<br />
* License -- MIT License<br />
* Interoperability level -- >= 9.5 <br />
* Verified PostgreSQL versions -- >= 9.5<br />
* Last update (YYYY-MM-DD) -- 2022-01-22<br />
* Current Version: 2.50<br />
* Description -- pgBackRest is a reliable backup and restore solution for PostgreSQL that seamlessly scales up to the largest databases and workloads<br />
* Documentation -- https://pgbackrest.org/<br />
<br />
== pgmoneta ==<br />
<br />
* Provider --pgmoneta community<br />
* Website -- https://github.com/pgmoneta/pgmoneta<br />
* License -- BSD 3-Clause License<br />
* Interoperability level -- >= 10 <br />
* Verified PostgreSQL versions -- >= 10<br />
* Last update (YYYY-MM-DD) -- 2022-09-22<br />
* Current Version: 0.6.0<br />
* Description -- pgmoneta is a backup / restore solution that supports on-disk encryption, storage engines, monitoring and symlinks.<br />
* Documentation -- https://pgmoneta.github.io/gettingstarted.html<br />
<br />
== Simpana ==<br />
<br />
* Provider -- Commvault<br />
* Website -- https://www.commvault.com/<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL 8.x or later<br />
* Verified PostgreSQL versions -- didn't actually run the program, but just checked the documentation<br />
* Last update (YYYY-MM-DD) -- 2018-3-11<br />
* Description -- Simpana software offers seamless and efficient backup and restore of data and information in your enterprise from any operating system, database, and application. Simpana software builds on this foundation by integrating application awareness with hardware snapshots, indexing, global deduplication, replication, search, and reporting, all within a single platform.<br />
* Additional info -- [[Ecosystem:Simpana|click here]]<br />
<br />
== Spectrum Protect ==<br />
<br />
* Provider -- IBM Corporation<br />
* Website -- https://www.ibm.com/us-en/marketplace/data-protection-and-recovery<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL (no specific version is specified)<br />
* Verified PostgreSQL versions -- not tested directly; based on the documentation only<br />
* Last update (YYYY-MM-DD) -- 2018-03-11<br />
* Description -- IBM Spectrum Protect can simplify data protection where data is hosted in physical, virtual, software-defined or cloud environments. With IBM Spectrum Protect, you can choose the right software to manage and protect your data, while also simplifying backup administration, improving efficiencies, delivering scalable capacity and enabling advanced capabilities. With superior virtual machine (VM) protection, IBM Spectrum Protect integrates with IBM Spectrum Protect Plus for fast and easy VM protection with searchable catalog and role-based administration.<br />
* Additional info -- [[Ecosystem:Spectrum Protect|click here]]<br />
<br />
* Provider -- Spictera & IBM<br />
* Website -- https://www-356.ibm.com/partnerworld/gsd/solutiondetails.do?solution=56435&lc=en&stateCd=P&tab=2<br />
* Website -- https://www.suse.com/susePSC/viewVersionPage?versionID=20888<br />
* Website -- https://access.redhat.com/ecosystem/software/4167431<br />
* Website -- http://www.spictera.com<br />
* License -- proprietary<br />
* Interoperability level -- generic data protection; supports any PostgreSQL version<br />
* Verified PostgreSQL versions -- PostgreSQL 8/9/10/11<br />
* Last update (YYYY-MM-DD) -- 2019-06-18<br />
* Description -- SPFS is a file system that makes it possible to mount IBM Spectrum Protect filespaces anywhere on your server. All file operations go directly via the IBM Spectrum Protect Client API to the IBM Spectrum Protect backup server. WAL backups are easy to integrate, and one can use the preferred backup methods (pg_dump, pg_basebackup) or any other combination the PostgreSQL administrator prefers. Data reduction of roughly 75-85% is achievable using deduplication combined with compression, and data can be encrypted using private keys. Easy to use and requires almost no training.<br />
<br />
== Veritas NetBackup for PostgreSQL Agent ==<br />
<br />
* Provider -- Veritas<br />
* Website -- https://www.enterprisedb.com/blog/veritas-netbackup-and-edb-postgres<br />
* License -- proprietary<br />
* Interoperability level -- explicitly supports PostgreSQL 9.x or later<br />
* Verified PostgreSQL versions -- not tested directly; based on the documentation only<br />
* Last update (YYYY-MM-DD) -- 2019-06-28<br />
* Description -- This is a PostgreSQL-specific agent for NetBackup, the enterprise backup and recovery solution. It uses filesystem snapshot technology to take a cohesive backup of the configured PostgreSQL databases, rather than dumping to an external file and backing that up. This (at least in theory) should make both backup and recovery efficient. The NetBackup documentation shows the agent as supported on Windows and Linux (RHEL, SLES).<br />
* Documentation -- https://www.veritas.com/content/support/en_US/doc/129277259-137906533-0/v129276450-137906533<br />
<br />
[[Category:Ecosystem:Backup]]</div>Greghttps://wiki.postgresql.org/index.php?title=Meson&diff=37868Meson2023-05-25T15:18:38Z<p>Greg: Add a warning about minimum version</p>
<hr />
<div>== PostgreSQL devel documentation ==<br />
<br />
See [https://www.postgresql.org/docs/devel/install-meson.html "Building and Installation with Meson" section] of PostgreSQL devel docs.<br />
<br />
== Autoconf:meson command translations ==<br />
<br />
NOTE: Make sure meson is version 0.57 or higher<br />
<br />
=== Setup and build commands ===<br />
<br />
{|class="wikitable" style="margin:auto"<br />
!description<br />
!old command<br />
!new command<br />
!comment<br />
|-<br />
|| set up build tree<br />
|| <code>./configure [<i>options</i>]</code><br />
|| <code>meson setup [<i>options</i>] [<i>builddir</i>] <i>sourcedir</i></code><br />
|| meson only supports building out of tree<br />
|-<br />
|| set up build tree for visual studio<br />
|| <code>perl src/tools/msvc/mkvcbuild.pl</code><br />
|| <code>meson setup --backend vs [<i>options</i>] [<i>builddir</i>] <i>sourcedir</i></code><br />
|| configures build tree for one build type (debug or release or ...)<br />
|-<br />
|| show configure options<br />
|| <code>./configure --help</code><br />
|| <code>meson configure</code><br />
|| shows options built into meson and PostgreSQL specific options<br />
|-<br />
|| set configure options<br />
|| <code>./configure --prefix=<i>DIR</i>, --$somedir=<i>DIR</i>, --with-$option, --enable-$feature</code><br />
|| <code>meson setup|configure -D$option=$value</code><br />
|| options can be set when setting up build tree (setup) and in existing build tree (configure)<br />
|- <br />
|| enable cassert<br />
|| <code>--enable-cassert</code><br />
|| <code>-Dcassert=true</code><br />
||<br />
|-<br />
|| enable debug symbols<br />
|| <code>./configure --enable-debug</code><br />
|| <code>meson configure|setup -Ddebug=true</code><br />
||<br />
|-<br />
|| specify compiler<br />
|| <code>CC=<i>compiler</i> ./configure</code><br />
|| <code>CC=<i>compiler</i> meson setup</code><br />
|| <code>CC</code> is only checked during meson setup, not with meson configure<br />
|-<br />
|| set CFLAGS<br />
|| <code>CFLAGS=<i>options</i> ./configure</code><br />
|| <code>meson configure|setup -Dc_args=<i>options</i></code><br />
|| <code>CFLAGS</code> is also checked, but only for meson setup<br />
|-<br />
|| build<br />
|| <code>make -s</code><br />
|| <code>ninja</code><br />
|| ninja builds in parallel by default; launch it from the root of the build tree.<br />
|-<br />
|| build, showing compiler commands<br />
|| <code>make</code><br />
|| <code>ninja -v</code><br />
|| ninja builds in parallel by default; launch it from the root of the build tree.<br />
|-<br />
|| install all the binaries and libraries<br />
|| <code>make install</code><br />
|| <code>ninja install</code><br />
|| use <code>meson install --quiet</code> for a less verbose experience<br />
|-<br />
|| install files that changed only<br />
||<br />
|| <code>meson install --only-changed</code><br />
|| Routinely shaves a few hundred milliseconds off install time<br />
|-<br />
|| clean build<br />
|| <code>make clean</code><br />
|| <code>ninja clean</code><br />
|| ninja builds in parallel by default; launch it from the root of the build tree.<br />
|-<br />
|| build documentation<br />
|| <code>cd doc/ && make html && make man</code><br />
|| <code>ninja docs</code><br />
|| Builds html documentation and man pages<br />
|}<br />
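Putting the table above together, a typical fresh debug build might look like the following sketch, run from the root of a PostgreSQL source checkout (the <code>build</code> directory name and install prefix are arbitrary placeholders here):<br />
<pre><br />
meson setup --prefix=$HOME/pg-install -Dcassert=true -Ddebug=true build .<br />
ninja -C build<br />
meson install -C build --quiet<br />
</pre><br />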
<br />
==== Build directory ====<br />
<br />
ninja expects to be run from the root of the build directory. If you are not in the build directory, you can use the <code>-C</code> flag to have ninja "change directory" and run from there, e.g.: <br />
<br />
<pre><br />
ninja -C $builddir<br />
</pre><br />
<br />
=== Test related commands ===<br />
<br />
{|class="wikitable" style="margin:auto"<br />
!description<br />
!old command<br />
!new command<br />
!comment<br />
|-<br />
|| list tests<br />
||<br />
|| <code>meson test --list</code><br />
|| Only shows tests from "tmp_install" [https://mesonbuild.com/Reference-manual_functions.html#add_test_setup test setup], since it is the default (<code>--setup tmp_install</code> is implied here)<br />
|-<br />
|| list running/installcheck test variants<br />
||<br />
|| <code>meson test --setup running --list</code><br />
|| "running" [https://mesonbuild.com/Reference-manual_functions.html#add_test_setup test setup] is used to run tests against an existing server<br />
|-<br />
|| run all tests<br />
|| <code>make check-world</code><br />
|| <code>meson test -v</code><br />
|| runs all tests, using parallelism by default<br />
|-<br />
|| run all tests against existing server<br />
|| <code>make installcheck-world</code><br />
|| <code>meson test -v --setup running</code><br />
|| Currently [https://postgr.es/m/CAH2-Wz=X7=5jU-+XXJaqQRZja_fseEtrd_dGJa0Wpb74OpsgEA@mail.gmail.com makes brittle assumptions] about test libraries being installed<br />
|-<br />
|| run main regression tests<br />
|| <code>make check</code><br />
|| <code>meson test -v --suite setup --suite regress</code><br />
|| <code>--suite setup</code> required to get a <code>tmp_install</code> directory; see below<br />
|-<br />
|| run specific contrib test suite<br />
|| <code>make -C contrib/amcheck check</code><br />
|| <code>meson test -v --suite setup --suite amcheck</code><br />
|| <code>--suite setup</code> required to get a <code>tmp_install</code> directory; see below<br />
|-<br />
|| run main regression tests against existing server<br />
|| <code>make installcheck</code><br />
|| <code>meson test -v --setup running --suite regress-running</code><br />
||<br />
|-<br />
|| run specific contrib test suite against existing server<br />
|| <code>make -C contrib/amcheck installcheck</code><br />
|| <code>meson test -v --setup running --suite amcheck-running</code><br />
|| "running" amcheck suite variant doesn't include TAP tests<br />
|}<br />
<br />
==== Test structure ====<br />
<br />
When running a specific test suite against a temporary throw away installation, <code>--suite setup</code> should generally be specified. Otherwise the tests could end up running against a stale <code>tmp_install</code> directory, causing general confusion. This [https://postgr.es/m/20230209205605.zo5gfhli22g2kdm2@awork3.anarazel.de workaround] is not required when running tests against an existing server (via the <code>running</code> test setup and variant test suites), since of course the installation directory being tested is whatever directory the external server installation uses.<br />
<br />
Note that the top-level/default project name is <code>postgresql</code>, which is the only one we use in practice. The project name [https://mesonbuild.com/Unit-tests.html#run-subsets-of-tests can be omitted] when using a reasonably recent meson version (meson 0.46 or later), which we assume here.<br />
<br />
You can list all of the tests from a given suite as follows:<br />
<br />
<pre><br />
/path/to/postgresql/build_meson $ meson test --list --suite amcheck<br />
ninja: no work to do.<br />
postgresql:amcheck / amcheck/regress<br />
postgresql:amcheck / amcheck/001_verify_heapam<br />
postgresql:amcheck / amcheck/002_cic<br />
postgresql:amcheck / amcheck/003_cic_2pc<br />
</pre><br />
<br />
Note that there are distinct <code>running</code>/installcheck suites for most of the standard setup suites, though not all of the tests actually carry over to the <code>running</code> variant suites, as shown here:<br />
<pre><br />
/path/to/postgresql/build_meson $ meson test --list --suite amcheck-running<br />
ninja: no work to do.<br />
postgresql:amcheck-running / amcheck-running/regress<br />
</pre><br />
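Individual tests can also be invoked by name rather than by suite, using the names shown in these listings (the amcheck test name below is just an example); running the setup suite first keeps <code>tmp_install</code> fresh, as described above:<br />
<pre><br />
meson test --suite setup<br />
meson test -v 'amcheck/001_verify_heapam'<br />
</pre><br />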
<br />
==== Running individual regression test scripts via an installcheck-tests style workflow ====<br />
<br />
The Postgres autoconf build system supports running a subset of regression test scripts against an existing server using the <code>installcheck-tests</code> target, as shown here:<br />
<br />
<pre><br />
/path/to/postgresql/build_autoconf $ make installcheck-tests TESTS="test_setup create_index"<br />
*** SNIP ***<br />
============== dropping database "regression" ==============<br />
SET<br />
DROP DATABASE<br />
============== creating database "regression" ==============<br />
CREATE DATABASE<br />
ALTER DATABASE<br />
ALTER DATABASE<br />
ALTER DATABASE<br />
ALTER DATABASE<br />
ALTER DATABASE<br />
ALTER DATABASE<br />
============== running regression test queries ==============<br />
test test_setup ... ok 300 ms<br />
test create_index ... ok 1775 ms<br />
<br />
=====================<br />
All 2 tests passed.<br />
=====================<br />
</pre><br />
<br />
You can work around the current lack of an equivalent meson facility by invoking pg_regress directly:<br />
<br />
<pre><br />
/path/to/postgresql/build_meson $ src/test/regress/pg_regress --inputdir ../source/src/test/regress/ --dlpath=src/test/regress/ test_setup create_index<br />
*** SNIP ***<br />
=====================<br />
All 2 tests passed.<br />
=====================<br />
</pre><br />
<br />
The same approach will also work for isolation tests:<br />
<br />
<pre><br />
/path/to/postgresql/build_meson $ src/test/isolation/pg_isolation_regress --inputdir ../source/src/test/isolation freeze-the-dead vacuum-no-cleanup-lock<br />
*** SNIP ***<br />
=====================<br />
All 2 tests passed.<br />
=====================<br />
</pre><br />
<br />
Note that this assumes that the meson build directory is 'build_meson', and that the Postgres source code directory is 'source'. The 'source' directory is located in the same directory as 'build_meson' in this example (a directory layout often used with VPATH builds).<br />
<br />
== Installing Meson ==<br />
<br />
=== FreeBSD ===<br />
<br />
<pre><br />
pkg install meson ninja<br />
</pre><br />
<br />
Arguments to meson setup/configure to find ports libraries:<br />
<pre><br />
meson setup -Dextra_lib_dirs=/opt/local/lib -Dextra_include_dirs=/opt/local/include $builddir $sourcedir<br />
</pre><br />
<br />
=== Linux ===<br />
<br />
Debian / Ubuntu:<br />
<pre><br />
apt-get update && apt-get install -y meson ninja-build<br />
</pre><br />
<br />
Fedora:<br />
<pre><br />
dnf -y install meson ninja-build<br />
</pre><br />
<br />
RHEL 8:<br />
<pre><br />
dnf -y install dnf-plugins-core<br />
dnf config-manager --set-enabled powertools<br />
dnf -y install meson ninja-build<br />
</pre><br />
<br />
RHEL 9 (tested on Rocky Linux 9):<br />
<pre><br />
dnf -y install dnf-plugins-core<br />
dnf config-manager --set-enabled crb<br />
dnf -y install meson<br />
</pre><br />
<br />
=== macOS ===<br />
<br />
With MacPorts:<br />
<br />
<pre><br />
sudo port install meson<br />
</pre><br />
<br />
Arguments to meson setup/configure to find MacPorts libraries:<br />
<pre><br />
meson setup -Dpkg_config_path=/opt/local/lib/pkgconfig -Dextra_lib_dirs=/opt/local/lib/ -Dextra_include_dirs=/opt/local/include $builddir $sourcedir<br />
</pre><br />
<br />
With Homebrew:<br />
<pre><br />
brew install meson<br />
</pre><br />
<br />
Arguments to meson setup/configure to find Homebrew libraries:<br />
<br />
On arm64:<br />
<pre><br />
meson setup -Dpkg_config_path=/opt/homebrew/lib/pkgconfig -Dextra_include_dirs=/opt/homebrew/include -Dextra_lib_dirs=/opt/homebrew/lib $builddir $sourcedir<br />
</pre><br />
<br />
On x86-64:<br />
<pre><br />
meson setup -Dpkg_config_path=/usr/local/lib/pkgconfig -Dextra_include_dirs=/usr/local/include -Dextra_lib_dirs=/usr/local/lib $builddir $sourcedir<br />
</pre><br />
<br />
=== Windows ===<br />
<br />
Assuming Python is installed, the easiest way to get meson and ninja is:<br />
<pre><br />
pip install meson ninja<br />
</pre><br />
<br />
As documented on the [https://mesonbuild.com/Getting-meson.html meson website], an MSI installer is also available.<br />
<br />
Using the most recent version of ActivePerl may be a bit challenging, as there is no direct access to a "perl" command unless a project registered on the ActivePerl website is enabled, with a command like this:<br />
<pre><br />
state activate --default<br />
</pre><br />
<br />
An easy way to set things up is to install Chocolatey and rely on Strawberry Perl. Here are the main packages to worry about:<br />
<pre><br />
choco install winflexbison<br />
choco install sed<br />
choco install gzip<br />
choco install strawberryperl<br />
choco install diffutils<br />
</pre><br />
<br />
The compiler detected will depend on the type of command prompt used. For MSVC, use the command prompt installed with Visual Studio. A native Command Prompt or PowerShell may end up linking with Chocolatey's gcc, which may be OK; still, be careful with what's reported by meson setup.<br />
<br />
== Why and What ==<br />
<br />
Autoconf is showing its age; fewer and fewer contributors know how to wrangle<br />
it. Recursive make has a lot of hard-to-resolve dependency issues and slow<br />
incremental rebuilds. Our home-grown MSVC buildsystem is hard to maintain for<br />
developers not using Windows and runs tests serially. While these and other<br />
issues could individually be addressed with incremental improvements, together<br />
they seem best addressed by moving to a more modern buildsystem.<br />
<br />
After evaluating different buildsystem choices, we chose to use meson, to a<br />
good degree based on the adoption by other open source projects.<br />
<br />
We decided that it's more realistic to commit a relatively early version of<br />
the new buildsystem and mature it in tree.<br />
<br />
The plan is to remove the MSVC-specific buildsystem in src/tools/msvc soon<br />
after reaching feature parity. However, we're not planning to remove the<br />
autoconf/make buildsystem in the near future. We're likely going to keep at<br />
least the parts required for PGXS working until all supported versions<br />
build with meson.<br />
<br />
<br />
== Meson documentation ==<br />
<br />
* [https://mesonbuild.com/Commands.html meson commandline commands]<br />
* [https://mesonbuild.com/Syntax.html meson syntax]<br />
* [https://mesonbuild.com/Reference-manual_functions.html meson functions]<br />
<br />
<br />
== Development tree, other resources ==<br />
<br />
* https://github.com/anarazel/postgres/tree/meson<br />
* https://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference#Meson_new_build_system_proposal<br />
<br />
== Visualizing builds ==<br />
<br />
When building with ninja, the generated .ninja_log can be uploaded to [https://ui.perfetto.dev/ ui.perfetto.dev], which is very helpful to visualize builds.</div>Greghttps://wiki.postgresql.org/index.php?title=Handling_Security_Issues&diff=36555Handling Security Issues2021-11-10T16:17:41Z<p>Greg: Protected "Handling Security Issues" ([Edit=Allow only administrators] (indefinite) [Move=Allow only administrators] (indefinite))</p>
<hr />
<div>This page contains information for developers on how to deal with security issues and how to prepare a security release.<br />
<br />
'''Note:''' If you are a user who wants to report a security issue or who wants to learn about how the PostgreSQL project deals with security issues, please go to http://www.postgresql.org/support/security. The wiki page you are looking at is aimed at developers.<br />
<br />
== security@postgresql.org ==<br />
<br />
The email address security@postgresql.org is the recommended contact point for all inquiries about security issues. It currently points at pgsql-security@postgresql.org.<br />
<br />
== Security team ==<br />
<br />
The security team is a closed-subscription list whose purpose is to be able to discuss security issues in a controlled group. The current members are:<br />
<br />
* Álvaro Herrera<br />
* Andres Freund<br />
* Andrew Dunstan<br />
* Bruce Momjian<br />
* Dave Page<br />
* Greg Stark<br />
* Heikki Linnakangas<br />
* Tatsuo Ishii<br />
* Jonathan Katz<br />
* Magnus Hagander<br />
* Michael Paquier<br />
* Noah Misch<br />
* Peter Eisentraut<br />
* Robert Haas<br />
* Stephen Frost<br />
* Simon Riggs<br />
* Stefan Kaltenbrunner<br />
* Tom Lane<br />
<br />
== Responding to security bug reports ==<br />
<br />
Respond to a security bug report reported via the security@ address in the same way as you would with a normal bug report on pgsql-bugs. The only difference is that the issue should not be disclosed to anyone outside the security team and the original reporter.<br />
<br />
== Committing patches for security issues ==<br />
<br />
If the git commit log message for a commit matches<br />
<br />
/^Security:/<br />
<br />
then the message on pgsql-committers will be held for moderation. This facility should be applied when committing a patch for a security issue, so that details of the fix don't get broadcast to the world before the release including the fix is made. See [http://archives.postgresql.org/pgsql-committers/2008-01/msg00100.php this example] for a commit message where this was done. As in that example, the recommended style is something like "Security: CVE-number" or some other reference to the reason for the security marker.<br />
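The effect of that pattern can be sanity-checked locally with grep, whose <code>^</code> likewise anchors at the start of a line. The commit message in this sketch is made up; only the trailing <code>Security:</code> line matters:

```shell
# Count lines of a hypothetical commit message matching the /^Security:/ pattern
printf 'Fix example overrun in example_func().\n\nSecurity: CVE-2024-0000\n' |
  grep -c '^Security:'
# prints 1
```

A message without such a line would print 0 and flow to pgsql-committers unmoderated.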
<br />
== CVE numbers ==<br />
<br />
A CVE number should be requested for every security issue. This number should be included in all the commit messages, announcements, and so on. A document describing the process can be found on developer.postgresql.org at ~petere/can-request.txt (only available with shell access, because it is not a public document). The person who drives the process of fixing the issue and preparing the security release (often Tom Lane) is usually the one who deals with requesting the CVE numbers.<br />
<br />
[[Category:Development]]</div>Greghttps://wiki.postgresql.org/index.php?title=List_of_drivers&diff=36508List of drivers2021-09-30T13:25:40Z<p>Greg: /* Drivers */ Artistic > GPL</p>
<hr />
<div>= Drivers =<br />
<br />
The list below are PostgreSQL drivers (also referred to as "client libraries") that developers can use to [https://www.postgresql.org/docs/current/protocol.html interface with PostgreSQL] from various programming languages. The list is alphabetized by programming language, and also indicates if the driver is based on [https://www.postgresql.org/docs/current/libpq.html libpq] and whether or not it supports the [https://www.postgresql.org/docs/current/sasl-authentication.html#SASL-SCRAM-SHA-256 SCRAM-SHA-256] authentication protocol that was added in [https://www.postgresql.org/docs/release/10.0/ PostgreSQL 10].<br />
<br />
'''NOTE''': The drivers listed below are in various states of development. Some of them have been stable for many years and have been proven in various environments, whereas others are in early development. This list is strictly informational: it is up to you to select the driver that is best for your environment.<br />
<br />
If you would like to add additional drivers to the list, please add them in alphabetical order based on the programming language for the driver.<br />
<br />
== Open Source ==<br />
<br />
{| border="1"<br />
!Driver<br />
!Language<br />
!License<br />
!Uses libpq?<br />
!Supports SCRAM?<br />
|-<br />
|[http://www.postgresql.org/docs/current/static/libpq.html libpq]<br />
|C<br />
|[https://www.postgresql.org/about/licence/ PostgreSQL]<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://odbc.postgresql.org ODBC]<br />
|C<br />
|[https://git.postgresql.org/gitweb/?p=psqlodbc.git;a=blob;f=license.txt;hb=HEAD LGPLv2]<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://pqxx.org/development/libpqxx/ libpqxx]<br />
|C++<br />
|[https://github.com/jtv/libpqxx/blob/master/COPYING BSD 3-Clause]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://doc.qt.io/qt-5/sql-driver.html#qpsql QPSQL]<br />
|C++ (Qt)<br />
|[https://doc.qt.io/qt-5/licensing.html LGPLv3]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/dmitigr/pgfe pgfe]<br />
|C++<br />
|[https://github.com/dmitigr/pgfe/blob/master/LICENSE.txt zlib]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/yandex/ozo OZO]<br />
|C++<br />
|[https://github.com/yandex/ozo/blob/master/LICENSE PostgreSQL]<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://www.npgsql.org npgsql]<br />
|C#<br />
|[http://www.npgsql.org/#license PostgreSQL]<br />
|No<br />
|Yes, since 3.2.7<br />
|-<br />
|[https://github.com/marijnh/Postmodern/ Postmodern]<br />
|Common Lisp<br />
|[https://github.com/marijnh/Postmodern/blob/master/LICENSE zlib and PostgreSQL]<br />
|No<br />
|Yes, since 1.30<br />
|-<br />
|[https://github.com/will/crystal-pg crystal-pg]<br />
|Crystal<br />
|[https://github.com/will/crystal-pg/blob/master/LICENSE BSD 3-Clause]<br />
|No<br />
|Yes, since 0.18.0<br />
|-<br />
|[https://github.com/elixir-ecto/postgrex Postgrex]<br />
|Elixir<br />
|[https://github.com/elixir-ecto/postgrex Apache 2]<br />
|No<br />
|Yes, since 0.14.0<br />
|-<br />
|[https://github.com/anse1/emacs-libpq emacs-libpq]<br />
|Emacs Lisp<br />
|[https://github.com/anse1/emacs-libpq/blob/master/COPYING GPLv3]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/epgsql/epgsql epgsql]<br />
|Erlang<br />
|[https://github.com/epgsql/epgsql BSD 3-Clause]<br />
|No<br />
|Yes [https://github.com/epgsql/epgsql/issues/142]<br />
|-<br />
|[https://github.com/lib/pq github.com/lib/pq]<br />
|Go<br />
|[https://github.com/lib/pq/blob/master/LICENSE.md MIT]<br />
|No<br />
|Yes, since 1.1.0<br />
|-<br />
|[https://github.com/jackc/pgx pgx]<br />
|Go<br />
|[https://github.com/jackc/pgx/blob/master/LICENSE MIT]<br />
|No<br />
|Yes [https://github.com/jackc/pgx/commit/5044e8473ad948114b6cb63f6f30f94fc7834667], since 3.4.0<br />
|-<br />
|[https://github.com/go-pg/pg go-pg]<br />
|Go<br />
|[https://github.com/go-pg/pg/blob/master/LICENSE BSD 2-Clause]<br />
|No<br />
|Yes, since 6.15<br />
|-<br />
|[https://github.com/hdbc/hdbc-postgresql/wiki HDBC]<br />
|Haskell<br />
|[https://github.com/hdbc/hdbc-postgresql/blob/master/LICENSE BSD 3-Clause]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://hackage.haskell.org/package/postgresql-simple postgresql-simple]<br />
|Haskell<br />
|[https://github.com/phadej/postgresql-simple/blob/master/LICENSE BSD 3-Clause]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://jdbc.postgresql.org/ JDBC]<br />
|Java<br />
|[https://github.com/pgjdbc/pgjdbc/blob/master/LICENSE BSD 2-Clause]<br />
|No<br />
|Yes, since 42.2.0<br />
|-<br />
|[https://github.com/brianc/node-postgres node-postgres]<br />
|JavaScript<br />
|[https://github.com/brianc/node-postgres/blob/master/LICENSE MIT]<br />
|Optional<br />
| Yes [https://github.com/brianc/node-postgres/pull/1835], since 7.9.0<br />
|-<br />
|[https://github.com/porsager/postgres postgres.js]<br />
|JavaScript<br />
|[https://github.com/porsager/postgres/blob/master/LICENSE Do What The F*ck You Want To Public License]<br />
|No<br />
|Yes<br />
|-<br />
|[https://metacpan.org/release/DBD-Pg DBD::Pg]<br />
|Perl<br />
|[https://github.com/bucardo/dbdpg/blob/master/LICENSES/artistic.txt Artistic]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://www.php.net/manual/en/book.pgsql.php php-pgsql]<br />
|PHP<br />
|[https://www.php.net/license/3_01.txt PHPv3.0.1]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://www.php.net/manual/en/ref.pdo-pgsql.php PDO_PGSQL]<br />
|PHP<br />
|[https://www.php.net/license/3_01.txt PHPv3.0.1]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/m6w6/ext-pq ext-pq]<br />
|PHP<br />
|[https://github.com/m6w6/ext-pq/blob/master/LICENSE BSD 2-Clause]<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://www.pomm-project.org/ Pomm]<br />
|PHP<br />
|[https://github.com/pomm-project/Foundation/blob/master/LICENSE MIT]<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://initd.org/psycopg/ psycopg2]<br />
|Python (CPython only)<br />
|[http://initd.org/psycopg/license/ LGPLv3]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/MagicStack/asyncpg asyncpg]<br />
|Python<br />
|[https://github.com/MagicStack/asyncpg/blob/master/LICENSE Apache 2]<br />
|No<br />
|Yes, since 0.19.0 [https://github.com/MagicStack/asyncpg/commit/2d76f50dccf35cf2f1d70b41ebd6198d2dfff8d7]<br />
|-<br />
|[https://github.com/chtd/psycopg2cffi psycopg2cffi]<br />
|Python, PyPy<br />
|[https://github.com/chtd/psycopg2cffi/blob/master/LICENSE LGPLv3]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://cran.r-project.org/package=RPostgreSQL RPostgreSQL]<br />
|R<br />
|[https://cran.r-project.org/web/packages/RPostgreSQL/LICENSE GPLv2]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/ged/ruby-pg ruby-pg]<br />
|Ruby<br />
|[https://github.com/ged/ruby-pg/blob/master/LICENSE BSD 2-Clause]<br />
|Yes<br />
|Yes<br />
|-<br />
|[https://github.com/sfackler/rust-postgres rust-postgres]<br />
|Rust<br />
|[https://github.com/sfackler/rust-postgres/blob/master/LICENSE MIT]<br />
|No<br />
|Yes [https://github.com/sfackler/rust-postgres/commit/11ffcac087fef8907dd5cdfc3c082ff13f76557b]<br />
|-<br />
|[https://github.com/codewinsdotcom/PostgresClientKit PostgresClientKit]<br />
|Swift<br />
|[https://github.com/codewinsdotcom/PostgresClientKit/blob/master/LICENSE Apache 2]<br />
|No<br />
|Yes [https://github.com/codewinsdotcom/PostgresClientKit/releases/tag/v1.3.0], since 1.3.0<br />
|-<br />
|[https://github.com/vapor/postgres-nio PostgresNIO]<br />
|Swift<br />
|[https://github.com/vapor/postgres-nio/blob/master/LICENSE MIT]<br />
|No<br />
|Yes<br />
|-<br />
|[https://flightaware.github.io/Pgtcl/ Pgtcl]<br />
|Tcl<br />
|[https://github.com/flightaware/Pgtcl/blob/master/LICENSE BSD 3-Clause]<br />
|Yes<br />
|Yes<br />
|-<br />
|[http://sourceforge.net/projects/pgtclng/ pgtclng]<br />
|Tcl<br />
|[https://sourceforge.net/p/pgtclng/code/HEAD/tree/trunk/src/COPYRIGHT PostgreSQL]<br />
|Yes<br />
|Yes<br />
|-<br />
|}<br />
<br />
== Proprietary ==<br />
<br />
This section is temporary until we determine how we want to list out proprietary / closed source drivers.<br />
<br />
{| border="1"<br />
!Driver<br />
!Language<br />
|-<br />
|[https://www.cdata.com/drivers/postgresql/ado/ ADO.NET Provider for PostgreSQL by CData]<br />
|C#<br />
|-<br />
|[https://www.cdata.com/drivers/postgresql/odbc/ ODBC Driver for PostgreSQL by CData]<br />
|C<br />
|-<br />
|[https://www.cdata.com/drivers/postgresql/jdbc/ JDBC Driver for PostgreSQL by CData]<br />
|Java<br />
|}<br />
<br />
<br />
== Unsupported Drivers ==<br />
<br />
Below is a list of drivers that are no longer maintained.<br />
<br />
{| border="1"<br />
!Driver<br />
!Language<br />
|-<br />
|[https://github.com/sshirokov/CLSQL CLSQL]<br />
|Common Lisp<br />
|-<br />
|[http://frihjul.net/pgsql pgsql]<br />
|Erlang<br />
|-<br />
|[http://code.google.com/p/erlang-psql-driver/ erlang-psql-driver]<br />
|Erlang<br />
|-<br />
|[https://metacpan.org/pod/release/ARC/DBD-PgPP-0.08/lib/DBD/PgPP.pm PgPP]<br />
|Perl<br />
|}<br />
<br />
= See Also =<br />
<br />
* [http://www.postgresql.org/download/products/2 Software catalog list]<br />
<br />
[[Category: Language interface|!]]</div>Greghttps://wiki.postgresql.org/index.php?title=Binary_Replication_Tools&diff=35980Binary Replication Tools2021-05-10T17:46:04Z<p>Greg: /* Comparison Matrix */ Minor cleanups and clarifications</p>
<hr />
<div>= Disclaimer =<br />
<br />
This is a Work-In-Progress, started Dec 28, 2012.<br />
<br />
Additions welcome, and [[User:Selena|Selena]] reserves the right to edit. :)<br />
<br />
= Purpose =<br />
<br />
Compare binary replication tools for PostgreSQL for features and ease of use. This document should classify and differentiate binary replication tools for easier selection and fit to purpose.<br />
<br />
== Comparison Matrix ==<br />
<br />
{|border=1<br />
!Tool!!Documentation!!License!!Makes base backups!!Makes base backups from replicas!!Manages backups!!Creates replicas!!Monitors replication delay!!Supports automated failover!!Transport used!!Source includes replication tests<br />
|- style="background-color:#ffffcc;"<br />
| [http://www.postgresql.org/docs/current/static/app-pgbasebackup.html pg_basebackup]*<br />
| [http://www.postgresql.org/docs/current/static/app-pgbasebackup.html Postgres docs]<br />
| PostgreSQL<br />
| Yes<br />
| Yes<br />
| No<br />
| Manual<br />
| No<br />
| No<br />
| PostgreSQL connection<br />
|<br />
|-<br />
| [http://www.pgbackrest.org/ pgBackRest]<br />
| [http://www.pgbackrest.org/user-guide.html Documentation]<br />
| MIT<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Yes<br />
| No<br />
| No<br />
| SSH / S3 / Azure / GCS<br />
| Yes<br />
|-<br />
| [http://www.pgbarman.org/ pgbarman]<br />
| [http://docs.pgbarman.org/ Documentation]<br />
| GPLv3<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| Yes<br />
| No<br />
| rsync/SSH, pg_basebackup<br />
|<br />
|-<br />
| [https://github.com/omniti-labs/omnipitr OmniPITR]<br />
| [https://github.com/omniti-labs/omnipitr/blob/master/doc/intro.pod Intro]<br />
| PostgreSQL<br />
| Yes<br />
| Yes<br />
| No<br />
| Manual<br />
| WAL archive delay<br />
| No<br />
| rsync / SSH<br />
|<br />
|-<br />
| [https://github.com/ohmu/pghoard pghoard]<br />
| [https://github.com/ohmu/pghoard/blob/master/README.rst Readme]<br />
| Apache<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Yes<br />
| No<br />
| No<br />
| S3, Azure, Ceph, GCS<br />
| Yes<br />
|-<br />
| [http://code.google.com/p/pg-rman/ pg-rman]<br />
| [http://code.google.com/p/pg-rman/wiki/readme Readme]<br />
| BSD<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| local / NFS mount<br />
|<br />
|-<br />
| [http://www.repmgr.org/ repmgr]<br />
| [https://github.com/2ndQuadrant/repmgr#readme Readme]<br />
| GPLv3<br />
| No<br />
| No<br />
| No<br />
| Yes<br />
| Yes<br />
| Yes<br />
| rsync / SSH<br />
|<br />
|-<br />
| [https://github.com/markokr/skytools Skytools]<br />
| [https://github.com/markokr/skytools/blob/master/doc/walmgr3.txt walmgr3]<br />
| BSD-ish<br />
| ?<br />
| ?<br />
| ?<br />
| Manual<br />
| Yes<br />
| Yes<br />
| ?<br />
| ?<br />
|-<br />
| [https://github.com/wal-e/WAL-E WAL-E]<br />
| [https://github.com/wal-e/WAL-E#readme Readme]<br />
| BSD<br />
| Yes<br />
| No<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| HTTPS/SSL<br />
|<br />
|-<br />
| [https://github.com/wal-g/wal-g WAL-G]<br />
| [https://github.com/wal-g/wal-g/blob/master/README.md README]<br />
| Apache 2.0<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| HTTPS/SSL<br />
| Yes<br />
|-<br />
| [https://github.com/postgrespro/pg_probackup pg_probackup]<br />
| [https://github.com/postgrespro/pg_probackup/blob/master/README.md README]<br />
| PostgreSQL<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| local / NFS mount<br />
| Yes<br />
|-<br />
| [https://github.com/dalibo/pitrery pitrery]<br />
| [http://dalibo.github.io/pitrery/documentation.html Documentation]<br />
| BSD-2<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Manual<br />
| No<br />
| No<br />
| rsync / SSH<br />
| ongoing<br />
|}<br />
<br />
* pg_basebackup is included with a standard PostgreSQL installation.<br />
<br />
<br />
'''Tool''': Name of the project that manages binary replication or WAL archiving<br />
<br />
'''Documentation''': Link to canonical documentation for the project. Several projects have broken links that show up as top results in Google.<br />
<br />
'''License''': License software is released under. So far, we only have open/free software projects listed. We could add commercial projects.<br />
<br />
'''Makes base backups''': Yes if the project supports creating binary archives, including the necessary WAL to restore a backed-up instance.<br />
<br />
'''Makes base backups from replicas''': Yes if the project supports creating binary archives and WAL using the PGDATA directory from a replica rather than the master database.<br />
<br />
'''Manages backups''': Yes if the project adds, removes and lists binary archives.<br />
<br />
'''Creates replicas''': Yes if the project automatically adds a recovery.conf (sets up replication) as part of restoring a base backup.<br />
<br />
'''Monitors replication delay''': Yes if the project supports monitoring of replication delay (WAL shipping or streaming replication).<br />
<br />
'''Supports automated failover''': Yes if the project has an option for detecting master failure and promoting a replica to master.<br />
<br />
'''Transport used''': Supported methods for file transfer for making backups or replicas<br />
<br />
= Barman =<br />
<br />
[http://www.pgbarman.org/ pgbarman] [https://github.com/selenamarie/pg_replication_demo/tree/master/barman Demo setup for pgbarman]<br />
<br />
Summary of features:<br />
* Creates base backups<br />
* Uses SSH as transport for backup<br />
* Configuration stored in file or command-line<br />
* Restore is automatable for creating replicas, although not explicitly supported<br />
* GPLv3<br />
<br />
Install notes:<br />
* Had a dependency problem with 9.1 on Ubuntu Precise, so installed from source<br />
* Missing dep for argcomplete, noted in README in demo repo<br />
* written in Python<br />
<br />
= OmniPITR =<br />
<br />
[https://github.com/omniti-labs/omnipitr OmniPITR]<br />
[https://github.com/selenamarie/pg_replication_demo/tree/master/omnipitr Demo of a simple replication setup with OmniPITR]<br />
<br />
Summary of Features:<br />
* Creating PITR backups from Master or Slave<br />
* Restoring a PITR backup for DR<br />
* Creating replicas (by untarring backups)<br />
* Monitoring of replicas<br />
* Supports 'pause removal' of WAL during a backup (nice!)<br />
* PostgreSQL license<br />
<br />
Install notes:<br />
* No packaging, perl<br />
* No documented support for streaming replication<br />
* Uses '^' instead of '%' in custom logfile naming<br />
* No configuration file option (instead of using long command-line options)<br />
* Supported on all Linux, Solaris variants<br />
<br />
= pgBackRest =<br />
<br />
[http://www.pgbackrest.org/ pgBackRest]<br />
<br />
Summary of features:<br />
* Parallel Backup & Restore<br />
* Local or Remote Operation<br />
* Full, Incremental, & Differential Backups<br />
* Backup & Archive Expiration<br />
* Backup Resume<br />
* Streaming Compression & Checksums<br />
* Delta Restore<br />
* Parallel WAL Archiving<br />
* Tablespace & Link Support<br />
* Compatibility with PostgreSQL >= 8.3<br />
* Support for S3, Azure, and GCS<br />
<br />
= pghoard =<br />
<br />
https://github.com/ohmu/pghoard<br />
<br />
Summary of features:<br />
* Stores basebackups and WAL to cloud object stores (AWS S3, Azure, Ceph, Google Cloud)<br />
* Restores existing basebackups and sets up a new cluster with a recovery.conf pointing to another master database<br />
* Creates scheduled basebackups<br />
* Can be used as archive_command to archive WALs in the object store<br />
* Can be used as restore_command to restore WALs from the object store<br />
* Basebackup and WAL compression using LZMA<br />
* Optionally encrypts backups<br />
* Supports PITR using timestamps, names and xids<br />
<br />
Install notes:<br />
* Written in Python, includes Debian and Fedora packaging scripts<br />
* 'pghoard' daemon manages basebackups and cleans up WALs<br />
* 'pghoard_archive_command' and 'pghoard_restore' access the locally running 'pghoard' daemon to store and restore files<br />
<br />
= pg-rman =<br />
<br />
[http://code.google.com/p/pg-rman/ pg-rman] [https://github.com/selenamarie/pg_replication_demo/tree/master/pgrman Simple demo for pg_rman setup]<br />
<br />
Summary of features:<br />
* Online backup and recovery, including backup from a replica<br />
* Archive management and restore<br />
* .ini configuration file<br />
* Simple command-line options<br />
<br />
Install notes:<br />
* Written in C, install with 'make USE_PGXS=1'<br />
* On Ubuntu/Debian, installs in non-default bin directory<br />
* Commands are simple, manages WAL archive as well as backups<br />
<br />
<br />
= pitrery =<br />
<br />
[http://dalibo.github.io/pitrery/ pitrery]<br />
[https://github.com/dalibo/pitrery/ GitHub repository]<br />
<br />
Summary of features:<br />
* Local or Remote backups<br />
* Backup from standby<br />
* Backup modes: tar, or rsync in hard-link mode<br />
* Eases WAL archiving<br />
* Backups and WAL archive expiration<br />
* Uses SSH as transport for backup<br />
* GPG encryption (experimental)<br />
* PostgreSQL >= 8.3<br />
<br />
Install notes:<br />
* written in bash, install with 'make install'<br />
* [https://apt.dalibo.org/labs/ deb] and [https://yum.dalibo.org/labs/ rpm] packages available<br />
<br />
= repmgr =<br />
<br />
[http://www.repmgr.org/ repmgr]<br />
[https://github.com/2ndQuadrant/repmgr GitHub repository]<br />
[https://github.com/selenamarie/pg_replication_demo/tree/master/repmgr Demo of a simple setup with repmgr]<br />
<br />
Supported features:<br />
* Setting up new replicas/hot_standby with streaming replication (makes recovery.conf itself)<br />
* Making base backups<br />
* Failover (automated, or not, including redirecting replicas to connect to a new master after failover)<br />
* Lag monitoring (repmgrd)<br />
* A "witness" DB server for monitoring (typically on a replica)<br />
* License: GPLv3<br />
<br />
Install notes:<br />
* Written in C<br />
* Developed on Debian systems, so support for package is present. [https://launchpad.net/repmgr Ubuntu packages] are available since Trusty (14.04).<br />
* Otherwise, installs like a typical UNIX utility out of postgresql/contrib source tree (make USE_PGXS=1; make USE_PGXS=1 install)<br />
* Detailed docs are in [https://github.com/2ndQuadrant/repmgr the README] for installing on many Linux platforms<br />
* Doesn't appear to be supported on Mac OS X<br />
<br />
= Skytools / walmgr =<br />
<br />
[https://github.com/markokr/skytools Skytools]<br />
<br />
= WAL-E =<br />
<br />
[https://github.com/wal-e/wal-e WAL-E]<br />
<br />
= WAL-G =<br />
<br />
[https://github.com/wal-g/wal-g WAL-G]<br />
<br />
Summary of features:<br />
* Archive management and restore<br />
* Inspired by WAL-E<br />
* High performance<br />
* Parallelized operation<br />
<br />
Installation notes:<br />
* Written in Go, install with the usual Go tools.<br />
<br />
= pg_probackup =<br />
<br />
[https://github.com/postgrespro/pg_probackup pg_probackup]<br />
<br />
[https://postgrespro.com/docs/postgrespro/10/app-pgprobackup Documentation]<br />
<br />
[https://github.com/postgrespro/pg_probackup#installation-and-setup Installation]<br />
<br />
Summary of features:<br />
* PostgreSQL >= 9.5<br />
* Simple command-line interface<br />
* Remote backup/restore<br />
* Parallel Backup, Restore and Validation operations<br />
* Block-level incremental backups<br />
* Two modes for incremental backups: PAGE and DELTA<br />
* Compression of data files and WAL files<br />
* Retention policies (time, window)<br />
* Backup corruption detection<br />
* Block corruption detection (during backup)<br />
* Restore target validation<br />
* Backup from standby<br />
* Extended logging options<br />
* Windows support<br />
* Merge Full and Incremental backups</div>Greghttps://wiki.postgresql.org/index.php?title=Gsoc&diff=35953Gsoc2021-04-28T20:14:47Z<p>Greg: Redirect for searching goodness</p>
<hr />
<div>#REDIRECT [[GSoC]]</div>Greghttps://wiki.postgresql.org/index.php?title=Transparent_Data_Encryption&diff=35623Transparent Data Encryption2021-01-25T20:38:05Z<p>Greg: Fix wrapping</p>
<hr />
<div><br />
This page describes the transparent data encryption feature proposed in pgsql-hackers.<br />
<br />
= Overview =<br />
<br />
There has been continual discussion about whether and how to implement Transparent Data Encryption (TDE) in Postgres. Many other relational databases support TDE, and some security standards require it.<br />
However, it is also debatable how much security value TDE provides.<br />
<br />
Fundamentally, TDE must meet three criteria. It must be secure, obviously, but it must also be implemented in a way that has minimal impact on the rest of the Postgres code. Minimizing impact has value<br />
for two reasons: first, only a small number of users will use TDE, so the less code that is added, the less testing is required; second, the less code that is added, the less likely TDE is to break<br />
because of future Postgres changes. Finally, TDE should meet regulatory requirements. <br />
<br />
== History ==<br />
<br />
The first patch was proposed in 2016 [https://www.postgresql.org/message-id/CA%2BCSw_tb3bk5i7if6inZFc3yyf%2B9HEVNTy51QFBoeUk7UE_V%3Dw%40mail.gmail.com] and implemented cluster-wide encryption with a single<br />
key. In 2018 table-level transparent data encryption was proposed [https://www.postgresql.org/message-id/031401d3f41d%245c70ed90%241552c8b0%24%40lab.ntt.co.jp], together with a method to integrate with key<br />
management systems; that first patch was submitted in 2019 [https://www.postgresql.org/message-id/CAD21AoBjrbxvaMpTApX1cEsO%3D8N%3Dnc2xVZPB0d9e-VjJ%3DYaRnw%40mail.gmail.com]. The patch implemented both<br />
tablespace-level encryption using a 2-tier key architecture and generic key management API to communicate with external key management systems.<br />
<br />
== Scope of TDE ==<br />
<br />
The scope of TDE is:<br />
<br />
* Internal key management system (KMS), storing keys in the database<br />
* Cluster-wide encryption<br />
** encrypt everything that is persistent<br />
** not encrypting shared buffers or data in memory<br />
<br />
The benefit of cluster wide encryption is:<br />
<br />
* Simple architecture<br />
* Suitable for the requirement of encrypting everything<br />
<br />
Cluster-wide encryption meets the compliance requirements and checks the box as far as TDE is concerned. It also meets the criteria of encrypting the data at rest i.e., persistent data.<br />
<br />
= When to encrypt/decrypt =<br />
<br />
== Buffer ==<br />
<br />
It encrypts buffer data during disk I/O<br />
<br />
* Processes encrypt data when writing it to disk<br />
* Decrypt when reading from disk<br />
* Data in the shared buffer is '''not''' encrypted<br />
<br />
== WAL ==<br />
<br />
In cluster encryption<br />
<br />
* Processes insert WAL data to WAL buffers in non-encrypted state<br />
* WAL buffers are encrypted when writing to the file system<br />
* WAL would use a dedicated encryption key<br />
<br />
== Temporary Files ==<br />
<br />
Temporary files would use a temporary key that is randomly generated at postmaster start and lives only for the postmaster lifetime. For parallel queries, especially parallel hash joins, since it's<br />
possible that multiple parallel workers use the same temporary files, the temporary key should be shared with parallel workers.<br />
<br />
== Backups ==<br />
<br />
In cluster-wide encryption, there will be an option in pg_basebackup to change the heap/index key for key rotation purposes. After failover to a standby, the WAL key can be changed too.<br />
<br />
= How to encrypt =<br />
<br />
We will use Advanced Encryption Standard (AES) [https://en.wikipedia.org/wiki/Advanced_Encryption_Standard]. We will use AES-GCM [https://en.wikipedia.org/wiki/Galois/Counter_Mode] for heap/index and WAL<br />
encryption and AES-GCM with KWP [https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Algorithm-Validation-Program/documents/mac/KWVS.pdf] for key wrapping.<br />
<br />
== Key length ==<br />
<br />
We will offer three key length options (128, 192, and 256 bits), selected at initdb time with <code>--file-encryption-keylen</code>.<br />
<br />
== Initialization Vector(IV) ==<br />
<br />
Nonce means "number used once". An IV is a specific type of nonce. That is, unique but not necessarily random or secret, as specified by the NIST<br />
[https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38a.pdf]. To generate unique IVs, the NIST recommends two methods:<br />
<br />
The first method is to apply the forward cipher function, under the same key that is used for the<br />
encryption of the plaintext, to a nonce. The nonce must be a data block that is unique to each<br />
execution of the encryption operation. For example, the nonce may be a counter, as described in<br />
Appendix B, or a message number. The second method is to generate a random data block using a<br />
FIPS-approved random number generator. <br />
<br />
We will use the first method to generate IVs. That is, we select the nonce carefully and run it through a cipher with the key to make it unique enough to use as an IV. The nonce selection for buffer<br />
encryption and WAL encryption is described below.<br />
<br />
=== IV for heap/index encryption ===<br />
<br />
We will use the page LSN (8 bytes) and page number (4 bytes) to create an IV (16 bytes) for each page.<br />
<br />
Using the page LSN and page number as part of the nonce has four benefits:<br />
<br />
* We don't need to decrypt/re-encrypt during CREATE DATABASE since the page contents are the same in both places, and once one database changes its pages, it gets a new LSN, and hence a new nonce/IV.<br />
* For each change of an 8k page, you get a new nonce/IV, so you are not encrypting different data with the same nonce/IV.<br />
* This avoids requiring pg_upgrade to preserve database oids, tablespace oids, and relfilenodes.<br />
* We get a unique nonce even when two different pages in the same relation have the same LSN because the page numbers are different (we don't use the same LSN in different relations)<br />
** This can happen when a heap update expires an old tuple and adds a new tuple to another page.<br />
<br />
However, the LSN must then be visible on encrypted pages, so we will not encrypt the LSN on the page. We will also not encrypt the CRC so pg_checksums can still check pages offline without access to the<br />
keys. Probably, we will need to encrypt pd_lower and pd_upper so as not to hint to an attacker where the hole is within the page. Therefore, we will need to encrypt from pd_flags or pd_lower onward.<br />
<br />
typedef struct PageHeaderData<br />
{<br />
/* XXX LSN is member of *any* block, not only page-organized ones */<br />
PageXLogRecPtr pd_lsn; /* LSN: next byte after last byte of xlog<br />
* record for last change to this page */<br />
uint16 pd_checksum; /* checksum */<br />
uint16 pd_flags; /* flag bits, see below */<br />
LocationIndex pd_lower; /* offset to start of free space */<br />
LocationIndex pd_upper; /* offset to end of free space */<br />
LocationIndex pd_special; /* offset to start of special space */<br />
uint16 pd_pagesize_version;<br />
TransactionId pd_prune_xid; /* oldest prunable XID, or zero if none */<br />
ItemIdData pd_linp[FLEXIBLE_ARRAY_MEMBER]; /* line pointer array */<br />
} PageHeaderData;<br />
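The IV layout described above can be sketched as follows. This is a hypothetical illustration, not code from the patch: it packs the 8-byte page LSN, the 4-byte block number, and the 4 spare bytes into a 16-byte nonce.<br />

```python
import struct

def heap_page_iv(page_lsn: int, block_number: int, counter: int = 0) -> bytes:
    """Hypothetical 16-byte IV for a heap/index page: 8-byte page LSN,
    4-byte block number, and 4 spare bytes (usable as a hint-bit counter)."""
    return struct.pack(">QII", page_lsn, block_number, counter)

# Two pages in the same relation sharing an LSN still get distinct IVs:
assert heap_page_iv(0x01000028, 7) != heap_page_iv(0x01000028, 8)
assert len(heap_page_iv(0x01000028, 7)) == 16
```

Because the LSN and block number are both inputs, any change that advances the page LSN, or any difference in page position, yields a fresh IV.<br />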
<br />
There are some challenges in making sure the nonce is never reused. Unlogged tables don't update the page LSN when updating the page, so we would need something like GiST's magic "GistBuildLSN" fake<br />
LSN too, and a bit in the IV to distinguish whether it's a real LSN or an unlogged one; perhaps we need a separate encryption key for unlogged tables.<br />
<br />
Another challenge is hint bit updates, which change the page contents but don't always update the page LSN. wal_log_hints can be enabled so the LSN changes on the first hint bit change during a<br />
checkpoint. However, later hint bit changes during the same checkpoint would not cause an LSN page change. This would result in using the same nonce to encrypt different page contents.<br />
<br />
One fix for this would be to create dummy WAL records and update the page LSN for every hint bit change (not just the first one since the last checkpoint), but that would write too many dummy records. A<br />
more reasonable option would be to update a per-page counter on every hint bit change for the same page LSN, and use that counter in the four unused IV bytes. However, the counter would have to be stored<br />
unencrypted on the page. If the counter is about to wraparound, we force wal_log_hint_bits to generate a new LSN, even though it is not the first hint bit change during the checkpoint. (Checksums use a similar method to allow page checksums to be recomputed multiple<br />
times during the same checkpoint, but only create a page image WAL record on first page hint bit change during a checkpoint.)<br />
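The per-page counter idea can be sketched as below, assuming the four spare IV bytes hold the counter; the function and parameter names here are illustrative, not from the patch.<br />

```python
COUNTER_MAX = 2**32 - 1  # capacity of the four spare IV bytes

def bump_hint_counter(page_lsn: int, counter: int, fresh_lsn: int):
    """On a hint-bit-only change, keep the page LSN but advance the IV
    counter; if the counter would wrap, fall back to WAL-logging the hint
    (fresh_lsn stands in for the newly assigned LSN) and reset the counter,
    so no (LSN, page number, counter) IV is ever reused."""
    if counter >= COUNTER_MAX:
        return fresh_lsn, 0
    return page_lsn, counter + 1

assert bump_hint_counter(100, 5, 200) == (100, 6)
assert bump_hint_counter(100, COUNTER_MAX, 200) == (200, 0)
```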
<br />
=== IV for WAL encryption ===<br />
<br />
We will use the WAL segment number to create an IV for each WAL segment. The maximum amount of data that can be processed with a single key/IV (nonce) pair is 68GB<br />
[https://crypto.stackexchange.com/questions/44113/what-is-a-safe-maximum-message-size-limit-when-encrypting-files-to-disk-with-aes][https://crypto.stackexchange.com/questions/20333/encryption-of-big-files-in-java-with-aes-gcm/20340#20340].<br />
<br />
We will use a different IV (nonce) for each 16MB WAL file, so we will be well under that limit. What about timelines?<br />
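A comparable sketch for the per-segment WAL IV (again illustrative, not from any patch): the segment number fills the low bytes, and the remaining bytes are zero, leaving room for something like a timeline ID if segments on different timelines must be distinguished. Since each 16MB segment gets its own IV, each key/IV pair covers far less than the 68GB limit.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: one IV per 16MB WAL segment, derived from the
 * segment number.  The upper 8 bytes are zero; they could hold a
 * timeline ID if that turns out to be necessary. */
static void
wal_encryption_iv(uint64_t segment_no, unsigned char iv[16])
{
    memset(iv, 0, 16);
    memcpy(iv, &segment_no, 8);
}
```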
<br />
=== IV for temporary files ===<br />
<br />
It is unclear how to set the nonce for temporary files. We will probably use a data encryption key generated at postmaster start, and mix that with the time of day, process id, and maybe file path.<br />
<br />
= Checksum and Encryption =<br />
<br />
Encrypt the page first, then compute the CRC over the encrypted contents, and store the CRC unencrypted.<br />
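A sketch of that write-path ordering, using toy stand-ins for the cipher and the checksum algorithm (only the order of operations is the point): encrypt the page body, checksum the encrypted image, and store the checksum in the clear so it can be verified offline without any key.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy stand-ins for AES and the real page checksum algorithm. */
static void
toy_encrypt(unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= 0xAA;
}

static uint16_t
toy_checksum(const unsigned char *buf, size_t len)
{
    uint32_t sum = 0;

    for (size_t i = 0; i < len; i++)
        sum = sum * 31 + buf[i];
    return (uint16_t) sum;
}

#define PD_CHECKSUM_OFF 8   /* pd_checksum follows the 8-byte pd_lsn */
#define ENCRYPT_START  12   /* leave pd_lsn, pd_checksum, pd_flags in the clear */

/* Encrypt-then-checksum: the stored CRC covers the encrypted image. */
static void
encrypt_page_for_write(unsigned char *page, size_t pagesize)
{
    uint16_t crc;

    toy_encrypt(page + ENCRYPT_START, pagesize - ENCRYPT_START);
    page[PD_CHECKSUM_OFF] = 0;      /* checksum field is zeroed while */
    page[PD_CHECKSUM_OFF + 1] = 0;  /* the checksum is computed       */
    crc = toy_checksum(page, pagesize);
    page[PD_CHECKSUM_OFF] = (unsigned char) (crc & 0xFF);
    page[PD_CHECKSUM_OFF + 1] = (unsigned char) (crc >> 8);
}
```

A tool like pg_checksums can then re-zero the field, recompute the checksum over the still-encrypted page, and compare, all without access to the keys.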
<br />
== Prepared Transaction Encryption ==<br />
<br />
During the discussion, the point about prepared transaction encryption also came up, since prepared transactions are also persisted. Sawada-san mentioned that we aren't storing any important data for prepared transactions,<br />
so we might not need to encrypt them. However, we need to keep this on the todo list.<br />
<br />
= Other requirements =<br />
<br />
TDE requires the OpenSSL library.<br />
<br />
= TODO for Full-Cluster Encryption =<br />
<br />
Here is a list of ongoing tasks, with their assignments and status, for cluster-wide encryption and the internal key management system.<br />
<br />
* Buffer encryption - Assigned to Sawada. Status: Needs to be updated based on the latest KMS patch.<br />
** '''Which Files do we need to encrypt?'''<br />
*** Need to go through the data directory and get a list of all the files that contain user data - Assigned to Moon-san (Moon-san will check if NTT has done some work in this area and share the result)<br />
*** Need to get buy-in from the community for not encrypting non-user-data files, e.g., visibility map, transaction status data, etc. (This needs to be a team effort)<br />
** don't encrypt LSN(pd_lsn) and CRC(pd_checksum) of the encrypted page contents<br />
*** Encrypt then checksum<br />
** don't encrypt the first 12 bytes of a page so pd_flags is visible in encrypted and non-encrypted mode? (Might be useful for online checksum and encryption.)<br />
** make pd_pagesize_version visible on the encrypted page?<br />
** shared buffers nonce is LSN/page-number (nonce is run through encryption to create the IV?)<br />
** require ''wal_log_hints'' and ''full_page_writes'' so that hint bit changes are forced to be WAL logged (generating a new page LSN)<br />
<br />
* WAL encryption - Assigned to Moon/Sawada. Status: Needs to be updated based on the latest KMS patch.<br />
** use WAL key<br />
** use CTR mode<br />
** for WAL, don't use OpenSSL's EVP interface so the offset can be specified?<br />
** WAL nonce is segment number (no timeline?)<br />
** Encrypt whole WAL records<br />
*** Need to make sure that we don't encrypt different data with the same key and nonce and write it to disk, especially when encrypting data that is not a multiple of 16 bytes.<br />
** add Assert() code to check that there are no WAL record types that modify more than one relation (already written)<br />
<br />
* Temporary file encryption - Assigned to Moon. Status: Work in progress, will be posted to hackers soon.<br />
** Encryption key: A random hash value is generated for each temporary file, and a temporary key is derived from the combination of that hash value and the master key (will use HMAC-SHA256).<br />
** IV value: A separate encryption key is used for each file, so the IV can be generated simply (64 bits = hash value, 32 bits = counter), since each file (pgsql_tmp) should not exceed 1GB.<br />
** Need to check if some new temporary files that could have user data are introduced by recent changes for PG13.<br />
<br />
* Front end tools encryption - Assigned to Cary. Status: Pending due to the shift of focus to KMS for PG13. Some front-end patches that illustrate the interactions with KMS have been shared with the community. Development can resume once the focus shifts back to TDE.<br />
** Allow ''pg_rewind'' and ''pg_waldump'' to work, add ''--cluster-passphrase-command'' option<br />
** does ''pg_rewind'' need to work across WAL key changes?<br />
** offline tool to allow changing the data encryption key of current WAL and PITR WAL files, must be crash-safe<br />
** changing the pass phrase will require ''--old-passphrase-command'' and ''--new-passphrase-command'' options<br />
** modify pg_basebackup to store relations with a different relation key so standby servers can be used for relation key rotation<br />
<br />
Here are some tasks or areas that we need to research; some of these are being worked on as part of the main todos listed above. This is an exhaustive list, to ensure that we don't skip any todo required for the<br />
first phase of TDE. <br />
<br />
* TDE for replication<br />
** the WAL sender (especially xlogreader) needs to take the TDE WAL key to decrypt WAL data<br />
** Physical replication<br />
*** Are WAL records sent in plaintext?<br />
*** Allow the primary and standby servers to use different heap/index keys<br />
** Logical replication (and decoding)<br />
*** We decrypt and decode WAL data and send the changes in plaintext.<br />
*** The subscriber can use different encryption keys or even disable TDE.<br />
* TDE for backup<br />
** During physical backup (pg_basebackup or copying OS files), table/index data are transferred in encrypted form and all three internal keys are replicated<br />
** '''Maybe we need to have the ability to change some internal keys during basebackup?''' This can be done by generating new internal keys, re-encrypting database files with the new keys during the transfer (we already do checksum verification for every page), generating a new control file containing the new internal keys, and sending it.<br />
** For logical backup (pg_dump), all data is dumped in plaintext because pg_dump simply fetches data via SQL.<br />
** How does TDE work with backup manifests?<br />
* Regression test cases for TDE<br />
* Documentation<br />
** if a standby is promoted to a primary and the old primary continues writing, one must be re-keyed to avoid using the same IV<br />
* Comprehensive testing<br />
<br />
= List of PostgreSQL files that contain user data =<br />
If any files are missing, or a note or user-data classification is incorrect, please correct it.<br />
{| cellspacing="0" border="1"<br />
!Num<br />
!Database cluster<br />
!Contains user data<br />
!Single Sequential Write<br />
!Single Process Write-then-Read<br />
!note<br />
|-<br />
|1<br />
|PG_VERSION<br />
|not contain<br />
|<br />
|<br />
|Only PostgreSQL version information is written<br />
|-<br />
|2<br />
|'''base/NNNNN/NNNNN'''<br />
|'''contain'''<br />
|<br />
|<br />
|Table data<br />
|-<br />
|3<br />
|base/NNNNN/NNNNN_vm<br />
|not contain<br />
|<br />
|<br />
|VM file<br />
|-<br />
|4<br />
|base/NNNNN/NNNNN_fsm<br />
|not contain<br />
|<br />
|<br />
|FSM file<br />
|-<br />
|5<br />
|base/NNNNN/NNNNN_init<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|6<br />
|base/NNNNN/PG_VERSION<br />
|not contain<br />
|<br />
|<br />
|Only PostgreSQL version information is written<br />
|-<br />
|7<br />
|base/NNNNN/pg_filenode.map<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|8<br />
|base/NNNNN/pg_internal.init<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|9<br />
|'''base/pgsql_tmp/pgsql_tmpPID.tempFileCounter'''<br />
|'''contain'''<br />
|<br />
|<br />
|Temporary file that holds user data when work_mem is insufficient<br />
|-<br />
|10<br />
|current_logfiles<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|11<br />
|'''global/NNNN'''<br />
|'''contain'''<br />
|<br />
|<br />
|Database name and user name<br />
|-<br />
|12<br />
|global/NNNN_vm<br />
|not contain<br />
|<br />
|<br />
|vm file<br />
|-<br />
|13<br />
|global/NNNN_fsm<br />
|not contain<br />
|<br />
|<br />
|fsm file<br />
|-<br />
|14<br />
|global/pg_control<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|15<br />
|global/pg_filenode.map<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|16<br />
|global/pg_internal.init<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|17<br />
|pg_commit_ts/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|18<br />
|pg_dynshmem/mmap.NNNNNNN<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|19<br />
|pg_logical/mappings/<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|20<br />
|pg_logical/replorigin_checkpoint<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|21<br />
|pg_logical/snapshots/0-XXXXXXXX.snap<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|22<br />
|pg_multixact/members/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|23<br />
|pg_multixact/offsets/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|24<br />
|pg_notify/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|25<br />
|pg_replslot/Slotname/state<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|26<br />
|'''pg_replslot/Slotname/xid-NNN-lsn-0-NNNNNNNN.snap'''<br />
|'''contain'''<br />
|<br />
|<br />
|Includes user data decoded from WAL files<br />
|-<br />
|27<br />
|pg_serial/<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|28<br />
|pg_snapshots/NNNNNNNN-N<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|29<br />
|'''pg_stat/db_NNNNN.stat'''<br />
|'''contain'''<br />
|<br />
|<br />
|Statistics collector includes user data<br />
|-<br />
|30<br />
|'''pg_stat/global.stat'''<br />
|'''contain'''<br />
|<br />
|<br />
|Statistics collector includes user data<br />
|-<br />
|31<br />
|'''pg_stat_tmp/db_NNNNN.stat'''<br />
|'''contain'''<br />
|<br />
|<br />
|Statistics collector includes user data<br />
|-<br />
|32<br />
|'''pg_stat_tmp/global.stat'''<br />
|'''contain'''<br />
|<br />
|<br />
|Statistics collector includes user data<br />
|-<br />
|33<br />
|pg_subtrans/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|34<br />
|'''pg_tblspc/PG_NN_NNNNNNNN/NNNNN'''<br />
|'''contain'''<br />
|<br />
|<br />
|Symlink files<br />
|-<br />
|35<br />
|pg_twophase/NNNNNNNN<br />
|not contain<br />
|<br />
|<br />
|Exclude user data<br />
|-<br />
|36<br />
|'''pg_wal/NNNNNNNNNNNNNNNNNN'''<br />
|'''contain'''<br />
|<br />
|<br />
|WAL data<br />
|-<br />
|37<br />
|pg_wal/*.backup<br />
|not contain<br />
|<br />
|<br />
|Exclude user data<br />
|-<br />
|38<br />
|pg_wal/*.history<br />
|not contain<br />
|<br />
|<br />
|Exclude user data<br />
|-<br />
|39<br />
|'''pg_wal/*.partial'''<br />
|'''contain'''<br />
|<br />
|<br />
|WAL data<br />
|-<br />
|40<br />
|pg_wal/archive_status/NNNNNNNNNNNNNNNNNN.done<br />
|not contain<br />
|<br />
|<br />
|Exclude user data<br />
|-<br />
|41<br />
|pg_wal/archive_status/NNNNNNNNNNNNNNNNNN.ready<br />
|not contain<br />
|<br />
|<br />
|Exclude user data<br />
|-<br />
|42<br />
|pg_xact/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|43<br />
|postgresql.auto.conf<br />
|not contain<br />
|<br />
|<br />
|setting file<br />
|-<br />
|44<br />
|postmaster.opts<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|45<br />
|postmaster.pid<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|46<br />
|postgresql.conf<br />
|not contain<br />
|<br />
|<br />
|Setting file<br />
|-<br />
|47<br />
|pg_hba.conf<br />
|not contain<br />
|<br />
|<br />
|<br />
|}<br />
<br />
= TDE in other systems =<br />
<br />
== MySQL (InnoDB) ==<br />
<br />
MySQL supports per-tablespace, data-at-rest encryption [https://dev.mysql.com/doc/refman/8.0/en/innodb-tablespace-encryption.html#innodb-tablespace-encryption-about]. Please note that in MySQL a<br />
tablespace refers to a data file that can hold data for one or more InnoDB tables and associated indexes, while in PostgreSQL a tablespace refers to a directory. The innodb_file_per_table option allows<br />
tables to be created in their own tablespaces. As of MySQL 8.0.16 it supports redo log and undo log<br />
encryption [https://dev.mysql.com/doc/refman/8.0/en/innodb-tablespace-encryption.html#innodb-tablespace-encryption-redo-log] and system tables<br />
encryption [https://dev.mysql.com/doc/refman/8.0/en/innodb-tablespace-encryption.html#innodb-schema-tablespace-encryption-default]. It uses a two-tier key architecture: there is a tablespace key for each<br />
tablespace, stored in the header of the tablespace file. The master key can be obtained from external systems via a keyring plugin [https://dev.mysql.com/doc/refman/8.0/en/keyring.html].<br />
<br />
MySQL encrypts each page of both the redo log and the undo log with dedicated keys, not with the keys used for table encryption. The encryption key is stored, in encrypted form, in the header of the first redo/undo log file.<br />
<br />
== Oracle DB ==<br />
<br />
Oracle DB supports column-level and tablespace-level TDE, both approaches use a two-tiered key-based architecture<br />
[https://docs.oracle.com/database/121/ASOAG/introduction-to-transparent-data-encryption.htm#ASOAG10272].<br />
The Master Encryption Key (MEK) is stored in an external key store with both hardware and software key stores supported<br />
[https://docs.oracle.com/database/121/ASOAG/introduction-to-transparent-data-encryption.htm#ASOAG10273]. The MEK is used to secure the column- and tablespace-level keys. Column-level TDE uses one key per<br />
table, tablespace-level TDE uses one key per tablespace.<br />
Oracle TDE supports Triple-DES (3DES168) and AES (128, 192, 256 bit). Column-level TDE defaults to AES-192, tablespace-level TDE defaults to AES-128. Both methods add a salt to the plaintext before<br />
encryption by default<br />
[https://docs.oracle.com/database/121/ASOAG/introduction-to-transparent-data-encryption.htm#ASOAG9578].<br />
Column-level TDE supports a NOMAC parameter to improve performance.<br />
<br />
== MS SQL Server ==<br />
<br />
MS SQL Server supports database-level TDE with a three-tiered architecture using both symmetric and asymmetric key encryption<br />
[https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-2017]. The Service Master Key (SMK) is generated automatically during installation<br />
(analogous to ''initdb'' in PostgreSQL).<br />
The Database Master Key (DMK) is created in the ''master'' database (analogous to the default postgres database) and is<br />
encrypted by the SMK. The DMK is then used to generate the certificates that actually secure the<br />
Database Encryption Key (DEK). The DEK is the per-database symmetric key used to encrypt data and log files.<br />
<br />
<br />
== Filesystem-level encryption (fscrypt) ==<br />
<br />
https://www.kernel.org/doc/html/latest/filesystems/fscrypt.html<br />
<br />
= Links =<br />
<br />
* [https://wiki.archlinux.org/index.php/Disk_encryption#How_the_encryption_works Disk encryption]<br />
* [https://crypto.stackexchange.com/questions/44113/what-is-a-safe-maximum-message-size-limit-when-encrypting-files-to-disk-with-aes What is a safe maximum message size limit when encrypting files to disk with AES-GCM before the need to re-generate the key or NONCE]</div>Greghttps://wiki.postgresql.org/index.php?title=Transparent_Data_Encryption&diff=35613Transparent Data Encryption2021-01-22T15:07:34Z<p>Greg: /* Links */ Spacing</p>
<hr />
<div><br />
This page describes the transparent data encryption feature proposed in pgsql-hackers.<br />
<br />
= Overview =<br />
<br />
There has been continual discussion about whether and how to implement Transparent Data Encryption (TDE) in Postgres. Many other relational databases support TDE, and some security standards require it.<br />
However, it is also debatable how much security value TDE provides.<br />
<br />
Fundamentally, TDE must meet three criteria. It must be secure, obviously. It must also be done in a way that has minimal impact on the rest of the Postgres code; this has value for two reasons:<br />
first, only a small number of users will use TDE, so the less code that is added, the less testing is required, and second, the less code that is added, the less likely TDE will break because of future<br />
Postgres changes. Finally, TDE should meet regulatory requirements. <br />
<br />
== History ==<br />
<br />
The first patch was proposed in 2016 [https://www.postgresql.org/message-id/CA%2BCSw_tb3bk5i7if6inZFc3yyf%2B9HEVNTy51QFBoeUk7UE_V%3Dw%40mail.gmail.com] and implemented cluster-wide encryption with a single<br />
key. In 2018 table-level transparent data encryption was proposed [https://www.postgresql.org/message-id/031401d3f41d%245c70ed90%241552c8b0%24%40lab.ntt.co.jp], together with a method to integrate with key<br />
management systems; that first patch was submitted in 2019 [https://www.postgresql.org/message-id/CAD21AoBjrbxvaMpTApX1cEsO%3D8N%3Dnc2xVZPB0d9e-VjJ%3DYaRnw%40mail.gmail.com]. The patch implemented both<br />
tablespace-level encryption using a 2-tier key architecture and generic key management API to communicate with external key management systems.<br />
<br />
= Threat model =<br />
<br />
TDE protects data from theft if file system access controls are compromised:<br />
<br />
* Malicious user steals storage device and reads database files directly<br />
* Malicious backup operator takes backup<br />
* Protecting data at rest (persistent data)<br />
<br />
This does not protect from users who can read system memory, e.g., shared buffers, which root users can do.<br />
<br />
== Scope of TDE ==<br />
<br />
The scope of TDE is:<br />
<br />
* Internal key management system (KMS), storing keys in the database<br />
* Cluster-wide encryption<br />
** encrypt everything that is persistent<br />
** not encrypting shared buffers or data in memory<br />
<br />
The benefits of cluster-wide encryption are:<br />
<br />
* Simple architecture<br />
* Suitable for the requirement of encrypting everything<br />
<br />
Cluster-wide encryption meets the compliance requirements and checks the box as far as TDE is concerned. It also meets the criterion of encrypting data at rest, i.e., persistent data.<br />
<br />
= When to encrypt/decrypt =<br />
<br />
== Buffer ==<br />
<br />
Buffer data is encrypted during disk I/O<br />
<br />
* Processes encrypt data when writing it to disk<br />
* Decrypt when reading from disk<br />
* Data in the shared buffer is '''not''' encrypted<br />
<br />
== WAL ==<br />
<br />
In cluster encryption<br />
<br />
* Processes insert WAL data into WAL buffers in unencrypted form<br />
* WAL buffers are encrypted when writing to the file system<br />
* WAL would use a dedicated encryption key<br />
<br />
== Temporary Files ==<br />
<br />
Temporary files would use a temporary key that is randomly generated at postmaster start and lives only for the postmaster's lifetime. For parallel queries, especially parallel hash joins, since it's<br />
possible that multiple parallel workers use the same temporary files, the temporary key should be shared with the parallel workers.<br />
<br />
== Backups ==<br />
<br />
In cluster-wide encryption, there will be an option in pg_basebackup to change the heap/index key for key rotation purposes. After failover to a standby, the WAL key can be changed too.<br />
<br />
= How to encrypt =<br />
<br />
We will use Advanced Encryption Standard (AES) [https://en.wikipedia.org/wiki/Advanced_Encryption_Standard]. We will use AES-GCM [https://en.wikipedia.org/wiki/Galois/Counter_Mode] for heap/index and WAL<br />
encryption and AES-GCM with KWP [https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Algorithm-Validation-Program/documents/mac/KWVS.pdf] for key wrapping.<br />
<br />
== Key length ==<br />
<br />
We will offer three key length options (128, 192, and 256 bits), selected at initdb time with ''--file-encryption-keylen''.<br />
<br />
== Initialization Vector(IV) ==<br />
<br />
Nonce means "number used once". An IV is a specific type of nonce: it must be unique, but not necessarily random or secret, as specified by NIST<br />
[https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38a.pdf]. To generate unique IVs, NIST recommends two methods:<br />
<br />
The first method is to apply the forward cipher function, under the same key that is used for the<br />
encryption of the plaintext, to a nonce. The nonce must be a data block that is unique to each<br />
execution of the encryption operation. For example, the nonce may be a counter, as described in<br />
Appendix B, or a message number. The second method is to generate a random data block using a<br />
FIPS-approved random number generator. <br />
<br />
We will use the first method to generate IVs. That is, we select the nonce carefully and run it through the cipher with the key to make it unique enough to use as an IV. The nonce selection for buffer encryption and WAL<br />
encryption are described below.<br />
<br />
=== IV for heap/index encryption ===<br />
<br />
We will use the page LSN (8 bytes) and page number (4 bytes) to create an IV (16 bytes) for each page.<br />
<br />
Using the page LSN and page number as part of the nonce has four benefits:<br />
<br />
* We don't need to decrypt/re-encrypt during CREATE DATABASE since the page contents are the same in both places, and once one database changes its pages, it gets a new LSN, and hence a new nonce/IV.<br />
* For each change of an 8k page, you get a new nonce/IV, so you are not encrypting different data with the same nonce/IV.<br />
* This avoids requiring pg_upgrade to preserve database oids, tablespace oids, and relfilenodes.<br />
* We get a unique nonce even when two different pages in the same relation have the same LSN because the page numbers are different (we don't use the same LSN in different relations)<br />
** This can happen when a heap update expires an old tuple and adds a new tuple to another page.<br />
<br />
However, the LSN must then be visible on encrypted pages, so we will not encrypt the LSN on the page. We will also not encrypt the CRC, so pg_checksums can still check pages offline without access to the<br />
keys. We will probably need to encrypt pd_lower and pd_upper so as not to give an attacker a hint about where the hole within the page is. Therefore, we will need to encrypt from pd_flags or pd_lower onward.<br />
<br />
typedef struct PageHeaderData<br />
{<br />
/* XXX LSN is member of *any* block, not only page-organized ones */<br />
PageXLogRecPtr pd_lsn; /* LSN: next byte after last byte of xlog<br />
* record for last change to this page */<br />
uint16 pd_checksum; /* checksum */<br />
uint16 pd_flags; /* flag bits, see below */<br />
LocationIndex pd_lower; /* offset to start of free space */<br />
LocationIndex pd_upper; /* offset to end of free space */<br />
LocationIndex pd_special; /* offset to start of special space */<br />
uint16 pd_pagesize_version;<br />
TransactionId pd_prune_xid; /* oldest prunable XID, or zero if none */<br />
ItemIdData pd_linp[FLEXIBLE_ARRAY_MEMBER]; /* line pointer array */<br />
} PageHeaderData;<br />
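To make the byte offsets concrete, here is a simplified stand-in for the header above (fixed-width types replacing the PostgreSQL typedefs): pd_flags starts at offset 10 and pd_lower at offset 12, so the proposed plaintext prefix is either 10 or 12 bytes depending on which starting point is chosen.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified mirror of PageHeaderData, using fixed-width stand-ins for
 * PageXLogRecPtr (8 bytes), LocationIndex (2 bytes), and TransactionId
 * (4 bytes), just to illustrate the field offsets discussed above. */
typedef struct PageHeaderSketch
{
    uint64_t pd_lsn;              /* left unencrypted (needed for the IV)     */
    uint16_t pd_checksum;         /* left unencrypted (offline verification)  */
    uint16_t pd_flags;            /* encryption could start here (offset 10)  */
    uint16_t pd_lower;            /* ... or here (offset 12)                  */
    uint16_t pd_upper;
    uint16_t pd_special;
    uint16_t pd_pagesize_version;
    uint32_t pd_prune_xid;
} PageHeaderSketch;
```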
<br />
There are some challenges in making sure the nonce is never reused. Unlogged tables don't update the page LSN when updating the page, so we would need things like the magic GIST "GistBuildLSN" fake-LSN<br />
too, and we will need a bit used in the IV to distinguish if it's a real LSN or an unlogged LSN; perhaps we need a separate unlogged table encryption key.<br />
<br />
Another challenge is hint bit updates, which change the page contents but don't always update the page LSN. wal_log_hints can be enabled so the LSN changes on the first hint bit change during a<br />
checkpoint. However, later hint bit changes during the same checkpoint would not cause an LSN page change. This would result in using the same nonce to encrypt different page contents.<br />
<br />
One fix for this would be to create dummy WAL records and update the page LSN for every hint bit change (not just the first one since the last checkpoint), but that would write too many dummy records. A<br />
more reasonable option would be to update a per-page counter on every hint bit change for the same page LSN, and use that counter in the four unused IV bytes. However, the counter would have to be stored<br />
unencrypted on the page. If the counter is about to wraparound, we force wal_log_hint_bits to generate a new LSN, even though it is not the first hint bit change during the checkpoint. (Checksums use a similar method to allow page checksums to be recomputed multiple<br />
times during the same checkpoint, but only create a page image WAL record on first page hint bit change during a checkpoint.)<br />
<br />
=== IV for WAL encryption ===<br />
<br />
We will use the WAL segment number to create an IV for each WAL segments. The maximum bits that can be processed with a single key/IV(nonce) pair is 68GB<br />
[https://crypto.stackexchange.com/questions/44113/what-is-a-safe-maximum-message-size-limit-when-encrypting-files-to-disk-with-aes][https://crypto.stackexchange.com/questions/20333/encryption-of-big-files-in-java-with-aes-gcm/20340#20340].<br />
<br />
We will use a different IV(nonce) 16MB WAL file, so we will be OK there too. What about timelines?<br />
<br />
=== IV for temporary files ===<br />
<br />
It is unclear how to set the nonce for temporary files. We will probably use a data encryption key generated at postmaster start, and mix that with the time of day, process id, and maybe file path.<br />
<br />
= Checksum and Encryption =<br />
<br />
Encrypt and then CRC, and store the CRC decrypted<br />
<br />
== Prepared Transaction Encryption ==<br />
<br />
During the discussion, the point about prepared transaction encryption also came up since they are also persisted. Sawada-san mentioned that we aren’t storing any important data for prepared transactions<br />
so we might not need to encrypt it. However we need to have it as part of the todo list.<br />
<br />
= Other requirements =<br />
<br />
TDE requires the OpenSSL library.<br />
<br />
= TODO for Full-Cluster Encryption =<br />
<br />
Here is list of ongoing tasks with there assignment and status for cluster-wide encryption and internal key management system.<br />
<br />
* Buffer encryption - Assigned to Sawada Status : Needs to update based on the latest KMS patch.<br />
** '''Which Files do we need to encrypt?'''<br />
*** Need to go through the data directory and get a list of all the files that contain user data - Assigned to Moon-san (Moon-san will check if NTT has done some work in this area and share the result)<br />
*** Need to get buy-in from the community for not encrypting non-user data files i.e. visibility map, transactional data etc (This needs to be a team effort)<br />
** don't encrypt LSN(pd_lsn) and CRC(pd_checksum) of the encrypted page contents<br />
*** Encrypt then checksum<br />
** don't encrypt the first 12 bytes of a page so pd_flags is visible in encrypted and non-encrypted mode? (Might be useful for online checksum and encryption.)<br />
** make pd_pagesize_version visible on the encrypted page?<br />
** shared buffers nonce is LSN/page-number (nonce is run through encryption to create the IV?)<br />
** require ''wal_log_hints'' and ''full_page_writes'' to prevent force bit changes to be WAL logged (generates new page LSN)<br />
<br />
* WAL encryption - Assigned to Moon/Swada Status : Needs to update based on the latest KMS patch.<br />
** use WAL key<br />
** use CTR mode<br />
** for WAL, don't use OpenSSL's EVP interface so the offset can be specified?<br />
** WAL nonce is segment number (no timeline?)<br />
** Encrypt whole WAL records<br />
*** Need to make sure that we don't encrypt the different data with the same key and nonce, and write it to the disk, especially when encrypting data which is not multiple of 16 bytes.<br />
** add Asssert() code to check that there are no WAL record types that modify more than one relation (already written)<br />
<br />
* Temporary file encryption - Assigned to Moon Status : Work in progress, will be posted to hackers soon.<br />
** Encryption key: A hash value is randomly generated for each temporary file, and a temporary key is generated by a combination of the hash value and the master key. (will use HMAC256)<br />
** IV value: The encryption key will be used separately for each file. So that the IV is simply generated(64bit = hash value, 32bit = counter) as it should not exceed 1GB per file(pgsql_tmp).<br />
** Need to check if some new temporary files that could have user data are introduced by recent changes for PG13.<br />
<br />
* Front end tools encryption - Assigned to Cary Status : Pending due to the shift of focus on KMS for PG13. Some front-end patches have been shared with community that illustrates the interactions with<br />
KMS. Development can resume once the focus is shifted back to TDE.<br />
** Allow ''pg_rewind'' and ''pg_waldump'' to work, add ''--cluster-passphrase-command'' option<br />
** does ''pg_rewind'' need to work across WAL key changes?<br />
** offline tool to allow changing the data encryption key of current WAL and PITR WAL files, must be crash-safe<br />
** changing the pass phrase will require ''--old-passphrase-command'' and --new-passphrase-command'' options<br />
** modify pg_basebackup to store relations with a different relation key so standby servers can be used for relation key rotation<br />
<br />
Here are some tasks or areas that we need to research, some of these are being worked as part of main to do's listed above. This is a exhaustive list to ensure that we don't skip any todo required for the<br />
first phase of TDE. <br />
<br />
* TDE for replication<br />
** wal sender (especially xlogreader) needs to take TDE-wal key to decrypt wal data.<br />
** Physical replication<br />
*** WAL records are sent in a form of plaintext?<br />
*** Allow primary and the standby servers to use different heap/index keys<br />
** Logical replication (and decoding)<br />
*** We decrypt and decode WAL data and send these changes in a form of plaintext.<br />
*** The subscriber will use different encryption keys or even can disable TDE.<br />
* TDE for backup<br />
** During physical backup (pg_basebackup or copying OS files), table/index data are transferred in a form of encrypted text and all three internal keys are replicated<br />
** '''Maybe we need to have the ability to change some internal keys during basebackup?''' This can be done by generating new internal keys, re-encrypting database files with the new key during the<br />
transfer (we already do checksum verification for every page), generate new control file having new internal keys and sending it.<br />
** For logical backup (pg_dump), all data are dumped in a form of plaintext because pg_dump simply fetches data via SQL.<br />
** How does TDE work with backup manifests?<br />
* Regression test cases for TDE<br />
* Documentation<br />
** if a standby is promoted to a primary and the old primary continues writing, one must be re-keyed to avoid using the same IV<br />
* Comprehensive testing<br />
<br />
= List of the contains of user data for PostgreSQL files =<br />
If there are any added files, incorrect note or user data, please correct it.<br />
{| cellspacing="0" border="1"<br />
!Num<br />
!Database cluster<br />
!Contains of user data<br />
!Single Sequential Write<br />
!Single Process Write-then-Read<br />
!note<br />
|-<br />
|1<br />
|PG_VERSION<br />
|not contain<br />
|<br />
|<br />
|Only PostgreSQL version information is written<br />
|-<br />
|2<br />
|'''base/NNNNN/NNNNN'''<br />
|'''contain'''<br />
|<br />
|<br />
|Table data<br />
|-<br />
|3<br />
|base/NNNNN/NNNNN_vm<br />
|not contain<br />
|<br />
|<br />
|VM file<br />
|-<br />
|4<br />
|base/NNNNN/NNNNN_fsm<br />
|not contain<br />
|<br />
|<br />
|FSM file<br />
|-<br />
|5<br />
|base/NNNNN/NNNNN_init<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|6<br />
|base/NNNNN/PG_VERSION<br />
|not contain<br />
|<br />
|<br />
|Only PostgreSQL version information is written<br />
|-<br />
|7<br />
|base/NNNNN/pg_filenode.map<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|8<br />
|base/NNNNN/pg_internal.init<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|9<br />
|'''base/pgsql_tmp/pgsql_tmpPID.tempFileCounter'''<br />
|'''contain'''<br />
|<br />
|<br />
|Temporary file holding user data spilled to disk when work_mem is insufficient<br />
|-<br />
|10<br />
|current_logfiles<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|11<br />
|'''global/NNNN'''<br />
|'''contain'''<br />
|<br />
|<br />
|Database name and user name<br />
|-<br />
|12<br />
|global/NNNN_vm<br />
|not contain<br />
|<br />
|<br />
|VM file<br />
|-<br />
|13<br />
|global/NNNN_fsm<br />
|not contain<br />
|<br />
|<br />
|FSM file<br />
|-<br />
|14<br />
|global/pg_control<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|15<br />
|global/pg_filenode.map<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|16<br />
|global/pg_internal.init<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|17<br />
|pg_commit_ts/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|18<br />
|pg_dynshmem/mmap.NNNNNNN<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|19<br />
|pg_logical/mappings/<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|20<br />
|pg_logical/replorigin_checkpoint<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|21<br />
|pg_logical/snapshots/0-XXXXXXXX.snap<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|22<br />
|pg_multixact/members/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|23<br />
|pg_multixact/offsets/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|24<br />
|pg_notify/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|25<br />
|pg_replslot/Slotname/state<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|26<br />
|'''pg_replslot/Slotname/xid-NNN-lsn-0-NNNNNNNN.snap'''<br />
|'''contain'''<br />
|<br />
|<br />
|Includes user data decoded from WAL files<br />
|-<br />
|27<br />
|pg_serial/<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|28<br />
|pg_snapshots/NNNNNNNN-N<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|29<br />
|'''pg_stat/db_NNNNN.stat'''<br />
|'''contain'''<br />
|<br />
|<br />
|Statistics collector includes user data<br />
|-<br />
|30<br />
|'''pg_stat/global.stat'''<br />
|'''contain'''<br />
|<br />
|<br />
|Statistics collector includes user data<br />
|-<br />
|31<br />
|'''pg_stat_tmp/db_NNNNN.stat'''<br />
|'''contain'''<br />
|<br />
|<br />
|Statistics collector includes user data<br />
|-<br />
|32<br />
|'''pg_stat_tmp/global.stat'''<br />
|'''contain'''<br />
|<br />
|<br />
|Statistics collector includes user data<br />
|-<br />
|33<br />
|pg_subtrans/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|34<br />
|'''pg_tblspc/PG_NN_NNNNNNNN/NNNNN'''<br />
|'''contain'''<br />
|<br />
|<br />
|Table data stored in tablespaces (pg_tblspc itself contains symlinks to the tablespace directories)<br />
|-<br />
|35<br />
|pg_twophase/NNNNNNNN<br />
|not contain<br />
|<br />
|<br />
|Excludes user data<br />
|-<br />
|36<br />
|'''pg_wal/NNNNNNNNNNNNNNNNNN'''<br />
|'''contain'''<br />
|<br />
|<br />
|WAL data<br />
|-<br />
|37<br />
|pg_wal/*.backup<br />
|not contain<br />
|<br />
|<br />
|Excludes user data<br />
|-<br />
|38<br />
|pg_wal/*.history<br />
|not contain<br />
|<br />
|<br />
|Excludes user data<br />
|-<br />
|39<br />
|'''pg_wal/*.partial'''<br />
|'''contain'''<br />
|<br />
|<br />
|WAL data<br />
|-<br />
|40<br />
|pg_wal/archive_status/NNNNNNNNNNNNNNNNNN.done<br />
|not contain<br />
|<br />
|<br />
|Excludes user data<br />
|-<br />
|41<br />
|pg_wal/archive_status/NNNNNNNNNNNNNNNNNN.ready<br />
|not contain<br />
|<br />
|<br />
|Excludes user data<br />
|-<br />
|42<br />
|pg_xact/0000<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|43<br />
|postgresql.auto.conf<br />
|not contain<br />
|<br />
|<br />
|Setting file<br />
|-<br />
|44<br />
|postmaster.opts<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|45<br />
|postmaster.pid<br />
|not contain<br />
|<br />
|<br />
|<br />
|-<br />
|46<br />
|postgresql.conf<br />
|not contain<br />
|<br />
|<br />
|Setting file<br />
|-<br />
|47<br />
|pg_hba.conf<br />
|not contain<br />
|<br />
|<br />
|<br />
|}<br />
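The table above amounts to a pattern match on paths relative to PGDATA. The sketch below transcribes a few of the "contains" rows into simplified regular expressions — the patterns and helper name are illustrative only, not an exhaustive or authoritative classification.<br />

```python
import re

# Simplified patterns transcribed from the table above; illustrative only.
USER_DATA_PATTERNS = [
    r"^base/\d+/\d+$",                     # table/index data (row 2)
    r"^base/pgsql_tmp/pgsql_tmp\d+",       # temp files spilled from work_mem (row 9)
    r"^global/\d+$",                       # shared catalogs: db/user names (row 11)
    r"^pg_replslot/[^/]+/.*\.snap$",       # decoded WAL snapshots (row 26)
    r"^pg_stat(_tmp)?/.*\.stat$",          # statistics files (rows 29-32)
    r"^pg_tblspc/PG_[^/]+/\d+$",           # table data in tablespaces (row 34)
    r"^pg_wal/[0-9A-F]{24}(\.partial)?$",  # WAL segments (rows 36, 39)
]

def may_contain_user_data(relpath: str) -> bool:
    """Return True if a PGDATA-relative path matches a 'contains' row."""
    return any(re.match(p, relpath) for p in USER_DATA_PATTERNS)

assert may_contain_user_data("base/16384/16385")
assert not may_contain_user_data("base/16384/16385_vm")     # VM fork: no user data
assert may_contain_user_data("pg_wal/000000010000000000000001")
assert not may_contain_user_data("pg_wal/00000002.history")  # history: no user data
assert not may_contain_user_data("pg_hba.conf")
```

A check like this could help verify that a TDE implementation encrypts every file class the table marks as containing user data.<br />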
<br />
= TDE in other systems =<br />
<br />
== MySQL (InnoDB) ==<br />
<br />
MySQL supports per-tablespace, data-at-rest encryption [https://dev.mysql.com/doc/refman/8.0/en/innodb-tablespace-encryption.html#innodb-tablespace-encryption-about]. Please note that in MySQL a<br />
tablespace refers to a data file that can hold data for one or more InnoDB tables and associated indexes, while in PostgreSQL a tablespace refers to a directory. The innodb_file_per_table option allows<br />
tables to be created in their own tablespaces. As of MySQL 8.0.16 it supports redo log and undo log<br />
encryption [https://dev.mysql.com/doc/refman/8.0/en/innodb-tablespace-encryption.html#innodb-tablespace-encryption-redo-log] and system tables<br />
encryption [https://dev.mysql.com/doc/refman/8.0/en/innodb-tablespace-encryption.html#innodb-schema-tablespace-encryption-default]. It uses a two-tier key architecture: there is a tablespace key for each<br />
tablespace, stored in the header of the tablespace file. The master key can be obtained from external systems via a keyring plugin [https://dev.mysql.com/doc/refman/8.0/en/keyring.html].<br />
<br />
MySQL encrypts each page of both the redo log and the undo log with dedicated keys, not with the keys used for table encryption. The encryption key is stored, in encrypted form, in the header of the<br />
first redo/undo log file.<br />
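The two-tier scheme described above — a master key that wraps per-tablespace keys stored in file headers — is an instance of envelope encryption. The sketch below is a generic illustration of that structure, not MySQL's actual on-disk format: the `wrap` function is a hash-based stand-in (real systems use AES key wrap or similar), and all names are invented.<br />

```python
import hashlib
import secrets

def wrap(kek: bytes, key: bytes) -> bytes:
    # Stand-in key wrap (XOR with a hash-derived pad); real systems use
    # AES key wrap (RFC 3394) or similar. This only models the structure.
    pad = hashlib.sha256(b"wrap" + kek).digest()[: len(key)]
    return bytes(a ^ b for a, b in zip(key, pad))

unwrap = wrap  # XOR with a fixed pad is its own inverse

master_key = secrets.token_bytes(32)      # held in an external keyring
tablespace_key = secrets.token_bytes(32)  # generated per tablespace

# Stored in the tablespace file header: the *wrapped* key, never the raw key.
header_blob = wrap(master_key, tablespace_key)

# Rotating the master key re-wraps only this small header blob; the data
# pages (encrypted under tablespace_key) do not need to be rewritten.
new_master = secrets.token_bytes(32)
header_blob = wrap(new_master, unwrap(master_key, header_blob))

assert unwrap(new_master, header_blob) == tablespace_key
```

The cheap master-key rotation shown at the end is the main operational benefit of the two-tier design, and is relevant to the PostgreSQL TDE key-rotation discussion above.<br />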
<br />
== Oracle DB ==<br />
<br />
Oracle DB supports column-level and tablespace-level TDE, both approaches use a two-tiered key-based architecture<br />
[https://docs.oracle.com/database/121/ASOAG/introduction-to-transparent-data-encryption.htm#ASOAG10272].<br />
The Master Encryption Key (MEK) is stored in an external key store with both hardware and software key stores supported<br />
[https://docs.oracle.com/database/121/ASOAG/introduction-to-transparent-data-encryption.htm#ASOAG10273]. The MEK is used to secure the column- and tablespace-level keys. Column-level TDE uses one key per<br />
table, tablespace-level TDE uses one key per tablespace.<br />
Oracle TDE supports Triple-DES (3DES168) and AES (128, 192, 256 bit). Column-level TDE defaults to AES-192, tablespace-level TDE defaults to AES-128. Both methods add a salt to the plaintext before<br />
encryption by default<br />
[https://docs.oracle.com/database/121/ASOAG/introduction-to-transparent-data-encryption.htm#ASOAG9578].<br />
Column-level TDE supports a NOMAC parameter to improve performance.<br />
<br />
== MS SQL Server ==<br />
<br />
MS SQL Server supports database-level TDE with a three-tiered architecture using both symmetric and asymmetric key encryption<br />
[https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-2017]. The Service Master Key (SMK) is generated automatically during installation<br />
(comparable to `initdb` in PostgreSQL).<br />
The Database Master Key (DMK) is created in the `master` database (comparable to the default postgres database) and is<br />
encrypted by the SMK. The DMK is then used to generate the certificates that actually secure the<br />
Database Encryption Key (DEK). The DEK is the per-database symmetric key used to encrypt data and log files.<br />
<br />
<br />
== Filesystem-level encryption (fscrypt) ==<br />
<br />
https://www.kernel.org/doc/html/latest/filesystems/fscrypt.html<br />
<br />
= Links =<br />
<br />
* [https://wiki.archlinux.org/index.php/Disk_encryption#How_the_encryption_works Disk encryption]<br />
* [https://crypto.stackexchange.com/questions/44113/what-is-a-safe-maximum-meage-size-limit-when-encrypting-files-to-disk-with-aes What is a safe maximum message size limit when encrypting files to disk with AES-GCM before the need to re-generate the key or NONCE]</div>Greghttps://wiki.postgresql.org/index.php?title=IRC2RWNames&diff=29165IRC2RWNames2017-01-07T00:53:23Z<p>Greg: Add saper / Marcin Cieślak</p>
<hr />
<div>=== List of IRC nicks with their respective real world names ===<br />
<br />
You can find many PostgreSQL users and developers chatting in [irc://irc.freenode.net/postgresql #postgresql on freenode]. Here's more information about some of the regulars there. '''Note:''' people are on the list below only when they want to be. Do not (re-)add anyone without their express permission.<br />
<br />
{| border="1"<br />
|-<br />
!Nickname || Real Name<br />
|-<br />
|ads || Andreas Scherbaum<br />
|-<br />
|agliodbs, aglio2 (freenode), jberkus (oftc) || Josh Berkus<br />
|-<br />
|ahammond || Andrew Hammond<br />
|-<br />
|akretschmer || Andreas Kretschmer<br />
|-<br />
|alvherre || Alvaro Herrera<br />
|-<br />
|andres || Andres Freund<br />
|-<br />
|Assid || Satish Alwani<br />
|-<br />
|aurynn || Aurynn Shaw<br />
|-<br />
|BlueAidan/BlueAidan_work || [[user:davidblewett | David Blewett]]<br />
|-<br />
|bmomjian || Bruce Momjian<br />
|-<br />
|cbbrowne || Christopher Browne<br />
|-<br />
|cce || Clark C. Evans<br />
|-<br />
|chicagoben || Benjamin Johnson<br />
|-<br />
|crab || Abhijit Menon-Sen<br />
|-<br />
|Crad || Gavin M. Roy<br />
|- <br />
|daamien || Damien Clochard<br />
|-<br />
|DarcyB || Darcy Buskermolen<br />
|-<br />
|darkixion || Thom Brown<br />
|-<br />
|davidfetter || David Fetter<br />
|-<br />
|dbb || Brian M Hamlin / darkblue_b<br />
|-<br />
|dcolish || [http://www.unencrypted.org Dan Colish]<br />
|-<br />
|dcramer || Dave Cramer<br />
|-<br />
|DeciBull, TheCougar || Jim C. Nasby<br />
|-<br />
|dennisb || Dennis Bj&ouml;rklund<br />
|-<br />
|depesz || Hubert Lubaczewski<br />
|-<br />
|devrimgunduz || Devrim G&uuml;nd&uuml;z<br />
|-<br />
|digicon || [http://digicondev.blogspot.com Zach Conrad]<br />
|-<br />
|digitalknight || Atri Sharma<br />
|-<br />
|dim || [http://tapoueh.org Dimitri Fontaine]<br />
|-<br />
|direvus || Brendan Jurd<br />
|-<br />
|drbair || Ryan Bair<br />
|-<br />
|DrLou || Lou Picciano<br />
|-<br />
|duck_tape || Adi Alurkar<br />
|-<br />
|dvl || [http://langille.org/ Dan Langille]<br />
|-<br />
|eggyknap || Joshua Tolley<br />
|-<br />
|endpoint_david || David Christensen<br />
|-<br />
|eulerto || Euler Taveira<br />
|-<br />
|f3ew/devdas || Devdas Vasu Bhagat<br />
|-<br />
|feivel || Michael Meskes<br />
|-<br />
|frost242 || Thomas Reiss<br />
|-<br />
|elein || Elein Mustain<br />
|-<br />
|Gibheer || Stefan Radomski<br />
|-<br />
|gleu || Guillaume Lelarge<br />
|-<br />
|gorthx || [[User:Gabrielle|Gabrielle Roth]]<br />
|-<br />
|grzm || Michael Glaesemann<br />
|-<br />
|gsmet || Guillaume Smet<br />
|-<br />
|gregs1104 || Greg Smith<br />
|-<br />
|gurjeet || [[User:singh.gurjeet|Gurjeet Singh]]<br />
|-<br />
|G_SabinoMullane || Greg Sabino Mullane<br />
|-<br />
|HarrisonF || Harrison Fisk<br />
|-<br />
|ioguix || Jehan-Guillaume de Rorthais<br />
|-<br />
|indigo || Phil Frost<br />
|-<br />
|intgr || Marti Raudsepp<br />
|-<br />
|JanniCash || Jan Wieck<br />
|-<br />
|jconway || Joe Conway<br />
|-<br />
|jdavis, jdavis_ || Jeff Davis<br />
|-<br />
|jkatz05 || Jonathan S. Katz<br />
|-<br />
|johto || Marko Tiikkaja<br />
|-<br />
|jurka || Kris Jurka<br />
|-<br />
|justatheory || David Wheeler<br />
|-<br />
|jpa || Jean-Paul Argudo<br />
|-<br />
|jwp || James Pye<br />
|-<br />
|j_williams || Josh Williams<br />
|-<br />
|keithf4 || [http://www.keithf4.com Keith Fiske]<br />
|-<br />
|kgrittn || Kevin Grittner<br />
|-<br />
|klando || [[User:c2main|Cédric Villemain]]<br />
|-<br />
|larryrtx || Larry Rosenman<br />
|-<br />
|linuxpoet, postgresman || Joshua D. Drake<br />
|-<br />
|lluad || Steve Atkins<br />
|-<br />
|lsmith || Lukas Smith<br />
|-<br />
|macdice || Thomas Munro<br />
|-<br />
|mage_ || Julien Cigar<br />
|-<br />
|magnush || Magnus Hagander<br />
|-<br />
|maiku41 || Mike Blackwell<br />
|-<br />
|marco44 || Marc Cousin<br />
|-<br />
|markwkm || Mark Wong<br />
|-<br />
|mastermind || [[user:mastermind | Stefan Kaltenbrunner]]<br />
|-<br />
|mbalmer || [[user:mbalmer | Marc Balmer]]<br />
|-<br />
|merlin83 || Chua Khee Chin<br />
|-<br />
|merlinm || Merlin Moncure<br />
|-<br />
|metatrontech || Chris Travers<br />
|-<br />
|miracee || Susanne Ebrecht<br />
|-<br />
|Moosbert || Peter Eisentraut<br />
|-<br />
|Myon || [[user:Myon | Christoph Berg]]<br />
|-<br />
|neilc || Neil Conway<br />
|-<br />
|oicu || Andrew Dunstan<br />
|-<br />
|okbobcz || Pavel Stehule<br />
|-<br />
|patryk || Patryk Kordylewski<br />
|-<br />
|pasha_golub || [http://pgolub.wordpress.com/ Pavel Golub]<br />
|-<br />
|pg_docbot || [[IRCBotSyntax]]<br />
|-<br />
|pgSnake || Dave Page<br />
|-<br />
|PJMODOS || Petr Jel&iacute;nek<br />
|-<br />
|Possible || Robert Ivens<br />
|-<br />
|postwait || Theo Schlossnagle<br />
|-<br />
|prothid || R Brenton Strickler<br />
|-<br />
|psoo || Bernd Helmle<br />
|-<br />
|PSUdaemon || Phil Sorber<br />
|-<br />
|pyarra || Philip Yarra<br />
|-<br />
|raptelan || [[user:Cshobe|Casey Allen Shobe]]<br />
|-<br />
|RhodiumToad (formerly AndrewSN) || Andrew Gierth<br />
|-<br />
|rjuju || Julien Rouhaud<br />
|-<br />
|Robe || [[user:Robe | Michael Renner]]<br />
|-<br />
|rotellaro || Federico Campoli<br />
|-<br />
|russ960 || [[user:russ960|Russ Johnson]]<br />
|-<br />
|rz || Kirill Simonov<br />
|-<br />
|saper || Marcin Cieślak<br />
|-<br />
|SAS || Stéphane Schildknecht<br />
|-<br />
|schmiddy || Josh Kupershmidt<br />
|-<br />
|scrappy || Marc G. Fournier<br />
|-<br />
|sehrope || Sehrope Sarkuni<br />
|-<br />
|selenamarie || Selena Deckelmann<br />
|-<br />
|SkippyDigits || Sherri Kalm<br />
|-<br />
|Snow-Man || Stephen Frost<br />
|-<br />
|Spritz || Matteo Beccati<br />
|-<br />
|sternocera || Peter Geoghegan<br />
|-<br />
|StuckMojo, MojoWork || Jon Erdman<br />
|-<br />
|swm || Gavin Sherry<br />
|-<br />
|vy || Volkan YAZICI<br />
|-<br />
|wulczer || Jan Urbański<br />
|-<br />
|xaprb || Baron Schwartz<br />
|-<br />
|xocolatl || Vik Fearing<br />
|-<br />
|xzilla, xzi11a || [[User:Xzilla|Robert Treat]]<br />
|}<br />
<br />
[[Category:Community]]</div>Greghttps://wiki.postgresql.org/index.php?title=Sample_Databases&diff=28256Sample Databases2016-09-25T00:21:58Z<p>Greg: Remove dead link</p>
<hr />
<div>Many database systems provide sample databases with the product. A good intro to popular ones that includes discussion of samples available for other databases is [http://www.barik.net/archive/2006/03/28/195425/ Sample Databases for PostgreSQL and More]<br />
<br />
One trivial sample that PostgreSQL ships with is [[Pgbench]]. This has the advantage of being built-in and supporting a scalable data generator: you can make databases of any size ranging from 16MB to 600GB (approximately) with the current version.<br />
<br />
== PgFoundry Samples ==<br />
<br />
The latest collection of PostgreSQL compatible database samples is at [http://pgfoundry.org/projects/dbsamples/ PgFoundry Sample Databases]. It includes three commonly used benchmark databases:<br />
<br />
* World: Based on the [http://dev.mysql.com/doc/world-setup/en/ MySQL World] sample. Has a list of Cities, Countries, and what language they speak.<br />
* dellstore2: PostgreSQL port of a database-neutral e-commerce test application [http://linux.dell.com/dvdstore/ developed by Dell]. The original code supports three size scales in its data generator (10MB, 1GB, 100GB); currently only the normal (smallest) data set has been ported to PostgreSQL. [http://www.storytotell.org/blog/2009/08/12/postgresql-84-windowing-functions.html PostgreSQL 8.4: Windowing Functions] uses this test data to show some advanced queries.<br />
* Pagila: Based on MySQL's replacement for World, [http://dev.mysql.com/doc/sakila/en/ Sakila], which is itself inspired by the Dell DVD Store.<br />
<br />
There are some other sample databases there as well, such as a USDA Food database and a large list of country data via ISO-3166 standards.<br />
<br />
== Other Samples ==<br />
<br />
* [https://theodi.org/blog/the-status-of-csvs-on-datagovuk The land registry file] from http://data.gov.uk has details of land sales in the UK, going back several decades, and is 3.5GB as of August 2016 (this applies only to the "complete" file, "pp-complete.csv"). No registration required.<br />
<code><pre><br />
-- Download file "pp-complete.csv", which has all records.<br />
-- If schema changes/field added, consult: https://www.gov.uk/guidance/about-the-price-paid-data<br />
<br />
-- Create table:<br />
CREATE TABLE land_registry_price_paid_uk(<br />
transaction uuid,<br />
price numeric,<br />
transfer_date date,<br />
postcode text,<br />
property_type char(1),<br />
newly_built boolean,<br />
duration char(1),<br />
paon text,<br />
saon text,<br />
street text,<br />
locality text,<br />
city text,<br />
district text,<br />
county text,<br />
ppd_category_type char(1),<br />
record_status char(1));<br />
<br />
-- Copy CSV data, with appropriate munging:<br />
COPY land_registry_price_paid_uk FROM '/path/to/pp-complete.csv' with (format csv, encoding 'win1252', header false, null '', quote '"', force_null (postcode, saon, paon, street, locality, city, district));<br />
</pre></code><br />
* [https://github.com/lorint/AdventureWorks-for-Postgres AdventureWorks 2014 for Postgres] - Scripts to set up the OLTP part of the go-to database used in training classes and for sample apps on the Microsoft stack. The result is 68 tables containing HR, sales, product, and purchasing data organized across 5 schemas. It represents a fictitious bicycle parts wholesaler with a hierarchy of nearly 300 employees, 500 products, 20000 customers, and 31000 sales each having an average of 4 line items. So it's big enough to be interesting, but not unwieldy. In addition to being a well-rounded OLTP sample, it is also a good choice to demonstrate ETL into a data warehouse.<br />
* [ftp://ftp.informatics.jax.org/pub/database_backups/ Mouse Genome sample data set]. See [http://www.informatics.jax.org/software.shtml instructions]. Custom format dump, 1.3GB compressed, but the restored database is tens of GB in size. MGI is the international database resource for the laboratory mouse, providing integrated genetic, genomic, and biological data to facilitate the study of human health and disease. MGI uses PostgreSQL in production [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3245042/], providing direct protocol access to researchers, so the custom format dump is not an afterthought. Apparently updated frequently.<br />
* Benchmarking databases such as [[DBT-2]] or [[TPC-H]] can be used as samples.<br />
* [http://www.freebase.com/docs/data_dumps Freebase] - Various wiki style data on places/people/things - ~600MB compressed<br />
* [http://www.imdb.com/interfaces#plain IMDB] - the IMDB database - see also http://code.google.com/p/imbi/<br />
* [http://www.data.gov/ Data.gov] - US federal government data collection; see also [http://www.sunlightlabs.com/ sunlightlabs]<br />
* [http://wiki.dbpedia.org/Downloads DBpedia] - wikipedia data export project<br />
* [http://www.eoddata.com/ eoddata] - historic stock market data (requires registration - licence?)<br />
* [http://www.transtats.bts.gov/Tables.asp?DB_ID=120&DB_Name=Airline%20On-Time%20Performance%20Data&DB_Short_Name=On-Time RITA] - Airline On-Time Performance Data<br />
* [http://wiki.openstreetmap.org/wiki/Planet.osm Openstreetmap] - Openstreetmap source data<br />
* [ftp://ftp.ncbi.nih.gov/gene/DATA/ NCBI] - biological annotation from NCBI's ENTREZ system (daily updated)<br />
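Before running the land-registry COPY shown earlier, it can be worth checking that your download parses into the 16 columns the CREATE TABLE expects, since COPY aborts on a column-count mismatch. A minimal sketch using Python's csv module — the sample row below is entirely invented, not real land-registry data:<br />

```python
import csv
import io

# Hypothetical sample row shaped like pp-complete.csv (all values invented).
sample = ('"{1A2B3C4D-0000-0000-0000-000000000000}","250000",'
          '"2016-03-01 00:00","SW1A 1AA","T","N","F","10","",'
          '"DOWNING STREET","","LONDON","CITY OF WESTMINSTER",'
          '"GREATER LONDON","A","A"')

# Column list copied from the CREATE TABLE statement above.
columns = ["transaction", "price", "transfer_date", "postcode", "property_type",
           "newly_built", "duration", "paon", "saon", "street", "locality",
           "city", "district", "county", "ppd_category_type", "record_status"]

row = next(csv.reader(io.StringIO(sample)))
assert len(row) == len(columns) == 16  # COPY fails on a column-count mismatch
record = dict(zip(columns, row))
assert record["city"] == "LONDON"
assert record["saon"] == ""            # empty fields become NULL via force_null
```

For the real file, iterating `csv.reader(open("pp-complete.csv", encoding="cp1252"))` and checking `len(row)` on each line would catch a schema change before COPY does.<br />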
<br />
[[Category:Howto]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=24152Bucardo2015-01-27T22:12:57Z<p>Greg: Version is 5.3.1</p>
<hr />
<div>{{clusteringProject<br />
|overview='''Bucardo''' is a replication system for Postgres that supports any number of sources and targets (aka masters and slaves). It is asynchronous and trigger based. <br />
|status=Production<br />
|statusdetail=Version 5.3.1<br />
|contact=*[https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general mailing list]<br />
|url=*[http://bucardo.org Bucardo Web site and wiki]<br />
|scalability=Master-slave: high with cascading slaves. Multi-master: two or more<br />
|readscaling=yes<br />
|writescaling=yes (with multimaster); no/slight inverse (master/slave only)<br />
|procedures=Yes<br />
|parallel=No<br />
|failover=Not automatic<br />
|online=No<br />
|upgrades=Yes<br />
|detach=Yes<br />
|coremod=No<br />
|languages=Perl, Pl/PgSQL, Pl/PerlU<br />
|license=BSD<br />
|complete=Yes<br />
|version=8.1 to 9.4<br />
|summary=Asynchronous cascading master-slave replication, row-based, using triggers and queueing in the database ''AND'' Asynchronous master-master replication, row-based, using triggers and customized conflict resolution<br />
|description=<br />
General model: Asynchronous cascading master-slave and/or master-master. Row-based, uses triggers and LISTEN/NOTIFY.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication uses two or more databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one or more sources going to one or more targets. The source must be PostgreSQL, but the targets can be PostgreSQL, MySQL, Redis, Oracle, MariaDB, SQLite, or MongoDB.<br />
<br />
|usecase=<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease of things like cache invalidation.<br />
* Partial replication<br />
* Replication on demand (changes can be pushed automatically or when desired)<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
* Slaves can be "pre-warmed" for quick setup<br />
|drawbacks=<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
|sponsors=<br />
|support=Commercial support is available from [http://endpoint.com End Point Corporation]. Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
}}<br />
<br />
==Other Information==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=24017Bucardo2014-12-22T15:18:02Z<p>Greg: Version 5.3.0 of Bucardo</p>
<hr />
<div>{{clusteringProject<br />
|overview='''Bucardo''' is a replication system for Postgres that supports any number of sources and targets (aka masters and slaves). It is asynchronous and trigger based. <br />
|status=Production<br />
|statusdetail=Version 5.3.0<br />
|contact=*[https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general mailing list]<br />
|url=*[http://bucardo.org Bucardo Web site and wiki]<br />
|scalability=Master-slave: high with cascading slaves. Multi-master: two or more<br />
|readscaling=yes<br />
|writescaling=yes (with multimaster); no/slight inverse (master/slave only)<br />
|procedures=Yes<br />
|parallel=No<br />
|failover=Not automatic<br />
|online=No<br />
|upgrades=Yes<br />
|detach=Yes<br />
|coremod=No<br />
|languages=Perl, Pl/PgSQL, Pl/PerlU<br />
|license=BSD<br />
|complete=Yes<br />
|version=8.1 to 9.4<br />
|summary=Asynchronous cascading master-slave replication, row-based, using triggers and queueing in the database ''AND'' Asynchronous master-master replication, row-based, using triggers and customized conflict resolution<br />
|description=<br />
General model: Asynchronous cascading master-slave and/or master-master. Row-based, uses triggers and LISTEN/NOTIFY.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication uses two or more databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one or more sources going to one or more targets. The source must be PostgreSQL, but the targets can be PostgreSQL, MySQL, Redis, Oracle, MariaDB, SQLite, or MongoDB.<br />
<br />
|usecase=<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease of things like cache invalidation.<br />
* Partial replication<br />
* Replication on demand (changes can be pushed automatically or when desired)<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
* Slaves can be "pre-warmed" for quick setup<br />
|drawbacks=<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
|sponsors=<br />
|support=Commercial support is available from [http://endpoint.com End Point Corporation]. Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
}}<br />
<br />
==Other Information==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]</div>Greghttps://wiki.postgresql.org/index.php?title=DTrace&diff=23463DTrace2014-10-17T20:55:35Z<p>Greg: Replace dead link with Wikipedia one</p>
<hr />
<div>DTrace is a technology for tracing arbitrary points in program execution. Originally developed for Solaris, it has since become available in one form or another on Mac OS and FreeBSD. PostgreSQL has included basic DTrace support since version 8.2, with newer versions (8.4 in particular) expanding the number of probe points available in the database.<br />
<br />
Introduction to PostgreSQL and DTrace:<br />
* [http://www.postgresql.org/docs/current/static/dynamic-trace.html Dynamic Tracing] - official manual section on DTrace probes available<br />
* [http://pgfoundry.org/docman/?group_id=1000163 PostgreSQL DTrace Users Guide]<br />
* [http://lethargy.org/~jesus/writes/postgresql-performance-through-the-eyes-of-dtrace PostgreSQL performance through the eyes of DTrace] and [http://lethargy.org/~jesus/writes/postgresql-looking-under-the-hood-with-solaris Looking under the hood with Solaris]. The pg_file_stress utility there is being migrated to [http://labs.omniti.com/trac/pgtreats Tasty Treats for PostgreSQL].<br />
<br />
General DTrace information:<br />
* [https://en.wikipedia.org/wiki/DTrace DTrace at Wikipedia]<br />
<br />
Example PostgreSQL DTrace scripts:<br />
* [http://przemol.blogspot.com/2007/06/dtrace-postgresql-io-tuning.html DTrace & Postgresql - io tuning]<br />
* [http://blog.whatever-company.com/index.php/2009/07/some-quick-numbers-about-ssd-for-postgresql/ Some quick numbers about SDD for PostgreSQL]<br />
<br />
== SystemTap & Linux ==<br />
<br />
It's also possible to use the PostgreSQL DTrace probes on some recent Linux systems through the [http://gnu.wildebeest.org/diary/2009/02/24/systemtap-09-markers-everywhere/ Systemtap user space markers] feature:<br />
* [http://blog.endpoint.com/2009/05/postgresql-with-systemtap.html PostgreSQL with SystemTap]<br />
* [http://www.pgcon.org/2010/schedule/events/220.en.html Probing PostgreSQL with DTrace and SystemTap]<br />
<br />
<br />
[[Category:Operating system]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=23457Bucardo2014-10-16T15:58:39Z<p>Greg: Version 5.1.2 actually</p>
<hr />
<div>{{clusteringProject<br />
|overview='''Bucardo''' is a replication system for Postgres that supports any number of sources and targets (aka masters and slaves). It is asynchronous and trigger based. <br />
|status=Production<br />
|statusdetail=Version 5.1.2<br />
|contact=*[https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general mailing list]<br />
|url=*[http://bucardo.org Bucardo Web site and wiki]<br />
|scalability=Master-slave: high with cascading slaves. Multi-master: two or more<br />
|readscaling=yes<br />
|writescaling=yes (with multimaster); no/slight inverse (master/slave only)<br />
|procedures=Yes<br />
|parallel=No<br />
|failover=Not automatic<br />
|online=No<br />
|upgrades=Yes<br />
|detach=Yes<br />
|coremod=No<br />
|languages=Perl, Pl/PgSQL, Pl/PerlU<br />
|license=BSD<br />
|complete=Yes<br />
|version=8.1 to 9.4<br />
|summary=Asynchronous cascading master-slave replication, row-based, using triggers and queueing in the database ''AND'' Asynchronous master-master replication, row-based, using triggers and customized conflict resolution<br />
|description=<br />
General model: Asynchronous cascading master-slave and/or master-master. Row-based, uses triggers and LISTEN/NOTIFY.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication uses two or more databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one or more sources going to one or more targets. The source must be PostgreSQL, but the targets can be PostgreSQL, MySQL, Redis, Oracle, MariaDB, SQLite, or MongoDB.<br />
<br />
|usecase=<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease of things like cache invalidation.<br />
* Partial replication<br />
* Replication on demand (changes can be pushed automatically or when desired)<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
* Slaves can be "pre-warmed" for quick setup<br />
|drawbacks=<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
|sponsors=<br />
|support=Commercial support is available from [http://endpoint.com End Point Corporation]. Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
}}<br />
<br />
==Other Information==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=23456Bucardo2014-10-16T15:56:37Z<p>Greg: Update some version information</p>
<hr />
<div>{{clusteringProject<br />
|overview='''Bucardo''' is a replication system for Postgres that supports any number of sources and targets (aka masters and slaves). It is asynchronous and trigger based. <br />
|status=Production<br />
|statusdetail=Version 5.1.1<br />
|contact=*[https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general mailing list]<br />
|url=*[http://bucardo.org Bucardo Web site and wiki]<br />
|scalability=Master-slave: high with cascading slaves. Multi-master: two or more<br />
|readscaling=yes<br />
|writescaling=yes (with multimaster); no/slight inverse (master/slave only)<br />
|procedures=Yes<br />
|parallel=No<br />
|failover=Not automatic<br />
|online=No<br />
|upgrades=Yes<br />
|detach=Yes<br />
|coremod=No<br />
|languages=Perl, Pl/PgSQL, Pl/PerlU<br />
|license=BSD<br />
|complete=Yes<br />
|version=8.1 to 9.4<br />
|summary=Asynchronous cascading master-slave replication, row-based, using triggers and queueing in the database ''AND'' Asynchronous master-master replication, row-based, using triggers and customized conflict resolution<br />
|description=<br />
General model: Asynchronous cascading master-slave and/or master-master. Row-based, uses triggers and LISTEN/NOTIFY.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication uses two or more databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one or more sources going to one or more targets. The source must be PostgreSQL, but the targets can be PostgreSQL, MySQL, Redis, Oracle, MariaDB, SQLite, or MongoDB.<br />
<br />
|usecase=<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease of things like cache invalidation.<br />
* Partial replication<br />
* Replication on demand (changes can be pushed automatically or when desired)<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
* Slaves can be "pre-warmed" for quick setup<br />
|drawbacks=<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
|sponsors=<br />
|support=Commercial support is available from [http://endpoint.com End Point Corporation]. Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
}}<br />
<br />
==Other Information==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]</div>Greghttps://wiki.postgresql.org/index.php?title=What%27s_new_in_PostgreSQL_9.3&diff=23455What's new in PostgreSQL 9.32014-10-16T15:54:58Z<p>Greg: /* COPY FREEZE for more efficient bulk loading */ Remove dead link</p>
<hr />
<div>This page contains an overview of PostgreSQL Version 9.3's features, including descriptions, testing and usage information, and links to blog posts containing further information. See also the [http://www.postgresql.org/docs/9.3/static/release-9-3.html Release Notes] and [[PostgreSQL 9.3 Open Items]].<br />
<br />
== Configuration directive 'include_dir' ==<br />
<br />
In addition to including individual configuration files via the 'include' directive, postgresql.conf now also provides the 'include_dir' directive, which reads all files ending in ".conf" in the specified directory.<br />
<br />
Directories can be specified either as an absolute path or as a path relative to the location of the main configuration file. Directories are read in the order in which they occur, while the files within each directory are read sorted by C locale rules. It is possible for included files to contain their own 'include_dir' directives. <br />
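As an illustration (the directory and file names here are invented), per-topic settings could be split out like this:<br />
<br />
 # in postgresql.conf<br />
 include_dir = 'conf.d'<br />
<br />
 # conf.d/01-memory.conf<br />
 shared_buffers = '1GB'<br />
 work_mem = '32MB'<br />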
<br />
'''Links'''<br />
<br />
* [http://www.postgresql.org/docs/9.3/static/config-setting.html#CONFIG-INCLUDES Documentation]<br />
<br />
== COPY FREEZE for more efficient bulk loading ==<br />
<br />
To improve initial bulk loading of tables, a ''FREEZE'' parameter has been added to the COPY command to enable data to be copied with rows already frozen. See the documentation for usage and caveats.<br />
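As a sketch (table and file names are illustrative), note that ''FREEZE'' requires the table to have been created or truncated in the same transaction as the COPY:<br />
<br />
 BEGIN;<br />
 TRUNCATE measurements;<br />
 COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv, FREEZE);<br />
 COMMIT;<br />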
<br />
'''Links'''<br />
* [http://www.postgresql.org/docs/9.3/static/sql-copy.html Documentation] - see the ''FREEZE'' parameter<br />
<br />
== Custom Background Workers ==<br />
<br />
This functionality enables modules to register themselves as "background worker processes", effectively operating as customised server processes. This is a powerful new feature with a wide variety of possible use cases, such as monitoring server activity, performing tasks at pre-defined intervals, customised logging etc.<br />
<br />
Background worker processes can attach to PostgreSQL's shared memory area and connect to databases internally; by linking to libpq they can also connect to the server in the same way as a regular client application. Background worker processes are written in C, and as server processes they have unrestricted access to all data and can potentially impact other server processes, meaning they represent a potential security / stability risk. Consequently, background worker processes should be developed and deployed with appropriate caution.<br />
<br />
Providing an example would go beyond the scope of this article; please refer to the blogs linked below, which provide annotated sample code. The PostgreSQL source also contains a sample background worker process in contrib/worker_spi.<br />
<br />
'''Links'''<br />
<br />
* [http://www.postgresql.org/docs/9.3/static/bgworker.html Documentation]<br />
* [http://www.depesz.com/2012/12/07/waiting-for-9-3-background-worker-processes/ Background worker processes] <br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-handling-signals-with-custom-bgworkers/ Postgres 9.3 feature highlight: handling signals with custom bgworkers] <br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-custom-background-workers/ Custom background workers] <br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-hello-world-with-custom-bgworkers/ "Hello World" with custom bgworkers]<br />
* [http://sql-info.de/postgresql/notes/custom-background-worker-bgw-practical-example.html Custom Background Workers - a practical example]<br />
<br />
== Data Checksums ==<br />
<br />
It is now possible for PostgreSQL to checksum data pages and report corruption. This is a cluster-wide setting and cannot be applied to individual databases or objects. Also be aware that this facility may incur a noticeable performance penalty. This option must be enabled during initdb and cannot be changed (although there is a new GUC parameter "[http://www.postgresql.org/docs/9.3/static/runtime-config-developer.html#GUC-IGNORE-CHECKSUM-FAILURE ignore_checksum_failure]" which will force PostgreSQL to continue processing a transaction even if corruption is detected). <br />
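Since the setting can only be chosen at cluster creation time, a new cluster with checksums enabled would be initialised along these lines (the data directory path is illustrative):<br />
<br />
 initdb --data-checksums -D /path/to/data<br />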
<br />
'''Links'''<br />
<br />
* Documentation<br />
** [http://www.postgresql.org/docs/9.3/static/app-initdb.html#APP-INITDB-DATA-CHECKSUMS initdb -k/--data-checksums]<br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-data-checksums/ Postgres 9.3 feature highlight: Data Checksums]<br />
<br />
== JSON: Additional functionality ==<br />
<br />
The [http://www.postgresql.org/docs/9.2/static/functions-json.html JSON datatype] and [http://www.postgresql.org/docs/9.2/static/functions-json.html two supporting functions] for converting rows and arrays were introduced in PostgreSQL 9.2. With PostgreSQL 9.3, dedicated JSON operators have been introduced and the number of functions expanded to 12, including JSON parsing support. The JSON parser has also been exposed as an API for use by other modules, such as extensions.<br />
<br />
Additionally, the [http://www.postgresql.org/docs/9.3/static/hstore.html hstore] extension has gained two JSON-related functions, ''hstore_to_json(hstore)'' and ''hstore_to_json_loose(hstore)''. The former is used when an hstore value is cast to json.<br />
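A small sketch of some of the new operators and functions (the JSON values are invented):<br />
<br />
 -- extract a field as json, then extract a nested field as text<br />
 SELECT '{"a": {"b": 42}}'::json -> 'a';<br />
 SELECT '{"a": {"b": 42}}'::json #>> '{a,b}';<br />
<br />
 -- expand an object into key/value pairs<br />
 SELECT * FROM json_each_text('{"x": "1", "y": "2"}');<br />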
<br />
'''Links'''<br />
<br />
* Documentation<br />
** [http://www.postgresql.org/docs/9.3/static/datatype-json.html Documentation: JSON Datatype]<br />
** [http://www.postgresql.org/docs/9.3/static/functions-json.html Documentation: JSON Functions and Operators]<br />
* [http://www.depesz.com/2013/03/11/waiting-for-9-3-json-generation-improvements/ Waiting for 9.3 – JSON generation improvements] <br />
* [http://www.depesz.com/2013/03/30/waiting-for-9-3-add-new-json-processing-functions-and-parser-api/ Waiting for 9.3 – Add new JSON processing functions and parser API]<br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-json-data-generation/ Postgres 9.3 feature highlight: JSON data generation] <br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-json-operators/ Postgres 9.3 feature highlight: JSON operators] <br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-json-parsing-functions/ Postgres 9.3 feature highlight: JSON parsing functions]<br />
<br />
== LATERAL JOIN ==<br />
<br />
Put simply, a <tt>LATERAL JOIN</tt> enables a subquery in the <tt>FROM</tt> clause to reference columns from preceding items in the <tt>FROM</tt> list.<br />
<br />
The following is a self-contained (if quite pointless) example of the kind of clause it is sometimes useful to be able to write:<br />
<br />
SELECT base.nr,<br />
multiples.multiple<br />
FROM (SELECT generate_series(1,10) AS nr) base<br />
JOIN (SELECT generate_series(1,10) AS b_nr, base.nr * 2 AS multiple) multiples<br />
ON multiples.b_nr = base.nr<br />
<br />
but which produces an error message like the following:<br />
<br />
<pre> LINE 4: JOIN (SELECT generate_series(1,10) AS b_nr, base.nr * 2 A...<br />
^<br />
HINT: There is an entry for table "base", but it cannot be referenced from this part of the query.</pre><br />
<br />
Using <tt>LATERAL JOIN</tt>, it's now possible for the second subquery to reference a value from the first:<br />
<br />
SELECT base.nr,<br />
multiples.multiple<br />
FROM (SELECT generate_series(1,10) AS nr) base,<br />
LATERAL (<br />
SELECT multiples.multiple FROM<br />
( SELECT generate_series(1,10) AS b_nr, base.nr * 2 AS multiple ) multiples<br />
WHERE multiples.b_nr = base.nr<br />
) multiples;<br />
<br />
Note that function calls can now directly reference columns from preceding <tt>FROM</tt> items, even without the <tt>LATERAL</tt> keyword. Example:<br />
<br />
CREATE FUNCTION multiply(INT, INT)<br />
RETURNS INT<br />
LANGUAGE SQL<br />
AS<br />
$$<br />
SELECT $1 * $2;<br />
$$<br />
<br />
Query with function call in the <tt>FROM</tt> list:<br />
<br />
SELECT base.nr,<br />
multiple<br />
FROM (SELECT generate_series(1,10) AS nr) base,<br />
multiply(base.nr, 2) AS multiple<br />
<br />
In previous versions, this query would generate an error like this:<br />
<br />
ERROR: function expression in FROM cannot refer to other relations of same query level<br />
LINE 4: multiply(base.nr, 2) AS multiple<br />
<br />
See the articles linked below for some more realistic examples.<br />
<br />
'''Links'''<br />
<br />
* [http://www.postgresql.org/docs/9.3/static/sql-select.html Documentation: SELECT] ''(see section <tt>LATERAL</tt>)''<br />
* [http://www.depesz.com/2012/08/19/waiting-for-9-3-implement-sql-standard-lateral-subqueries/ Waiting for 9.3: Implement SQL standard lateral subqueries]<br />
* [http://www.postgresonline.com/journal/archives/284-PostgreSQL-9.3-Lateral-Part-1-Use-with-HStore.html PostgreSQL 9.3 Lateral Part 1: Use with HStore] <br />
* [http://www.postgresonline.com/journal/archives/285-PostgreSQL-9.3-Lateral-Part2-The-Lateral-Left-Join.html PostgreSQL 9.3 Lateral Part 2: The Lateral Left Join]<br />
<br />
== Parallel pg_dump for faster backups ==<br />
<br />
The new ''-j njobs'' (''--jobs=njobs'') option enables pg_dump to dump '''njobs''' tables simultaneously, reducing the time it takes to dump a database. Example:<br />
<br />
pg_dump -U postgres -j4 -Fd -f /tmp/mydb-dump mydb<br />
<br />
This dumps the contents of database "mydb" to the directory "/tmp/mydb-dump" using four simultaneous connections.<br />
<br />
Caveats:<br />
* Parallel dumps can only be in directory format<br />
* Parallel dumps will place more load on the database, although total dump time should be shorter<br />
* pg_dump will open njobs + 1 connections to the database, so max_connections should be set appropriately<br />
* Requesting exclusive locks on database objects while running a parallel dump could cause the dump to fail<br />
* Parallel dumps from pre-9.2 servers need special attention<br />
<br />
An ad-hoc test of this feature on a 4.5GB database (which compresses to around 370MB as a dump) with different values of ''-j'' produced the following timings:<br />
<br />
* (''no -j''): 1m3s<br />
* -j2: 0m28s<br />
* -j3: 0m24s<br />
* -j4: 0m24s<br />
* -j5: 0m25s<br />
<br />
'''Links'''<br />
<br />
* [http://www.postgresql.org/docs/9.3/static/app-pgdump.html pg_dump documentation]<br />
* [http://www.depesz.com/2013/03/26/2646/ Waiting for 9.3 – Add parallel pg_dump option]<br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/ Postgres 9.3 feature highlight: parallel pg_dump]<br />
<br />
== 'pg_isready' server monitoring tool ==<br />
<br />
pg_isready is a wrapper for PQping created as a standard client application. It accepts a libpq-style connection string and returns one of four exit statuses:<br />
<br />
* 0: server is accepting connections normally<br />
* 1: server is rejecting connections (for example during startup)<br />
* 2: server did not respond to the connection attempt<br />
* 3: no connection attempt was made (e.g. due to invalid connection parameters)<br />
<br />
Example usage:<br />
<br />
barwick@localhost:~$ pg_isready<br />
/tmp:5432 - accepting connections<br />
barwick@localhost:~$ pg_isready --quiet && echo "OK"<br />
OK<br />
barwick@localhost:~$ pg_isready -p5431 -h localhost<br />
localhost:5431 - accepting connections<br />
barwick@localhost:~$ pg_isready -h example.com<br />
example.com:5432 - no response<br />
<br />
'''Links'''<br />
<br />
* [http://www.postgresql.org/docs/9.3/static/app-pg-isready.html Documentation]<br />
* [http://www.depesz.com/2013/01/26/waiting-for-9-3-pg_isready/ pg_isready]<br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-server-monitoring-with-pg_isready/ Server monitoring with pg_isready]<br />
<br />
== Switch to Posix shared memory and mmap() ==<br />
<br />
In 9.3, PostgreSQL has switched from using SysV shared memory to using Posix shared memory and mmap for memory management. This allows easier installation and configuration of PostgreSQL, and means that except in unusual cases, system parameters such as SHMMAX and SHMALL no longer need to be adjusted. We need users to rigorously test and ensure that no memory management issues have been introduced by the change. <br />
<br />
'''Links'''<br />
<br />
* [http://www.postgresql.org/docs/9.3/static/kernel-resources.html#SYSVIPC Documentation]<br />
<br />
== Trigger Features ==<br />
=== Event Triggers ===<br />
<br />
Triggers can now be defined on DDL events (CREATE, ALTER, DROP).<br />
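As a minimal sketch (the function and trigger names are invented for illustration), an event trigger that reports the tag of each DDL command might look like:<br />
<br />
 CREATE FUNCTION log_ddl() RETURNS event_trigger<br />
 LANGUAGE plpgsql AS $$<br />
 BEGIN<br />
   RAISE NOTICE 'caught DDL command: %', tg_tag;<br />
 END;<br />
 $$;<br />
<br />
 CREATE EVENT TRIGGER log_ddl_trigger ON ddl_command_start<br />
   EXECUTE PROCEDURE log_ddl();<br />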
<br />
'''Links'''<br />
* Documentation:<br />
** [http://www.postgresql.org/docs/9.3/interactive/sql-createeventtrigger.html CREATE EVENT TRIGGER]<br />
** [http://www.postgresql.org/docs/9.3/interactive/event-trigger-matrix.html Event Trigger Firing Matrix]<br />
** [http://www.postgresql.org/docs/9.3/interactive/plpgsql-trigger.html#PLPGSQL-EVENT-TRIGGER Triggers on events]<br />
<br />
* [http://www.depesz.com/2012/07/29/waiting-for-9-3-event-triggers/ Waiting for 9.3 – Event triggers]<br />
<br />
== VIEW Features ==<br />
=== Materialized Views ===<br />
<br />
Materialized views are a special kind of view which cache the view's output as a physical table, rather than executing the underlying query on every access. Conceptually they are similar to "CREATE TABLE AS", but store the view definition so it can be easily refreshed.<br />
<br />
Note that materialized views cannot be auto-refreshed; refreshes are not incremental; and the materialized view itself cannot be directly modified. They will however be automatically populated by pg_restore (more precisely, pg_dump includes a "REFRESH MATERIALIZED VIEW" statement).<br />
<br />
'''Contrived example'''<br />
<br />
Create and populate a table with some arbitrary data:<br />
<br />
CREATE TABLE matview_test_table (<br />
id SERIAL PRIMARY KEY,<br />
ts TIMESTAMPTZ NOT NULL<br />
)<br />
<br />
INSERT INTO matview_test_table VALUES (<br />
DEFAULT,<br />
((NOW() - '2 days'::INTERVAL) + (generate_series(1,1000) || ' seconds')::INTERVAL)::TIMESTAMPTZ<br />
)<br />
<br />
Create a materialized view which lists the 5 most recent entries:<br />
<br />
CREATE MATERIALIZED VIEW matview_test_view AS<br />
SELECT id, ts<br />
FROM matview_test_table<br />
ORDER BY id DESC <br />
LIMIT 5<br />
<br />
postgres=# SELECT * from matview_test_view ;<br />
id | ts <br />
------+-------------------------------<br />
1000 | 2013-05-06 12:02:10.974711+09<br />
999 | 2013-05-06 12:02:09.974711+09<br />
998 | 2013-05-06 12:02:08.974711+09<br />
997 | 2013-05-06 12:02:07.974711+09<br />
996 | 2013-05-06 12:02:06.974711+09<br />
(5 rows)<br />
<br />
Add more data to the table:<br />
<br />
INSERT INTO matview_test_table VALUES (<br />
DEFAULT,<br />
((NOW() - '1 days'::INTERVAL) + (generate_series(1,1000) || ' seconds')::INTERVAL)::TIMESTAMPTZ<br />
)<br />
<br />
View output does not change:<br />
<br />
postgres=# SELECT * from matview_test_view ;<br />
id | ts <br />
------+-------------------------------<br />
1000 | 2013-05-06 12:02:10.974711+09<br />
999 | 2013-05-06 12:02:09.974711+09<br />
998 | 2013-05-06 12:02:08.974711+09<br />
997 | 2013-05-06 12:02:07.974711+09<br />
996 | 2013-05-06 12:02:06.974711+09<br />
(5 rows)<br />
<br />
Refresh the view to display the latest table entries:<br />
<br />
postgres=# REFRESH MATERIALIZED VIEW matview_test_view ;<br />
REFRESH MATERIALIZED VIEW<br />
postgres=# SELECT * from matview_test_view ;<br />
id | ts <br />
------+-------------------------------<br />
2001 | 2013-05-07 12:03:10.696626+09<br />
2000 | 2013-05-07 12:03:09.696626+09<br />
1999 | 2013-05-07 12:03:08.696626+09<br />
1998 | 2013-05-07 12:03:07.696626+09<br />
1997 | 2013-05-07 12:03:06.696626+09<br />
(5 rows)<br />
<br />
The links below contain more detailed information and examples.<br />
<br />
'''Links'''<br />
* Documentation:<br />
** [http://www.postgresql.org/docs/9.3/static/rules-materializedviews.html Overview]<br />
** [http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html CREATE command]<br />
* [http://www.depesz.com/2013/03/04/waiting-for-9-3-add-a-materialized-view-relations/ Waiting for 9.3 – Add a materialized view relations]<br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-materialized-views/ Postgres 9.3 feature highlight: Materialized views]<br />
<br />
=== Recursive View Syntax ===<br />
<br />
The CREATE RECURSIVE VIEW syntax provides a shorthand way of formulating a recursive common table expression (CTE) as a view.<br />
<br />
Taking the example from the [http://www.postgresql.org/docs/current/static/queries-with.html#QUERIES-WITH-SELECT CTE documentation]:<br />
<br />
WITH RECURSIVE t(n) AS (<br />
VALUES (1)<br />
UNION ALL<br />
SELECT n+1 FROM t WHERE n < 100<br />
)<br />
SELECT * FROM t;<br />
<br />
This can be created as a recursive view as follows:<br />
<br />
CREATE RECURSIVE VIEW t(n) AS<br />
VALUES (1)<br />
UNION ALL<br />
SELECT n+1 FROM t WHERE n < 100;<br />
<br />
'''Links'''<br />
* [http://www.postgresql.org/docs/9.3/static/sql-createview.html Documentation]<br />
* [http://www.depesz.com/2013/03/04/waiting-for-9-3-add-create-recursive-view-syntax/ Waiting for 9.3 – Add CREATE RECURSIVE VIEW syntax]<br />
<br />
=== Updatable Views ===<br />
<br />
Simple views can now be updated in the same way as regular tables. The view can only reference one table (or another updatable view) and must not contain more complex constructs such as joins, aggregates or set operations. <br />
<br />
If the view has a WHERE condition, UPDATEs and DELETEs on the underlying table will be restricted to those rows it defines. However, UPDATEs may change a row so that it is no longer visible in the view, and an INSERT command can potentially insert rows which do not satisfy the WHERE condition.<br />
<br />
More complex views can be made updatable as before using INSTEAD OF triggers or INSTEAD rules.<br />
<br />
Simple example using the following table and view:<br />
<code> <br />
CREATE TABLE postgres_versions (<br />
version VARCHAR(3) PRIMARY KEY,<br />
nickname TEXT NOT NULL<br />
);<br />
<br />
INSERT INTO postgres_versions VALUES<br />
('8.0', 'Excitable Element'),<br />
('8.1', 'Fishy Foreign Key'),<br />
('8.2', 'Grumpy Grant'),<br />
('8.3', 'Hysterical Hstore'),<br />
('8.4', 'Insane Index'),<br />
('9.0', 'Jumpy Join'),<br />
('9.1', 'Killer Key'),<br />
('9.2', 'Laconical Lexer'),<br />
('9.3', 'Morose Module');<br />
<br />
CREATE VIEW postgres_versions_9 AS<br />
SELECT version, nickname<br />
FROM postgres_versions<br />
WHERE version LIKE '9.%';<br />
</code><br />
<br />
<code> <br />
postgres=# SELECT * from postgres_versions_9;<br />
version | nickname <br />
---------+-----------------<br />
9.0 | Jumpy Join<br />
9.1 | Killer Key<br />
9.2 | Laconical Lexer<br />
9.3 | Morose Module<br />
(4 rows)<br />
<br />
postgres=# UPDATE postgres_versions_9 SET nickname='Maniac Master' WHERE version='9.3';<br />
UPDATE 1<br />
postgres=# SELECT * from postgres_versions_9;<br />
version | nickname <br />
---------+-----------------<br />
9.0 | Jumpy Join<br />
9.1 | Killer Key<br />
9.2 | Laconical Lexer<br />
9.3 | Maniac Master<br />
(4 rows)<br />
</code><br />
<br />
'''Links'''<br />
* [http://www.postgresql.org/docs/9.3/static/sql-createview.html#SQL-CREATEVIEW-UPDATABLE-VIEWS Documentation]<br />
* [http://www.depesz.com/2012/12/11/waiting-for-9-3-support-automatically-updatable-views/ Waiting for 9.3 – Support automatically-updatable views]<br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-auto-updatable-views/ Postgres 9.3 feature highlight: auto-updatable views]<br />
<br />
== Writeable Foreign Tables ==<br />
<br />
"Foreign Data Wrappers" (FDW) were introduced in PostgreSQL 9.1, providing a way of accessing external data sources from within PostgreSQL using SQL. The original implementation was read-only, but 9.3 will enable write access as well, provided the individual FDW drivers have been updated to support this. At the time of writing, only the PostgreSQL driver has write support.<br />
<br />
See [[#postgres_fdw|below]] for more information on the PostgreSQL driver and a simple example.<br />
<br />
'''Links'''<br />
<br />
* [http://www.postgresql.org/docs/9.3/static/sql-createserver.html CREATE SERVER]<br />
* [http://www.postgresql.org/docs/9.3/static/sql-createforeigndatawrapper.html CREATE FOREIGN DATA WRAPPER]<br />
* [http://www.postgresql.org/docs/9.3/static/fdwhandler.html Documentation: Writing A Foreign Data Wrapper]<br />
* [http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-writable-foreign-tables/ Postgres 9.3 feature highlight: writable foreign tables]<br />
* [http://www.depesz.com/2013/03/17/waiting-for-9-3-support-writable-foreign-tables/ Waiting for 9.3 – Support writable foreign tables] <br />
<br />
=== postgres_fdw ===<br />
<br />
A new contrib module, postgres_fdw, provides the eponymous foreign data wrapper for read/write access to remote PostgreSQL servers (or to another database on the local server).<br />
<br />
A simple usage example (connecting to a different database on the same server for ease of testing).<br />
<br />
1. Build the postgres_fdw contrib module<br />
<br />
cd contrib/postgres_fdw<br />
make install<br />
<br />
2. Install the module as an extension<br />
<br />
postgres=# CREATE EXTENSION postgres_fdw;<br />
CREATE EXTENSION<br />
<br />
3. Create a test "remote" database<br />
<br />
postgres=# CREATE DATABASE fdw_test;<br />
CREATE DATABASE<br />
postgres=# \c fdw_test<br />
You are now connected to database "fdw_test" as user "barwick".<br />
fdw_test=# CREATE TABLE world (greeting TEXT);<br />
CREATE TABLE<br />
<br />
4. Create the server, user and table mapping so that the local PostgreSQL server knows about the remote database:<br />
<br />
postgres=# CREATE SERVER postgres_fdw_test FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'localhost', dbname 'fdw_test');<br />
CREATE SERVER<br />
postgres=# CREATE USER MAPPING FOR PUBLIC SERVER postgres_fdw_test OPTIONS (password <nowiki>''</nowiki>);<br />
CREATE USER MAPPING<br />
postgres=# CREATE FOREIGN TABLE other_world (greeting TEXT) SERVER postgres_fdw_test OPTIONS (table_name 'world');<br />
CREATE FOREIGN TABLE<br />
postgres=# \det<br />
List of foreign tables<br />
Schema | Table | Server <br />
--------+-------------+-------------------<br />
public | other_world | postgres_fdw_test<br />
(1 row)<br />
<br />
5. Manipulate the remote table as if it were a local one:<br />
<br />
postgres=# INSERT INTO other_world VALUES('Take me to your leader');<br />
INSERT 0 1 <br />
postgres=# \c fdw_test<br />
You are now connected to database "fdw_test" as user "barwick".<br />
fdw_test=# SELECT * FROM world;<br />
        greeting        <br />
------------------------<br />
Take me to your leader<br />
(1 row)<br />
<br />
Here's another example, where we link to the "account" and "branches" tables on a remote pgbench database:<br />
<br />
create extension postgres_fdw;<br />
 create server remotesrv foreign data wrapper postgres_fdw options ( host '192.168.1.5', port '5433', dbname 'bench');<br />
 create user mapping for current_user server remotesrv options ( user 'postgres', password 'password' );<br />
create foreign table remoteacct (aid int, bid int, abalance int, filler char(84)) <br />
server remotesrv options ( table_name 'pgbench_accounts' );<br />
create foreign table remotebranch ( bid int, bbalance int, filler char(88) ) <br />
server remotesrv options ( table_name 'pgbench_branches');<br />
<br />
Having set this up, we can query the remote server:<br />
<br />
explain select * from remotebranch join remoteacct using ( bid ) where bid = 5;<br />
QUERY PLAN<br />
----------------------------------------------------------------------------<br />
Nested Loop (cost=200.00..225.40 rows=1 width=712)<br />
-> Foreign Scan on remotebranch (cost=100.00..112.66 rows=1 width=364)<br />
-> Foreign Scan on remoteacct (cost=100.00..112.73 rows=1 width=352)<br />
<br />
Notice a couple of things: first, JOIN push-down to the remote server isn't implemented yet (wait for 9.4!). Second, we're not getting real estimates for the remote tables. This is fixable by telling Postgres to query the remote DB for EXPLAIN information:<br />
<br />
alter foreign table remotebranch options (add use_remote_estimate 'true');<br />
alter foreign table remoteacct options (add use_remote_estimate 'true');<br />
bench=# explain select * from remotebranch join remoteacct using ( bid ) where bid = 5;<br />
QUERY PLAN<br />
------------------------------------------------------------------------------<br />
Nested Loop (cost=200.42..7648.07 rows=99400 width=712)<br />
-> Foreign Scan on remotebranch (cost=100.00..101.14 rows=1 width=364)<br />
-> Foreign Scan on remoteacct (cost=100.42..6552.93 rows=99400 width=97)<br />
<br />
'''Links'''<br />
* [http://www.postgresql.org/docs/9.3/static/postgres-fdw.html Documentation]<br />
<br />
== Replication Improvements ==<br />
<br />
PostgreSQL's built-in binary replication has been improved in four ways: streaming-only remastering, fast failover, architecture-independent streaming, and pg_basebackup configuration setup.<br />
<br />
=== Streaming-Only Remastering ===<br />
<br />
"Remastering" is the process whereby a replica in a set of replicas becomes the new master for all of the other replicas. For example:<br />
<br />
# Master M1 is replicating to replicas R1, R2 and R3.<br />
# Master M1 needs to be taken down for a hardware upgrade.<br />
# The DBA promotes R1 to be the master. <br />
# R2 and R3 are reconfigured & restarted, and now replicate from R1<br />
<br />
That's remastering in a nutshell. It's even more useful in combination with cascading replication (introduced in 9.2).<br />
<br />
In prior versions of PostgreSQL, remastering required using WAL file archiving. Cascading replicas could not switch masters using streaming alone; they would have to be re-cloned. That restriction has now been lifted, allowing remastering from just the stream. This makes it much easier to set up large replication clusters; administrators no longer have to set up an online WAL archive if they don't need one for disaster recovery.<br />
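On the remaining replicas, remastering then amounts to pointing recovery.conf at the promoted replica and restarting (host name and credentials are illustrative):<br />
<br />
 # recovery.conf on R2 and R3 after R1 has been promoted<br />
 standby_mode = 'on'<br />
 primary_conninfo = 'host=r1.example.com port=5432 user=replication'<br />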
<br />
Incidentally, this also makes it possible to set up "cycles" where replication is going in a circle. Whether that's a feature or a bug depends on your perspective.<br />
<br />
Links:<br />
* [http://www.databasesoup.com/2013/01/cascading-replication-and-cycles.html Cascading Replication and Cycles]<br />
<br />
=== Fast Failover ===<br />
<br />
Allows replicas to be promoted in less than a second, permitting 99.999% uptime. More details TBD.<br />
<br />
=== Architecture-Independent Streaming ===<br />
<br />
Allows streaming of base backups (using pg_basebackup) and log archiving (using pg_receivexlog) between different OSes and hardware architectures. (Note that you still need the same architecture to restore the backups, but this is useful for example with centralized backup servers)<br />
<br />
=== pg_basebackup conf setup ===<br />
<br />
If you use the -R switch, pg_basebackup will create a simple (streaming-only) recovery.conf file in the newly cloned data directory. This means that you can immediately start the new database server without doing additional editing.<br />
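For example (host name and paths are illustrative), cloning a standby that can be started immediately:<br />
<br />
 pg_basebackup -h master.example.com -U replication -D /path/to/standby -R -X stream<br />
 pg_ctl -D /path/to/standby start<br />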
<br />
=Backward compatibility=<br />
<br />
These changes may incur regressions in your applications.<br />
<br />
== CREATE TABLE output ==<br />
<br />
CREATE TABLE will no longer output messages about implicit index and sequence creation unless the log level is set to DEBUG1.<br />
<br />
== Server settings ==<br />
<br />
* Parameter 'commit_delay' is restricted to superusers only<br />
* Parameter 'replication_timeout' has been renamed to 'wal_sender_timeout'<br />
* Parameter 'unix_socket_directory' has been replaced by 'unix_socket_directories'<br />
* In-memory sorts now use their full memory allocation; if work_mem was set on the basis of the pre-9.3 behavior, its value may need to be reviewed.<br />
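For example, a 9.2-era configuration fragment would be renamed as follows when migrating to 9.3 (values are illustrative):<br />
<br />
 # 9.2 postgresql.conf<br />
 # replication_timeout = 60s<br />
 # unix_socket_directory = '/var/run/postgresql'<br />
<br />
 # 9.3 equivalents<br />
 wal_sender_timeout = 60s<br />
 unix_socket_directories = '/var/run/postgresql, /tmp'<br />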
<br />
== WAL filenames may end in FF ==<br />
<br />
WAL files will now be written in a continuous stream, rather than skipping the last 16MB segment every 4GB, meaning WAL filenames may end in FF. WAL backup or restore scripts may need to be adapted.<br />
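The naming change can be sketched as follows (a hypothetical helper, not part of any PostgreSQL tool; segment names are 24 hex digits: timeline, log id, and segment, 8 digits each):<br />

```python
# Sketch: how WAL segment file names advance. Before 9.3 the 0xFF segment
# of each 4GB "log file" was skipped; from 9.3 on it is written, so file
# names may now end in "FF".

def next_wal_segment(name, pre_93=False):
    tli, log, seg = (int(name[i:i + 8], 16) for i in (0, 8, 16))
    last = 0xFE if pre_93 else 0xFF  # pre-9.3 skipped the FF segment
    if seg >= last:
        log, seg = log + 1, 0
    else:
        seg += 1
    return "%08X%08X%08X" % (tli, log, seg)

print(next_wal_segment("0000000100000000000000FE"))  # 0000000100000000000000FF
```

Backup scripts that compute an "expected next segment" this way must drop any special-casing of the FF boundary when moving to 9.3.<br />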
<br />
<br />
[[Category:What's_new_in_this_release]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=22763Bucardo2014-06-24T14:21:15Z<p>Greg: Update for Bucardo 5</p>
<hr />
<div>{{clusteringProject<br />
|overview='''Bucardo''' is a replication system for Postgres that supports any number of sources and targets (aka masters and slaves). It is asynchronous and trigger based. <br />
|status=Production<br />
|statusdetail=Version 5.0.0<br />
|contact=*[https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general mailing list]<br />
|url=*[http://bucardo.org Bucardo Web site and wiki]<br />
|scalability=Master-slave: high with cascading slaves. Multi-master: two or more<br />
|readscaling=yes<br />
|writescaling=yes (with multimaster); no/slight inverse (master/slave only)<br />
|procedures=Yes<br />
|parallel=No<br />
|failover=Not automatic<br />
|online=No<br />
|upgrades=Yes<br />
|detach=Yes<br />
|coremod=No<br />
|languages=Perl, Pl/PgSQL, Pl/PerlU<br />
|license=BSD<br />
|complete=Yes<br />
|version=8.1 to 9.3<br />
|summary=Asynchronous cascading master-slave replication, row-based, using triggers and queueing in the database ''AND'' Asynchronous master-master replication, row-based, using triggers and customized conflict resolution<br />
|description=<br />
General model: Asynchronous cascading master-slave and/or master-master. Row-based, uses triggers and LISTEN/NOTIFY.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication uses two or more databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one or more sources going to one or more targets. The source must be PostgreSQL, but the targets can be PostgreSQL, MySQL, Redis, Oracle, MariaDB, SQLite, or MongoDB.<br />
<br />
|usecase=<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow data to be changed on the fly during replication, easing tasks such as cache invalidation.<br />
* Partial replication<br />
* Replication on demand (changes can be pushed automatically or when desired)<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
* Slaves can be "pre-warmed" for quick setup<br />
|drawbacks=<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
|sponsors=<br />
|support=Commercial support is available from [http://endpoint.com End Point Corporation]. Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
}}<br />
<br />
==Other Information==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]</div>Greghttps://wiki.postgresql.org/index.php?title=DTrace&diff=21817DTrace2014-02-08T19:49:23Z<p>Greg: All the blogs.sun.com links are dead</p>
<hr />
<div>DTrace is a technology for tracing arbitrary points in program execution. Originally developed for Solaris, it has since become available in one form or another on Mac OS and FreeBSD. PostgreSQL has included basic DTrace support since version 8.2, with newer versions (8.4 in particular) expanding the number of probe points available in the database.<br />
<br />
Introduction to PostgreSQL and DTrace:<br />
* [http://www.postgresql.org/docs/current/static/dynamic-trace.html Dynamic Tracing] - official manual section on DTrace probes available<br />
* [http://pgfoundry.org/docman/?group_id=1000163 PostgreSQL DTrace Users Guide]<br />
* [http://lethargy.org/~jesus/writes/postgresql-performance-through-the-eyes-of-dtrace PostgreSQL performance through the eyes of DTrace] and [http://lethargy.org/~jesus/writes/postgresql-looking-under-the-hood-with-solaris Looking under the hood with Solaris]. The pg_file_stress utility there is being migrated to [http://labs.omniti.com/trac/pgtreats Tasty Treats for PostgreSQL].<br />
<br />
General DTrace information:<br />
* [http://ph7spot.com/musings/getting-started-with-dtrace Getting Started with DTrace]<br />
<br />
Example PostgreSQL DTrace scripts:<br />
* [http://przemol.blogspot.com/2007/06/dtrace-postgresql-io-tuning.html DTrace & Postgresql - io tuning]<br />
* [http://blog.whatever-company.com/index.php/2009/07/some-quick-numbers-about-ssd-for-postgresql/ Some quick numbers about SSD for PostgreSQL]<br />
<br />
== SystemTap & Linux ==<br />
<br />
It's also possible to use the PostgreSQL DTrace probes on some recent Linux systems through the [http://gnu.wildebeest.org/diary/2009/02/24/systemtap-09-markers-everywhere/ Systemtap user space markers] feature:<br />
* [http://blog.endpoint.com/2009/05/postgresql-with-systemtap.html PostgreSQL with SystemTap]<br />
* [http://www.pgcon.org/2010/schedule/events/220.en.html Probing PostgreSQL with DTrace and SystemTap]<br />
<br />
<br />
[[Category:Operating system]]</div>Greghttps://wiki.postgresql.org/index.php?title=DTrace&diff=21816DTrace2014-02-08T19:48:07Z<p>Greg: Remove dead links</p>
<hr />
<div>DTrace is a technology for tracing arbitrary points in program execution. Originally developed for Solaris, it has since become available in one form or another on Mac OS and FreeBSD. PostgreSQL has included basic DTrace support since version 8.2, with newer versions (8.4 in particular) expanding the number of probe points available in the database.<br />
<br />
Introduction to PostgreSQL and DTrace:<br />
* [http://www.postgresql.org/docs/current/static/dynamic-trace.html Dynamic Tracing] - official manual section on DTrace probes available<br />
* [http://pgfoundry.org/docman/?group_id=1000163 PostgreSQL DTrace Users Guide]<br />
* [http://lethargy.org/~jesus/writes/postgresql-performance-through-the-eyes-of-dtrace PostgreSQL performance through the eyes of DTrace] and [http://lethargy.org/~jesus/writes/postgresql-looking-under-the-hood-with-solaris Looking under the hood with Solaris]. The pg_file_stress utility there is being migrated to [http://labs.omniti.com/trac/pgtreats Tasty Treats for PostgreSQL].<br />
<br />
General DTrace information:<br />
* [http://ph7spot.com/musings/getting-started-with-dtrace Getting Started with DTrace]<br />
<br />
Example PostgreSQL DTrace scripts:<br />
* [http://przemol.blogspot.com/2007/06/dtrace-postgresql-io-tuning.html DTrace & Postgresql - io tuning]<br />
* [http://blog.whatever-company.com/index.php/2009/07/some-quick-numbers-about-ssd-for-postgresql/ Some quick numbers about SSD for PostgreSQL]<br />
* [http://blogs.sun.com/jkshah/entry/postgresql_transactions_per_second_using PostgreSQL Transactions Per Second Using Dtrace]<br />
<br />
== SystemTap & Linux ==<br />
<br />
It's also possible to use the PostgreSQL DTrace probes on some recent Linux systems through the [http://gnu.wildebeest.org/diary/2009/02/24/systemtap-09-markers-everywhere/ Systemtap user space markers] feature:<br />
* [http://blog.endpoint.com/2009/05/postgresql-with-systemtap.html PostgreSQL with SystemTap]<br />
* [http://www.pgcon.org/2010/schedule/events/220.en.html Probing PostgreSQL with DTrace and SystemTap]<br />
<br />
<br />
[[Category:Operating system]]</div>Greghttps://wiki.postgresql.org/index.php?title=Index_Maintenance&diff=15498Index Maintenance2011-09-29T03:05:18Z<p>Greg: /* Index size/usage statistics */ Must quote the table and index names!</p>
<hr />
<div>One day, you will probably need to cope with [http://www.postgresql.org/docs/current/static/routine-reindex.html routine reindexing] on your database, particularly if you don't use VACUUM aggressively enough. A particularly handy command in this area is [http://www.postgresql.org/docs/8.3/static/sql-cluster.html CLUSTER], which can help with other types of cleanup.<br />
<br />
Avoid using [[VACUUM FULL]].<br />
<br />
== Index summary ==<br />
<br />
Here's a sample query to pull the number of rows, indexes, and some info about those indexes for each table. (Works on 8.3 and later; drop the pg_size_pretty call if you're on an earlier version.)<br />
<br />
{{SnippetInfo|Index summary|lang=SQL|version=>=8.1|category=Performance}}<br />
<source lang="sql"><br />
SELECT<br />
pg_class.relname,<br />
pg_size_pretty(pg_class.reltuples::bigint) AS rows_in_bytes,<br />
pg_class.reltuples AS num_rows,<br />
count(indexname) AS number_of_indexes,<br />
CASE WHEN x.is_unique = 1 THEN 'Y'<br />
ELSE 'N'<br />
END AS UNIQUE,<br />
SUM(case WHEN number_of_columns = 1 THEN 1<br />
ELSE 0<br />
END) AS single_column,<br />
SUM(case WHEN number_of_columns IS NULL THEN 0<br />
WHEN number_of_columns = 1 THEN 0<br />
ELSE 1<br />
END) AS multi_column<br />
FROM pg_namespace <br />
LEFT OUTER JOIN pg_class ON pg_namespace.oid = pg_class.relnamespace<br />
LEFT OUTER JOIN<br />
(SELECT indrelid,<br />
max(CAST(indisunique AS integer)) AS is_unique<br />
FROM pg_index<br />
GROUP BY indrelid) x<br />
ON pg_class.oid = x.indrelid<br />
LEFT OUTER JOIN<br />
( SELECT c.relname AS ctablename, ipg.relname AS indexname, x.indnatts AS number_of_columns FROM pg_index x<br />
JOIN pg_class c ON c.oid = x.indrelid<br />
JOIN pg_class ipg ON ipg.oid = x.indexrelid )<br />
AS foo<br />
ON pg_class.relname = foo.ctablename<br />
WHERE <br />
pg_namespace.nspname='public'<br />
AND pg_class.relkind = 'r'<br />
GROUP BY pg_class.relname, pg_class.reltuples, x.is_unique<br />
ORDER BY 2;<br />
</source><br />
<br />
== Index size/usage statistics ==<br />
<br />
Table and index sizes, along with which indexes are being scanned and how many tuples are fetched. See [[Disk Usage]] for another view that includes both table and index sizes.<br />
<br />
{{SnippetInfo|Index statistics|lang=SQL|version=>=8.1|category=Performance}}<br />
<source lang="sql"><br />
SELECT<br />
t.tablename,<br />
indexname,<br />
c.reltuples AS num_rows,<br />
pg_size_pretty(pg_relation_size(quote_ident(t.tablename)::text)) AS table_size,<br />
pg_size_pretty(pg_relation_size(quote_ident(indexrelname)::text)) AS index_size,<br />
CASE WHEN x.is_unique = 1 THEN 'Y'<br />
ELSE 'N'<br />
END AS unique,<br />
idx_scan AS number_of_scans,<br />
idx_tup_read AS tuples_read,<br />
idx_tup_fetch AS tuples_fetched<br />
FROM pg_tables t<br />
LEFT OUTER JOIN pg_class c ON t.tablename=c.relname<br />
LEFT OUTER JOIN<br />
(SELECT indrelid,<br />
max(CAST(indisunique AS integer)) AS is_unique<br />
FROM pg_index<br />
GROUP BY indrelid) x<br />
ON c.oid = x.indrelid<br />
LEFT OUTER JOIN<br />
( SELECT c.relname as ctablename, ipg.relname as indexname, x.indnatts as number_of_columns, idx_scan, idx_tup_read, idx_tup_fetch,indexrelname FROM pg_index x<br />
JOIN pg_class c ON c.oid = x.indrelid<br />
JOIN pg_class ipg ON ipg.oid = x.indexrelid<br />
JOIN pg_stat_all_indexes psai ON x.indexrelid = psai.indexrelid )<br />
as foo<br />
ON t.tablename = foo.ctablename<br />
WHERE t.schemaname='public'<br />
order by 1,2;<br />
</source><br />
<br />
== Index Bloat ==<br />
<br />
One of the common needs for a REINDEX is when indexes become bloated due to either sparse deletions or use of VACUUM FULL. An estimator for the amount of bloat in a table has been included in the [http://bucardo.org/wiki/Check_postgres check_postgres] script, which you can call directly or incorporate into a larger monitoring system. Scripts based on this code and/or its concepts from other sources include:<br />
* [http://pgsql.tapoueh.org/site/html/news/20080131.bloat.html bloat view] (Dimitri Fontaine)<br />
* [http://www.pgcon.org/2009/schedule/events/153.en.html Visualizing Postgres] - index_byte_sizes view (Michael Glaesemann, myYearbook)<br />
* [http://labs.omniti.com/trac/pgtreats/browser/trunk/tools OmniTI Tasty Treats for PostgreSQL] - shell and Perl pg_bloat_report scripts<br />
<br />
== Unused Indexes ==<br />
<br />
Since indexes add significant overhead to any table change operation, they should be removed if they are not being used for either queries or constraint enforcement (such as making sure a value is unique). How to find such indexes:<br />
<br />
* [http://www.xzilla.net/blog/2008/Jul/Index-pruning-techniques.html Index pruning techniques]<br />
* [http://hype-free.blogspot.com/2008/09/finding-unused-indexes-in-postgresql.html Finding unused indexes]<br />
* [http://it.toolbox.com/blogs/database-soup/finding-useless-indexes-28796 Finding useless indexes]<br />
* [http://radek.cc/2009/09/05/psqlrc-tricks-indexes/ Missing and unused indexes]<br />
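As a starting point, a query in the same spirit as the snippets above (an illustrative sketch, not taken from the linked articles): it reports only indexes with zero scans since the statistics were last reset, and deliberately skips unique indexes, which may exist purely for constraint enforcement.<br />

```sql
SELECT s.schemaname,
       s.relname      AS tablename,
       s.indexrelname AS indexname,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

Check the statistics on every node that might run queries before dropping anything.<br />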
<br />
== References ==<br />
<br />
* Index statistics queries from [http://www.baconandtech.com/2009/06/06/book-review-part-i-refactoring-sql-applications-with-bonus-queries/ "Refactoring SQL Applications" review]<br />
<br />
[[Category:Administration]][[Category:Performance]]</div>Greghttps://wiki.postgresql.org/index.php?title=Events&diff=14783Events2011-06-27T22:35:04Z<p>Greg: Pgcon is over</p>
<hr />
<div>== PostgreSQL Events ==<br />
<br />
Most PostgreSQL-specific events are tracked on the [http://www.postgresql.org/about/eventarchive PostgreSQL Events] page. This is a listing of events at which we expect, or would like to have, a PostgreSQL presence. Please keep the events in order by starting date and follow the existing examples. Please also tag the events with the MediaWiki "PostgreSQL Events" category. If you are going to be organizing a PostgreSQL booth, please adhere to [[BoothPolicies]]. PostgreSQL Europe conference coordination [[PGUG EU Conference Coordination|is here]].<br />
<br />
{| border="1"<br />
|+ <br />
|- style="background:Khaki;"<br />
'''Upcoming PostgreSQL Events Listing'''<br />
| '''Event''' || '''Web Page''' || '''Date''' || '''Country''' || '''City''' || '''Activities'''<br />
|-<br />
| Postgres Open || [http://postgresopen.org/2011/home/ Postgres Open 2011] || Sep 14-16, 2011 || USA || Chicago || Talks<br />
|-<br />
| 2011 China PostgreSQL User conference ||[http://wiki.postgresql.org/wiki/Pgconchina2011 2011 China PostgreSQL User conference] || July 16-17, 2011 || China || GUANGZHOU ||Talks,Tutorial<br />
|-<br />
| PgDay at OSCON 2011 || [http://pugs.postgresql.org/node/1663 PgDay at OSCON 2011] || July 24, 2011 || USA || Portland, OR || Talks, party<br />
|-<br />
| Pg Conf Colombia || [http://www.pgconf.org Pg Conf Colombia 2011]|| August 4-5, 2011 || Colombia || Bucaramanga ||<br />
|-<br />
| PostgreSQL Conference West 2011 || [http://www.postgresqlconference.org/ #PgWest 2011] || September 27-30, 2011 || USA || San Jose, CA || Training, Talks, Booth<br />
|-<br />
| PostgreSQL Conference Europe 2011 || [http://2011.pgconf.eu/ PGConf.EU 2011] || October 18-21, 2011 || The Netherlands || Amsterdam || Training, Talks<br />
|-<br />
| PGBR2011 || [http://pgbr.postgresql.org.br/ PGBR2011] || Nov 3-4, 2011 || Brazil || São Paulo || Tutorials, Talks, Booth <br />
|}<br />
<br />
<br />
{| border="1"<br />
|+ <br />
|- style="background:Khaki;"<br />
'''Older PostgreSQL Events Listing'''<br />
| '''Event''' || '''Web Page''' || '''Date''' || '''Country''' || '''City''' || '''Activities'''<br />
|-<br />
| PGCon 2011 || [http://www.pgcon.org/2011/ PGCon 2011] || May 17-20, 2011 || Canada || Ottawa || Talks, Training<br />
|-<br />
| PostgreSQL Conference 2011 Japan || || February 25-26, 2011 || Japan || Tokyo ||<br />
|-<br />
| [[FOSDEM, Brussels 2011]] || [http://www.fosdem.org/2011/ FOSDEM '11] || February 05-06, 2011 || Belgium || Brussels || Booth, Devroom<br />
|-<br />
| [[PGDAY-Latino, La Habana 2011]] || [http://postgresql.uci.cu/news/19 PGDAY-Latino '11] || February 01-05, 2011 || Cuba || La Habana || Talks, Workshop<br />
|-<br />
| PGEast 2011 || [https://www.postgresqlconference.org/ PGEast] || March 22-25, 2011 || USA || New York, NY || Talks, Training<br />
|-<br />
| [[European PGDay 2010]] || [http://www.pgday.eu/ European PGDay 2010] || Dec 6-8, 2010 || Stuttgart || Germany || Tutorials, Talks, Trainings, Party<br />
|-<br />
| OpenRheinRuhr 2010 || [http://www.openrheinruhr.de/ OpenRheinRuhr 2010] || Nov 13-14, 2010 || Oberhausen || Germany || Tutorials, Talks, Booth<br />
|-<br />
| BLIT 2010 || [http://www.blit.org/ BLIT 2010] || Nov 06, 2010 || Potsdam || Germany || Talk, Booth<br />
|-<br />
| PGWest || [https://www.postgresqlconference.org/2010/west PGWest 2010] || Nov 2-4, 2010 || USA || San Francisco || Talks, Tutorials<br />
|-<br />
| FrOSCamp || [http://www.froscamp.org/ FrOSCamp 2010] || Sep 17-18, 2010 || Zurich || Switzerland || Tutorials, Talks<br />
|-<br />
| FrOSCon || [http://www.froscon.de/ FrOSCon 2010] || Aug 20-21, 2010 || St. Augustin || Germany || [http://psoos.blogspot.com/2010/08/postgresql-at-froscon-2010.html Tutorials, Talks]<br />
|-<br />
| [[PDXPUGDay2010]] (at OSCON)|| [http://wiki.postgresql.org/wiki/PDXPUGDay2010 PDXPUGDay 2010] || July 18, 2010 || USA || Portland, OR || Talks, Lightning Talks, Party<br />
|-<br />
| [[CLT 10]] || [http://chemnitzer.linux-tage.de/2010/info/ Chemnitzer Linuxtage] || March 13-14, 2010 || Germany || Chemnitz || Booth, Talks, Workshop<br />
|-<br />
| [[FOSDEM, Brussels 2010]] || [http://www.fosdem.org/2010/ FOSDEM '10] || February 06-07, 2010 || Belgium || Brussels || Booth, Devroom<br />
|-<br />
| [[Large Installations and System Administration Conference]] || [http://www.usenix.org/event/lisa09/ LISA 2009] || November 1-6, 2009 || USA || Baltimore, Maryland || Booth <br />
|-<br />
| [[European PGDay 2009]] || [http://www.pgday.eu/ European PGDay 2009] || Nov 6-7, 2009 || France || Paris || [http://wiki.postgresql.fr/en:pgday2009:start Tutorials, Talks, Party]<br />
|-<br />
| [[PostgreSQL Cluster Developers' Meeting]] || [http://www.postgresql.jp/events/pgcon09j/e/dev_mtg Call for participants] || November 19, 2009 || Japan || Tokyo || <br />
|-<br />
| [[PostgreSQL Conference 2009 Japan]] || [http://www.postgresql.jp/events/pgcon09j/e/ JPUG 10th Anniversary Conference] || November 20-21, 2009 || Japan || Tokyo || <br />
|-<br />
| [[OSDC 2009]] || [http://2009.osdc.com.au/ OSDC Brisbane] || November 25-27, 2009 || Australia || Brisbane || <br />
|-<br />
| [[PGCon Brazil]] || [http://pgcon.postgresql.org.br/2009/index.en.php PGCon Brazil 2009] || October 23-24, 2009 || Brazil || Unicamp, Campinas, SP ||<br />
|-<br />
| [[PgWest]] || [http://www.postgresqlconference.org/2009/west PgWest 2009 Seattle] || October 16-18, 2009 || USA || Seattle, WA || [http://www.postgresqlconference.org/2009/west/schedule Tutorials, Talks],[[Hackers' Lounge]], [[After-Thing]]<br />
|-<br />
| [[FrOSCon 2009]] || [http://www.froscon.org/ FrOSCon 2009] || August 22-23, 2009 || Germany || St. Augustin || Devroom<br />
|-<br />
| [[USENIX]] || [http://www.usenix.org/events/sec09/ USENIX Security '09] || August 10-14, 2009 || Canada || Montreal || <br />
|- <br />
| [[OSCON]] || [http://conferences.oreillynet.com/ OSCON 2009] || July 20-24, 2009 || USA || San Jose, CA || pgDay, Speakers, Booth<br />
|- <br />
| [[pgDaySanJose2009]] || [[pgDaySanJose2009]] || July 19, 2009 || USA || San Jose, CA || full day<br />
|-<br />
| [[SIGMOD]] || [http://www.sigmod09.org/ ACM SIGMOD/PODS 2009] || June 29-July 2, 2009 || USA || Providence, RI || <br />
|-<br />
| [[BOSC]] || [http://www.open-bio.org/wiki/BOSC_2009 BOSC 2009] || June 25-26, 2009 || Sweden || Stockholm || <br />
|-<br />
| [[USENIX]] || [http://www.usenix.org/events/usenix09/ USENIX '09] || June 14-19, 2009 || USA || San Diego, CA || <br />
|- <br />
| [[PGCon 2009]] || [http://www.pgcon.org/ PGCon 2009] || May 19-22, 2009 || Canada || Ottawa || Tutorials, Talks, Party<br />
|-<br />
| [[NWLinuxFest]] || [http://linuxfestnorthwest.org/ NWLinuxFest 2009] || April 25-26, 2009 || USA || Bellingham, WA || Booth<br />
|-<br />
| [[DASFAA]] || [http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=3177 DASFAA] || April 21-23, 2009 || Australia || Brisbane || <br />
|- <br />
| [[Pg Conference Spring 2009]] || [http://www.postgresqlconference.org/ PostgreSQL Conference East 2009] || April 3-5, 2009 || USA || Philadelphia || Speakers(s), Party <br /> [[PostgreSQLConferenceEast2009|Papers]]<br />
|-<br />
| [[EDBT]] || [http://www.math.spbu.ru/edbticdt/ EDBT/ICDT 2009] || March 23-26, 2009 || Russia || St. Petersburg || <br />
|-<br />
| [[CLT 09]] || [http://chemnitzer.linux-tage.de/2009/info/ Chemnitzer Linuxtage] || March 14-15, 2009 || Germany || Chemnitz || Booth, Talks, Workshop ([http://andreas.scherbaum.la/blog/archives/525-PostgreSQL-auf-den-Chemnitzer-Linuxtagen.html Infos])<br />
|-<br />
| [[OSBC]] || [http://www.infoworld.com/event/osbc/09/ Open Source Business Conference] || March 10-11, 2009 || USA || San Francisco, CA || No plans<br />
|-<br />
| [[Solutions Linux]] || [http://www.solutionslinux.fr/ Solutions Linux] || March 31 & April 1-2 , 2009|| France || Paris || [http://wiki.postgresql.fr/sl2009:start Booth, Talks]<br />
|- <br />
| [[FAST09]] || [http://www.usenix.org/events/fast09/ USENIX FAST '09] || February 24-27, 2009 || USA || San Francisco, CA || Didn't attend<br />
|-<br />
| [[Perl Workshop 2009]] || [http://www.perl-workshop.de/talks/151/view Perl Workshop '09] || February 25-27, 2009 || Germany || Frankfurt/Main || [http://andreas.scherbaum.la/blog/archives/530-Unterlagen-fuer-Tutorial-beim-Perl-Workshop-in-FrankfurtMain.html Tutorial]<br />
|-<br />
| [[SCALE]] || [http://scale7x.socallinuxexpo.org/ SCALE 7x] || February 20-22, 2009 || USA || Los Angeles, CA || Booth, BoF<br />
|-<br />
| [[FOSDEM, Brussels 2009]] || [http://www.fosdem.org/2009/ FOSDEM '09] || February 07-08, 2009 || Belgium || Brussels || Booth, Devroom<br />
|-<br />
| [[ADBC]] || [http://www.cse.unsw.edu.au/~adc09/ 20th Australasian Database Conference] || January 20-23, 2009 || New Zealand || Wellington || <br />
|-<br />
| [[LinuxConfAU]] || [http://linux.conf.au/ linux.conf.au] || January 21-23, 2009 || Australia || Tasmania || <br />
|-<br />
| colspan="6" | [[Events/2008 | 2008 events]]<br />
|-<br />
| colspan="6" | [[Events/2007 | 2007 events]]<br />
|}<br />
<br />
== External Links ==<br />
<br />
* [http://conferences.oreillynet.com/ O'Reilly conferences]<br />
* [http://opencheese.com/2007/10/14/open-source-events-2008/ "Open Source and Linux events in 2008"]<br />
<br />
[[Category:PostgreSQL Events]]<br />
[[Category:Advocacy]]</div>Greghttps://wiki.postgresql.org/index.php?title=Events&diff=14782Events2011-06-27T22:33:16Z<p>Greg: Add Postgres Open 2011</p>
<hr />
<div>== PostgreSQL Events ==<br />
<br />
Most PostgreSQL-specific events are tracked on the [http://www.postgresql.org/about/eventarchive PostgreSQL Events] page. This is a listing of events at which we expect, or would like to have, a PostgreSQL presence. Please keep the events in order by starting date and follow the existing examples. Please also tag the events with the MediaWiki "PostgreSQL Events" category. If you are going to be organizing a PostgreSQL booth, please adhere to [[BoothPolicies]]. PostgreSQL Europe conference coordination [[PGUG EU Conference Coordination|is here]].<br />
<br />
{| border="1"<br />
|+ <br />
|- style="background:Khaki;"<br />
'''Upcoming PostgreSQL Events Listing'''<br />
| '''Event''' || '''Web Page''' || '''Date''' || '''Country''' || '''City''' || '''Activities'''<br />
|-<br />
| Postgres Open || [http://postgresopen.org/2011/home/ Postgres Open 2011] || Sep 14-16, 2011 || USA || Chicago || Talks<br />
|-<br />
| PGCon 2011 || [http://www.pgcon.org/2011/ PGCon 2011] || May 17-20, 2011 || Canada || Ottawa || Talks, Training<br />
|-<br />
| 2011 China PostgreSQL User conference ||[http://wiki.postgresql.org/wiki/Pgconchina2011 2011 China PostgreSQL User conference] || July 16-17, 2011 || China || GUANGZHOU ||Talks,Tutorial<br />
|-<br />
| PgDay at OSCON 2011 || [http://pugs.postgresql.org/node/1663 PgDay at OSCON 2011] || July 24, 2011 || USA || Portland, OR || Talks, party<br />
|-<br />
| Pg Conf Colombia || [http://www.pgconf.org Pg Conf Colombia 2011]|| August 4-5, 2011 || Colombia || Bucaramanga ||<br />
|-<br />
| PostgreSQL Conference West 2011 || [http://www.postgresqlconference.org/ #PgWest 2011] || September 27-30, 2011 || USA || San Jose, CA || Training, Talks, Booth<br />
|-<br />
| PostgreSQL Conference Europe 2011 || [http://2011.pgconf.eu/ PGConf.EU 2011] || October 18-21, 2011 || The Netherlands || Amsterdam || Training, Talks<br />
|-<br />
| PGBR2011 || [http://pgbr.postgresql.org.br/ PGBR2011] || Nov 3-4, 2011 || Brazil || São Paulo || Tutorials, Talks, Booth <br />
|}<br />
<br />
<br />
{| border="1"<br />
|+ <br />
|- style="background:Khaki;"<br />
'''Older PostgreSQL Events Listing'''<br />
| '''Event''' || '''Web Page''' || '''Date''' || '''Country''' || '''City''' || '''Activities'''<br />
|-<br />
| PostgreSQL Conference 2011 Japan || || February 25-26, 2011 || Japan || Tokyo ||<br />
|-<br />
| [[FOSDEM, Brussels 2011]] || [http://www.fosdem.org/2011/ FOSDEM '11] || February 05-06, 2011 || Belgium || Brussels || Booth, Devroom<br />
|-<br />
| [[PGDAY-Latino, La Habana 2011]] || [http://postgresql.uci.cu/news/19 PGDAY-Latino '11] || February 01-05, 2011 || Cuba || La Habana || Talks, Workshop<br />
|-<br />
| PGEast 2011 || [https://www.postgresqlconference.org/ PGEast] || March 22-25, 2011 || USA || New York, NY || Talks, Training<br />
|-<br />
| [[European PGDay 2010]] || [http://www.pgday.eu/ European PGDay 2010] || Dec 6-8, 2010 || Stuttgart || Germany || Tutorials, Talks, Trainings, Party<br />
|-<br />
| OpenRheinRuhr 2010 || [http://www.openrheinruhr.de/ OpenRheinRuhr 2010] || Nov 13-14, 2010 || Oberhausen || Germany || Tutorials, Talks, Booth<br />
|-<br />
| BLIT 2010 || [http://www.blit.org/ BLIT 2010] || Nov 06, 2010 || Potsdam || Germany || Talk, Booth<br />
|-<br />
| PGWest || [https://www.postgresqlconference.org/2010/west PGWest 2010] || Nov 2-4, 2010 || USA || San Francisco || Talks, Tutorials<br />
|-<br />
| FrOSCamp || [http://www.froscamp.org/ FrOSCamp 2010] || Sep 17-18, 2010 || Zurich || Switzerland || Tutorials, Talks<br />
|-<br />
| FrOSCon || [http://www.froscon.de/ FrOSCon 2010] || Aug 20-21, 2010 || St. Augustin || Germany || [http://psoos.blogspot.com/2010/08/postgresql-at-froscon-2010.html Tutorials, Talks]<br />
|-<br />
| [[PDXPUGDay2010]] (at OSCON)|| [http://wiki.postgresql.org/wiki/PDXPUGDay2010 PDXPUGDay 2010] || July 18, 2010 || USA || Portland, OR || Talks, Lightning Talks, Party<br />
|-<br />
| [[CLT 10]] || [http://chemnitzer.linux-tage.de/2010/info/ Chemnitzer Linuxtage] || March 13-14, 2010 || Germany || Chemnitz || Booth, Talks, Workshop<br />
|-<br />
| [[FOSDEM, Brussels 2010]] || [http://www.fosdem.org/2010/ FOSDEM '10] || February 06-07, 2010 || Belgium || Brussels || Booth, Devroom<br />
|-<br />
| [[Large Installations and System Administration Conference]] || [http://www.usenix.org/event/lisa09/ LISA 2009] || November 1-6, 2009 || USA || Baltimore, Maryland || Booth <br />
|-<br />
| [[European PGDay 2009]] || [http://www.pgday.eu/ European PGDay 2009] || Nov 6-7, 2009 || France || Paris || [http://wiki.postgresql.fr/en:pgday2009:start Tutorials, Talks, Party]<br />
|-<br />
| [[PostgreSQL Cluster Developers' Meeting]] || [http://www.postgresql.jp/events/pgcon09j/e/dev_mtg Call for participants] || November 19, 2009 || Japan || Tokyo || <br />
|-<br />
| [[PostgreSQL Conference 2009 Japan]] || [http://www.postgresql.jp/events/pgcon09j/e/ JPUG 10th Anniversary Conference] || November 20-21, 2009 || Japan || Tokyo || <br />
|-<br />
| [[OSDC 2009]] || [http://2009.osdc.com.au/ OSDC Brisbane] || November 25-27, 2009 || Australia || Brisbane || <br />
|-<br />
| [[PGCon Brazil]] || [http://pgcon.postgresql.org.br/2009/index.en.php PGCon Brazil 2009] || October 23-24, 2009 || Brazil || Unicamp, Campinas, SP ||<br />
|-<br />
| [[PgWest]] || [http://www.postgresqlconference.org/2009/west PgWest 2009 Seattle] || October 16-18, 2009 || USA || Seattle, WA || [http://www.postgresqlconference.org/2009/west/schedule Tutorials, Talks],[[Hackers' Lounge]], [[After-Thing]]<br />
|-<br />
| [[FrOSCon 2009]] || [http://www.froscon.org/ FrOSCon 2009] || August 22-23, 2009 || Germany || St. Augustin || Devroom<br />
|-<br />
| [[USENIX]] || [http://www.usenix.org/events/sec09/ USENIX Security '09] || August 10-14, 2009 || Canada || Montreal || <br />
|- <br />
| [[OSCON]] || [http://conferences.oreillynet.com/ OSCON 2009] || July 20-24, 2009 || USA || San Jose, CA || pgDay, Speakers, Booth<br />
|- <br />
| [[pgDaySanJose2009]] || [[pgDaySanJose2009]] || July 19, 2009 || USA || San Jose, CA || full day<br />
|-<br />
| [[SIGMOD]] || [http://www.sigmod09.org/ ACM SIGMOD/PODS 2009] || June 29-July 2, 2009 || USA || Providence, RI || <br />
|-<br />
| [[BOSC]] || [http://www.open-bio.org/wiki/BOSC_2009 BOSC 2009] || June 25-26, 2009 || Sweden || Stockholm || <br />
|-<br />
| [[USENIX]] || [http://www.usenix.org/events/usenix09/ USENIX '09] || June 14-19, 2009 || USA || San Diego, CA || <br />
|- <br />
| [[PGCon 2009]] || [http://www.pgcon.org/ PGCon 2009] || May 19-22, 2009 || Canada || Ottawa || Tutorials, Talks, Party<br />
|-<br />
| [[NWLinuxFest]] || [http://linuxfestnorthwest.org/ NWLinuxFest 2009] || April 25-26, 2009 || USA || Bellingham, WA || Booth<br />
|-<br />
| [[DASFAA]] || [http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=3177 DASFAA] || April 21-23, 2009 || Australia || Brisbane || <br />
|- <br />
| [[Pg Conference Spring 2009]] || [http://www.postgresqlconference.org/ PostgreSQL Conference East 2009] || April 3-5, 2009 || USA || Philadelphia || Speakers(s), Party <br /> [[PostgreSQLConferenceEast2009|Papers]]<br />
|-<br />
| [[EDBT]] || [http://www.math.spbu.ru/edbticdt/ EDBT/ICDT 2009] || March 23-26, 2009 || Russia || St. Petersburg || <br />
|-<br />
| [[CLT 09]] || [http://chemnitzer.linux-tage.de/2009/info/ Chemnitzer Linuxtage] || March 14-15, 2009 || Germany || Chemnitz || Booth, Talks, Workshop ([http://andreas.scherbaum.la/blog/archives/525-PostgreSQL-auf-den-Chemnitzer-Linuxtagen.html Infos])<br />
|-<br />
| [[OSBC]] || [http://www.infoworld.com/event/osbc/09/ Open Source Business Conference] || March 10-11, 2009 || USA || San Francisco, CA || No plans<br />
|-<br />
| [[Solutions Linux]] || [http://www.solutionslinux.fr/ Solutions Linux] || March 31 & April 1-2 , 2009|| France || Paris || [http://wiki.postgresql.fr/sl2009:start Booth, Talks]<br />
|- <br />
| [[FAST09]] || [http://www.usenix.org/events/fast09/ USENIX FAST '09] || February 24-27, 2009 || USA || San Francisco, CA || Didn't attend<br />
|-<br />
| [[Perl Workshop 2009]] || [http://www.perl-workshop.de/talks/151/view Perl Workshop '09] || February 25-27, 2009 || Germany || Frankfurt/Main || [http://andreas.scherbaum.la/blog/archives/530-Unterlagen-fuer-Tutorial-beim-Perl-Workshop-in-FrankfurtMain.html Tutorial]<br />
|-<br />
| [[SCALE]] || [http://scale7x.socallinuxexpo.org/ SCALE 7x] || February 20-22, 2009 || USA || Los Angeles, CA || Booth, BoF<br />
|-<br />
| [[FOSDEM, Brussels 2009]] || [http://www.fosdem.org/2009/ FOSDEM '09] || February 07-08, 2009 || Belgium || Brussels || Booth, Devroom<br />
|-<br />
| [[ADBC]] || [http://www.cse.unsw.edu.au/~adc09/ 20th Australasian Database Conference] || January 20-23, 2009 || New Zealand || Wellington || <br />
|-<br />
| [[LinuxConfAU]] || [http://linux.conf.au/ linux.conf.au] || January 21-23, 2009 || Australia || Tasmania || <br />
|-<br />
| colspan="6" | [[Events/2008 | 2008 events]]<br />
|-<br />
| colspan="6" | [[Events/2007 | 2007 events]]<br />
|}<br />
<br />
== External Links ==<br />
<br />
* [http://conferences.oreillynet.com/ O'Reilly conferences]<br />
* [http://opencheese.com/2007/10/14/open-source-events-2008/ "Open Source and Linux events in 2008"]<br />
<br />
[[Category:PostgreSQL Events]]<br />
[[Category:Advocacy]]</div>Greghttps://wiki.postgresql.org/index.php?title=PgCon_2011_Developer_Meeting&diff=14222PgCon 2011 Developer Meeting2011-05-05T17:30:00Z<p>Greg: /* Attendees */ See you next year</p>
<hr />
<div>A meeting of the most active PostgreSQL developers and senior figures from PostgreSQL-developer-sponsoring companies is being planned for Wednesday 18th May, 2011 near the University of Ottawa, prior to pgCon 2011. In order to keep the numbers manageable, this meeting is '''by invitation only'''. Unfortunately it is quite possible that we've overlooked important code developers during the planning of the event - if you feel you fall into this category and would like to attend, please contact Dave Page (dpage@pgadmin.org).<br />
<br />
This is a PostgreSQL Community event. Room and refreshments/food sponsored by EnterpriseDB. Other companies sponsored attendance for their developers.<br />
<br />
== Time & Location ==<br />
<br />
The meeting will be from 9AM to 5PM, and will be in the "Albion B" room at:<br />
<br />
Novotel Ottawa<br />
33 Nicholas Street<br />
Ottawa<br />
Ontario<br />
K1N 9M7<br />
<br />
Food and drink will be provided throughout the day, including breakfast from 8AM.<br />
<br />
[http://maps.google.ca/maps?f=q&source=s_q&hl=en&geocode=&q=novotel+ottawa&aq=&sll=49.891235,-97.15369&sspn=36.237851,79.013672&ie=UTF8&hq=novotel+ottawa&hnear=&ll=45.421528,-75.683699&spn=0.036869,0.077162&z=14&iwloc=A&layer=c&cbll=45.425741,-75.689638&panoid=Z4FUGnkZkdHAOkIxyjjS9Q&cbp=12,25.83,,0,-0.6 View on Google Maps]<br />
<br />
== Attendees ==<br />
<br />
The following people have RSVPed to the meeting:<br />
<br />
* Oleg Bartunov<br />
* Josh Berkus<br />
* Jeff Davis<br />
* Selena Deckelmann<br />
* Andrew Dunstan<br />
* David Fetter<br />
* Marc Fournier<br />
* Dimitri Fontaine<br />
* Stephen Frost<br />
* Kevin Grittner<br />
* Robert Haas<br />
* Magnus Hagander<br />
* Alvaro Herrera<br />
* Tatsuo Ishii<br />
* Marko Kreen<br />
* KaiGai Kohei<br />
* Tom Lane<br />
* Heikki Linnakangas<br />
* Fujii Masao<br />
* Bruce Momjian<br />
* Dave Page<br />
* Simon Riggs<br />
* Teodor Sigaev<br />
* Greg Smith<br />
* Koichi Suzuki<br />
* Robert Treat<br />
* David Wheeler<br />
* Mark Wong<br />
<br />
== Proposed Agenda Items ==<br />
<br />
Please list proposed agenda items here:<br />
<br />
* Review of the move from CVS to GIT (Dave Page)<br />
* Build and Test Automation (David Fetter)<br />
* SELinux/PG Update and what-about-RLS? (Stephen Frost, KaiGai Kohei)<br />
* What to do about MaxAllocSize? (Stephen Frost)<br />
* Improving logging (Stephen Frost, David Fetter)<br />
* Slave-only based backups (Robert Treat)<br />
* Authorization issues (Alvaro Herrera)<br />
* Resource control (Simon Riggs)<br />
* User Defined Daemons (Simon Riggs)<br />
* Streaming SRFs and FDW WHERE clauses (Simon Riggs)<br />
<br />
== Agenda ==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Time<br />
!Item<br />
!Presenter<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|08:00<br />
|Breakfast<br />
|<br />
|-<br />
|08:45 - 09:00<br />
|Welcome and introductions<br />
|Dave Page<br />
|-<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|10:30 - 10:45<br />
|Coffee break<br />
|<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|12:30 - 13:30<br />
|Lunch <br />
|<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|15:00 - 15:15<br />
|Tea break<br />
|<br />
|-<br />
|16:45 - 17:00<br />
|Any other business<br />
|Dave Page<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|17:00<br />
|Finish<br />
| <br />
|}</div>Greghttps://wiki.postgresql.org/index.php?title=PgCon2011CanadaClusterSummit&diff=14221PgCon2011CanadaClusterSummit2011-05-05T17:29:04Z<p>Greg: /* RSVP List */ Not me</p>
<hr />
<div>== Clustering and Replication Developers Summit pgCon 2011==<br />
<br />
Tuesday, May 17th<br />
9:30AM to 5:15pm<br />
University of Ottawa<br />
Building/Room TBA<br />
<br />
'''Sponsored by NTT Open Source'''<br />
<br />
=== Agenda ===<br />
<br />
==== 9AM to 9:30AM ====<br />
<br />
Seating and coffee. Please bring any last-minute agenda items to Josh Berkus at this time.<br />
<br />
==== 9:30AM to 10:15AM ====<br />
<br />
Introductions, and status reports from Replication/Clustering Projects. <br />
<br />
If you are at the summit representing a specific replication or clustering tool, you should prepare a 1-3 minute summary of current progress and issues. If you want to use slides, please provide slides in PDF form to Josh Berkus by Friday, May 13.<br />
<br />
==== 10:15AM to 10:30 AM ====<br />
<br />
Break<br />
<br />
==== 10:30AM to 11:00AM ====<br />
<br />
Summary of Clustering API projects.<br />
<br />
Summit attendees who have been working on [[ClusterFeatures|core clustering features]] should give an update as to progress and current issues. Please present a 2-3 minute summary. Attendees who wish to use slides should provide PDF slides to Josh Berkus by Friday May 13th.<br />
<br />
==== 11:00AM to 12:30PM ====<br />
<br />
Discussion of priorities, progress and ideas for core clustering projects and APIs.<br />
<br />
Goal of this discussion is to modify the list of core clustering features and get commitments for hackers to work on specific features. Also, to supply discussion items for the following day's Developer Meeting.<br />
<br />
==== 12:30PM to 1:30PM ====<br />
<br />
Lunch. Box lunches will be supplied.<br />
<br />
==== 1:30 to 2:00 ====<br />
<br />
Breakout sessions scheduling session.<br />
<br />
==== 2:00 to 3:15 ==== <br />
<br />
Breakout sessions part I<br />
<br />
Summit attendees should break into affinity groups and discuss specific core features and APIs. Attendees should come back from breakout session with rough specifications and/or plans.<br />
<br />
==== 3:15 to 3:30 ====<br />
<br />
All attendees gather and deliver summary of breakout sessions.<br />
<br />
==== 3:30 to 3:45 ====<br />
<br />
Coffee Break<br />
<br />
==== 3:45 to 4:45 ====<br />
<br />
Breakout sessions II<br />
<br />
==== 4:45 to 5:15 ====<br />
<br />
Attendees gather and deliver summaries of 2nd breakout sessions.<br />
<br />
Conclusion of Summit.<br />
<br />
=== RSVP List ===<br />
<br />
# Josh Berkus<br />
# Andres Freund<br />
# Selena Deckelmann<br />
# Kevin Grittner<br />
# Dimitri Fontaine<br />
# Christopher Browne<br />
# Steve Singer<br />
# Marko Kreen<br />
# Jan Wieck<br />
# Tatsuo Ishii<br />
# Koichi Suzuki<br />
# Andrew Dunstan<br />
# Fujii Masao<br />
# Ahsan Hadi<br />
# Michael Paquier<br />
# Jehan-Guillaume (ioguix) de Rorthais<br />
# Mason Sharp<br />
# Pavan Deolasee<br />
# Ghulam Abbas Butt<br />
# Sakata Tetsuo<br />
# Greg Smith [after lunch]<br />
# Simon Riggs</div>Greghttps://wiki.postgresql.org/index.php?title=PgCon_2011_Developer_Meeting&diff=13210PgCon 2011 Developer Meeting2011-03-04T03:19:02Z<p>Greg: /* Attendees */ Add Greg #1 (of many)</p>
<hr />
<div>A meeting of the most active PostgreSQL developers and senior figures from PostgreSQL-developer-sponsoring companies is being planned for Wednesday 18th May, 2011 near the University of Ottawa, prior to pgCon 2011. In order to keep the numbers manageable, this meeting is '''by invitation only'''. Unfortunately it is quite possible that we've overlooked important code developers during the planning of the event - if you feel you fall into this category and would like to attend, please contact Dave Page (dpage@pgadmin.org).<br />
<br />
This is a PostgreSQL Community event. Room and refreshments/food sponsored by EnterpriseDB. Other companies sponsored attendance for their developers.<br />
<br />
== Time & Location ==<br />
<br />
The meeting will be from 9AM to 5PM, and will be in the "Albion B" room at:<br />
<br />
Novotel Ottawa<br />
33 Nicholas Street<br />
Ottawa<br />
Ontario<br />
K1N 9M7<br />
<br />
Food and drink will be provided throughout the day, including breakfast from 8AM.<br />
<br />
== Attendees ==<br />
<br />
The following people have RSVPed to the meeting:<br />
<br />
* Josh Berkus<br />
* Selena Deckelmann<br />
* David Fetter<br />
* Stephen Frost<br />
* Magnus Hagander<br />
* Dave Page<br />
* Greg Sabino Mullane<br />
* David Wheeler<br />
* Bruce Momjian<br />
<br />
== Proposed Agenda Items ==<br />
<br />
Please list proposed agenda items here:<br />
<br />
* Review of the move from CVS to GIT (Dave Page)<br />
* Build and Test Automation (David Fetter)<br />
<br />
== Agenda ==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Time<br />
!Item<br />
!Presenter<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|08:00<br />
|Breakfast<br />
|<br />
|-<br />
|08:45 - 09:00<br />
|Welcome and introductions<br />
|Dave Page<br />
|-<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|10:30 - 10:45<br />
|Coffee break<br />
|<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|12:30 - 13:30<br />
|Lunch <br />
|<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|15:00 - 15:15<br />
|Tea break<br />
|<br />
|-<br />
|16:45 - 17:00<br />
|Any other business<br />
|Dave Page<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|17:00<br />
|Finish<br />
| <br />
|}</div>Greghttps://wiki.postgresql.org/index.php?title=PgCon_2010_Developer_Meeting&diff=10903PgCon 2010 Developer Meeting2010-05-20T13:40:15Z<p>Greg: Link to DDL trigger page</p>
<hr />
<div>A meeting of the most active PostgreSQL developers and senior figures from PostgreSQL-developer-sponsoring companies is being planned for Wednesday 19th May, 2010 near the University of Ottawa, prior to pgCon 2010. In order to keep the numbers manageable, this meeting is '''by invitation only'''. Unfortunately it is quite possible that we've overlooked important code developers during the planning of the event - if you feel you fall into this category and would like to attend, please contact Dave Page (dpage@pgadmin.org).<br />
<br />
This is a PostgreSQL Community event. Room and lunch sponsored by EnterpriseDB. Other companies sponsored attendance for their developers.<br />
<br />
== Actions ==<br />
<br />
* Josh & Selena will '''track open items''' and make sure they get listed or tracked and resolved.<br />
* Selena to '''get reviewers to do a review fest June 15'''<br />
* '''Branch''' on July 1, CF July 15.<br />
* '''Announce a plan''' for next development schedule.<br />
* '''No more branching''' for alphas.<br />
* Stephen's '''intern to develop PerformanceFarm application'''. Will need help from Dunstan/Drake etc.<br />
* Kaigai, Stephen, Smith, etc. to get together at pgCon and hash out some more security provider issues.<br />
* Magnus to set up git environment emulator.<br />
* Andrew to publish checklist of how to set up your Git<br />
* '''Move to Git August 17-20''': Magnus, Haas, Dunstan. Frost will be out.<br />
* Koichi to '''extract patch from PostgresXC for snapshot cloning''' and submit.<br />
* Koichi to '''come up with proposed patch design for XID feed'''<br />
* Develop '''specification for commit sequence / LSN data'''<br />
* '''[[DDL Triggers]] Wiki page to be updated''' with spec by Jan, Greg M, et al<br />
* Dimitri to do '''patch regarding extensions''' (more detail needed)<br />
* EDB '''to decide on opening code''' or not for SQL/MED<br />
* '''Review Itagaki's git repo code''': Heikki, Peter SQL/MED<br />
* '''Itagaki to keep working on API''' -- what about Peter? SQL/MED<br />
* '''Document what the plan is to do a conversion upgrade''' (Greg Smith) -- pg_upgrade<br />
* '''Copy Zdenek's code''' (Greg Smith) related to pg_upgrade<br />
<br />
== Time & Location ==<br />
<br />
The meeting will be from 9AM to 5PM, and will be in the O'Connor room at:<br />
<br />
Arc The Hotel<br />
140 Slater Street<br />
Ottawa<br />
Ontario<br />
K1P 5H6<br />
<br />
[http://maps.google.ca/maps?f=q&source=s_q&hl=en&geocode=&q=ARC+THE.HOTEL+|+140++Slater+Street,+Ottawa,+Ontario,+K1P+5H6&sll=49.891235,-97.15369&sspn=45.043582,78.486328&ie=UTF8&hq=ARC+THE.HOTEL+|&hnear=140+Slater+St,+Ottawa,+ON&z=16&iwloc=A Google Maps]<br />
<br />
Food and drink will be provided throughout the day.<br />
<br />
== Invitees ==<br />
<br />
The following people have RSVPed to the meeting:<br />
<br />
* Oleg Bartunov<br />
* Josh Berkus<br />
* Joe Conway<br />
* Jeff Davis<br />
* Selena Deckelmann<br />
* Andrew Dunstan<br />
* David Fetter<br />
* Dimitri Fontaine<br />
* Marc Fournier<br />
* Stephen Frost<br />
* Magnus Hagander<br />
* Robert Haas<br />
* Tatsuo Ishii<br />
* Takahiro Itagaki<br />
* KaiGai Kohei<br />
* Marko Kreen<br />
* Tom Lane<br />
* Heikki Linnakangas<br />
* Michael Meskes<br />
* Bruce Momjian<br />
* Dave Page<br />
* Teodor Sigaev<br />
* Greg Sabino Mullane<br />
* Greg Smith<br />
* Greg Stark<br />
* Koichi Suzuki<br />
* Joshua Tolley<br />
* Robert Treat<br />
* Jan Wieck<br />
<br />
== Agenda ==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Time<br />
!Item<br />
!Presenter<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|09:00<br />
|Tea, coffee upon arrival <br />
|<br />
|-<br />
|09:15 - 09:25<br />
|Welcome and introductions<br />
|Dave Page<br />
|-<br />
|09:25 - 09:45<br />
|Review of the 9.0 development process <br />
|Dave Page<br />
|-<br />
|09:45 - 10:35<br />
|Development Priorities for 9.1: General discussion <br />
|Josh Berkus<br />
|-<br />
|10:35 - 10:45<br />
|9.1 Development timeline<br />
|Robert Treat<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|10:45 - 11:00<br />
|Coffee break<br />
|<br />
|-<br />
|11:00 - 11:15<br />
|Performance QA/Performance Farm planning update<br />
|Greg Smith<br />
|-<br />
|11:15 - 11:50<br />
|Advanced access control features<br />
* Steps to support [[ESP|external security providers]]<br />
* [[RLS#Issues|Issues]] of row-level access control<br />
|KaiGai Kohei<br />
|-<br />
|11:50 - 12:30<br />
|CVS to GIT: The finale?<br />
|Dave Page<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|12:30 - 13:30<br />
|Lunch <br />
|<br />
|-<br />
|13:30 - 13:55<br />
|[[ClusterFeatures#Export_snapshots_to_other_sessions|Snapshot Cloning]]<br />
|Koichi Suzuki<br />
|-<br />
|13:55 - 14:20<br />
|[[ClusterFeatures#XID_feed|XID feed for clones]]<br />
|Koichi Suzuki<br />
|-<br />
|14:20 - 14:45<br />
|[[ClusterFeatures#Modification_trigger_into_core_.2F_Generalized_Data_Queue|General Modification Queue]]<br />
|Itagaki Takahiro, Jan Wieck, Marko Kreen<br />
|-<br />
|14:45 - 15:10<br />
|[[ClusterFeatures#DDL_Triggers|DDL "triggers"]]<br />
|Jan Wieck<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|15:10 - 15:25<br />
|Tea break<br />
|<br />
|-<br />
|15:25 - 15:55<br />
|[[SQL/MED]]<br />
* Including: [[ClusterFeatures#Function_scan_push-down|function scan push-down]]<br />
|Itagaki Takahiro<br />
|-<br />
|15:55 - 16:20<br />
|Status report on Modules<br />
|Dimitri Fontaine<br />
|-<br />
|16:20 - 16:45<br />
|In-place upgrade with pg_migrator progress<br />
|Bruce Momjian<br />
|-<br />
|16:45 - 17:00<br />
|Any other business<br />
|Dave Page<br />
|- style="font-style:italic;background-color:lightgray;"<br />
|17:00<br />
|Finish<br />
| <br />
|}<br />
<br />
== Minutes ==<br />
<br />
= 2010 PostgreSQL Developer Meeting =<br />
<br />
Ottawa, Canada<br />
<br />
Present: Tatsuo Ishii, Andrew Dunstan, Bruce Momjian, David Fetter, Jeff Davis, Itagaki Takahiro, Koichi Suzuki, Josh Berkus, Dave Page, Dimitri Fontaine, Marko Kreen, Michael Meskes, Joe Conway, Josh Tolley, Greg Sabino Mullane, Selena Deckelmann, Stephen Frost, Robert Treat, Robert Haas, Magnus Hagander, Kohei Kaigai, Heikki Linnakangas, Tom Lane, Jan Wieck, Oleg Bartunov, Teodor Sigaev, Marc Fournier, Greg Smith, Greg Stark, Peter Eisentraut (via Skype)<br />
<br />
== Review of The 9.0 Development Process ==<br />
<br />
How did the commitfest work? Do we feel that the process worked in general, do we like Robert's CF application? What other parts of the process should we improve?<br />
<br />
David Fetter commented that the writable CTE patch went through more than one CF without adequate feedback, and the patch got rejected. Should we not allow things to be bumped, or not bumped twice? Still listed as open on November Commitfest. RH thinks feedback was provided but it might not have been very clear. It ''was'' reviewed more than once. RT: maybe we shouldn't be so quick to bump people in the last CF. Writeable CTE wasn't bumped until Feb. 10. Part of the issue is that it's a very complicated patch.<br />
<br />
JB feels that integration/testing needs to be more structured. Still amorphous. If we had more structure, maybe it would go faster. RH we had a lot of open items, we closed them and released Beta1. Agrees that we need more concrete criteria. BMomj: we end up with a pile of really hard problems which we don't know how to fix. Even now can't fix max_standby_delay. Needs to be a fire under someone. Open items list is a big win, but doesn't show the scope of the problem. Didn't have anything on open items list until this AM for Beta. Need to reconstitute list.<br />
<br />
If stuff is on the open items list it stops release. DF: maybe we should have ratings of complex/not? Perhaps we need release manager to keep up on open items etc.? Or Beta manager. Not everyone knows what every open item is. Someone should track the list and see status. Need list to know what to work on. JB would like to put stuff on the open items list. If there's a thread on hackers put it on the list.<br />
<br />
How do we get all the big patches in the first commitfest or second instead of last? Assumes people are working while releasing. Why didn't HS get into the first commitfest? Post-CF, prerelease is long and delays development, people take a vacation for 6 months. Also the CF reviews were not very good for big patches. CFs worked well for small/medium patches. But for big patches not so great. KNNGiST and WriteableCTE not so great. HS at least didn't get into last commitfest.<br />
<br />
How much of a problem is this issue going to be for 9.1? Do we have anything that large? Synchronous Replication. SQL/MED?<br />
<br />
Josh & Selena will track open items and make sure they get listed or tracked and resolved.<br />
<br />
== Priorities for 9.1 ==<br />
<br />
See [[PgCon_2010_Developer_Meeting#Development_Priorities_for_9.1|priority grid]] below.<br />
<br />
== Timeline for 9.1 ==<br />
<br />
Treat: Release in July, have an immediate commitfest of pending stuff. Will we release in July? If we're late, do we want to drop a commitfest and have shorter cycle? Maybe the development cycle should go, even if the release is delayed? Lane doesn't think we have enough manpower for that.<br />
<br />
Issue is that people are waiting 6 months to resume development. Are there enough reviewers, though? Maybe we could have a reviewfest. Haas thinks that we have manpower. Berkus likes the idea of a reviewfest, new people. Haas says that we could commit stuff or at least put it "pending commit". Smith says that pruning patches would be valuable. What's the main bottleneck of people? Maybe Kevin could run it.<br />
<br />
We would need to branch first. Which would involve backpatching. Branch on July 1, first CF on July 15? Or 15 and August 1? Tom Lane: if we're not close to releasable by July 1, then it's not feasible. Frost would like to have reviewfest in June. We could ask for reviews right now. RRRs need more direction. Selena will help.<br />
<br />
In the future, do we want to start earlier? We should get more people to help with getting to beta. Get people on open items list. Put it on the commitfest app? Magnus: but that makes it closer to a bug tracker. Haas: cycle of work is different for open items. No, will use wiki instead. Next year we'll have an open items app.<br />
<br />
Doing early branching will also help with bitrot. And will help with people's work schedules.<br />
<br />
Plan is to start 9.1 development on July 15, and only delay if things blow up.<br />
<br />
Alpha releases unanimously good. We might want to branch them differently. Downloads weren't huge, 10s or maybe 100 per alpha. But practice found issues with packaging, build scripts, etc. Maybe we shouldn't create branches for them, though. We should just tag it. We just wanted it to say the right name. This is probably fixed. So, for 9.1 we'll have a tag for the tarball and not a branch. Discussion of checkout/tag/branch details ensued.<br />
<br />
CFs need to have enough reviewers. Need to recruit more? Need to make it clear what's in it for reviewers. Reviewers should be nominated for minor contributors.<br />
<br />
Actions: <br />
* Selena to get reviewers to start now.<br />
* Branch on July 1, CF July 15.<br />
* Announce as plan for schedule.<br />
* No more branching for alphas.<br />
<br />
== Performance QA/Performance Farm ==<br />
<br />
Last year we took this as an issue at the meeting. Holdup was pgBench needed overhaul; results were useless on Linux. New pgBench should resolve those problems. Got something in pgBench tools which tries to figure out number of threads. Other thing which has been moving along well is benchfarm, and how should systems be set up to give reasonable performance. Greenplum has nice utility called gperftest, people need to test hardware before running pgbench. Nobody will let us benchmark high-end machines and talk about the results. Smith has some new high-end machines to test performance results.<br />
<br />
Smith: we are ready to write a spec for a performancefarm client. Need to build client for this. Frost has an intern to work on Postgres stuff, who will be working on the performance farm client for 8 weeks.<br />
<br />
Performancefarm also needs to run a battery of individual operations for performance regressions. Also needs to run a quick hardware/OS test for comparability. Need a general framework; maybe we'll eventually add DW test or TPCH. <br />
<br />
Why do we keep the same dependency restrictions as buildfarm? It's easier to get clients that way. If we can tell people that they can just add the PerformanceFarm to the buildfarm, it's easier. Will go to assembled tool very soon. Data collection will start with 9.0 because of old pgbench. Biggest thing is to notice if someone's patch torpedoes performance.<br />
<br />
Propose that machines for the PerformanceFarm be named after plants. ;-)<br />
<br />
What about replication performance? Too big to take over.<br />
<br />
Actions: Stephen's intern to develop PerformanceFarm application. Will need help from Dunstan/Drake etc.<br />
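<br />
For concreteness, the kind of run such a performance-farm client would script can be sketched with stock pgbench (the scale, client, and duration values below are placeholders; -j is the thread option from the 9.0 pgbench rework mentioned above):<br />
<br />
```shell
# Placeholder values throughout; assumes a running cluster and an
# existing database named "pgbench".
pgbench -i -s 10 pgbench          # load a scale-factor-10 dataset
pgbench -c 8 -j 4 -T 60 pgbench   # 8 clients over 4 worker threads, 60 seconds
```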
<br />
== Advanced Security Features ==<br />
<br />
KaiGai's Presentation <link?><br />
<br />
We try to load something externally to make access control decisions. Row-level access controls have a number of issues. <br />
<br />
PostgreSQL currently has logic & access controls in the same place. (1) rework external check using same flow. (2) add label support. (3) add SELinux support. New method will have clear separation between Postgres and SELinux, possibly using a loadable module. <br />
<br />
Rework of access controls needs to do all of the access control checks at once instead of one at a time with query in between. Need to do one object at a time because otherwise it's too big. That way patches are only 200-500 lines each.<br />
<br />
Finally, add security labels to objects.<br />
<br />
The concern with the rework was that moving all of the security checks into a separate area would require that area to have knowledge about everything. Haas: need to provide a clean interface to security providers, but not by changing huge amounts of code. Heikki: it's not that big, it's fairly mechanical. <br />
<br />
Currently we check some basic things (like does the table exist) and later we check fine-grained permissions. Completely isolating it not possible. Locks for one thing. Also it's difficult to have a clean API because the API needs to know about everything. Kaigai says that generalized interface isn't necessary, Linux has had to add to the API with each new security provider.<br />
<br />
Why can't we put calls in the current aclcheck? Too low-level, don't pass enough information to them. We could pass more. But if we have the OID, we can look up all of the class information. Right now we have duplicate permissions checks all over the code. And the checks we need to do are not necessarily the same checks which SE wants to do.<br />
<br />
Smith: what users want isn't necessarily what we have in the patch. Maybe we should just build a subset of functionality, a lot of people don't care about DDL etc. We could implement only SELECT security, it would make the patch more digestible. All permissions checks for SELECT are all in one place. Or DML only.<br />
<br />
Does the information provided supply enough? It has to be because it's the first stage. It's basically the information the user entered. <br />
<br />
Is DML-only enough? Will it leak? Of course. Anyway, it's a useful simplification. Smith says 95% of use cases are solved by that. Tabled discussion of all of the ACL checks. Kaigai says that checks are the same for DML and DDL, but others do not agree.<br />
<br />
Security Label discussion. Access control decisions operated by Subject, Target, Action. Label replaces Target. Syntax introduced, ALTER ... SET WITH SECURITY LABEL, SECURITY LABEL TO label. Simplified suggestion, just add seclabel[] text to catalogs. But wastes space and hurts performance. <br />
<br />
Should SELinux be in core or be loadable module? <br />
<br />
Actions: Kaigai, Stephen, Smith, etc. to get together at pgCon and hash out some more issues.<br />
<br />
== CVS to GIT ==<br />
<br />
Its probably clear that we should change to a new VCS, and it should be GIT. No disagreement. <br />
<br />
What are the gating factors to moving now? Let's make a decision to do it, and when and we'll fix the issues. Problem with buildfarm has been solved. Buildfarm now runs git. Building Git on any older platform is impossible; bad make files. Getting all buildfarm members running Git wouldn't be possible, but we can run CVS mirror for older ones.<br />
<br />
We have a checklist on the wiki already for switching.<br />
<br />
Most buildfarm members will run either; it's a config item. We'll track which ones are using the emulator. <br />
<br />
Older versions may not build identically. Magnus claims that it's been fixed. How much do we care about old tags? There are still a couple of bad files but they're minor. Do we still have old issues with CVS? Marc says they're fixed, shouldn't show up in Git history.<br />
<br />
Are commit e-mails an issue? No. But e-mails will look different. Tom wants them to just work the same. <br />
<br />
We don't need to solve technical issues here. Just pick a date. We'll know when we're doing it and that everything will suck for a month afterwards. Will need to be a low-stress time for the project, between commitfests. Tom isn't sure how to apply commits across multiple branches. Discussion of Git details hashed out.<br />
<br />
Two issues: sheer space usage. Second, management of commits. But these are not serious problems. Andrew has checklist. Will need to test stuff and decide how to do specific stuff. Suggestion on date: after 2nd commitfest. No, halfway after first commitfest. No, immediately after first commitfest ... August 20th or similar.<br />
<br />
We will have git super-master which synchs to git.postgresql.org. Can do receive hooks. Have we considered using github? Github should not be canonical source, in case they go away. Can't do postcommit hooks on Github. People can just do both. Forking Postgres repo puts you near their limit. Put off Github questions.<br />
<br />
Issue: what about the name? People will need to reclone, will be part of suckitude. Rename old repo and create new repo. Where will secret master repo be? Maybe Canova.<br />
<br />
Mapping usernames onto e-mail addresses could be a pain. Maybe we should standardized onto committer@postgresql.org. Committers should pick names before conversion.<br />
<br />
Discussion about commit messages, merges, commits, etc. ensued.<br />
<br />
Action: <br />
* Magnus to set up emulator.<br />
* Andrew to publish checklist of how to set up your Git<br />
* Move to Git August 17-20: Magnus, Haas, Dunstan. Frost will be out.<br />
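<br />
The "just tag it, no branch" alpha scheme from the discussion above can be sketched with plain git in a throwaway repository (all names below are hypothetical, not the project's actual tags):<br />
<br />
```shell
# Sketch only: tag an alpha directly on the development tip instead of
# creating a release branch, then build a tarball straight from the tag.
set -e
rm -rf /tmp/alpha-demo
git init -q /tmp/alpha-demo
cd /tmp/alpha-demo
git config user.email demo@example.com
git config user.name Demo
echo "development tip" > README
git add README
git commit -q -m "development tip"
git tag REL9_1ALPHA1                  # no branch is created
git archive --prefix=postgresql-9.1alpha1/ \
    -o postgresql-9.1alpha1.tar REL9_1ALPHA1
git tag --list
```

Because no branch exists, there is nothing to backpatch onto; the tarball is reproducible from the tag alone.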
<br />
== Lunch ==<br />
<br />
== Clustering ==<br />
<br />
=== Snapshot Cloning ===<br />
<br />
Koichi: had meeting in Tokyo, and made a list of core APIs which clustering projects could use. Snapshot cloning is one such, plus it's useful for parallel query and parallel pg_dump. First use snapshot cloning to enforce consistent view of the database. Has already implemented this for PostgresXC. The same thing could be applicable for single PostgreSQL. It is a very simple implementation, and should not produce resource conflicts.<br />
<br />
For parallel pg_restore, maybe snapshot cloning will not be sufficient. Cloning the snapshot for read-only transactions is simple, not for write transactions. <br />
<br />
Smith: Using this for parallel query also works for read-only cloning.<br />
<br />
Very useful for dumping partitioned tables, with one backend for each partition. <br />
<br />
Added API to libpq. But shouldn't this be a server-side command? For cluster usage, it was useful for it to be in libpq. RH: one idea is a function we could call, and the shared snapshot would use a "cookie". Joachim W. wrote a patch with publish/subscribe. Needs to be all server-side. <br />
<br />
Tom has suggestion for simpler implementation, without locks. That is, you just need to have same snapshot start, not shared snapshot. Snapshot would die once the original transaction was gone. Koichi: this is not a problem.<br />
<br />
Tom: maybe we could just use a prepared transaction, which would keep the snapshot valid. Proposing to begin with read-only implementation.<br />
<br />
Action: Koichi to extract patch from PostgresXC and submit.<br />
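<br />
For reference, the read-only variant settled on here is essentially the interface that later shipped in PostgreSQL 9.2 as pg_export_snapshot(); a minimal two-session sketch (the snapshot identifier shown is illustrative):<br />
<br />
```sql
-- Session 1: open a transaction and publish its snapshot.  The snapshot
-- stays valid only while this transaction remains open.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();   -- returns an identifier, e.g. '00000003-0000001B-1'

-- Session 2: adopt the exported snapshot so both sessions see the exact
-- same view of the database (e.g. one dump worker per partition).
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
COMMIT;
```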
<br />
=== XID Feed ===<br />
<br />
PostgresXC needs to have a transaction run on multiple servers in the same cluster. The XID is needed so that you can have the same transactions. Will also be useful for parallel write operation, but that's really complicated. Parallel backend needs to be assigned same XID, but locks, resource conflicts.<br />
<br />
Heikki: let's start with parallel read queries.<br />
<br />
JB: parallel write on one server is a different feature than XID feed for clustering.<br />
<br />
Multiple backends share the same XID so they can share the same snapshot. If you're doing a multimaster update across multiple servers so you can maintain serialization. Stark explains multi-server deadlock situation.<br />
<br />
The XID is not the issue, it's the commit order. But communicating the xids means that you don't need to communicate more data to the servers. Just maintaining transaction IDs isn't enough, we need to maintain commit/abort info. If you want a snapshot which is valid on both nodes, you'd have to lock the procarray on both. You'd have to have a single global transaction manager controlling commits.<br />
<br />
What is the core feature here? You might want to make a specific instance of Postgres the global transaction manager. Or you might make one postgres a consumer of snapshots. Heikki: you could interrogate each node about what transactions were running at the time of the snapshot. Some discussion without agreement.<br />
<br />
Koichi explains how snapshots are distributed in PostgresXC, they receive them with XID. There's no negotiation between nodes. What stability would this affect with core Postgres? Vacuum and analyze need their own GXID. <br />
<br />
What is the feature: getting an XID and snapshot from PostgreSQL. Is this useful for core Postgres? Does it work for cluster systems other than PostgresXC? Would be useful for all synchronous multi-master replication, like Postgres-R, or any distributed database. Should be done as a "hook". Not really different from two-phase commit, but not testable without an external manager, which is the main problem. How could you test it?<br />
<br />
What other things do you need? What other hooks would we need in core to support GTM and other clustering functionality? If we had SQL/MED working, you could export XIDs to remote tables. But we don't have that yet.<br />
<br />
A hook will be fine.<br />
<br />
Action: Koichi to come up with proposed patch design<br />
<br />
=== General Modification Queue ===<br />
<br />
Marko: one use case is a transactional queue. Have some sample implementations in pgQ and Slony. Two different strategies: Slony/Londiste, and Josh wants to replicate data to external non-PostgreSQL tables. Josh is mainly concerned about write overhead, but no way around WAL.<br />
<br />
What is not solved by current LISTEN/NOTIFY? This has real potential for improvement. Both Slony and pgQ rely on being able to filter out blocks of events and a serialized sequence of individual events. Problem is the eventID sequence number cannot be cached, which causes painful overhead. Both systems come up with insert/update/delete statements which go by index scan.<br />
<br />
If we can support general functions where a trigger can hand in old and new tuples and the receiver can get something which allows it to pull new data. Seems like commit order is the issue. Why do you need a sequence which can't be cached? If you knew what order they committed ... you wouldn't need a global counter. Jan isn't sure this makes sense for core because of lack of version independence.<br />
<br />
If you had a stream of commit information, you'd just have to buffer it. But that could work. The real missing piece is a commit ordering stamp, which the database should supply for you. This was a requirement of Postgres-R as well, they need to know what order to apply the writeset in.<br />
<br />
We could use the LSN of the commit record as that number. In the CLOG, for a range of XIDs, we have some LSNs. But it's not enough information. It sounds like all that's really needed is to have a way to grab LSN numbers. Maybe write it to a separate file.<br />
<br />
Commit-order table would need to be truncated. The clients have to send message about being done with it. Do we want to call gettimeofday while holding walinsertlock? Tom: we already are, but it's not exact enough. LSN plus approximate timestamp would give you order.<br />
<br />
Clients would need to look and say grab all transactions between one LSN and another.<br />
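As a rough illustration, using function names from later PostgreSQL releases rather than anything agreed at this meeting, the LSN-plus-commit-timestamp idea looks like this:<br />

```sql
-- Hedged sketch using later-era functions; names are modern
-- PostgreSQL, not part of the proposal above.
SELECT pg_current_wal_lsn();          -- current end-of-WAL position

-- LSNs are ordered and subtractable, so "everything committed
-- between LSN a and LSN b" is a well-defined request:
SELECT pg_wal_lsn_diff('0/3000060', '0/3000000');  -- byte distance

-- With track_commit_timestamp = on (a later feature), per-transaction
-- commit times can be recovered as well; some_table is a placeholder:
SELECT pg_xact_commit_timestamp(xmin), id FROM some_table;
```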
<br />
Action:<br />
* Develop specification for commit sequence / LSN data<br />
<br />
=== DDL Triggers ===<br />
<br />
Jan: it's a feature we've been missing for at least a decade. Jan starting to work on it, but DDL code is very messy. It's in tcop/utility.c, function ProcessUtility. The mess is that while the function gets a query string, some calls don't put a real query string in there. <br />
<br />
Purposes include enforcement of complex CREATE requirements. Also replicating DDL to replicas. Wouldn't it be better if the data were passed to the trigger as some kind of structured data rather than a query string? Take the node structure of the utility statement and create a query string which can be passed, as well as passing the node structure using nodeToString.<br />
<br />
We haven't exposed nodeToString because it's not a stable API. But generally changes there indicate changes in features. But if we could give the trigger an object name. Maybe we could pass before-and-after snapshots.<br />
<br />
How do Oracle and other systems get this data?<br />
<br />
Also there's an issue that some utility statements call other utility statements. <br />
<br />
nodeToString exposure was also vetoed for other reasons. Slony and other systems can take a tree instead. Maybe we should already have a utility function. We already theoretically have a hook, but it's not being used. And also still a problem with recursive calls.<br />
<br />
pgAdmin wants a notification for changes. Would need some notification with data about object changed. But just object changed would be enough. We could also build up a set of events for DDL changes over time based on the tree ... we can start with just objectype and objectID. And type of modification: create, drop, alter.<br />
<br />
ProcessUtilityHook is there, but the problem is how it's exposed to the user. Hooks aren't used consistently and you don't know who has set it in what modules. Several people are already using it. Ordering becomes a problem. People don't want to use the hook.<br />
<br />
We also want a userspace implementation. Like maybe a trigger.<br />
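For comparison, the userspace form this eventually took is the event trigger (PostgreSQL 9.3 and later), which exposes roughly the object type, identity, and command tag wished for above; a hedged sketch:<br />

```sql
-- Hedged sketch: event triggers as later shipped, not the design
-- being specified at this meeting. Names here are illustrative.
CREATE FUNCTION log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    r record;
BEGIN
    -- pg_event_trigger_ddl_commands() reports each DDL command in
    -- the current statement, covering the recursive-call case.
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        RAISE NOTICE 'DDL: % on % (%)',
            r.command_tag, r.object_identity, r.object_type;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER audit_ddl
    ON ddl_command_end
    EXECUTE FUNCTION log_ddl();  -- EXECUTE PROCEDURE on older releases
```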
<br />
Action:<br />
* Wiki page to be updated with spec by Jan et al.<br />
<br />
== Status Report on Modules ==<br />
<br />
Slides from Dimitri<br />
<br />
Many issues and topics. Talking today about dump/reload support. If you dump and restore, you don't want to restore the extension's objects individually. We want to support any source language. We want to support custom GUCs and versions in extensions. We also want upgrading facilities.<br />
<br />
We are not going to talk about schema. We are not going to talk about source level packaging, ACLs, PGAN or OS packaging and distribution. Example of extensions/foo/control. Should be in usr/share. Control file will have name, version, custom_variable_classes.<br />
<br />
Then you can just do install extension foo, drop extension foo. pg_restore would call install extension foo and not its objects.<br />
<br />
Need dependencies on the extension ID in pg_depend so we know what belongs to the extension.<br />
<br />
Used name = value because we already know how to parse them in the control file. pg_dump will be easy, we will know how to exclude based on dependencies. uninstall.sql files will be replaced by this. <br />
<br />
What do we do about user-modifiable tables which are associated with a module? This is similar to how debian deals with config files. Or we could allow install files have items which aren't tracked as part of the module, and pg_restore would need to know about that.<br />
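A hedged sketch of how this question was later resolved, via the pg_extension_config_dump() mechanism that shipped in 9.1, called from an extension's install script:<br />

```sql
-- Hedged sketch: the mechanism that later shipped for user-modifiable
-- extension tables; table and extension names are illustrative.
CREATE TABLE my_config (key text PRIMARY KEY, value text);

-- Tell pg_dump to dump this table's user data even though the table
-- itself belongs to the extension (second argument is a WHERE filter;
-- the empty string means "all rows").
SELECT pg_extension_config_dump('my_config', '');
```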
<br />
Should extensions have different versions or different names per version? The install script is just a sql file, you can add a DO script. Debian handles this by checking if the configuration is the default and replacing it, or failing over to the user afterwards. <br />
<br />
Need license information in the control file.<br />
<br />
We probably just need to punt configuration tables to version 2.<br />
<br />
Will this help pg_upgrade? Maybe. Right now you have to migrate shared libs yourself. Will fix some cases.<br />
<br />
Action:<br />
* Dimitri to do patch<br />
<br />
== SQL/MED ==<br />
<br />
Slides by Heikki.<br />
<br />
Heikki: in EDBAS, we already have foreign tables. CREATE FOREIGN TABLE syntax. EDB has libpq, Oracle and ODBC. Shows slide with join plan; currently it materializes locally.<br />
<br />
Have to decide what plans you can push to the remote database. Not all remote sources can handle all structures, including functions, joins etc. Even between PostgreSQL ruleutils.c is different for each version.<br />
<br />
Proposed FDW planner API. Pass parse tree, needs to say what it can take. How do we not duplicate the entire planner in the API. EnterpriseDB has been working on this but company has not committed to contributing it.<br />
<br />
Jan mentioned that at least a wrapper has to take a scan. <br />
<br />
Itagaki: didn't know EDB was also working on SQL/MED. Code is probably completely different. There are four issues; first, what features should it consider? Currently considering dbLink and COPY FROM. Maybe there should be other features, like GDQ. <br />
<br />
Josh: what about PL/proxy? SQL/MED should support the functionality of PL/proxy. <br />
<br />
What is the best design to support access methods for tables? Postgres AMs for indexes, we need AMs for FDWs. Update is a problem, may start with select and insert.<br />
<br />
How to push WHERE conditions to FDW? Itagaki's WIP code uses internal tree, requires C code in FDW to parse. Might be unstable for that reason. External server might not be SQL server, so passing SQL isn't that useful.<br />
<br />
FDW vs. SRF ... can we merge these somehow? Current implementation of functionscan is to materialize. We didn't do value-per-call because it was difficult with PL/pgSQL. So we just materialized it.<br />
<br />
Heikki suggested that we don't use the FDW API from the SQL committee because nobody uses it. Supplies function for reaching into planner but can't imagine how it would work. EnterpriseDB implements pipeline.<br />
<br />
Also, an issue is cost estimation. How do we know how much it would cost. Statistics on remote tables? We could store them. It would also be nice to have joins take place on the remote server, but that's version 2 or 3. <br />
<br />
Also what about indexes on the remote server. One implementation in Japan has CREATE FOREIGN INDEX. Creating definitions locally of remote schema are very useful. DB2 implemented this and had commands to define foreign objects.<br />
<br />
Maybe we don't need to be really smart about this in the first version. But people are asking for this. Pass whole query to remote server. Wouldn't work for joins, but would work for union query. Defining an index on a FDW doesn't make much sense since we don't know what the costs are like over there. We should just have a function in the API.<br />
<br />
How would we recognize that we want to do the join on the remote server. We just need costs from the FDW. But we could also keep them in pg_stats. FDW might be used to access complex database like Infobright or Bigtable. <br />
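For reference, the syntax this work eventually produced (SQL/MED DDL in 9.1, postgres_fdw in 9.3); all names below are illustrative:<br />

```sql
-- Hedged sketch: foreign tables as the feature eventually landed,
-- not the WIP designs compared at this meeting. Server, user, and
-- table names are illustrative.
CREATE EXTENSION postgres_fdw;

CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'sales');

CREATE USER MAPPING FOR CURRENT_USER SERVER remote_pg
    OPTIONS (user 'report', password 'secret');

CREATE FOREIGN TABLE orders (id int, total numeric)
    SERVER remote_pg OPTIONS (table_name 'orders');

-- WHERE conditions (and, in later releases, joins and aggregates)
-- are pushed to the remote server where the FDW can prove it safe.
SELECT count(*) FROM orders WHERE total > 100;
```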
<br />
Would EDB contribute a patch? Or just the rights for Heikki to steal bits? Patch from EDBAS would not work. Does the EDB patch help Itagaki? EDB might get Itagaki access to the code.<br />
<br />
Action:<br />
* EDB to decide on opening code or not<br />
* Review Itagaki's git repo code: Heikki, Peter<br />
* Itagaki to keep working on API -- what about Peter?<br />
<br />
== pg_migrator ==<br />
<br />
Bruce: Just fixed a bug in pg_upgrade for XID wraparound. Added docs. Migrator tries to rebuild a plane in the air. Bruce feels like he has a swiss army knife with pg_upgrade. Issue with FrozenXIDs in template0 fixed. Still a work in progress, which is why it's in contrib, but bugs are fixable.<br />
<br />
Will still have issues with page format, binary format changes. Haven't had those for a while, may not need them, and a dump & restore every 4-5 years is an improvement. Bunches of people have used it to migrate. The tool makes sure that it doesn't corrupt your data, it goes back if it hits an error.<br />
<br />
Stark: In the past we've had a chicken-and-egg issue if people wanted to make changes to the data format, we'd need a conversion function. And there's no hooks in pg_migrator. Could slow down the pace of adding features. We should add the functionality to do the conversion now.<br />
<br />
pg_upgrade from EDB has hook for COPY mode for conversion. And then what's the point of pg_migrator? If you don't have to rebuild indexes. But a lot of times you'd have to. But to have it in place so that it's not a hurdle for a data change. If we're going to do that we need to do it for 9.1 early in the cycle.<br />
<br />
Dump and reload is impossible for Smith's customers. page checksum thing is a good example. Read-on-conversion is a big performance hit. Background process to convert all old data. <br />
<br />
Also issue around internal representation of data type change. We either have to convert it, or have versioned data types.<br />
<br />
Is binary data conversion really faster? Read back one version, and convert while running. Something which can read one version level back and can convert is the only viable way. Would have a daemon to convert data. What about split pages, bigger page headers.<br />
<br />
CRC would be a good test for this. It's a small patch but has a lot of common upgrade issues. And it would be only one patch so we could put it off for another version if we had to. Would like to get this started for 9.1. CRC requires dealing with the split pages issue.<br />
<br />
Really tricky stuff includes changing indexing structure. Would need concurrent rebuild for unique indexes and primary keys. REINDEX concurrently was a deadlock concern. <br />
<br />
What about writing? Convert-on-read. <br />
<br />
Action:<br />
* Document what the plan is to do a conversion upgrade (Greg Smith)<br />
* Copy Zdenek's code (Greg Smith)<br />
<br />
== Other Business ==<br />
<br />
Treat: Mammoth Replicator into Core? Code has been BSD-licensed. Has features which don't exist in Hot Standby. Code is pretty big. Alvaro would say that it's not in great shape to contribute at this point.<br />
<br />
Has log-based replication including per-table. Has its own logs, and has binary vs. SQL replay. Not trigger-based. <br />
<br />
Real question is would we consider putting any replication solution into Core? Not until we've finished digesting HS/SR. Replication is one word for a dozen different solutions for 3 dozen problems. We've accepted one which solves some problems. Page thinks we should consider each case on its merits.<br />
<br />
Question is what parts of Mammoth make sense to be in core. Bruce thinks that mammoth is so tied into the backend that we couldn't accept it. It's too complicated. It doesn't sound like Mammoth offers enough functionality to make it worth it.<br />
<br />
Does mammoth need to be in core? It has grammar changes. We now have better ideas of what replication requirements are; we may have more commonalities in core in the future. Part of the reason we have so many is that we don't have one in Core.<br />
<br />
We'd consider more replication in core, but maybe not Mammoth.<br />
<br />
Action:<br />
* None<br />
<br />
== Other Business ==<br />
<br />
Peter says we're almost compliant with SQL 2008. Is it of PR value to comply with the remaining random stuff? People don't think so. <br />
<br />
What about Case Folding? We probably don't want to fix that. Wasn't on Peter's list, which is just features. Case Folding would break a lot of stuff. Thought Peter was already doing that with a couple of features per version.<br />
<br />
Compliance is only of moderate value. Is there a point of implementing the features on Peter's list?<br />
<br />
Summer of Code: how is it going?<br />
<br />
Haas has concern. His student is working on Materialized Views. Has proposed a very basic implementation. Might not be able to do even that. We can fail people.<br />
<br />
Selena just updated the open items on the mailing list. A lot of items were not closed on the mailing list.<br />
<br />
What about max_standby_delay? Tom wants to go back to just a boolean. What about Tom's original proposal? Can't really do it. Tom wants to remove the time dependency. What are the issues with max_standby_delay? Idle time on the master uses up time on the standby. Plus NTP and keepalives and some other stuff.<br />
<br />
Major discussion on max_standby_delay ensued.<br />
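The eventual resolution split the setting in two, separating streaming from archive recovery. A hedged sketch, using the later ALTER SYSTEM syntax:<br />

```sql
-- Hedged sketch: the settings as they later shipped, replacing the
-- single max_standby_delay debated above. ALTER SYSTEM itself is a
-- later (9.4) convenience; editing postgresql.conf works the same way.
ALTER SYSTEM SET max_standby_streaming_delay = '30s';
ALTER SYSTEM SET max_standby_archive_delay = '60s';
SELECT pg_reload_conf();
```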
<br />
== Development Priorities for 9.1 ==<br />
<br />
{| border="1" cellpadding="4" cellspacing="0"<br />
!Feature<br />
!Developers<br />
!Notes<br />
|-<br />
|MERGE/UPSERT/REPLACE<br />
|GSoC with Greg Smith/Simon<br />
|Issues with predicate locking<br />
|-<br />
|Synchronous replication<br />
|Fujii/Zoltan/Simon<br />
|Review by Heikki<br />
|-<br />
|Improve Hot Standby/Streaming Rep usability<br />
|Simon/Fujii/Greg Smith<br />
|Review by Josh Berkus<br />
|-<br />
|Snapshot cloning API<br />
|Koichi<br />
|Sample app is parallel pg_dump<br />
|-<br />
|Locale/encoding <br />
|<br />
|per column/per operator collation<br />
|-<br />
|User exposed predicate locking<br />
|Simon<br />
|Interaction with serialization<br />
|-<br />
|Serializable<br />
|Kevin Grittner<br />
|<br />
|-<br />
|pg_upgrade in core<br />
|Bruce<br />
|<br />
|-<br />
|External security provider<br />
|KaiGai<br />
|<br />
|-<br />
|Row-level security<br />
|KaiGai<br />
|<br />
|-<br />
|Writeable CTEs<br />
|Marko Tiikkaja <br />
|<br />
|-<br />
|SQL/MED<br />
|Itagaki<br />
|<br />
|-<br />
|Generalized inner-indexscan plans<br />
|Tom Lane<br />
|<br />
|-<br />
|Re(?)plan parameterized plans with actual parameter values<br />
|Tom Lane<br />
|<br />
|-<br />
|COPY as a FROM clause<br />
|Andrew Dunstan<br />
|<br />
|-<br />
|Pipelining/value per call for SRFs<br />
|Joe Conway<br />
|<br />
|-<br />
|Partitioning implementation<br />
|Itagaki<br />
|<br />
|-<br />
|Index only scans<br />
|Heikki<br />
|<br />
|-<br />
|Global temp/unlogged tables<br />
|Robert Haas<br />
|<br />
|-<br />
|Inner join removal<br />
|Robert Haas<br />
|<br />
|-<br />
|Extensions<br />
|Dimitri<br />
|<br />
|-<br />
|Range types<br />
|Jeff Davis<br />
|<br />
|-<br />
|Materialized views<br />
|GSoC+Robert Haas<br />
|<br />
|-<br />
|JSON data type<br />
|GSoC<br />
|<br />
|-<br />
|DDL Triggers<br />
|Jan<br />
|<br />
|-<br />
|Leaky view security<br />
|<br />
|<br />
|-<br />
|KNNGist<br />
|Teodor<br />
|<br />
|-<br />
|Performance farm<br />
|Greg Smith<br />
|<br />
|-<br />
|Git<br />
|<br />
|<br />
|-<br />
|PGAN<br />
|David Wheeler<br />
|<br />
|}<br />
<br />
<br />
[[Category:PostgreSQL Events]]</div>Greghttps://wiki.postgresql.org/index.php?title=IRC2RWNames&diff=10885IRC2RWNames2010-05-20T03:01:30Z<p>Greg: Add digicon</p>
<hr />
<div>=== List of IRC nicks with their respective real world names ===<br />
<br />
You can find many PostgreSQL users and developers chatting in [irc://irc.freenode.net/postgresql #postgresql on freenode]. Here's more information about some of the regulars there:<br />
<br />
{| border="1"<br />
|-<br />
!Nickname || Real Name<br />
|-<br />
|ads || Andreas Scherbaum<br />
|-<br />
|agliodbs, aglio2 (freenode), jberkus (oftc) || Josh Berkus<br />
|-<br />
|ahammond || Andrew Hammond<br />
|-<br />
|alvherre || Alvaro Herrera<br />
|-<br />
|andres || Andres Freund<br />
|-<br />
|Assid || Satish Alwani<br />
|-<br />
|aurynn || Aurynn Shaw<br />
|-<br />
|BlueAidan/BlueAidan_work || [[user:davidblewett | David Blewett]]<br />
|-<br />
|bmomjian || Bruce Momjian<br />
|-<br />
|cbbrowne || Christopher Browne<br />
|-<br />
|crab || Abhijit Menon-Sen<br />
|-<br />
|Crad || Gavin M. Roy<br />
|- <br />
|daamien || Damien Clochard<br />
|-<br />
|DarcyB || Darcy Buskermolen<br />
|-<br />
|davidfetter || David Fetter<br />
|-<br />
|dcolish || [http://www.unencrypted.org Dan Colish]<br />
|-<br />
|dcramer || Dave Cramer<br />
|-<br />
|DeciBull, TheCougar || Jim C. Nasby<br />
|-<br />
|dennisb || Dennis Bj&ouml;rklund<br />
|-<br />
|depesz || Hubert Lubaczewski<br />
|-<br />
|devrimgunduz || Devrim G&uuml;nd&uuml;z<br />
|-<br />
|digicon || Zach Conrad<br />
|-<br />
|dim || Dimitri Fontaine<br />
|-<br />
|direvus || Brendan Jurd<br />
|-<br />
|drbair || Ryan Bair<br />
|-<br />
|duck_tape || Adi Alurkar<br />
|-<br />
|dvl || Dan Langille<br />
|-<br />
|eggyknap || Joshua Tolley<br />
|-<br />
|endpoint_david || David Christensen<br />
|-<br />
|f3ew/devdas || Devdas Vasu Bhagat<br />
|-<br />
|feivel || Michael Meskes<br />
|-<br />
|elein || Elein Mustain<br />
|-<br />
|gleu || Guillaume Lelarge<br />
|-<br />
|gorthx || Gabrielle Roth<br />
|-<br />
|grzm || Michael Glaesemann<br />
|-<br />
|gsmet || Guillaume Smet<br />
|-<br />
|gregs1104 || Greg Smith<br />
|-<br />
|G_SabinoMullane || Greg Sabino Mullane<br />
|-<br />
|HarrisonF || Harrison Fisk<br />
|-<br />
|indigo || Phil Frost<br />
|-<br />
|JanniCash || Jan Wieck<br />
|-<br />
|jconway || Joe Conway<br />
|-<br />
|jdavis, jdavis_ || Jeff Davis<br />
|-<br />
|johto || Marko Tiikkaja<br />
|-<br />
|jurka || Kris Jurka<br />
|-<br />
|justatheory || David Wheeler<br />
|-<br />
|jpa || Jean-Paul Argudo<br />
|-<br />
|jwp || James Pye<br />
|-<br />
|kgrittn || Kevin Grittner<br />
|-<br />
|klando || Cédric Villemain<br />
|-<br />
|larryrtx || Larry Rosenman<br />
|-<br />
|linuxpoet, postgresman || Joshua D. Drake<br />
|-<br />
|lluad || Steve Atkins<br />
|-<br />
|lsmith || Lukas Smith<br />
|-<br />
|magnush || Magnus Hagander<br />
|-<br />
|markwkm || Mark Wong<br />
|-<br />
|mastermind || [[user:mastermind | Stefan Kaltenbrunner]]<br />
|-<br />
|merlinm || Merlin Moncure<br />
|-<br />
|metatrontech || Chris Travers<br />
|-<br />
|miracee || Susanne Ebrecht<br />
|-<br />
|Moosbert || Peter Eisentraut<br />
|-<br />
|neilc || Neil Conway<br />
|-<br />
|oicu || Andrew Dunstan<br />
|-<br />
|okbobcz || Pavel Stehule<br />
|-<br />
|pgSnake || Dave Page<br />
|-<br />
|PJMODOS || Petr Jel&iacute;nek<br />
|-<br />
|Possible || Robert Ivens<br />
|-<br />
|postwait || Theo Schlossnagle<br />
|-<br />
|psoo || Bernd Helmle<br />
|-<br />
|pyarra || Philip Yarra<br />
|-<br />
|raptelan || [[user:Cshobe|Casey Allen Shobe]]<br />
|-<br />
|rhaas || Robert Haas<br />
|-<br />
|RhodiumToad (formerly AndrewSN) || Andrew Gierth<br />
|-<br />
|Robe || [[user:Robe | Michael Renner]]<br />
|-<br />
|rotellaro || Federico Campoli<br />
|-<br />
|rtfm_please || [[IRCBotSyntax]]<br />
|-<br />
|SAS || Stéphane Schildknecht<br />
|-<br />
|scrappy || Marc G. Fournier<br />
|-<br />
|selenamarie || Selena Deckelmann<br />
|-<br />
|SkippyDigits || Sherri Kalm<br />
|-<br />
|Snow-Man || Stephen Frost<br />
|-<br />
|sternocera || Peter Geoghegan<br />
|-<br />
|StuckMojo, MojoWork || Jon Erdman<br />
|-<br />
|swm || Gavin Sherry<br />
|-<br />
|vy || Volkan YAZICI<br />
|-<br />
|xaprb || Baron Schwartz<br />
|-<br />
|xzilla, xzi11a || [[User:Xzilla|Robert Treat]]<br />
|}</div>Greghttps://wiki.postgresql.org/index.php?title=DDL_Triggers&diff=10859DDL Triggers2010-05-19T19:15:19Z<p>Greg: Quick outline from the meeting</p>
<hr />
<div>==DDL Triggers==<br />
<br />
This page is meant to coordinate work on DDL triggers.<br />
<br />
===Use cases===<br />
<br />
* Enforce business logic<br />
** Example: Restrict table names to certain rules (CREATE TABLE trigger)<br />
* Auditing<br />
** Example: Need to know when somebody has dropped a table<br />
* Replication<br />
** Slony, Bucardo, etc. could replicate DDL statements automatically<br />
<br />
===Issues===<br />
<br />
* Use of the existing hook<br />
** Need to support people who cannot use the hook (e.g. PgAdmin)<br />
* Current utility function is recursive: triggers should only fire on first call<br />
* What to export? Start simple with:<br />
** type of object<br />
** OID of object<br />
** name of object<br />
** action<br />
* Use nodeToString or nodeToSql?<br />
* Returning the original string?<br />
* Syntax of the call<br />
* How to store in a system table (map action to function)</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10345Bucardo2010-03-29T13:33:28Z<p>Greg: Try to put things into the new template</p>
<hr />
<div>{{clusteringProject<br />
|overview='''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
|status=Production<br />
|statusdetail=Version 4.4.0<br />
|contact=*[https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general mailing list]<br />
|url=*[http://bucardo.org Bucardo Web site and wiki]<br />
|scalability=Master-slave: high with cascading slaves. Multi-master: two masters only<br />
|readscaling=yes<br />
|writescaling=yes (with multimaster); no/slight inverse (master/slave only)<br />
|procedures=Yes<br />
|parallel=No<br />
|failover=Not automatic<br />
|online=No<br />
|upgrades=Yes<br />
|detach=Yes<br />
|coremod=No<br />
|languages=Perl, Pl/PgSQL, Pl/PerlU<br />
|license=BSD<br />
|complete=Yes<br />
|version=8.1 to 9.0<br />
|summary=Asynchronous cascading master-slave replication, row-based, using triggers and queueing in the database ''AND'' Asynchronous master-master replication, row-based, using triggers and customized conflict resolution<br />
|description=<br />
General model: Asynchronous cascading master-slave and/or master-master. Row-based, uses triggers and LISTEN/NOTIFY.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
|usecase=<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease of things like cache invalidation.<br />
* Partial replication<br />
* Replication on demand (changes can be pushed automatically or when desired)<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
* Slaves can be "pre-warmed" for quick setup<br />
|drawbacks=<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
|sponsors=<br />
|support=Commercial support is available from [http://endpoint.com End Point Corporation]. Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
}}<br />
<br />
==Other Information==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]</div>Greghttps://wiki.postgresql.org/index.php?title=Clustering&diff=10137Clustering2010-03-07T23:45:36Z<p>Greg: Add version</p>
<hr />
<div>== Projects ==<br />
<br />
(alphabetical order)<br />
<br />
* [[Bucardo]]<br />
* [[GridSQL]]<br />
* [[HadoopDB]]<br />
* [[Mammoth]]<br />
* [[Pgpool-II]]<br />
* [[PgCluster]]<br />
* [[PL/Proxy]]<br />
* [[Postgres-2]]<br />
* [[PostgresForest]]<br />
* [[Postgres-R]]<br />
* [[rubyrep]]<br />
* [[SkyTools]]<br />
* [[Slony]]<br />
* [[Tungsten]]<br />
<br />
Work in progress is being tracked at [[Clustering Development Projects]]<br />
<br />
== Existing Overview Docs ==<br />
* [[Replication, Clustering, and Connection Pooling]]<br />
<br />
== Template for information ==<br />
<br />
== Project Overview ==<br />
Please provide a brief overview of your project goals<br />
<br />
== Project Status ==<br />
Current status of development<br />
<br />
'''Status:''' (one of) Design, Prototype, Development, <br />
Beta, Released, Production, On Hold, Retired<br />
<br />
Details of current status<br />
<br />
== Project Contacts ==<br />
Name of developer, or development mailing list<br />
<br />
Preferred method of contact<br />
<br />
Website link<br />
<br />
== General Information ==<br />
<br />
* '''Scalability:''' # servers<br />
* '''Read Scaling:''' yes/no/negative scaling<br />
* '''Write Scaling:''' yes/no/negative scaling<br />
* '''Triggers/procedures''': Yes/No ''(does it support using triggers''<br />
''and procedures in the clustered database?)''<br />
* '''Parallel Query''': Yes/No ''(does it support partitioned data''<br />
''with parallel query across nodes?)''<br />
* '''Failover/HA''': Yes/No ''(are there built-in features for ''<br />
''rapid failover when a node fails?)''<br />
* '''Online Provisioning''': Yes/No ''(does it allow adding new ''<br />
''nodes while the cluster/system is running?)''<br />
* '''PostgreSQL Upgrades''': Yes/No ''(does it support doing ''<br />
''version upgrades to individual nodes?)''<br />
* '''Detached Node/WAN''': Yes/No ''(does it support nodes which ''<br />
''are in intermittent or low-bandwidth connections?)''<br />
* '''PostgreSQL Core Modifications Required''': Yes/No ''(does the system ''<br />
''require a fork or patches on core Postgres to run?)''<br />
* '''Programming Languages''': ''(what programming languages is ''<br />
''the system written in?)''<br />
* '''Licensing''': OSS/Proprietary. ''If OSS, which license.''<br />
* '''Complete Cluster''': Yes/No ''(is this a complete clustering solution,''<br />
''or are additional tools (poolers, etc.) required?)''<br />
* '''PostgreSQL Version''': Which versions does it support?<br />
<br />
== Clustering Model ==<br />
<br />
'''General Model''': (in 2-6 words)<br />
<br />
Paragraph-long description of model<br />
<br />
== Use Case ==<br />
<br />
Please list the things which this clustering project does particularly well.<br />
* Thing 1<br />
* Thing 2<br />
<br />
One paragraph description of the intended use case.<br />
<br />
== Drawbacks ==<br />
<br />
Please list the primary shortcomings<br />
* Problem 1<br />
* Problem 2<br />
<br />
== Project Sponsors ==<br />
<br />
If appropriate, list any organizational sponsors or <br />
large users and contributors to the project here.<br />
<br />
== Support ==<br />
<br />
If available, please list any companies which offer commercial <br />
support for this system.<br />
<br />
== Other Information ==<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10136Bucardo2010-03-07T22:54:18Z<p>Greg: Add use cases</p>
<hr />
<div>== Project Overview ==<br />
<br />
'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
<br />
== Project Status ==<br />
<br />
Status: '''Production'''<br />
<br />
Bucardo has been in production since 2002, and is actively developed using git, a wiki, mailing lists, IRC, and a bug tracker for coordination.<br />
<br />
== Project Contacts ==<br />
<br />
* [https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general] - primary mailing list (preferred method of contact)<br />
* [http://bucardo.org/ Bucardo wiki]<br />
<br />
== General Information ==<br />
<br />
* Scalability: unknown, but potentially very large with cascading slaves. Only two masters are supported.<br />
* Read Scaling: yes<br />
* Write Scaling: yes (two masters only)<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Nothing automatic (although master-master is a form of failover)<br />
* Online Provisioning: No (requires a restart of Bucardo when adding new nodes)<br />
* PostgreSQL Upgrades: Yes (supports version upgrades to individual nodes)<br />
* Detached Node/WAN: Yes (supports nodes with intermittent or low-bandwidth connections)<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: Perl, and uses pl/pgsql and pl/perlu<br />
* Licensing: BSD (simple three clause form)<br />
* Complete Cluster: Yes (no additional tools are required)<br />
* Postgres versions: Works on 8.1 to 8.4<br />
<br />
== Clustering Model ==<br />
<br />
General model: Asynchronous cascading master-slave and/or master-master. Row-based, uses triggers and LISTEN/NOTIFY.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
<br />
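The trigger-and-NOTIFY scheme described above can be illustrated with a short sketch. This is '''not''' Bucardo's actual schema; the table, trigger, and function names here are hypothetical, showing only the general pattern of row-level change capture plus asynchronous signaling:<br />
<br />

```sql
-- Hypothetical sketch of trigger-based change capture (not Bucardo's real schema).
-- Assumes a replicated table "mytable" with integer primary key "id".
-- A delta table records the key of each changed row; NOTIFY wakes the daemon.
CREATE TABLE delta_mytable (
    pk      integer     NOT NULL,              -- primary key of the changed row
    txntime timestamptz NOT NULL DEFAULT now() -- when the change was captured
);

CREATE OR REPLACE FUNCTION track_mytable() RETURNS trigger AS $$
BEGIN
    INSERT INTO delta_mytable (pk) VALUES (NEW.id);
    NOTIFY mytable_changed;  -- a LISTENing daemon picks this up asynchronously
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER track_changes
    AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE track_mytable();
```

<br />
A daemon that has issued LISTEN on the same condition receives the notification, reads the delta table, and applies the listed rows to the other databases.<br />
<br />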
== Use Case ==<br />
<br />
* Multimaster replication<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease things like cache invalidation.<br />
* Partial replication<br />
* Replication on demand (changes can be pushed automatically or when desired)<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
<br />
== Drawbacks ==<br />
<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
<br />
== Support ==<br />
<br />
Commercial support is available from [http://endpoint.com End Point Corporation].<br />
<br />
Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
<br />
== Other Information ==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]<br />
<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Slony&diff=10135Slony2010-03-07T22:53:53Z<p>Greg: /* Support */ EP does lots of Slony work (we love/hate Slony)</p>
<hr />
<div>== Project Overview ==<br />
Slony-I is a "master to multiple slaves" replication system supporting cascading (e.g. - a node can feed another node which feeds another node...) and failover.<br />
<br />
== Project Status ==<br />
<br />
Current status of development<br />
<br />
; Version 1.2<br />
: Production<br />
; Version 2.0<br />
: Released, some using it in production<br />
<br />
== Project Contacts ==<br />
<br />
; [http://lists.slony.info/mailman/listinfo Mailing Lists]<br />
; [http://slony.info Slony Web site]<br />
<br />
== General Information ==<br />
* Scalability: Single origin (master), up to 20 subscribers.<br />
* Read Scaling: yes<br />
* Write Scaling: no/slight inverse<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Yes<br />
* Online Provisioning: Yes<br />
* PostgreSQL Upgrades: Yes<br />
* Detached Node/WAN: Yes, but can be clumsy<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: SQL, pl/pgsql, C/SPI, C<br />
* Licensing: OSS License ???? (cannot find on web site)<br />
* Complete Cluster: No. (load balancer, pooler required)<br />
<br />
== Clustering Model ==<br />
<br />
General Model: Asynchronous cascading master-slave replication, row-based, using triggers and queueing in the database.<br />
<br />
Slony-I can be combined with a pooler and load-balancer, such as pgPool2 or DBD::Multiplex, to form a complete clustering system.<br />
<br />
== Use Case ==<br />
<br />
* Cascading replication (master-->slave-->slave)<br />
* Partial replication<br />
* Upgrades of PostgreSQL and server platform<br />
* Failover and slave promotion<br />
* Online provisioning<br />
* Limited detach/reattach slaves<br />
* Maturity and field-testing (in production for 5 years)<br />
<br />
Slony-I is a "master to multiple slaves" replication system supporting cascading (e.g. - a node can feed another node which feeds another node...) and failover. The big picture for the development of Slony-I is that it is a master-slave replication system that includes all features and capabilities needed to replicate large databases to a reasonably limited number of slave systems. Slony-I is a system designed for use at data centers and backup sites, where the normal mode of operation is that all nodes are available.<br />
<br />
== Drawbacks ==<br />
<br />
* Not a complete clustering solution<br />
* Complexity of setup<br />
* Limitations on schema changes<br />
* Performance of large bulk loads and large object replication<br />
* Write overhead and associated maintenance (vacuum etc.)<br />
* Multiple points of monitoring required<br />
* Master-slave, replication lag<br />
<br />
The main drawback to Slony-I even as a replication system is the complexity of its setup and administration. The design of the system, with the database itself being used for queueing row updates, also greatly increases the amount of data writing and I/O done by the DBMS.<br />
<br />
Also, since Slony-I is asynchronous master-slave, all writes have to be segregated to the master. Additionally, there is a noticeable lag (1-3 seconds) between the master and the slaves which may cause users to have an inconsistent view of the data.<br />
<br />
== Project Sponsors ==<br />
<br />
Work sponsored by [http://afilias.info Afilias], most developers are at Afilias. (Jan Wieck, Christopher Browne)<br />
<br />
Web site hosted by [http://commandprompt.com CommandPrompt]<br />
<br />
== Support ==<br />
<br />
Slony-I is probably the best commercially supported replication/clustering solution. The following companies offer Slony-I support with PostgreSQL support: CommandPrompt, EnterpriseDB, Fujitsu Australia. Consulting on Slony-I is also available from such companies as 2ndQuadrant, PostgreSQL Experts, SRA, Credativ, and End Point.<br />
<br />
== Other Information ==<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10134Bucardo2010-03-07T22:50:46Z<p>Greg: /* Clustering Model */ Wording</p>
<hr />
<div>== Project Overview ==<br />
<br />
'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
<br />
== Project Status ==<br />
<br />
Status: '''Production'''<br />
<br />
Bucardo has been in production since 2002, and is actively developed using git, a wiki, mailing lists, IRC, and a bug tracker for coordination.<br />
<br />
== Project Contacts ==<br />
<br />
* [https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general] - primary mailing list (preferred method of contact)<br />
* [http://bucardo.org/ Bucardo wiki]<br />
<br />
== General Information ==<br />
<br />
* Scalability: unknown, but potentially very large with cascading slaves. Only two masters are supported.<br />
* Read Scaling: yes<br />
* Write Scaling: yes (two masters only)<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Nothing automatic (although master-master is a form of failover)<br />
* Online Provisioning: No (requires a restart of Bucardo when adding new nodes)<br />
* PostgreSQL Upgrades: Yes (supports version upgrades to individual nodes)<br />
* Detached Node/WAN: Yes (supports nodes with intermittent or low-bandwidth connections)<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: Perl, and uses pl/pgsql and pl/perlu<br />
* Licensing: BSD (simple three clause form)<br />
* Complete Cluster: Yes (no additional tools are required)<br />
* Postgres versions: Works on 8.1 to 8.4<br />
<br />
== Clustering Model ==<br />
<br />
General model: Asynchronous cascading master-slave and/or master-master. Row-based, uses triggers and LISTEN/NOTIFY.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
<br />
== Use Case ==<br />
<br />
* Multimaster replication<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease things like cache invalidation.<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
<br />
== Drawbacks ==<br />
<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
<br />
== Support ==<br />
<br />
Commercial support is available from [http://endpoint.com End Point Corporation].<br />
<br />
Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
<br />
== Other Information ==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]<br />
<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10133Bucardo2010-03-07T22:48:47Z<p>Greg: /* General Information */ mention internal languages for consistency</p>
<hr />
<div>== Project Overview ==<br />
<br />
'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
<br />
== Project Status ==<br />
<br />
Status: '''Production'''<br />
<br />
Bucardo has been in production since 2002, and is actively developed using git, a wiki, mailing lists, IRC, and a bug tracker for coordination.<br />
<br />
== Project Contacts ==<br />
<br />
* [https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general] - primary mailing list (preferred method of contact)<br />
* [http://bucardo.org/ Bucardo wiki]<br />
<br />
== General Information ==<br />
<br />
* Scalability: unknown, but potentially very large with cascading slaves. Only two masters are supported.<br />
* Read Scaling: yes<br />
* Write Scaling: yes (two masters only)<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Nothing automatic (although master-master is a form of failover)<br />
* Online Provisioning: No (requires a restart of Bucardo when adding new nodes)<br />
* PostgreSQL Upgrades: Yes (supports version upgrades to individual nodes)<br />
* Detached Node/WAN: Yes (supports nodes with intermittent or low-bandwidth connections)<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: Perl, and uses pl/pgsql and pl/perlu<br />
* Licensing: BSD (simple three clause form)<br />
* Complete Cluster: Yes (no additional tools are required)<br />
* Postgres versions: Works on 8.1 to 8.4<br />
<br />
== Clustering Model ==<br />
<br />
Bucardo uses asynchronous multimaster and master-slave replication.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
<br />
== Use Case ==<br />
<br />
* Multimaster replication<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease things like cache invalidation.<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
<br />
== Drawbacks ==<br />
<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
<br />
== Support ==<br />
<br />
Commercial support is available from [http://endpoint.com End Point Corporation].<br />
<br />
Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
<br />
== Other Information ==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]<br />
<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10132Bucardo2010-03-07T22:46:38Z<p>Greg: /* Use Case */ Mention truncate</p>
<hr />
<div>== Project Overview ==<br />
<br />
'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
<br />
== Project Status ==<br />
<br />
Status: '''Production'''<br />
<br />
Bucardo has been in production since 2002, and is actively developed using git, a wiki, mailing lists, IRC, and a bug tracker for coordination.<br />
<br />
== Project Contacts ==<br />
<br />
* [https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general] - primary mailing list (preferred method of contact)<br />
* [http://bucardo.org/ Bucardo wiki]<br />
<br />
== General Information ==<br />
<br />
* Scalability: unknown, but potentially very large with cascading slaves. Only two masters are supported.<br />
* Read Scaling: yes<br />
* Write Scaling: yes (two masters only)<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Nothing automatic (although master-master is a form of failover)<br />
* Online Provisioning: No (requires a restart of Bucardo when adding new nodes)<br />
* PostgreSQL Upgrades: Yes (supports version upgrades to individual nodes)<br />
* Detached Node/WAN: Yes (supports nodes with intermittent or low-bandwidth connections)<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: Perl (Bucardo is written in Perl)<br />
* Licensing: BSD (simple three clause form)<br />
* Complete Cluster: Yes (no additional tools are required)<br />
* Postgres versions: Works on 8.1 to 8.4<br />
<br />
== Clustering Model ==<br />
<br />
Bucardo uses asynchronous multimaster and master-slave replication.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
<br />
== Use Case ==<br />
<br />
* Multimaster replication<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease things like cache invalidation.<br />
* Will handle replication of TRUNCATE for Postgres version 8.4 or greater.<br />
<br />
== Drawbacks ==<br />
<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
<br />
== Support ==<br />
<br />
Commercial support is available from [http://endpoint.com End Point Corporation].<br />
<br />
Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
<br />
== Other Information ==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]<br />
<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10131Bucardo2010-03-07T22:45:50Z<p>Greg: Updates from template</p>
<hr />
<div>== Project Overview ==<br />
<br />
'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
<br />
== Project Status ==<br />
<br />
Status: '''Production'''<br />
<br />
Bucardo has been in production since 2002, and is actively developed using git, a wiki, mailing lists, IRC, and a bug tracker for coordination.<br />
<br />
== Project Contacts ==<br />
<br />
* [https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general] - primary mailing list (preferred method of contact)<br />
* [http://bucardo.org/ Bucardo wiki]<br />
<br />
== General Information ==<br />
<br />
* Scalability: unknown, but potentially very large with cascading slaves. Only two masters are supported.<br />
* Read Scaling: yes<br />
* Write Scaling: yes (two masters only)<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Nothing automatic (although master-master is a form of failover)<br />
* Online Provisioning: No (requires a restart of Bucardo when adding new nodes)<br />
* PostgreSQL Upgrades: Yes (supports version upgrades to individual nodes)<br />
* Detached Node/WAN: Yes (supports nodes with intermittent or low-bandwidth connections)<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: Perl (Bucardo is written in Perl)<br />
* Licensing: BSD (simple three clause form)<br />
* Complete Cluster: Yes (no additional tools are required)<br />
* Postgres versions: Works on 8.1 to 8.4<br />
<br />
== Clustering Model ==<br />
<br />
Bucardo uses asynchronous multimaster and master-slave replication.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
<br />
== Use Case ==<br />
<br />
* Multimaster replication<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease things like cache invalidation.<br />
<br />
== Drawbacks ==<br />
<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
<br />
== Support ==<br />
<br />
Commercial support is available from [http://endpoint.com End Point Corporation].<br />
<br />
Non-commercial support is available from the bucardo-general mailing list, and the #bucardo channel on irc.freenode.net.<br />
<br />
== Other Information ==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]<br />
<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10130Bucardo2010-03-07T22:37:32Z<p>Greg: /* Project Contacts */ new template</p>
<hr />
<div>== Project Overview ==<br />
<br />
'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
<br />
== Project Status ==<br />
<br />
Status: '''Production'''<br />
<br />
Bucardo has been in production since 2002, and is actively developed using git, a wiki, mailing lists, and a bug tracker for coordination.<br />
<br />
== Project Contacts ==<br />
<br />
* [https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general] - primary mailing list (preferred method of contact)<br />
* [http://bucardo.org/ Bucardo wiki]<br />
<br />
== General Information ==<br />
<br />
* Scalability: unknown, but very large with cascading slaves. Only two masters are supported.<br />
* Read Scaling: yes<br />
* Write Scaling: yes (two masters only)<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Nothing automatic (although master-master is a form of failover)<br />
* Online Provisioning: ?<br />
* PostgreSQL Upgrades: Yes<br />
* Detached Node/WAN: Yes<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: Perl<br />
<br />
== Clustering Model ==<br />
<br />
Bucardo uses multimaster and master-slave replication.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
<br />
== Use Case ==<br />
<br />
* Multimaster replication<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease things like cache invalidation.<br />
<br />
== Drawbacks ==<br />
<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
<br />
==Links==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]<br />
<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10129Bucardo2010-03-07T22:36:47Z<p>Greg: /* Project Status */ Update inre template updates</p>
<hr />
<div>== Project Overview ==<br />
<br />
'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
<br />
== Project Status ==<br />
<br />
Status: '''Production'''<br />
<br />
Bucardo has been in production since 2002, and is actively developed using git, a wiki, mailing lists, and a bug tracker for coordination.<br />
<br />
== Project Contacts ==<br />
<br />
* [https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general] - primary mailing list<br />
* [http://bucardo.org/ Bucardo wiki]<br />
<br />
== General Information ==<br />
<br />
* Scalability: unknown, but very large with cascading slaves. Only two masters are supported.<br />
* Read Scaling: yes<br />
* Write Scaling: yes (two masters only)<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Nothing automatic (although master-master is a form of failover)<br />
* Online Provisioning: ?<br />
* PostgreSQL Upgrades: Yes<br />
* Detached Node/WAN: Yes<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: Perl<br />
<br />
== Clustering Model ==<br />
<br />
Bucardo uses multimaster and master-slave replication.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
<br />
== Use Case ==<br />
<br />
* Multimaster replication<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease things like cache invalidation.<br />
<br />
== Drawbacks ==<br />
<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
<br />
==Links==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]<br />
<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10069Bucardo2010-03-03T14:52:19Z<p>Greg: Flush things out more</p>
<hr />
<div>== Project Overview ==<br />
<br />
'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based. Its primary goals are to provide master-master replication for load balancing and failover, and to provide load balancing and data warehousing via master-slave replication.<br />
<br />
== Project Status ==<br />
<br />
Bucardo has been in production since 2002, and is actively developed using git, a wiki, mailing lists, and a bug tracker for coordination.<br />
<br />
== Project Contacts ==<br />
<br />
* [https://mail.endcrypt.com/mailman/listinfo/bucardo-general bucardo-general] - primary mailing list<br />
* [http://bucardo.org/ Bucardo wiki]<br />
<br />
== General Information ==<br />
<br />
* Scalability: unknown, but very large with cascading slaves. Only two masters are supported.<br />
* Read Scaling: yes<br />
* Write Scaling: yes (two masters only)<br />
* Triggers/procedures: Yes<br />
* Parallel Query: No<br />
* Failover/HA: Nothing automatic (although master-master is a form of failover)<br />
* Online Provisioning: ?<br />
* PostgreSQL Upgrades: Yes<br />
* Detached Node/WAN: Yes<br />
* PostgreSQL Core Modifications Required: No<br />
* Programming Languages: Perl<br />
<br />
== Clustering Model ==<br />
<br />
Bucardo uses multimaster and master-slave replication.<br />
<br />
Bucardo requires a dedicated database and runs as a Perl daemon that communicates with this database and all other <br />
databases involved in the replication. It can run as multimaster or multislave.<br />
<br />
Multimaster replication is limited to two databases, with conflict resolution <br />
(either standard choices or custom subroutines) to handle the same update on both sides.<br />
<br />
Master-slave replication involves one master going to one or more slaves.<br />
<br />
Chaining is possible, such that servers A and B can be set as multimaster, and B can also <br />
be a master to the slaves C, D, and E, while E in turn can be the master for slaves <br />
F, G, and H.<br />
<br />
== Use Case ==<br />
<br />
* Multimaster replication<br />
* Load balancing via slaves<br />
* Data warehousing via slaves<br />
* Slaves are not constrained and can be written to<br />
* Upgrading from one Postgres version to another<br />
* Many hooks allow for data to be changed on the fly during replication, and ease things like cache invalidation.<br />
<br />
== Drawbacks ==<br />
<br />
* Cannot handle DDL (Postgres has no triggers on system tables)<br />
* Cannot handle large objects (same reason)<br />
* Cannot incrementally replicate tables without a unique key (it can "fullcopy" them)<br />
* Will not work on versions older than Postgres 8<br />
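The unique-key limitation above can be illustrated with a small sketch (plain Python invented for illustration, not Bucardo's actual delta logic): without a key there is no way to match a changed source row to its counterpart on the target, so the whole table must be copied instead.

```python
# Why incremental replication needs a unique key (illustrative sketch,
# not Bucardo code): rows are matched between source and target by
# primary key, so only the rows that differ need to travel.

source = {1: "alice", 2: "bob", 3: "carol"}   # pk -> row on the master
target = {1: "alice", 2: "bobby"}             # stale replica

def incremental_sync(src, dst):
    """Apply only the rows that differ, matched by primary key."""
    changes = {pk: row for pk, row in src.items() if dst.get(pk) != row}
    dst.update(changes)
    for pk in set(dst) - set(src):   # remove rows deleted on the source
        del dst[pk]
    return changes

delta = incremental_sync(source, target)
# Only pk 2 (changed) and pk 3 (new) are sent; pk 1 is untouched.
# Without a primary key, the only safe option is a "fullcopy".
```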
<br />
==Links==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]<br />
<br />
<br />
[[Category:Replication]]<br />
[[Category:Clustering]]</div>Greghttps://wiki.postgresql.org/index.php?title=Bucardo&diff=10068Bucardo2010-03-03T14:35:17Z<p>Greg: Quick stub page</p>
<hr />
<div>'''Bucardo''' is a replication system for Postgres that provides both master-master and master-slave capabilities. It is asynchronous and trigger based.<br />
<br />
==Links==<br />
<br />
* [http://bucardo.org Bucardo wiki]<br />
* [http://bucardo.org/bugzilla Bucardo bug tracker]</div>Greghttps://wiki.postgresql.org/index.php?title=Advocacy&diff=9684Advocacy2010-01-26T16:43:47Z<p>Greg: Remove News link</p>
<hr />
<div>__NOTOC__<br />
<br />
* [[Events|Upcoming events & conferences]]<br />
* [[AdvocacyGuides|Guides to promoting PostgreSQL]]<br />
* [[Postgres| Postgres - Changing the name]]: Requirements and issues<br />
* [[:Category:Software Ports|Software Ports]]: Getting PostgreSQL support into applications<br />
* [[Advocacy/Better|Things Advocacy could be doing better]]<br />
<br />
===PostgreSQL Booth===<br />
<br />
Everything about running a PostgreSQL booth at a conference or other event.<br />
* [[BoothCheckList|Booth Preparation Checklist]]<br />
* Organizing [[BoothVols|Booth Volunteers]]<br />
* [[HowToBooth|How To Booth]] for both Booth Bunnies and Organizers<br />
* [[BoothDocs|Booth Related Documents]] List<br />
<br />
=== Resources===<br />
* [[Identity Guidelines]], [[Color Palette]]<br />
* [[Logo]], [[Buttons]], [[Banners]]<br />
* [[Posters]], [[Rollups]]<br />
* [[Flyers]], [[Brochures]]<br />
* [[Surveys]]<br />
<br />
You can also check out [http://pgfoundry.org/docman/?group_id=1000089 PgFoundry's Graphic Project] for more material.<br />
<br />
===New release management===<br />
* [[WhatsNew84| What's New in 8.4]]<br />
* [[84ReleaseDraft|8.4 Release Drafts]]<br />
* [[ReleasePrep|Tasks to do before announcing a release]]<br />
* [[MajorReleaseTimeline|Major Release Timeline]]<br />
* [[HowToTranslate|HOWTO Translate The Release]]<br />
* [[YourPostgreSQLAddress|Your @postgresql.org e-mail address]]<br />
* [[HowToRC|How to be a Regional Contact]]<br />
<br />
===Work in Progress===<br />
* [[UserGroupOperatingManual|PUG Operating Manual]]<br />
* [[:Category:Advocacy WIP|Advocacy WIP]] (Category)<br />
* [[PgDayManual|Advice on running a PgDay]]<br />
* [[Short Topic Books]]<br />
* [[CodeSprint2008|Code Sprint Session at Pg Conference West 2008]]<br />
<br />
===Education Resources===<br />
* [[Curriculum Development]]<br />
<br />
[[Category:Advocacy]]</div>Greghttps://wiki.postgresql.org/index.php?title=News&diff=9683News2010-01-26T16:43:27Z<p>Greg: Deprecate old page</p>
<hr />
<div>This page is deprecated - there is too much to keep up! Check news.google.com for the latest.</div>Greghttps://wiki.postgresql.org/index.php?title=Events&diff=7958Events2009-09-05T12:03:34Z<p>Greg: Updates</p>
<hr />
<div>== PostgreSQL Events ==<br />
<br />
This is a listing of events at which we expect, or would like to have, a PostgreSQL presence. Please keep the events in order by starting date and follow the existing examples. Please also tag the events with the MediaWiki "PostgreSQL Events" category. If you are going to be organizing a PostgreSQL booth, please adhere to [[BoothPolicies]].<br />
<br />
{| border="1"<br />
|+ <br />
'''Upcoming PostgreSQL Events Listing'''<br />
|-<br />
| '''Event''' || '''Web Page''' || '''Date''' || '''Country''' || '''City''' || '''Activities'''<br />
|- style="background:Khaki;"<br />
| [[PGCon Brazil]] || [http://pgcon.postgresql.org.br/2009/index.en.php PGCon Brazil 2009] || October 23-24, 2009 || Brazil || Unicamp, Campinas, SP ||<br />
|-<br />
| [[European PGDay 2009]] || [http://www.pgday.eu/ European PGDay 2009] || Nov 6-7, 2009 || France || Paris || [http://wiki.postgresql.fr/en:pgday2009:start Tutorials, Talks, Party]<br />
|-<br />
| [[PostgreSQL Conference 2009 Japan]] || [http://www.postgresql.jp/events/pgcon09j/e/ JPUG 10th Anniversary Conference] || November 20-21, 2009 || Japan || Tokyo || <br />
|-<br />
| [[OSDC 2009]] || [http://2009.osdc.com.au/ OSDC Brisbane] || November 25-27, 2009 || Australia || Brisbane || <br />
|}<br />
<br />
<br />
{| border="1"<br />
|+ <br />
'''Older PostgreSQL Events Listing'''<br />
| '''Event''' || '''Web Page''' || '''Date''' || '''Country''' || '''City''' || '''Activities'''<br />
|-<br />
| [[FrOSCon 2009]] || [http://www.froscon.org/ FrOSCon 2009] || August 22-23, 2009 || Germany || St. Augustin || Devroom<br />
|-<br />
| [[USENIX]] || [http://www.usenix.org/events/sec09/ USENIX Security '09] || August 10-14, 2009 || Canada || Montreal || <br />
|- style="background:Khaki;"<br />
| [[OSCON]] || [http://conferences.oreillynet.com/ OSCON 2009] || July 20-24, 2009 || USA || San Jose, CA || pgDay, Speakers, Booth<br />
|- style="background:Khaki;"<br />
| [[pgDaySanJose2009]] || [[pgDaySanJose2009]] || July 19, 2009 || USA || San Jose, CA || full day<br />
|-<br />
| [[SIGMOD]] || [http://www.sigmod09.org/ ACM SIGMOD/PODS 2009] || June 29-July 2, 2009 || USA || Providence, RI || <br />
|-<br />
| [[BOSC]] || [http://www.open-bio.org/wiki/BOSC_2009 BOSC 2009] || June 25-26, 2009 || Sweden || Stockholm || <br />
|-<br />
| [[USENIX]] || [http://www.usenix.org/events/usenix09/ USENIX '09] || June 14-19, 2009 || USA || San Diego, CA || <br />
|- style="background:Khaki;"<br />
| [[PGCon 2009]] || [http://www.pgcon.org/ PGCon 2009] || May 19-22, 2009 || Canada || Ottawa || Tutorials, Talks, Party<br />
|-<br />
| [[NWLinuxFest]] || [http://linuxfestnorthwest.org/ NWLinuxFest 2009] || April 25-26, 2009 || USA || Bellingham, WA || Booth<br />
|-<br />
| [[DASFAA]] || [http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=3177 DASFAA] || April 21-23, 2009 || Australia || Brisbane || <br />
|- style="background:Khaki;"<br />
| [[Pg Conference Spring 2009]] || [http://www.postgresqlconference.org/ PostgreSQL Conference East 2009] || April 3-5, 2009 || USA || Philadelphia || Speakers(s), Party <br /> [[PostgreSQLConferenceEast2009|Papers]]<br />
|-<br />
| [[EDBT]] || [http://www.math.spbu.ru/edbticdt/ EDBT/ICDT 2009] || March 23-26, 2009 || Russia || St. Petersburg || <br />
|-<br />
| [[CLT 09]] || [http://chemnitzer.linux-tage.de/2009/info/ Chemnitzer Linuxtage] || March 14-15, 2009 || Germany || Chemnitz || Booth, Talks, Workshop ([http://andreas.scherbaum.la/blog/archives/525-PostgreSQL-auf-den-Chemnitzer-Linuxtagen.html Infos])<br />
|-<br />
| [[OSBC]] || [http://www.infoworld.com/event/osbc/09/ Open Source Business Conference] || March 10-11, 2009 || USA || San Francisco, CA || No plans<br />
|-<br />
| [[Solutions Linux]] || [http://www.solutionslinux.fr/ Solutions Linux] || March 31 & April 1-2, 2009 || France || Paris || [http://wiki.postgresql.fr/sl2009:start Booth, Talks]<br />
|- <br />
| [[FAST09]] || [http://www.usenix.org/events/fast09/ USENIX FAST '09] || February 24-27, 2009 || USA || San Francisco, CA || Didn't attend<br />
|-<br />
| [[Perl Workshop 2009]] || [http://www.perl-workshop.de/talks/151/view Perl Workshop '09] || February 25-27, 2009 || Germany || Frankfurt/Main || [http://andreas.scherbaum.la/blog/archives/530-Unterlagen-fuer-Tutorial-beim-Perl-Workshop-in-FrankfurtMain.html Tutorial]<br />
|-<br />
| [[SCALE]] || [http://scale7x.socallinuxexpo.org/ SCALE 7x] || February 20-22, 2009 || USA || Los Angeles, CA || Booth, BoF<br />
|-<br />
| [[FOSDEM, Brussels 2009]] || [http://www.fosdem.org/2009/ FOSDEM '09] || February 07-08, 2009 || Belgium || Brussels || Booth, Devroom<br />
|-<br />
| [[ADBC]] || [http://www.cse.unsw.edu.au/~adc09/ 20th Australasian Database Conference] || January 20-23, 2009 || New Zealand || Wellington || <br />
|-<br />
| [[LinuxConfAU]] || [http://linux.conf.au/ linux.conf.au] || January 21-23, 2009 || Australia || Tasmania || <br />
|-<br />
| colspan="6" | [[Events/2008 | 2008 events]]<br />
|-<br />
| colspan="6" | [[Events/2007 | 2007 events]]<br />
|}<br />
<br />
== External Links ==<br />
<br />
* [http://conferences.oreillynet.com/ O'Reilly conferences]<br />
* [http://opencheese.com/2007/10/14/open-source-events-2008/ "Open Source and Linux events in 2008"]<br />
<br />
[[Category:PostgreSQL Events]]<br />
[[Category:Advocacy]]</div>Greghttps://wiki.postgresql.org/index.php?title=Postgres&diff=7957Postgres2009-09-05T11:53:51Z<p>Greg: /* Brainstorm of Items */ Tweaks</p>
<hr />
<div>= Changing name from PostgreSQL to Postgres =<br />
<br />
'''Postgres''' is a proposed new name for the project, to replace the official name '''PostgreSQL'''. The latter would still be a perfectly fine way to refer to the project, but the former would be encouraged.<br />
<br />
==Pros for changing the name to "Postgres"==<br />
<br />
* Already used that way by many people<br />
* Almost everyone speaks it that way<br />
* Many of the projects, scripts, etc. use that name<br />
** Default database for all new clusters<br />
** Default username for all distros<br />
** check_postgres, Postgres-R, Postgres Plus<br />
* Fewer problems translating to other languages<br />
* No pronunciation problems<br />
* Does not encourage weird derivations such as "Postgre" and "PostgresQL"<br />
* SQL is the de facto standard, so there's no need to emphasize it any more in the name...<br />
* ...but if we ever change to something besides SQL, it won't be a problem. :)<br />
* No more worries about mixed caps and color schemes on letters.<br />
* Google search on "Postgres" still brings back PostgreSQL hits (e.g. first hit is postgresql.org, which does not contain the word "Postgres")<br />
** Is this actually a "pro"? This suggests to me that some efforts may be unnecessary, which is more of a "con."<br />
* Advocacy efforts could start with features, not pronunciation corrections/disclaimers.<br />
* Some friendly companies already using this name would benefit.<br />
<br />
==Cons for changing the name to "Postgres"==<br />
<br />
* Abandonment<br />
** Discards 8+ years of advocacy of the "PostgreSQL" name<br />
** No other major free software project has changed names voluntarily<br />
*** ''Rebuttal: But no other major project has such an ugly unpronounceable name''<br />
* Confusion<br />
** 1-2 years of answering "Is Postgres a fork? Which should I use?"<br />
** Increased confusion with Ingres and Progress (yes, it's still around)<br />
* Some community may be opposed to the change (research needed)<br />
** Downstream projects may get annoyed with the name change and drop PostgreSQL support<br />
*** ''Rebuttal: seems very unlikely''<br />
** Some corporate supporters may be unhappy about renaming materials/marketing/packaging<br />
** Some OSes/distributions may get frustrated and drop PostgreSQL distribution<br />
*** ''Rebuttal: extraordinarily unlikely, bordering on FUD''<br />
* Work (see below)<br />
** Will be ''lots'' of work<br />
** Currently not enough Advocacy volunteers for routine tasks<br />
*** May result in other advocacy tasks not getting done<br />
* A minority of the community apparently prefers "PostgreSQL"<br />
* Some big regional groups (like .fr, .eu) don't have control over "postgres.something" domain<br />
* Some friendly companies using PostgreSQL would lose benefits or have to rephrase their taglines or other references to PostgreSQL.<br />
<br />
==Things that would need to be changed==<br />
<br />
* Website graphics<br />
* FAQ<br />
* Website URL<br />
* Internal documentation references<br />
* Every single page mentioned by Google (just kidding)<br />
<br />
==Things that may not need to be changed==<br />
<br />
* postgresql.conf and other internal file/directory names<br />
* Mailing list names<br />
* Things using 'pgsql'<br />
* Older documentation<br />
<br />
== Tasks in the Upgrading The Name Project ==<br />
<br />
=== Further Information Needed ===<br />
<br />
* Need an inventory of graphics containing "PostgreSQL"<br />
** Only one graphic on the main site: [http://www.postgresql.org/layout/images/hdr_left.png http://www.postgresql.org/layout/images/hdr_left.png].<br />
*** New versions created, with original width and without. [[Image:Hdr_left2.png|thumb|New website graphic, original width]] [[Image:Hdr_left3.png|thumb|New website graphic, new width]]<br />
** The downloadable buttons on [http://www.postgresql.org/community/propaganda the propaganda page].<br />
** [http://pgfoundry.org/projects/graphics/ The graphics project] on [[pgfoundry]].<br />
* Poll ''non-users'' to find out if name has any effect on adoption<br />
* Poll corporate supporters<br />
* Check with press contacts / analysis on likely press reaction<br />
* Poll downstream projects on reaction to name change<br />
* Poll regional/language groups & packagers<br />
* Check for legal issues<br />
* Detailed listing of all materials which need changing.<br />
<br />
=== Brainstorm of Items ===<br />
<br />
* Research<br />
** Lots needed; see "Further Information" below.<br />
* Change Advocacy<br />
** Coordinate/track all necessary work<br />
** Prepare FAQ & Handouts on "why the name change", "not a fork", and "not Ingres"<br />
*** ''Rebuttal: Why draw attention to Ingres? Answer when asked, but no need to point it out''<br />
** Announcement & Press release<br />
* Websites<br />
** Websites to be updated<br />
*** postgresql.org<br />
*** planetpostgresql.org<br />
*** postgresql.pl/jp/ar/etc, pgsql.cz/etc.<br />
*** pgFoundry<br />
*** developer wiki<br />
*** jdbc.postgresql.org, slony.info, more?<br />
** New URLs: determine if we can get them all and buy them.<br />
** Create redirection tech so old links work<br />
** Update website contents to read "Postgres"<br />
** New website graphics<br />
* Advocacy Materials<br />
** Update advocacy materials<br />
** New advocacy graphics & t-shirt designs<br />
* Documentation<br />
** Update all docs & translations<br />
** Update all server strings & translations<br />
* Packaging<br />
** Contact all packagers<br />
*** RPM packager Devrim [http://archives.postgresql.org/pgsql-advocacy/2007-09/msg00263.php opposed] to name change<br />
**** ''Actually, he is opposed to making RPM changes *before* the change is official''<br />
*** Fix dependencies in each packaging system<br />
** Do the work for packagers who don't have time<br />
** Help packagers/users with upgrade confusion caused by new names<br />
* Downstream Projects<br />
** Contact all major downstream projects to warn them of upcoming change and gauge reaction<br />
** Coordinate name change with downstream projects<br />
** Do the work for projects who will otherwise drop PostgreSQL support<br />
<br />
=== Controversial Items ===<br />
<br />
These are items where people have said both "this needs to change" and "this does not need to change." That makes it manifestly clear that we cannot assume, without some discussion/debate, what ought to be done with these items.<br />
<br />
* Filenames & Binaries<br />
** Rename paths and filenames and processes from postgres to postgresql.<br />
** Update paths on website that use "pgsql" instead of "PostgreSQL".<br />
** Change filenames which are explicitly "postgresql"<br />
** Maybe change filenames/paths which are pgsql<br />
*** otherwise future confusion<br />
** Update all code which uses explicit filenames and paths to use the new ones<br />
* Plan/coordinate updating package names & file paths<br />
* Mailing Lists<br />
** Maybe rename from pgsql-?<br />
** Archives issues?<br />
<br />
== Things that may need to be changed even if the name does not ==<br />
<br />
* Make "Postgre" an officially acceptable short form because it's the obvious short form of PostgreSQL; which will prevent some of the abuse of new members, users, and customers of the product.<br />
* Alternative: Hold a marketing campaign to educate the world that we're not Postgre.<br />
* Produce new swag as the old inventory is used up.<br />
* Make links use PostgreSQL.org instead of postgresql.org to reinforce the brand.<br />
* Update the mailing list names from "pgsql-advocacy" to "postgresql-advocacy"<br />
<br />
== Alternatives to changing to Postgres ==<br />
<br />
* Moving to PostgresQL - which keeps domain names, etc., but encourages the preferred short form.<br />
* Approving of "Postgre".<br />
* A marketing campaign to educate the world against "Postgre"<br />
<br />
==External Links==<br />
<br />
* [http://svr5.postgresql.org/pgsql-novice/2006-07/msg00063.php Tom Lane quote]: "Arguably, the 1996 decision to call it PostgreSQL instead of reverting to plain Postgres was the single worst mistake this project ever made."<br />
* [http://people.planetpostgresql.org/greg/index.php?/archives/67-Celebrity-Deathmatch-Postgres-vs.-PostgreSQL.html Greg's Celebrity Deathmatch blog post]<br />
* [http://groups.google.com/group/pgsql.advocacy/browse_thread/thread/7007916c825af061/94dc2016f8293a4f Advocacy mailing list thread #1]<br />
* [http://groups.google.com/group/pgsql.advocacy/browse_thread/thread/cae33a59d6591ccc/d096a90018ef3632 Advocacy mailing list thread #2]<br />
* [http://andyastor.blogspot.com/2007/08/postgresql-or-postgres.html Andy's blog post]<br />
* [http://postgres.enterprisedb.com/ Poll at enterprisedb.com]<br />
<br />
<br />
[[Category:Advocacy]]</div>Greghttps://wiki.postgresql.org/index.php?title=Postgres&diff=7956Postgres2009-09-05T11:50:01Z<p>Greg: /* Cons for changing the name to "Postgres" */ Rebuttals, cleanup, remove 'postfix' argument</p>
<hr />
<div>= Changing name from PostgreSQL to Postgres =<br />
<br />
'''Postgres''' is a proposed new name for the project, to replace the official name '''PostgreSQL'''. The latter would still be a perfectly fine way to refer to the project, but the former would be encouraged.<br />
<br />
==Pros for changing the name to "Postgres"==<br />
<br />
* Already used that way by many people<br />
* Almost everyone speaks it that way<br />
* Many of the projects, scripts, etc. use that name<br />
** Default database for all new clusters<br />
** Default username for all distros<br />
** check_postgres, Postgres-R, Postgres Plus<br />
* Fewer problems translating to other languages<br />
* No pronunciation problems<br />
* Does not encourage weird derivations such as "Postgre" and "PostgresQL"<br />
* SQL is the de facto standard, so there's no need to emphasize it any more in the name...<br />
* ...but if we ever change to something besides SQL, it won't be a problem. :)<br />
* No more worries about mixed caps and color schemes on letters.<br />
* Google search on "Postgres" still brings back PostgreSQL hits (e.g. first hit is postgresql.org, which does not contain the word "Postgres")<br />
** Is this actually a "pro"? This suggests to me that some efforts may be unnecessary, which is more of a "con."<br />
* Advocacy efforts could start with features, not pronunciation corrections/disclaimers.<br />
* Some friendly companies already using this name would benefit.<br />
<br />
==Cons for changing the name to "Postgres"==<br />
<br />
* Abandonment<br />
** Discards 8+ years of advocacy of the "PostgreSQL" name<br />
** No other major free software project has changed names voluntarily<br />
*** ''Rebuttal: But no other major project has such an ugly unpronounceable name''<br />
* Confusion<br />
** 1-2 years of answering "Is Postgres a fork? Which should I use?"<br />
** Increased confusion with Ingres and Progress (yes, it's still around)<br />
* Some community may be opposed to the change (research needed)<br />
** Downstream projects may get annoyed with the name change and drop PostgreSQL support<br />
*** ''Rebuttal: seems very unlikely''<br />
** Some corporate supporters may be unhappy about renaming materials/marketing/packaging<br />
** Some OSes/distributions may get frustrated and drop PostgreSQL distribution<br />
*** ''Rebuttal: extraordinarily unlikely, bordering on FUD''<br />
* Work (see below)<br />
** Will be ''lots'' of work<br />
** Currently not enough Advocacy volunteers for routine tasks<br />
*** May result in other advocacy tasks not getting done<br />
* A minority of the community apparently prefers "PostgreSQL"<br />
* Some big regional groups (like .fr, .eu) don't have control over "postgres.something" domain<br />
* Some friendly companies using PostgreSQL would lose benefits or have to rephrase their taglines or other references to PostgreSQL.<br />
<br />
==Things that would need to be changed==<br />
<br />
* Website graphics<br />
* FAQ<br />
* Website URL<br />
* Internal documentation references<br />
* Every single page mentioned by Google (just kidding)<br />
<br />
==Things that may not need to be changed==<br />
<br />
* postgresql.conf and other internal file/directory names<br />
* Mailing list names<br />
* Things using 'pgsql'<br />
* Older documentation<br />
<br />
== Tasks in the Upgrading The Name Project ==<br />
<br />
=== Further Information Needed ===<br />
<br />
* Need an inventory of graphics containing "PostgreSQL"<br />
** Only one graphic on the main site: [http://www.postgresql.org/layout/images/hdr_left.png http://www.postgresql.org/layout/images/hdr_left.png].<br />
*** New versions created, with original width and without. [[Image:Hdr_left2.png|thumb|New website graphic, original width]] [[Image:Hdr_left3.png|thumb|New website graphic, new width]]<br />
** The downloadable buttons on [http://www.postgresql.org/community/propaganda the propaganda page].<br />
** [http://pgfoundry.org/projects/graphics/ The graphics project] on [[pgfoundry]].<br />
* Poll ''non-users'' to find out if name has any effect on adoption<br />
* Poll corporate supporters<br />
* Check with press contacts / analysis on likely press reaction<br />
* Poll downstream projects on reaction to name change<br />
* Poll regional/language groups & packagers<br />
* Check for legal issues<br />
* Detailed listing of all materials which need changing.<br />
<br />
=== Brainstorm of Items ===<br />
<br />
* Research<br />
** Lots needed; see "Further Information" below.<br />
* Change Advocacy<br />
** Coordinate/track all necessary work<br />
** Prepare FAQ & Handouts on "why the name change", "not a fork", and "not Ingres"<br />
** Announcement & Press release<br />
* Websites<br />
** Websites to be updated<br />
*** postgresql.org<br />
*** planetpostgresql.org<br />
*** postgresql.pl/jp/ar/etc, pgsql.cz/etc.<br />
*** pgFoundry<br />
*** developer wiki<br />
*** jdbc.postgresql.org, slony.info, more?<br />
** New URLs: determine if we can get them all and buy them.<br />
** Create redirection tech so old links work<br />
** Update website contents to read "Postgres"<br />
** New website graphics<br />
* Advocacy Materials<br />
** Update advocacy materials<br />
** New advocacy graphics & t-shirt designs<br />
* Documentation<br />
** Update all docs & translations<br />
** Update all server strings & translations<br />
* Packaging<br />
** Contact all packagers<br />
*** RPM packager Devrim [http://archives.postgresql.org/pgsql-advocacy/2007-09/msg00263.php opposed] to name change<br />
*** Fix dependencies in each packaging system<br />
** Do the work for packagers who don't have time<br />
** Help packagers/users with upgrade confusion caused by new names<br />
* Downstream Projects<br />
** Contact all major downstream projects to warn them of upcoming change and gauge reaction<br />
** Coordinate name change with downstream projects<br />
** Do the work for projects who will otherwise drop PostgreSQL support<br />
<br />
=== Controversial Items ===<br />
<br />
These are items where people have said both "this needs to change" and "this does not need to change." That makes it manifestly clear that we cannot assume, without some discussion/debate, what ought to be done with these items.<br />
<br />
* Filenames & Binaries<br />
** Rename paths and filenames and processes from postgres to postgresql.<br />
** Update paths on website that use "pgsql" instead of "PostgreSQL".<br />
** Change filenames which are explicitly "postgresql"<br />
** Maybe change filenames/paths which are pgsql<br />
*** otherwise future confusion<br />
** Update all code which uses explicit filenames and paths to use the new ones<br />
* Plan/coordinate updating package names & file paths<br />
* Mailing Lists<br />
** Maybe rename from pgsql-?<br />
** Archives issues?<br />
<br />
== Things that may need to be changed even if the name does not ==<br />
<br />
* Make "Postgre" an officially acceptable short form because it's the obvious short form of PostgreSQL; which will prevent some of the abuse of new members, users, and customers of the product.<br />
* Alternative: Hold a marketing campaign to educate the world that we're not Postgre.<br />
* Produce new swag as the old inventory is used up.<br />
* Make links use PostgreSQL.org instead of postgresql.org to reinforce the brand.<br />
* Update the mailing list names from "pgsql-advocacy" to "postgresql-advocacy"<br />
<br />
== Alternatives to changing to Postgres ==<br />
<br />
* Moving to PostgresQL - which keeps domain names, etc., but encourages the preferred short form.<br />
* Approving of "Postgre".<br />
* A marketing campaign to educate the world against "Postgre"<br />
<br />
==External Links==<br />
<br />
* [http://svr5.postgresql.org/pgsql-novice/2006-07/msg00063.php Tom Lane quote]: "Arguably, the 1996 decision to call it PostgreSQL instead of reverting to plain Postgres was the single worst mistake this project ever made."<br />
* [http://people.planetpostgresql.org/greg/index.php?/archives/67-Celebrity-Deathmatch-Postgres-vs.-PostgreSQL.html Greg's Celebrity Deathmatch blog post]<br />
* [http://groups.google.com/group/pgsql.advocacy/browse_thread/thread/7007916c825af061/94dc2016f8293a4f Advocacy mailing list thread #1]<br />
* [http://groups.google.com/group/pgsql.advocacy/browse_thread/thread/cae33a59d6591ccc/d096a90018ef3632 Advocacy mailing list thread #2]<br />
* [http://andyastor.blogspot.com/2007/08/postgresql-or-postgres.html Andy's blog post]<br />
* [http://postgres.enterprisedb.com/ Poll at enterprisedb.com]<br />
<br />
<br />
[[Category:Advocacy]]</div>Greghttps://wiki.postgresql.org/index.php?title=Postgres&diff=7955Postgres2009-09-05T11:45:06Z<p>Greg: /* Pros for changing the name to "Postgres" */ Tweak the list</p>
<hr />
<div>= Changing name from PostgreSQL to Postgres =<br />
<br />
'''Postgres''' is a proposed new name for the project, to replace the official name '''PostgreSQL'''. The latter would still be a perfectly fine way to refer to the project, but the former would be encouraged.<br />
<br />
==Pros for changing the name to "Postgres"==<br />
<br />
* Already used that way by many people<br />
* Almost everyone speaks it that way<br />
* Many of the projects, scripts, etc. use that name<br />
** Default database for all new clusters<br />
** Default username for all distros<br />
** check_postgres, Postgres-R, Postgres Plus<br />
* Fewer problems translating to other languages<br />
* No pronunciation problems<br />
* Does not encourage weird derivations such as "Postgre" and "PostgresQL"<br />
* SQL is the de facto standard, so there's no need to emphasize it any more in the name...<br />
* ...but if we ever change to something besides SQL, it won't be a problem. :)<br />
* No more worries about mixed caps and color schemes on letters.<br />
* Google search on "Postgres" still brings back PostgreSQL hits (e.g. first hit is postgresql.org, which does not contain the word "Postgres")<br />
** Is this actually a "pro"? This suggests to me that some efforts may be unnecessary, which is more of a "con."<br />
* Advocacy efforts could start with features, not pronunciation corrections/disclaimers.<br />
* Some friendly companies already using this name would benefit.<br />
<br />
==Cons for changing the name to "Postgres"==<br />
<br />
* Abandonment<br />
** Discards 8 years of advocacy of the "PostgreSQL" name<br />
** No other major free software project has changed names voluntarily<br />
* Confusion<br />
** 1-2 years of answering "Is Postgres a fork? Which should I use?"<br />
** Increased confusion with Ingres and Progress (yes, it's still around)<br />
** Increased confusion with Postfix<br />
* Some community may be opposed to the change (research needed)<br />
** Downstream projects may get annoyed with the name change and drop PostgreSQL support<br />
** Some corporate supporters may be unhappy about renaming materials/marketing/packaging<br />
** Some OSes/distributions may get frustrated and drop PostgreSQL distribution<br />
* Work (see below)<br />
** Will be ''lots'' of work<br />
** Currently not enough Advocacy volunteers for routine tasks<br />
*** Will result in other advocacy tasks not getting done<br />
* A minority of the community apparently prefers "PostgreSQL"<br />
* Some big regional groups (like .fr, .eu) don't have control over "postgres.something" domain<br />
* Some friendly companies using PostgreSQL would lose benefits or have to rephrase their taglines or other references to PostgreSQL.<br />
<br />
==Things that would need to be changed==<br />
<br />
* Website graphics<br />
* FAQ<br />
* Website URL<br />
* Internal documentation references<br />
* Every single page mentioned by Google (just kidding)<br />
<br />
==Things that may not need to be changed==<br />
<br />
* postgresql.conf and other internal file/directory names<br />
* Mailing list names<br />
* Things using 'pgsql'<br />
* Older documentation<br />
<br />
== Tasks in the Upgrading The Name Project ==<br />
<br />
=== Further Information Needed ===<br />
<br />
* Need an inventory of graphics containing "PostgreSQL"<br />
** Only one graphic on the main site: [http://www.postgresql.org/layout/images/hdr_left.png http://www.postgresql.org/layout/images/hdr_left.png].<br />
*** New versions created, with original width and without. [[Image:Hdr_left2.png|thumb|New website graphic, original width]] [[Image:Hdr_left3.png|thumb|New website graphic, new width]]<br />
** The downloadable buttons on [http://www.postgresql.org/community/propaganda the propaganda page].<br />
** [http://pgfoundry.org/projects/graphics/ The graphics project] on [[pgfoundry]].<br />
* Poll ''non-users'' to find out if name has any effect on adoption<br />
* Poll corporate supporters<br />
* Check with press contacts / analysis on likely press reaction<br />
* Poll downstream projects on reaction to name change<br />
* Poll regional/language groups & packagers<br />
* Check for legal issues<br />
* Detailed listing of all materials which need changing.<br />
<br />
=== Brainstorm of Items ===<br />
<br />
* Research<br />
** Lots needed; see "Further Information" below.<br />
* Change Advocacy<br />
** Coordinate/track all necessary work<br />
** Prepare FAQ & Handouts on "why the name change", "not a fork", and "not Ingres"<br />
** Announcement & Press release<br />
* Websites<br />
** Websites to be updated<br />
*** postgresql.org<br />
*** planetpostgresql.org<br />
*** postgresql.pl/jp/ar/etc, pgsql.cz/etc.<br />
*** pgFoundry<br />
*** developer wiki<br />
*** jdbc.postgresql.org, slony.info, more?<br />
** New URLs: determine if we can get them all and buy them.<br />
** Create redirection tech so old links work<br />
** Update website contents to read "Postgres"<br />
** New website graphics<br />
* Advocacy Materials<br />
** Update advocacy materials<br />
** New advocacy graphics & t-shirt designs<br />
* Documentation<br />
** Update all docs & translations<br />
** Update all server strings & translations<br />
* Packaging<br />
** Contact all packagers<br />
*** RPM packager Devrim [http://archives.postgresql.org/pgsql-advocacy/2007-09/msg00263.php opposed] to name change<br />
*** Fix dependencies in each packaging system<br />
** Do the work for packagers who don't have time<br />
** Help packagers/users with upgrade confusion caused by new names<br />
* Downstream Projects<br />
** Contact all major downstream projects to warn them of upcoming change and gauge reaction<br />
** Coordinate name change with downstream projects<br />
** Do the work for projects who will otherwise drop PostgreSQL support<br />
<br />
=== Controversial Items ===<br />
<br />
These are items where people have said both "this needs to change" and "this does not need to change", so we cannot assume, without some discussion and debate, what ought to be done with them.<br />
<br />
* Filenames & Binaries<br />
** Rename paths and filenames and processes from postgres to postgresql.<br />
** Update paths on website that use "pgsql" instead of "PostgreSQL".<br />
** Change filenames which are explicitly "postgresql"<br />
** Maybe change filenames/paths which are pgsql<br />
*** otherwise they risk causing future confusion<br />
** Update all code which uses explicit filenames and paths to use the new ones<br />
* Plan/coordinate updating package names & file paths<br />
* Mailing Lists<br />
** Maybe rename from pgsql-?<br />
** Archives issues?<br />
<br />
== Things that may need to be changed even if the name does not ==<br />
<br />
* Make "Postgre" an officially acceptable short form, since it is the obvious short form of PostgreSQL; this would prevent some of the abuse directed at new members, users, and customers of the product.<br />
* Alternative: Hold a marketing campaign to educate the world that we're not Postgre.<br />
* Produce new swag as the old inventory is used up.<br />
* Make links use PostgreSQL.org instead of postgresql.org to reinforce the brand.<br />
* Update the mailing list names from "pgsql-advocacy" to "postgresql-advocacy"<br />
<br />
== Alternatives to changing to Postgres ==<br />
<br />
* Moving to PostgresQL, which keeps domain names etc. but encourages the preferred short form.<br />
* Approving of "Postgre".<br />
* A marketing campaign to educate the world against "Postgre"<br />
<br />
==External Links==<br />
<br />
* [http://svr5.postgresql.org/pgsql-novice/2006-07/msg00063.php Tom Lane quote]: "Arguably, the 1996 decision to call it PostgreSQL instead of reverting to plain Postgres was the single worst mistake this project ever made."<br />
* [http://people.planetpostgresql.org/greg/index.php?/archives/67-Celebrity-Deathmatch-Postgres-vs.-PostgreSQL.html Greg's Celebrity Deathmatch blog post]<br />
* [http://groups.google.com/group/pgsql.advocacy/browse_thread/thread/7007916c825af061/94dc2016f8293a4f Advocacy mailing list thread #1]<br />
* [http://groups.google.com/group/pgsql.advocacy/browse_thread/thread/cae33a59d6591ccc/d096a90018ef3632 Advocacy mailing list thread #2]<br />
* [http://andyastor.blogspot.com/2007/08/postgresql-or-postgres.html Andy's blog post]<br />
* [http://postgres.enterprisedb.com/ Poll at enterprisedb.com]<br />
<br />
<br />
[[Category:Advocacy]]</div>Greg
https://wiki.postgresql.org/index.php?title=Events&diff=7293 Events 2009-07-13T16:36:23Z
<p>Greg: Updates</p>
<hr />
<div>== PostgreSQL Events ==<br />
<br />
This is a listing of events at which we expect, or would like to have, a PostgreSQL presence. Please keep the events in order by starting date and follow the existing examples. Please also tag the events with the MediaWiki "PostgreSQL Events" category. If you are going to be organizing a PostgreSQL booth, please adhere to [[BoothPolicies]].<br />
<br />
{| border="1"<br />
|+ <br />
'''Upcoming PostgreSQL Events Listing'''<br />
|-<br />
| '''Event''' || '''Web Page''' || '''Date''' || '''Country''' || '''City''' || '''Activities'''<br />
|- style="background:Khaki;"<br />
| [[pgDaySanJose2009]] || [[pgDaySanJose2009]] || July 19, 2009 || USA || San Jose, CA || full day<br />
|- style="background:Khaki;"<br />
| [[OSCON]] || [http://conferences.oreillynet.com/ OSCON 2009] || July 20-24, 2009 || USA || San Jose, CA || pgDay, Speakers, Booth<br />
|-<br />
| [[USENIX]] || [http://www.usenix.org/events/sec09/ USENIX Security '09] || August 10-14, 2009 || Canada || Montreal || <br />
|- style="background:Khaki;"<br />
| [[European PGDay 2009]] || [http://www.pgday.eu/ European PGDay 2009] || Nov 6-7, 2009 || France || Paris || [http://wiki.postgresql.fr/en:pgday2009:start Tutorials, Talks, Party]<br />
|-<br />
| [[PostgreSQL Conference 2009 Japan]] || [http://www.postgresql.jp/events/pgcon09j/e/ JPUG 10th Anniversary Conference] || November 20-21, 2009 || Japan || Tokyo || <br />
|}<br />
<br />
<br />
{| border="1"<br />
|+ <br />
'''Older PostgreSQL Events Listing'''<br />
| '''Event''' || '''Web Page''' || '''Date''' || '''Country''' || '''City''' || '''Activities'''<br />
|-<br />
| [[SIGMOD]] || [http://www.sigmod09.org/ ACM SIGMOD/PODS 2009] || June 29-July 2, 2009 || USA || Providence, RI || <br />
|-<br />
| [[BOSC]] || [http://www.open-bio.org/wiki/BOSC_2009 BOSC 2009] || June 25-26, 2009 || Sweden || Stockholm || <br />
|-<br />
| [[USENIX]] || [http://www.usenix.org/events/usenix09/ USENIX '09] || June 14-19, 2009 || USA || San Diego, CA || <br />
|- style="background:Khaki;"<br />
| [[PGCon 2009]] || [http://www.pgcon.org/ PGCon 2009] || May 19-22, 2009 || Canada || Ottawa || Tutorials, Talks, Party<br />
|-<br />
| [[NWLinuxFest]] || [http://linuxfestnorthwest.org/ NWLinuxFest 2009] || April 25-26, 2009 || USA || Bellingham, WA || Booth<br />
|-<br />
| [[DASFAA]] || [http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=3177 DASFAA] || April 21-23, 2009 || Australia || Brisbane || <br />
|- style="background:Khaki;"<br />
| [[Pg Conference Spring 2009]] || [http://www.postgresqlconference.org/ PostgreSQL Conference East 2009] || April 3-5, 2009 || USA || Philadelphia || Speakers(s), Party <br /> [[PostgreSQLConferenceEast2009|Papers]]<br />
|-<br />
| [[Solutions Linux]] || [http://www.solutionslinux.fr/ Solutions Linux] || March 31 - April 2, 2009 || France || Paris || [http://wiki.postgresql.fr/sl2009:start Booth, Talks]<br />
|-<br />
| [[EDBT]] || [http://www.math.spbu.ru/edbticdt/ EDBT/ICDT 2009] || March 23-26, 2009 || Russia || St. Petersburg || <br />
|-<br />
| [[CLT 09]] || [http://chemnitzer.linux-tage.de/2009/info/ Chemnitzer Linuxtage] || March 14-15, 2009 || Germany || Chemnitz || Booth, Talks, Workshop ([http://andreas.scherbaum.la/blog/archives/525-PostgreSQL-auf-den-Chemnitzer-Linuxtagen.html Infos])<br />
|-<br />
| [[OSBC]] || [http://www.infoworld.com/event/osbc/09/ Open Source Business Conference] || March 10-11, 2009 || USA || San Francisco, CA || No plans<br />
|-<br />
| [[Perl Workshop 2009]] || [http://www.perl-workshop.de/talks/151/view Perl Workshop '09] || February 25-27, 2009 || Germany || Frankfurt/Main || [http://andreas.scherbaum.la/blog/archives/530-Unterlagen-fuer-Tutorial-beim-Perl-Workshop-in-FrankfurtMain.html Tutorial]<br />
|-<br />
| [[FAST09]] || [http://www.usenix.org/events/fast09/ USENIX FAST '09] || February 24-27, 2009 || USA || San Francisco, CA || Didn't attend<br />
|-<br />
| [[SCALE]] || [http://scale7x.socallinuxexpo.org/ SCALE 7x] || February 20-22, 2009 || USA || Los Angeles, CA || Booth, BoF<br />
|-<br />
| [[FOSDEM, Brussels 2009]] || [http://www.fosdem.org/2009/ FOSDEM '09] || February 07-08, 2009 || Belgium || Brussels || Booth, Devroom<br />
|-<br />
| [[ADBC]] || [http://www.cse.unsw.edu.au/~adc09/ 20th Australasian Database Conference] || January 20-23, 2009 || New Zealand || Wellington || <br />
|-<br />
| [[LinuxConfAU]] || [http://linux.conf.au/ linux.conf.au] || January 21-23, 2009 || Australia || Hobart, Tasmania || <br />
|-<br />
| colspan="6" | [[Events/2008 | 2008 events]]<br />
|-<br />
| colspan="6" | [[Events/2007 | 2007 events]]<br />
|}<br />
<br />
== External Links ==<br />
<br />
* [http://conferences.oreillynet.com/ O'Reilly conferences]<br />
* [http://opencheese.com/2007/10/14/open-source-events-2008/ "Open Source and Linux events in 2008"]<br />
<br />
[[Category:PostgreSQL Events]]</div>Greg