What's new in PostgreSQL 9.4


This page contains an overview of PostgreSQL Version 9.4's features, including descriptions, testing and usage information, and links to blog posts containing further information. See also PostgreSQL 9.4 Open Items.

Major new features

Replication improvements

Replication slots

TODO: Blog posts, redundancy of wal_keep_segments et al.

Replication slots allow standbys to provide information to the primary or upstream cascading standby as to the point they've reached in the write-ahead log. This information is available to the primary even when the standby is offline or disconnected. This eliminates the need for wal_keep_segments, which required the admin to estimate how many extra WAL files had to be kept around to ensure the standby could still catch up after excessive replication lag.

Here is an example:

On the primary server, set max_replication_slots to allow the standby to connect to a slot:

 max_replication_slots = 1

The wal_level will also need to be set to at least "archive", but as this example will be using a hot standby, we'll set it to "hot_standby":

 wal_level = hot_standby

Restart the primary for these settings to take effect, then connect to the primary and create a replication slot:

 SELECT * FROM pg_create_physical_replication_slot('standby_replication_slot');

A standby can then use this replication slot by specifying it as the primary_slot_name parameter in its recovery.conf:

 primary_slot_name = 'standby_replication_slot'
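
For context, a minimal recovery.conf on the standby might look like the following sketch; the connection string is a placeholder for your own primary's address and replication user:

 standby_mode = 'on'
 primary_conninfo = 'host=192.168.1.50 user=replicator'
 primary_slot_name = 'standby_replication_slot'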

On the primary, you can see this being used:

 postgres=# SELECT * FROM pg_replication_slots;
          slot_name          | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn 
 --------------------------+--------+-----------+--------+----------+--------+------+--------------+-------------
  standby_replication_slot |        | physical  |        |          | t      |      |              | 0/3000420
 (1 row)

Now the primary will keep hold of WAL for as long as it needs to, which is until all standbys have reported that they have progressed beyond that point. Note, however, that if a standby goes offline and its replication slot isn't dropped, WAL will build up on the primary. So if a standby needs to be decommissioned, its slot should then be dropped like so:

 postgres=# SELECT pg_drop_replication_slot('standby_replication_slot');
  pg_drop_replication_slot 
 --------------------------
  
 (1 row)

Otherwise, ensure there is enough disk capacity available to keep WAL on the primary until the standby returns.
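
To check how much WAL a slot is currently pinning, a query along these lines works in 9.4 (pg_current_xlog_location() and pg_xlog_location_diff() are standard 9.4 functions):

 -- Roughly how much WAL each slot is retaining, in bytes:
 SELECT slot_name,
        pg_xlog_location_diff(pg_current_xlog_location(), restart_lsn) AS retained_bytes
 FROM pg_replication_slots;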

In the future, logical replication will also take advantage of replication slots.

Logical decoding

TODO: Blog posts, explanation of what it's the groundwork for
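
As a brief illustration, here is a minimal sketch using the test_decoding example plugin that ships with 9.4 (this assumes wal_level = logical and a spare replication slot):

 -- Create a logical slot using the test_decoding output plugin:
 SELECT * FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');

 -- After making some changes in another session, consume them:
 SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);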

Performance improvements

GIN indexes now faster and smaller

TODO: Blog posts, benchmark results
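
As a rough way to see the "smaller" part for yourself, compare the size of a GIN index built on the same data under 9.3 and 9.4; the table and index here are hypothetical:

 CREATE TABLE messages (id serial PRIMARY KEY, body_tsv tsvector);
 CREATE INDEX messages_body_gin ON messages USING gin (body_tsv);
 SELECT pg_size_pretty(pg_relation_size('messages_body_gin'));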

pg_prewarm

When a database instance is restarted, its shared buffers start out empty, which means all queries will initially have to read data directly from disk. pg_prewarm can load relation data back into the buffers to "warm" them up again. This means that queries which would otherwise have to pull parts of a table in bit by bit can find the data already waiting in shared buffers.

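A minimal sketch (pg_prewarm ships as a contrib extension in 9.4; the table name is hypothetical):

 -- Install the extension, then pull a relation into shared buffers.
 CREATE EXTENSION pg_prewarm;
 SELECT pg_prewarm('accounts');

The function returns the number of blocks it loaded; by default it reads the whole relation into shared buffers, and optional arguments allow prewarming only the OS cache or a range of blocks.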


Other new features

REFRESH MATERIALIZED VIEW CONCURRENTLY
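
REFRESH MATERIALIZED VIEW CONCURRENTLY refreshes a materialized view without taking the exclusive lock that blocks concurrent reads; it requires a unique index on the view. A sketch with hypothetical names:

 CREATE MATERIALIZED VIEW order_totals AS
     SELECT customer_id, sum(amount) AS total
     FROM orders GROUP BY customer_id;
 CREATE UNIQUE INDEX ON order_totals (customer_id);

 -- Refresh without blocking reads of the view:
 REFRESH MATERIALIZED VIEW CONCURRENTLY order_totals;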

Backwards compatibility
