What's new in PostgreSQL 9.4

From PostgreSQL wiki


Revision as of 23:44, 25 August 2014

This page contains an overview of PostgreSQL Version 9.4's features, including descriptions, testing and usage information, and links to blog posts containing further information. See also PostgreSQL 9.4 Open Items.


Major new features

Replication improvements

Replication slots

TODO: Blog posts, redundancy of wal_keep_segments et al.

Replication slots allow standbys to provide information to the primary, or to an upstream cascading standby, about the point they've reached in the write-ahead log. This information is available to the primary even when the standby is offline or disconnected. It eliminates the need for wal_keep_segments, which required the admin to estimate how many extra WAL files would need to be kept around to ensure supply to the standby in case of excessive replication lag.

Here is an example:

On the primary server, set max_replication_slots to allow the standby to register to a slot:

 max_replication_slots = 1

The wal_level will also need to be set to at least "archive", but as this example will be using a hot standby, we'll set it to "hot_standby":

 wal_level = hot_standby

Restart the primary for these settings to take effect, then connect to the primary and create a replication slot:

 SELECT * FROM pg_create_physical_replication_slot('standby_replication_slot');

A standby can then use this replication slot by specifying it as the primary_slot_name parameter in its recovery.conf:

 primary_slot_name = 'standby_replication_slot'

On the primary, you can see this being used:

 postgres=# SELECT * FROM pg_replication_slots;
          slot_name         | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn 
 ---------------------------+--------+-----------+--------+----------+--------+------+--------------+-------------
  standby_replication_slot  |        | physical  |        |          | t      |      |              | 0/3000420
 (1 row)

Now the primary will keep hold of WAL for as long as it needs to, which is until all standbys have reported that they have progressed beyond that point. Note, however, that if a standby goes offline and its replication slot isn't dropped, WAL will build up on the primary. So if a standby needs to be decommissioned, its slot should then be dropped like so:

 postgres=# SELECT pg_drop_replication_slot('standby_replication_slot');
  pg_drop_replication_slot 
 ---------------------------
 
 (1 row)

Otherwise, ensure there is enough disk capacity available to keep WAL on the primary until the standby returns.

In future, logical replication will also take advantage of replication slots.

Changeset Streaming / Logical decoding

TODO: Blog posts, explanation of what it's the groundwork for
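As a taste of what logical decoding makes possible, the test_decoding output plugin shipped in contrib can stream a human-readable version of each change from a logical replication slot. This sketch assumes a server with wal_level = logical and max_replication_slots set to at least 1; the table and slot names are illustrative:

 postgres=# SELECT * FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');
 postgres=# INSERT INTO some_table VALUES (1);
 postgres=# SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);

The final query returns the decoded changes made since the slot was created, consuming them from the slot as it does so.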

Performance improvements

GIN indexes now faster and smaller

TODO: Blog posts, benchmark results
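No special syntax is needed to benefit: GIN indexes created on 9.4 use the new compressed posting-list format automatically. The table and column names here are illustrative:

 CREATE INDEX idx_documents_tags ON documents USING gin (tags);

Indexes carried over from earlier versions (e.g. via pg_upgrade) keep the old on-disk format until they are reindexed.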


pg_prewarm

When a database instance is restarted, its shared buffers are empty again, which means all queries will initially have to read data directly from disk. pg_prewarm can load relation data back into the buffers to "warm" them up again, so that queries which would otherwise have to load parts of a table bit by bit find the data ready and waiting in shared buffers.

TODO: Example
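The extension ships in contrib, so a minimal session might look like the following (the table name is illustrative; pg_prewarm returns the number of blocks loaded):

 postgres=# CREATE EXTENSION pg_prewarm;
 postgres=# SELECT pg_prewarm('my_table');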

Other new features


ALTER SYSTEM

Normally, cluster-wide settings need to be edited in postgresql.conf, but now changes can be made via the ALTER SYSTEM command:


 postgres=# ALTER SYSTEM SET log_min_duration_statement = '5s';

This doesn't actually change postgresql.conf. Instead it writes the setting to a file called postgresql.auto.conf. This file *always* gets read last, so it will always override any settings in postgresql.conf. Setting it to DEFAULT removes the line from postgresql.auto.conf:

 postgres=# ALTER SYSTEM SET log_min_duration_statement = DEFAULT;

An advantage of making changes this way is that the settings are more likely to be correct as they are validated before they are added:

 postgres=# ALTER SYSTEM SET wal_level = 'like the legend of the phoenix';
 ERROR:  invalid value for parameter "wal_level": "like the legend of the phoenix"
 HINT:  Available values: minimal, archive, hot_standby, logical.

If this had been set in postgresql.conf instead, and someone attempted to restart the cluster, it would fail. Naturally postgresql.auto.conf should never be edited manually.


REFRESH MATERIALIZED VIEW CONCURRENTLY

Prior to PostgreSQL 9.4, refreshing a materialized view meant locking the entire table, and therefore preventing anything from querying it; and if a refresh took a long time to acquire its exclusive lock (while it waited for queries using the view to finish), it in turn held up subsequent queries. This can now be mitigated with the CONCURRENTLY keyword:

 postgres=# REFRESH MATERIALIZED VIEW CONCURRENTLY my_materialized_view;

A unique index will need to exist on the materialized view though. Instead of locking the materialized view up, PostgreSQL creates a temporary updated version of it, compares the two versions, then applies INSERTs and DELETEs against the materialized view to apply the difference. This means queries can still use the materialized view while it's being updated.

Unlike a non-concurrent refresh, tuples aren't frozen, and the materialized view will need vacuuming because the aforementioned DELETEs leave dead tuples behind.

Updatable view improvements

Automatically updatable views were introduced in PostgreSQL 9.3, allowing data in simple views to be updated in the same way as regular tables without the need to define triggers or rules. In PostgreSQL 9.4 this feature has been enhanced to apply to a wider class of views and to allow new data to be validated against the view definition.

Support views with a mix of updatable and non-updatable columns

Previously, a view was only automatically updatable if all of its columns were updatable (i.e., simple references to columns of the underlying relation). The presence of any non-updatable columns such as expressions, literals, function calls or subqueries would prevent automatic updates. Now a view may be automatically updatable even if it contains some non-updatable columns, provided that an INSERT or UPDATE command does not attempt to assign new values to any of the non-updatable columns.

This allows views with computed columns to support automatic updates:

CREATE TABLE products (
  product_id SERIAL PRIMARY KEY,
  product_name TEXT NOT NULL,
  quantity INT,
  reserved INT DEFAULT 0);

CREATE VIEW products_view AS
 SELECT product_id,
        product_name,
        quantity,
        (quantity - reserved) AS available
   FROM products
  WHERE quantity IS NOT NULL;

postgres=# INSERT INTO products_view (product_name, quantity) VALUES
 ('Budget laptop', 100),
 ('Premium laptop', 10);

postgres=# SELECT * FROM products;
 product_id |  product_name  | quantity | reserved 
------------+----------------+----------+----------
          1 | Budget laptop  |      100 |        0
          2 | Premium laptop |       10 |        0
(2 rows)

In this view the available column is non-updatable, but all the other columns can be updated:

postgres=# UPDATE products_view SET quantity = quantity - 10 WHERE product_id = 1;

postgres=# UPDATE products_view SET available = available - 10 WHERE product_id = 1;
ERROR:  cannot update column "available" of view "products_view"
DETAIL:  View columns that are not columns of their base relation are not updatable.



WITH CHECK OPTION

By default, when an INSERT or UPDATE command modifies data in a view, the data is not checked against the view's WHERE clause, so it is possible for the new data not to be visible in the view. For example, with the view above, setting the quantity of a product to NULL would cause it to disappear from the view. To prevent this, specify WITH CHECK OPTION when defining the view:

CREATE VIEW products_view AS
 SELECT product_id,
        product_name,
        quantity,
        (quantity - reserved) AS available
   FROM products
  WHERE quantity IS NOT NULL
  WITH CHECK OPTION;

Now any attempt to insert or update data in a way that does not match the view definition will be rejected:

postgres=# UPDATE products_view SET quantity = NULL WHERE product_id = 1;
ERROR:  new row violates WITH CHECK OPTION for view "products_view"
DETAIL:  Failing row contains (1, Budget laptop, null, 0).


Updatable security barrier views

Previously, marking a view with the security_barrier option would prevent it from being automatically updatable. In PostgreSQL 9.4, this restriction has been removed.

This allows views to be used to implement row- and column-level security. For example:

CREATE TABLE employees (
 employee_id SERIAL PRIMARY KEY,
 employee_name TEXT NOT NULL,
 department TEXT,
 salary MONEY);

CREATE VIEW sales_employees WITH (security_barrier = true) AS
 SELECT employee_id,
        employee_name,
        department
   FROM employees
  WHERE department = 'Sales'
  WITH CHECK OPTION;

GRANT SELECT, UPDATE ON sales_employees TO bob;
REVOKE ALL ON employees FROM bob;

This allows the user bob to read and update the details of any sales employees, but blocks access to salary information and any employees from other departments. In addition, the check option prevents bob from moving an employee to a different department.



WITH ORDINALITY

Set-returning functions are most frequently used in the FROM clause of a query, and now, if such a function call is suffixed with WITH ORDINALITY, a column containing an ordered sequence of numbers is appended to its output.

For example:

 postgres=# SELECT * FROM json_object_keys('{"mobile": 4234234232, "email": "x@me.com", "address": "1 Street Lane"}'::json) WITH ORDINALITY;
  json_object_keys | ordinality 
 ------------------+------------
  mobile           |          1
  email            |          2
  address          |          3
 (3 rows)

Aggregate features

Ordered-set aggregates

There is now support for ordered-set aggregates, which provide order-sensitive information on columns. One example is the mode, the most frequently occurring value:

 postgres=# SELECT mode() WITHIN GROUP (ORDER BY eye_colour) FROM population;
 (1 row)

Or get the value of a column at a specified percentile, such as 20%:

 postgres=# SELECT percentile_disc(0.2) WITHIN GROUP (ORDER BY age) FROM population;
 (1 row)

Aggregate FILTER clause

The FILTER clause is a SQL standard syntax to control which rows are passed to an aggregate function in a query:

SELECT agg_fn(val) FILTER (WHERE condition) FROM ...

Only rows for which the condition evaluates to true are passed to the aggregate function. For example:

SELECT array_agg(i) FILTER (WHERE i % 2 = 0) AS twos,
       array_agg(i) FILTER (WHERE i % 3 = 0) AS threes,
       array_agg(i) FILTER (WHERE i % 5 = 0) AS fives,
       array_agg(i) FILTER (WHERE i % 7 = 0) AS sevens
  FROM generate_series(1, 20) AS g(i);

             twos             |      threes      |    fives     | sevens
 -----------------------------+------------------+--------------+--------
  {2,4,6,8,10,12,14,16,18,20} | {3,6,9,12,15,18} | {5,10,15,20} | {7,14}
 (1 row)


Moving-aggregate support

If an aggregate function is used in a window with a moving frame start (referred to as moving-aggregate mode), it is now possible to define additional aggregate support functions so that PostgreSQL does not have to re-compute the entire aggregate each time the frame start moves.

For example, consider a query that computes the sum of values starting from the current row and including the next 10 rows after that:
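A sketch of such a query, with illustrative table and column names:

 SELECT id,
        SUM(val) OVER (ORDER BY id ROWS BETWEEN CURRENT ROW AND 10 FOLLOWING)
   FROM measurements;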


In previous versions of PostgreSQL, this would involve re-computing the entire sum for each output row (10 additions per row). In PostgreSQL 9.4, the aggregate for each row can now be computed from the previous row's aggregate by subtracting the first value added and adding the new value (1 addition and 1 subtraction per row). This can greatly improve the performance of such queries, particularly for large window frames.

Many of the built-in aggregate functions support this optimization, and user-defined aggregates may also support it by defining suitable inverse transition functions where appropriate.


state_data_size parameter

This parameter may be passed to CREATE AGGREGATE to allow an aggregate function to provide an estimate for the size (in bytes) of its state data. This estimate will be used by PostgreSQL when planning a grouped aggregate query. For example, the query:

SELECT department, AVG(salary)
  FROM employees
 GROUP BY department;

may be executed using hash aggregation, computing all the aggregate groups at the same time:

          QUERY PLAN          
 -----------------------------
  HashAggregate
    Group Key: department
    ->  Seq Scan on employees

or by sorting the data and computing the aggregate for each group in turn:

             QUERY PLAN             
 -----------------------------------
  GroupAggregate
    Group Key: department
    ->  Sort
          Sort Key: department
          ->  Seq Scan on employees

The plan using hash aggregation may be more efficient, but it may also use much more memory, and therefore it will only be considered if it is estimated to fit into work_mem, which will depend on the estimated size of the aggregate's state data.
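For a user-defined aggregate, the estimate can be supplied via the SSPACE option of CREATE AGGREGATE. A toy sketch (the aggregate itself is illustrative; array_append is a built-in function):

 CREATE AGGREGATE array_collect (bigint) (
   SFUNC = array_append,
   STYPE = bigint[],
   SSPACE = 1024,  -- estimated state size in bytes
   INITCOND = '{}'
 );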


Performance of NUMERIC aggregates

The performance of various aggregate functions that use NUMERIC types internally has been improved in PostgreSQL 9.4. This includes the following aggregates:

  • SUM() and AVG() over bigint and numeric values.
  • STDDEV_POP(), STDDEV_SAMP(), STDDEV(), VAR_POP(), VAR_SAMP() and VARIANCE() over smallint, int, bigint and numeric values.

Backwards compatibility
