What's new in PostgreSQL 9.4


This page contains an overview of PostgreSQL Version 9.4's features, including descriptions, testing and usage information, and links to blog posts containing further information. See also PostgreSQL 9.4 Open Items.

Major new features

JSONB Binary JSON storage

With the new JSONB data type for PostgreSQL, users no longer have to choose between relational and non-relational data stores: they can have both at the same time. JSONB supports fast lookups and simple expression search queries using Generalized Inverted Indexes (GIN). Multiple new support functions enable users to extract and manipulate JSON data, with performance that matches or surpasses the most popular document databases. With JSONB, table data can be easily integrated with document data for a fully integrated database environment.

For example, imagine we have a large set of JSON documents which contain publication data about various books. We can store them in a JSONB column like so:

   CREATE TABLE booksdata (
       title citext NOT NULL,
       isbn isbn NOT NULL PRIMARY KEY,
       pubinfo jsonb NOT NULL
   );

And then we can create a general index on the JSONB like so:

   CREATE INDEX ON booksdata USING GIN (pubinfo);


   CREATE INDEX ON booksdata USING GIN (pubinfo jsonb_path_ops);

There are two different versions depending on the operations you expect to run. The standard GIN index supports every kind of operation we support for JSONB. The jsonb_path_ops index supports only the containment operator "@>" (see below), but produces a smaller and faster index for these kinds of searches.

Once we have the index, we can do arbitrary path searches on JSONB, and these can be index searches, depending on what the planner picks. Examples of JSONB queries:

Count all books published by "It Books":

   SELECT count(*) FROM booksdata
   WHERE pubinfo @> '{ "publisher" : "It Books" }';

Get the ISBN, and extract the cost of the book as a string, for all books:

   SELECT isbn, pubinfo #>> '{"cost"}'
   FROM booksdata;

Give me the count of all books which have both the "publisher" and "cost" keys:

   SELECT count(*) FROM booksdata
   WHERE pubinfo ?& array['publisher','cost'];

JSON operations can be combined with standard PostgreSQL aggregates and other query operations. This allows us to provide far more functionality than non-relational databases have. For example:

Give me the average cost of all books from "It Books":

   SELECT avg((pubinfo #>> '{"cost"}')::NUMERIC)
   FROM booksdata
   WHERE pubinfo @> '{ "publisher" : "It Books" }';

Return the publisher name as a string, and the average cost for all books grouped by each publisher name:

   SELECT pubinfo #>> '{"publisher"}' AS publisher,
       avg((pubinfo #>> '{"cost"}')::NUMERIC)
   FROM booksdata
   GROUP BY publisher
   ORDER BY publisher;

Further, JSONB is sortable. It sorts in a sensible way, first by keys and then by values, according to their primitive JSON types (numbers, booleans and strings). This allows JSONB values to be compared and even paginated. For example, this query pulls the full pubinfo from all "It Books" titles, sorts them, and limits to 25:

   SELECT pubinfo
   FROM booksdata
   WHERE pubinfo @> '{ "publisher" : "It Books" }'
   ORDER BY pubinfo LIMIT 25 OFFSET 0;

This query grabs all pubinfo for "It Books" titles and then sorts them by the publication date inside the JSON. Since JSON has no internal type for dates, you'll want to store dates in a format that sorts correctly as text, such as ISO 8601:

   SELECT pubinfo
   FROM booksdata
   WHERE pubinfo @> '{ "publisher" : "It Books" }'
   ORDER BY pubinfo #> '{"published_on"}';


Replication improvements

Replication slots

TODO: Blog posts, redundancy of wal_keep_segments et al.

Replication slots allow standbys to provide information to the primary or upstream cascading standby as to the point they've reached in the write-ahead log. This information is available to the primary even when the standby is offline or disconnected. This eliminates the need for wal_keep_segments, which required the admin to estimate how many extra WAL files would need to be kept around in order to ensure supply to the standby in case of excessive replication lag.

Here is an example:

On the primary server, set max_replication_slots to allow the standby to register to a slot:

 max_replication_slots = 1

The wal_level will also need to be set to at least "archive", but as this example will be using a hot standby, we'll set it to "hot_standby":

 wal_level = hot_standby

Restart the primary for these settings to take effect, then connect to the primary and create a replication slot:

 SELECT * FROM pg_create_physical_replication_slot('standby_replication_slot');

A standby can then use this replication slot by specifying it as the primary_slot_name parameter in its recovery.conf:

 primary_slot_name = 'standby_replication_slot'

On the primary, you can see this being used:

 postgres=# SELECT * FROM pg_replication_slots;
         slot_name          | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn 
 ---------------------------+--------+-----------+--------+----------+--------+------+--------------+-------------
  standby_replication_slot  |        | physical  |        |          | t      |      |              | 0/3000420
 (1 row)

Now the primary will keep hold of WAL for as long as it needs to, which is until all standbys have reported that they have progressed beyond that point. Note, however, that if a standby goes offline and the replication slot isn't dropped, WAL will build up on the primary. So if a standby needs to be decommissioned, its slot should then be dropped like so:

 # SELECT pg_drop_replication_slot('standby_replication_slot');
  pg_drop_replication_slot 
 ---------------------------
  
 (1 row)

Otherwise, ensure there is enough disk capacity available to keep WAL on the primary until the standby returns.

In future, logical replication will also take advantage of replication slots.

Changeset Streaming / Logical decoding

In 9.4, Changeset Streaming, otherwise known as "Logical Decoding", supplies a new API for reading, filtering and manipulating the PostgreSQL replication stream. This interface is the foundation for new replication tools, such as Bi-Directional Replication, which supports the creation of multi-master PostgreSQL clusters.

TODO: Blog posts, explanation of what it's the groundwork for
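As a minimal sketch of the new API, assuming wal_level is set to "logical", max_replication_slots is at least 1, and the contrib test_decoding output plugin is installed, a logical slot can be created and its change stream inspected with the built-in SQL functions:

 SELECT * FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');

 -- after some INSERT/UPDATE/DELETE activity on the database:
 SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);

Each returned row describes a decoded change in the textual format produced by the plugin; real replication tools would consume the same stream through a custom output plugin instead.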

Performance improvements

GIN indexes now faster and smaller

TODO: Blog posts, benchmark results


pg_prewarm

When a database instance is restarted, its shared buffers are empty again, which means all queries will initially have to read data directly from disk. The new pg_prewarm extension can load relation data back into shared buffers to "warm" them up again. This means that queries which would otherwise have to read parts of a table back in bit by bit can find the data ready for them in shared buffers.

TODO: Example
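A minimal usage sketch (booksdata here is just the example table from the JSONB section; any relation name works):

 CREATE EXTENSION pg_prewarm;
 SELECT pg_prewarm('booksdata');

pg_prewarm returns the number of blocks it loaded into the buffer cache.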

Other new features


ALTER SYSTEM

Normally, cluster-wide settings need to be edited in postgresql.conf, but now changes can be made via the ALTER SYSTEM command:


 postgres=# ALTER SYSTEM SET log_min_duration_statement = '5s';

This doesn't actually change postgresql.conf. Instead it writes the setting to a file called postgresql.auto.conf. This file *always* gets read last, so it will always override any settings in postgresql.conf. Setting a parameter to DEFAULT removes the line from postgresql.auto.conf:

 postgres=# ALTER SYSTEM SET log_min_duration_statement = DEFAULT;
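Note that ALTER SYSTEM only writes the file; as with a manual edit, the server must re-read its configuration before the new value takes effect. For settings that don't require a restart, this can be done with the standard reload function:

 postgres=# SELECT pg_reload_conf();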

An advantage of making changes this way is that the settings are more likely to be correct as they are validated before they are added:

 postgres=# ALTER SYSTEM SET wal_level = 'like the legend of the phoenix';
 ERROR:  invalid value for parameter "wal_level": "like the legend of the phoenix"
 HINT:  Available values: minimal, archive, hot_standby, logical.

If this had been set in postgresql.conf instead, and someone attempted to restart the cluster, it would fail. Naturally postgresql.auto.conf should never be edited manually.


REFRESH MATERIALIZED VIEW CONCURRENTLY

Prior to PostgreSQL 9.4, refreshing a materialized view meant locking the entire table, and therefore preventing anything from querying it; and if a refresh took a long time to acquire the exclusive lock (while it waited for queries using the view to finish), it in turn held up subsequent queries. This can now be mitigated with the CONCURRENTLY keyword:


 REFRESH MATERIALIZED VIEW CONCURRENTLY my_matview;

A unique index will need to exist on the materialized view though. Instead of locking the materialized view, it creates a temporary updated version of it, compares the two versions, then applies INSERTs and DELETEs against the materialized view to apply the difference. This means queries can still use the materialized view while it's being updated.

Unlike the non-concurrent form, the concurrent refresh doesn't freeze tuples, and the materialized view will need VACUUMing, since the aforementioned DELETEs will leave dead tuples behind.
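A sketch of the full workflow, using illustrative names (booksdata is the example table from the JSONB section; the unique index is what makes the concurrent refresh possible):

 CREATE MATERIALIZED VIEW book_titles AS
     SELECT isbn, title FROM booksdata;

 CREATE UNIQUE INDEX ON book_titles (isbn);

 REFRESH MATERIALIZED VIEW CONCURRENTLY book_titles;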

Updatable view improvements

Automatically updatable views were introduced in PostgreSQL 9.3, allowing data in simple views to be updated in the same way as regular tables without the need to define triggers or rules. In PostgreSQL 9.4 this feature has been enhanced to apply to a wider class of views and to allow new data to be validated against the view definition.

Support views with a mix of updatable and non-updatable columns

Previously, a view was only automatically updatable if all of its columns were updatable (i.e., simple references to columns of the underlying relation). The presence of any non-updatable columns such as expressions, literals, function calls or subqueries would prevent automatic updates. Now a view may be automatically updatable even if it contains some non-updatable columns, provided that an INSERT or UPDATE command does not attempt to assign new values to any of the non-updatable columns.

This allows views with computed columns to support automatic updates:

CREATE TABLE products (
  product_id SERIAL PRIMARY KEY,
  product_name TEXT NOT NULL,
  quantity INT,
  reserved INT DEFAULT 0);

CREATE VIEW products_view AS
 SELECT product_id,
        product_name,
        quantity,
        reserved,
        (quantity - reserved) AS available
   FROM products
  WHERE quantity IS NOT NULL;

postgres=# INSERT INTO products_view (product_name, quantity) VALUES
 ('Budget laptop', 100),
 ('Premium laptop', 10);

postgres=# SELECT * FROM products;
 product_id |  product_name  | quantity | reserved 
------------+----------------+----------+----------
          1 | Budget laptop  |      100 |        0
          2 | Premium laptop |       10 |        0
(2 rows)

In this view the available column is non-updatable, but all the other columns can be updated:

postgres=# UPDATE products_view SET quantity = quantity - 10 WHERE product_id = 1;

postgres=# UPDATE products_view SET available = available - 10 WHERE product_id = 1;
ERROR:  cannot update column "available" of view "products_view"
DETAIL:  View columns that are not columns of their base relation are not updatable.



By default, when an INSERT or UPDATE command modifies data in a view, the data is not checked against the view's WHERE clause. Thus it is possible that the new data is not visible in the view. For example, with the view above, setting the quantity of a product to NULL would cause it to disappear from the view. To prevent this, specify WITH CHECK OPTION when defining the view:

CREATE VIEW products_view AS
 SELECT product_id,
        product_name,
        quantity,
        reserved,
        (quantity - reserved) AS available
   FROM products
  WHERE quantity IS NOT NULL
  WITH CHECK OPTION;

Now any attempt to insert or update data in a way that does not match the view definition will be rejected:

postgres=# UPDATE products_view SET quantity = NULL WHERE product_id = 1;
ERROR:  new row violates WITH CHECK OPTION for view "products_view"
DETAIL:  Failing row contains (1, Budget laptop, null, 0).


Updatable security barrier views

Previously, marking a view with the security_barrier option would prevent it from being automatically updatable. In PostgreSQL 9.4, this restriction has been removed.

This allows views to be used to implement row- and column-level security. For example:

CREATE TABLE employees (
 employee_id SERIAL PRIMARY KEY,
 employee_name TEXT NOT NULL,
 department TEXT,
 salary MONEY);

CREATE VIEW sales_employees WITH (security_barrier = true) AS
 SELECT employee_id,
        employee_name,
        department
   FROM employees
  WHERE department = 'Sales'
  WITH CHECK OPTION;

GRANT SELECT, UPDATE ON sales_employees TO bob;
REVOKE ALL ON employees FROM bob;

This allows the user bob to read and update the details of any sales employees, but blocks access to salary information and any employees from other departments. In addition, the check option prevents bob from moving an employee to a different department.



WITH ORDINALITY

Set-returning functions are most frequently used in the FROM clause of a query, and now if such a function call is suffixed with WITH ORDINALITY, a column containing an ordered sequence of numbers is added to its output.

For example:

 postgres=# SELECT * FROM json_object_keys('{"mobile": 4234234232, "email": "x@me.com", "address": "1 Street Lane"}'::json) WITH ORDINALITY;
  json_object_keys | ordinality 
 ------------------+------------
  mobile           |          1
  email            |          2
  address          |          3
 (3 rows)

Here's a less trivial example. Imagine you have some time-series data like so:

             ts           │              speed              │           temperature
   2014-10-11 12:00:00-07 │ {11.1,11.1,11.3,11.4,12.2,11.2} │ {38.1,38.0,38.5,38.7,39.1,39.7}
   2014-10-11 12:01:00-07 │ {11.1,10.5,10.1,10.1,10.2,10.3} │ {39.5,39.2,39.2,39.0,38.9,38.7}
   2014-10-11 12:02:00-07 │ {10.3,10.1,9.9,9.8,9.7,9.7}     │ {38.6,38.6,38.3,38.2,38.2,38.1}

You can expand the arrays into columns of values, which you can then do computations on, like so:

   SELECT o.ts, u.*
   FROM observations o,
        unnest(speed, temperature) WITH ORDINALITY AS u(speed, temperature, ord);

This gives you:

             ts           │ speed │ temperature │ ord
   2014-10-11 12:00:00-07 │  11.1 │        38.1 │   1
   2014-10-11 12:00:00-07 │  11.1 │        38.0 │   2
   2014-10-11 12:00:00-07 │  11.3 │        38.5 │   3
   2014-10-11 12:00:00-07 │  11.4 │        38.7 │   4
   2014-10-11 12:00:00-07 │  12.2 │        39.1 │   5
   2014-10-11 12:00:00-07 │  11.2 │        39.7 │   6
   2014-10-11 12:01:00-07 │  11.1 │        39.5 │   1
   2014-10-11 12:01:00-07 │  10.5 │        39.2 │   2
   2014-10-11 12:01:00-07 │  10.1 │        39.2 │   3
   2014-10-11 12:01:00-07 │  10.1 │        39.0 │   4
   2014-10-11 12:01:00-07 │  10.2 │        38.9 │   5
   2014-10-11 12:01:00-07 │  10.3 │        38.7 │   6
   2014-10-11 12:02:00-07 │  10.3 │        38.6 │   1
   2014-10-11 12:02:00-07 │  10.1 │        38.6 │   2
   2014-10-11 12:02:00-07 │   9.9 │        38.3 │   3
   2014-10-11 12:02:00-07 │   9.8 │        38.2 │   4
   2014-10-11 12:02:00-07 │   9.7 │        38.2 │   5
   2014-10-11 12:02:00-07 │   9.7 │        38.1 │   6

Aggregate features

Ordered-set aggregates

There is now support for ordered-set aggregates, which compute order-sensitive statistics on columns. One example is the mode (the most frequent value):

 postgres=# SELECT mode() WITHIN GROUP (ORDER BY eye_colour) FROM population;

Or get the value of a column at a specified percentile, such as the 20th percentile:

 postgres=# SELECT percentile_disc(0.2) WITHIN GROUP (ORDER BY age) FROM population;

Aggregate FILTER clause

The FILTER clause is a SQL standard syntax to control which rows are passed to an aggregate function in a query:

SELECT agg_fn(val) FILTER (WHERE condition) FROM ...

Only rows for which the condition evaluates to true are passed to the aggregate function. For example:

SELECT array_agg(i) FILTER (WHERE i % 2 = 0) AS twos,
       array_agg(i) FILTER (WHERE i % 3 = 0) AS threes,
       array_agg(i) FILTER (WHERE i % 5 = 0) AS fives,
       array_agg(i) FILTER (WHERE i % 7 = 0) AS sevens
  FROM generate_series(1, 20) AS g(i);

             twos             |      threes      |    fives     | sevens 
 -----------------------------+------------------+--------------+--------
  {2,4,6,8,10,12,14,16,18,20} | {3,6,9,12,15,18} | {5,10,15,20} | {7,14}
(1 row)


Moving-aggregate support

If an aggregate function is used in a window with a moving frame start (referred to as moving-aggregate mode), it is now possible to define additional aggregate support functions so that PostgreSQL no longer needs to re-compute the entire aggregate each time the frame start moves.

For example, consider a query that computes the sum of values starting from the current row and including the next 10 rows after that:

 SELECT id, SUM(val) OVER (ORDER BY id ROWS BETWEEN CURRENT ROW AND 10 FOLLOWING)
   FROM tbl;


In previous versions of PostgreSQL, this would involve re-computing the entire sum for each output row (10 additions per row). In PostgreSQL 9.4, the aggregate for each row can now be computed from the previous row's aggregate by subtracting the first value added and adding the new value (1 addition and 1 subtraction per row). This can greatly improve the performance of such queries, particularly for large window frames.

Many of the built-in aggregate functions support this optimization, and user-defined aggregates may also support it by defining suitable inverse transition functions where appropriate.
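As a sketch of the user-defined case, a moving-aggregate-enabled sum over numeric could be declared using the new MSFUNC/MINVFUNC/MSTYPE options of CREATE AGGREGATE; here the built-in numeric_add and numeric_sub functions serve as the forward and inverse transition functions (the aggregate name my_sum is illustrative):

 CREATE AGGREGATE my_sum(numeric) (
     sfunc     = numeric_add,  -- ordinary forward transition function
     stype     = numeric,
     initcond  = '0',
     msfunc    = numeric_add,  -- forward transition in moving-aggregate mode
     minvfunc  = numeric_sub,  -- inverse transition: removes the value leaving the frame
     mstype    = numeric,
     minitcond = '0'
 );

With the inverse transition function defined, the planner can update the running state incrementally as the window frame start advances, instead of re-aggregating the whole frame.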


state_data_size parameter

This parameter may be passed to CREATE AGGREGATE to allow an aggregate function to provide an estimate for the size (in bytes) of its state data. This estimate will be used by PostgreSQL when planning a grouped aggregate query. For example, the query:

SELECT department, AVG(salary)
  FROM employees
 GROUP BY department;

may be executed using hash aggregation, computing all the aggregate groups at the same time:

            QUERY PLAN
   ------------------------------
    HashAggregate
      Group Key: department
      ->  Seq Scan on employees

or by sorting the data and computing the aggregate for each group in turn:

            QUERY PLAN
   --------------------------------
    GroupAggregate
      Group Key: department
      ->  Sort
            Sort Key: department
            ->  Seq Scan on employees

The plan using hash aggregation may be more efficient, but it may also use much more memory, and therefore it will only be considered if it is estimated to fit into work_mem, which will depend on the estimated size of the aggregate's state data.
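A sketch of how the estimate is supplied: state_data_size corresponds to the SSPACE option of CREATE AGGREGATE, and the 16384-byte figure below is an arbitrary illustrative value, as is the aggregate name:

 CREATE AGGREGATE big_state_sum(numeric) (
     sfunc  = numeric_add,
     stype  = numeric,
     sspace = 16384  -- estimated size of the state value, in bytes
 );

The planner multiplies this per-group estimate by the expected number of groups when deciding whether a hash aggregate will fit in work_mem.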


Performance of NUMERIC aggregates

The performance of various aggregate functions that use NUMERIC types internally has been improved in PostgreSQL 9.4. This includes the following aggregates:

  • SUM() and AVG() over bigint and numeric values.
  • STDDEV_POP(), STDDEV_SAMP(), STDDEV(), VAR_POP(), VAR_SAMP() and VARIANCE() over smallint, int, bigint and numeric values.

Backwards compatibility