Parallel Query

From PostgreSQL wiki


NOTE that this page has not been updated to cover the improvements done in PostgreSQL 10.

Parallel query is a new feature in PostgreSQL 9.6. It can make some queries much faster, especially if they scan a lot of data but return few rows to the client. This will often be the case for queries that involve aggregates, like SELECT COUNT(*) FROM table. However, parallel query also has some important limitations. This wiki page attempts to explain what you can expect from parallel query in PostgreSQL 9.6. Currently, parallelism is used only for queries; utility commands such as VACUUM and CLUSTER are not queries, and do not take advantage of parallelism.

For most users, the only thing that must be done to take advantage of parallel query is to set max_parallel_workers_per_gather to a value greater than zero; a value between 1 and 4 is recommended. This number should be set with due regard for max_worker_processes, which limits the total number of background worker processes that may exist at any one time. That limit covers not only parallel workers but any other background workers that may have been configured. If a query plan intends to use a particular number of worker processes but fewer are available at run time, the query will be executed with however many workers can actually be obtained. If no workers are available at all, the leader will do all the work itself, executing both the portion of the plan intended to run in parallel and any portion that could not be parallelized. This does little harm if the chosen parallel plan is essentially equivalent to the optimal non-parallel plan, but may be inefficient otherwise.
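As a sketch, a minimal session-level setup might look like this (the values shown are illustrative, not recommendations; max_parallel_workers_per_gather is the name of this setting as released in 9.6):

```sql
-- Allow up to 2 parallel workers per query (0 disables parallel query):
SET max_parallel_workers_per_gather = 2;

-- max_worker_processes can only be set at server start, e.g. in
-- postgresql.conf; it must leave room for parallel workers alongside
-- any other configured background workers:
--   max_worker_processes = 8
```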


When Can Parallel Query Be Used?

The query planner will never generate a parallel plan when max_parallel_workers_per_gather = 0. This is a special case of the more general principle that no more workers should be used than the number configured via max_parallel_workers_per_gather. The query planner also will not generate parallel plans when dynamic_shared_memory_type = none, because parallel query needs dynamic shared memory segments in order to exchange data between cooperating processes. Furthermore, parallel plans will not be generated when transaction_isolation = serializable, due to implementation restrictions.

Even when parallel query is enabled in general, the query planner will never generate a parallel plan if any of the following are true:

  • The query writes data. If a query contains a data-modifying operation either at the top level or within a CTE, no parallel plans for that query will be generated. This is a limitation of the current implementation which could be lifted in a future release.
  • The query might be suspended during execution. In any situation in which the system thinks that partial or incremental execution might occur, no parallel plan is generated. For example, a cursor created using DECLARE CURSOR will never use a parallel plan. Similarly, a PL/pgSQL loop of the form FOR x IN query LOOP ... END LOOP will never use a parallel plan, because the parallel query system is unable to verify that the code in the loop is safe to execute while parallel query is active.
  • The query uses any function marked PARALLEL UNSAFE. Most system-defined functions are PARALLEL SAFE, but user-defined functions are marked PARALLEL UNSAFE by default. See the section on "Function Labeling For Parallel Safety", below.
  • The query is running inside of another query that is already parallel. For example, if a function called by a parallel query issues an SQL query itself, that query will never use a parallel plan. This is a limitation of the current implementation, but it may not be desirable to remove this limitation, since it could result in a single query using a very large number of processes.
  • The system is running in single-user mode. No background workers will be available in this situation.
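The cursor restriction above can be observed directly with EXPLAIN, which accepts a DECLARE statement (the table name pgbench_accounts is only illustrative):

```sql
-- A plain query over a large table may get a parallel plan:
EXPLAIN SELECT count(*) FROM pgbench_accounts;

-- The same query behind a cursor will not, because execution could be
-- suspended between FETCH calls; the plan will contain no Gather node:
BEGIN;
EXPLAIN DECLARE c CURSOR FOR SELECT count(*) FROM pgbench_accounts;
ROLLBACK;
```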

Even if a parallel plan is generated, there are several reasons why parallelism might not be used at runtime. First, the number of background workers across the entire system is limited by max_worker_processes; parallel workers are included in this total. This may result in a query running with fewer workers than planned, or even with no workers at all. Second, if a query is submitted using the extended query protocol, it will be planned on the assumption that the Execute message will include a fetch count of 0. If a non-zero fetch count is used, the plan will be executed with no workers. Third, if a query plan is generated at a transaction isolation level other than serializable, but the transaction isolation level is changed to serializable between plan time and execution time, the parallel plan will be executed with no workers. In any case where a parallel plan is executed with no workers, the entire plan is executed by the leader, which may be inefficient. The plan was chosen based on the idea that workers would be used, and may be different from the plan that would have been chosen had it been thought that no workers would be available.

What Types of Parallel Plans Are Supported?

Generally, a parallel plan will consist of up to four parts, two of them optional:

1. The innermost path will be a parallel sequential scan. Currently, parallel query generates paths only for queries where the driving table is scanned sequentially rather than via an index. In place of the Seq Scan node that would normally be used to scan the table, a Parallel Seq Scan node will be used. This partitions the data among the workers; each worker processes its own subset of the pages in the table. Pages are handed out one at a time as workers become available; each worker processes all tuples on a page before advancing to the next page.
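A plan of this shape might look roughly like the following (the table name and worker count are illustrative):

```sql
EXPLAIN SELECT * FROM pgbench_accounts WHERE abalance > 1000;
--  Gather
--    Workers Planned: 2
--    ->  Parallel Seq Scan on pgbench_accounts
--          Filter: (abalance > 1000)
```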

2. This path may be joined to one or more other tables using a nested loop or hash join. The inner side of the join may be any kind of path that is otherwise supported by the planner; for example, it may be an index scan which looks up a value based on a column taken from the outer side of the join. Each worker will execute the inner side of the plan in full, which is why merge joins are not supported here. The inner side of a merge join will often involve sorting the entire inner table; even if it instead uses an index, it is unlikely to be productive to have multiple processes each conduct a full index scan of the inner table.
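Such a join might produce a plan shaped roughly like this (table and index names are illustrative): the driving table is scanned in parallel, and each worker probes the index for its own share of the rows:

```sql
EXPLAIN SELECT *
  FROM pgbench_accounts a
  JOIN pgbench_branches b ON b.bid = a.bid
 WHERE a.abalance > 1000;
--  Gather
--    Workers Planned: 2
--    ->  Nested Loop
--          ->  Parallel Seq Scan on pgbench_accounts a
--                Filter: (abalance > 1000)
--          ->  Index Scan using pgbench_branches_pkey on pgbench_branches b
--                Index Cond: (bid = a.bid)
```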

3. The path resulting from the join steps may be aggregated using either grouping or hashing. This will only happen if (a) the query actually involves an aggregate step, (b) the aggregate step involves only aggregates marked PARALLEL SAFE, (c) the aggregates have combine functions and, if they use a transition state of type internal, serialization and deserialization functions, and (d) all join steps in the query were included in the parallel portion of the query.
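When these conditions are met, each worker computes a partial result, and the partial results are combined above the Gather node. The plan shape is roughly as follows (the table name and worker count are illustrative):

```sql
EXPLAIN SELECT count(*) FROM pgbench_accounts;
--  Finalize Aggregate
--    ->  Gather
--          Workers Planned: 2
--          ->  Partial Aggregate
--                ->  Parallel Seq Scan on pgbench_accounts
```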

4. Finally, a Gather node will appear above all parallel steps included in the plan. This operation collects the tuples from the individual workers and brings them back to the leader, from where they can be returned to the client.

As usual, the SQL command EXPLAIN can be used to examine the plan for a query. If a query that is expected to produce a parallel plan does not do so, it may be possible to get a parallel plan by reducing the value of parallel_setup_cost or parallel_tuple_cost. If the planner does produce a parallel plan and more detail about how the work is being distributed is desired, EXPLAIN (ANALYZE, VERBOSE) will execute the query and display per-worker statistics for each plan node.
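For example (the cost values shown are illustrative session-local experiments, not recommended settings):

```sql
-- Make parallel plans look cheaper to the planner for this session:
SET parallel_setup_cost = 10;
SET parallel_tuple_cost = 0.001;

-- Execute the query and report per-worker statistics for each plan node:
EXPLAIN (ANALYZE, VERBOSE) SELECT count(*) FROM pgbench_accounts;
```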

What Parts of a Query Can Run In Parallel?

Since any portion of the plan which is above the Gather node will be executed only by the leader, it is generally desirable for as much of the plan as possible to be below the Gather node. However, there are currently significant restrictions on what things may be done in parallel. In particular:

  • Scans of common table expressions may not appear below Gather.
  • Scans of set-returning functions may not appear below Gather if they are marked PARALLEL RESTRICTED.
  • Scans of plain tables may not appear below Gather if (1) they are temporary tables, or (2) any filter condition on the table uses a function which is marked PARALLEL RESTRICTED, or (3) a TABLESAMPLE clause is used and the tablesample function is marked PARALLEL RESTRICTED.
  • Scans of foreign tables may appear below Gather only if this is supported by the foreign data wrapper. postgres_fdw does not support this, but file_fdw does.
  • Subqueries may not appear below Gather. Note that in some cases the query planner may "flatten" a subquery into the containing query, avoiding this restriction.
  • Plan nodes with InitPlans or SubPlans may not appear below Gather.
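The temporary-table restriction, for example, can be observed directly (the table here is made up for the example):

```sql
-- A temporary table is visible only to the backend that created it, so
-- its scan cannot be handed off to workers:
CREATE TEMP TABLE t AS SELECT g AS id FROM generate_series(1, 1000000) g;
EXPLAIN SELECT count(*) FROM t;  -- a plain Seq Scan, with no Gather node
```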

Function Labeling For Parallel Safety

Functions and aggregates may be marked PARALLEL UNSAFE (the default), PARALLEL RESTRICTED, or PARALLEL SAFE. When a function or aggregate is marked PARALLEL UNSAFE, queries making reference to that function will never be run in parallel. A marking of PARALLEL RESTRICTED means that the function may be present in a parallel query, but only in the portion of that query that executes in the leader. Parallel workers can never be asked to execute PARALLEL RESTRICTED functions. A function which is PARALLEL SAFE can participate fully in parallel query.
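As an illustrative sketch (the function add_one is made up for this example), a label can be given when a function is created, or changed later with ALTER FUNCTION:

```sql
-- A pure computation can safely run in a parallel worker:
CREATE FUNCTION add_one(i integer) RETURNS integer
AS $$ SELECT i + 1 $$
LANGUAGE sql IMMUTABLE PARALLEL SAFE;

-- Relabel an existing function (here, confining it to the leader):
ALTER FUNCTION add_one(integer) PARALLEL RESTRICTED;
```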

Functions must be marked PARALLEL UNSAFE if they write to the database, access sequences, change the transaction state even temporarily (e.g. a PL/pgSQL function which establishes an EXCEPTION block to catch errors), or make persistent changes to settings. Functions must be marked PARALLEL RESTRICTED if they access temporary tables, client connection state, cursors, prepared statements, or miscellaneous backend-local state which the system cannot synchronize in parallel mode; for example, setseed and random are parallel restricted for this last reason. In general, if a function is labeled safe when it is actually restricted or unsafe, or labeled restricted when it is actually unsafe, it may throw errors or produce wrong answers when used in a parallel query. C-language functions could in theory exhibit totally undefined behavior if mislabeled, since there is no way for the system to protect itself against arbitrary C code, but in most likely cases the result will be no worse than for any other mislabeled function. If in doubt, functions should be labeled UNSAFE, which is the default.

If a function executed within a parallel worker acquires locks which are not held by the leader, for example by querying a table not referenced in the query, those locks will be released at worker exit, not at the end of the transaction. If you write a function which does this, and this behavioral difference matters to you, mark such functions PARALLEL RESTRICTED to ensure that they execute only in the leader.
