Parallel Query Execution

This is currently under development. See the ToDo list.

Purpose

Postgres currently supports full parallelism in client-side code: applications can open multiple database connections and manage them asynchronously, or via threads.
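As a client-side illustration, here is a minimal sketch using the libpq asynchronous API: two queries run concurrently on two connections while a single client thread polls for results. The connection string and table names are placeholders, and a real client would wait on PQsocket() with select() rather than spinning.

  /* Minimal sketch: two queries running concurrently on two libpq
   * connections, managed asynchronously from one client thread.
   * Build with: cc demo.c -lpq */
  #include <stdio.h>
  #include <libpq-fe.h>

  int main(void)
  {
      const char *conninfo = "dbname=postgres";   /* placeholder */
      PGconn     *conn[2];
      int         busy = 2;

      for (int i = 0; i < 2; i++)
      {
          conn[i] = PQconnectdb(conninfo);
          if (PQstatus(conn[i]) != CONNECTION_OK)
              return 1;
          /* PQsendQuery returns immediately; the query runs server-side */
          PQsendQuery(conn[i], i == 0 ? "SELECT count(*) FROM t1"
                                      : "SELECT count(*) FROM t2");
      }

      /* Poll both connections; a real client would select() on PQsocket() */
      while (busy > 0)
      {
          for (int i = 0; i < 2; i++)
          {
              if (conn[i] == NULL)
                  continue;
              PQconsumeInput(conn[i]);
              if (!PQisBusy(conn[i]))
              {
                  PGresult *res;

                  while ((res = PQgetResult(conn[i])) != NULL)
                  {
                      if (PQresultStatus(res) == PGRES_TUPLES_OK)
                          printf("conn %d: %s\n", i, PQgetvalue(res, 0, 0));
                      PQclear(res);
                  }
                  PQfinish(conn[i]);
                  conn[i] = NULL;
                  busy--;
              }
          }
      }
      return 0;
  }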

On the server side, there is already some parallelism:

  • Server-side languages can potentially do parallel operations

Benefits

There are three possible benefits of parallelism:

  • Using multiple CPUs
  • Using multiple I/O channels (for sequential and random I/O)
  • Using multiple CPUs and I/O channels

Approaches

There are several methods to add parallelism:

  • Use fork() to create a helper (on Windows this would likewise be a separate process, not a thread, since threads share an address space while processes do not) and call only libc and parallel-specific functions to perform the parallel computation or I/O (possibly read-only I/O only). This avoids the problem of trying to make the existing backend code thread-safe. An open question is whether this must wait until a transaction can be shared among backend processes. (A minimal sketch of this approach appears after this list.)
  • Same as above, but modify some existing backend modules to be fork/thread-safe, with or without shared memory access; this might allow entire executor node trees to be run in parallel
  • Create full backends that can execute parts of a query in parallel and return results
  • Create a pool of backends waiting for parallel requests
  • An initial approach might start by modifying individual plan nodes to run in parallel in the executor. Eventually we'd need to teach the planner and optimizer how to model and cost parallel query execution.
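A minimal sketch of the first approach above, assuming the helper may call only libc: the parent forks workers that each scan one slice of a file with pread() and report a partial count back over a pipe. The file name and the match predicate are invented for illustration, and error handling is omitted.

  /* Sketch: fork() workers that touch only libc, never backend-private
   * state, and report partial results over pipes. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <fcntl.h>
  #include <sys/wait.h>
  #include <sys/stat.h>

  #define NWORKERS 4

  int main(void)
  {
      int         fd = open("data.bin", O_RDONLY);    /* placeholder file */
      struct stat st;
      int         pipes[NWORKERS][2];

      fstat(fd, &st);
      off_t chunk = (st.st_size + NWORKERS - 1) / NWORKERS;

      for (int w = 0; w < NWORKERS; w++)
      {
          pipe(pipes[w]);
          if (fork() == 0)
          {
              /* Child: scan one slice with pread(); no shared backend state */
              off_t start = w * chunk;
              off_t end = start + chunk > st.st_size ? st.st_size : start + chunk;
              long  hits = 0;
              char  buf[8192];

              for (off_t off = start; off < end;)
              {
                  ssize_t n = pread(fd, buf, sizeof(buf), off);

                  if (n <= 0)
                      break;
                  if (off + n > end)
                      n = end - off;            /* stay inside our slice */
                  for (ssize_t i = 0; i < n; i++)
                      if (buf[i] == 0x2A)       /* illustrative predicate */
                          hits++;
                  off += n;
              }
              write(pipes[w][1], &hits, sizeof(hits));
              _exit(0);
          }
      }

      /* Parent: gather the partial counts */
      long total = 0;
      for (int w = 0; w < NWORKERS; w++)
      {
          long hits;

          read(pipes[w][0], &hits, sizeof(hits));
          total += hits;
          wait(NULL);
      }
      printf("total matches: %ld\n", total);
      return 0;
  }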

Challenges

Finding Appropriate Tasks: For parallelism to be added to a single-threaded task, the task must be divisible into sufficiently large parts that can be executed independently. (If the sub-parts are too small, the overhead of parallelism overwhelms its benefits.) Unfortunately, unlike a GUI application, the Postgres backend executes a query as a series of small tasks that must run in sequence, e.g. parser, planner, executor. (Open questions: is there a quantification of how much time is spent in each of these phases? Could users be invited to participate in metrics collection?)

This means that databases allow parallelism only in limited situations, mostly for large queries that can become CPU- or I/O-bound. For example, selecting a row by its primary key is unlikely to benefit from parallelism, whereas large queries often can.

Returning Data: Another challenge is returning data from the helper process/thread. For an aggregate like SUM() this is easy, since only a single value comes back, but passing a large volume of data back can be complex.
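A minimal sketch of the easy case, assuming a fork()-based helper: the aggregate result comes back through a small anonymous shared-memory slot, so only eight bytes cross the process boundary. Streaming a large tuple set back would instead require a shared-memory queue or temporary files, which is where the complexity lies.

  /* Sketch: returning a scalar aggregate from a forked helper through
   * anonymous shared memory. */
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      /* Shared result slot, visible to parent and child after fork() */
      long *result = mmap(NULL, sizeof(long), PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);

      long values[] = {1, 2, 3, 4, 5};    /* illustrative input */

      if (fork() == 0)
      {
          long sum = 0;

          for (int i = 0; i < 5; i++)
              sum += values[i];
          *result = sum;                  /* 8 bytes cross the boundary */
          _exit(0);
      }

      wait(NULL);
      printf("helper returned SUM() = %ld\n", *result);
      munmap(result, sizeof(long));
      return 0;
  }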

Avoiding Overhead: Parallelism has its own costs, so there will need to be a way to control when parallel execution is used.

Limiting Excessive Parallelism: There also needs to be some mechanism that detects parallelism in other sessions so that CPU and I/O capacity is not exceeded.
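One conceivable mechanism, sketched here under the assumption of a cluster-wide pool of worker slots: a named POSIX semaphore caps the total number of helpers across sessions, and a session falls back to serial execution when no slot is free. The semaphore name and the limit are invented for illustration.

  /* Sketch: a cluster-wide throttle on parallel helpers. */
  #include <fcntl.h>
  #include <semaphore.h>

  #define MAX_PARALLEL_WORKERS 8          /* illustrative limit */

  static sem_t *worker_slots;

  void init_throttle(void)
  {
      /* First process creates the semaphore; later ones attach to it */
      worker_slots = sem_open("/pg_parallel_slots", O_CREAT,
                              0600, MAX_PARALLEL_WORKERS);
  }

  int try_start_parallel_worker(void)
  {
      if (sem_trywait(worker_slots) == 0)
          return 1;       /* slot acquired: safe to fork a helper */
      return 0;           /* pool exhausted: fall back to serial  */
  }

  void finish_parallel_worker(void)
  {
      sem_post(worker_slots);     /* return the slot to the pool */
  }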

Specific Opportunities

Parallel opportunities include:

  • Tablespaces
  • Partitions
  • Foreign tables
  • Multi-table access
  • Joins (e.g. nested loop), CTEs, UNION
  • Sequential scans on 1GB segment files
  • Per-page visibility checks and tuple filtering
  • Aggregates (see the partial-aggregation sketch after this list)
  • Data import/export
  • COPY (to reduce the CPU overhead of parsing)
  • Index builds
  • Constraint checking
  • Expensive functions, e.g. PostGIS
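As one concrete illustration of the aggregates item above, here is a sketch of partial aggregation: each worker computes a partial (sum, count) over its slice, and the leader combines the partials into the final AVG(). Threads are used here only for brevity; the approaches above would use processes.

  /* Sketch: partial aggregation per worker, combined by the leader. */
  #include <stdio.h>
  #include <pthread.h>

  #define NWORKERS 4
  #define NVALUES  1000

  typedef struct
  {
      const double *vals;
      int           lo, hi;       /* slice boundaries      */
      double        sum;          /* partial state, output */
      long          count;
  } PartialAgg;

  static void *worker(void *arg)
  {
      PartialAgg *p = arg;

      p->sum = 0;
      p->count = 0;
      for (int i = p->lo; i < p->hi; i++)
      {
          p->sum += p->vals[i];
          p->count++;
      }
      return NULL;
  }

  int main(void)
  {
      static double vals[NVALUES];
      pthread_t     tid[NWORKERS];
      PartialAgg    part[NWORKERS];
      int           step = NVALUES / NWORKERS;

      for (int i = 0; i < NVALUES; i++)
          vals[i] = i;            /* illustrative input */

      for (int w = 0; w < NWORKERS; w++)
      {
          part[w] = (PartialAgg) {vals, w * step,
                                  w == NWORKERS - 1 ? NVALUES : (w + 1) * step,
                                  0, 0};
          pthread_create(&tid[w], NULL, worker, &part[w]);
      }

      /* Leader combines the partial states */
      double sum = 0;
      long   count = 0;
      for (int w = 0; w < NWORKERS; w++)
      {
          pthread_join(tid[w], NULL);
          sum += part[w].sum;
          count += part[w].count;
      }
      printf("AVG() = %f\n", sum / count);
      return 0;
  }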

Related Work

PargreSQL [1] [2]: The papers describe the architecture and design of the PargreSQL parallel database management system (DBMS) for distributed-memory multiprocessors. PargreSQL is based on the open-source PostgreSQL DBMS and exploits partitioned parallelism.