Slow Counting

Index-only scans were implemented in PostgreSQL 9.2, providing some performance improvement where the visibility map of the table allows it.
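
On 9.2 and later, a count over a recently vacuumed table may therefore be able to run as an index-only scan. A minimal sketch, assuming the tbl table used in the examples below has a b-tree index on a hypothetical id column (any index can serve; whether the planner picks this plan depends on the visibility map and table statistics):

CREATE INDEX tbl_id_idx ON tbl (id);  -- hypothetical index; any existing index may be used
VACUUM tbl;                           -- updates the visibility map
EXPLAIN SELECT COUNT(*) FROM tbl;     -- may now show "Index Only Scan using tbl_id_idx"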

A full count of rows in a table can be comparatively slow in PostgreSQL, typically issued with this SQL:

SELECT COUNT(*) FROM tbl;

The reason this is slow is rooted in the MVCC implementation of PostgreSQL. Because multiple transactions can see different states of the data, there is no straightforward way for "COUNT(*)" to summarize data across the whole table; PostgreSQL must, in some sense, walk through all of the rows. This normally results in a sequential scan that reads information about every row in the table. EXPLAIN ANALYZE reveals what's going on:
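
For example (tbl here is the 100,000-row sample table whose plan is shown below; exact costs and timings will differ on other systems):

EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl;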

                                                      QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=4499.00..4499.01 rows=1 width=0) (actual time=465.588..465.591 rows=1 loops=1)
   ->  Seq Scan on tbl  (cost=0.00..4249.00 rows=100000 width=0) (actual time=0.011..239.212 rows=100000 loops=1)
 Total runtime: 465.642 ms
(3 rows)

It is worth observing that only this precise form of aggregate must be so pessimistic; if it is augmented with a "WHERE" clause like

SELECT COUNT(*) FROM tbl WHERE status = 'something';

PostgreSQL will take advantage of any available index on the restricted field(s) to limit how many records must be counted, which can greatly accelerate such queries. PostgreSQL will still need to read the resulting rows to verify that they exist; other database systems may only need to reference the index in this situation.
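
As a rough sketch of how such a filtered count can be served (the index name below is an assumption for illustration; the exact plan shape depends on the data and on planner statistics):

CREATE INDEX tbl_status_idx ON tbl (status);  -- hypothetical index on the filtered column
ANALYZE tbl;                                  -- refresh planner statistics
EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE status = 'something';
-- The plan will typically use tbl_status_idx (an index or bitmap scan),
-- but the matching heap rows are still visited to verify visibility.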

Estimating the row count

PostgreSQL can instead return an estimated or cached value for a table's row count, which is much faster; see Count estimate for details.
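
One widely used approach, as a rough sketch (reltuples in pg_class is only an estimate maintained by VACUUM and ANALYZE, so it lags behind recent inserts and deletes):

SELECT reltuples::bigint AS estimate
  FROM pg_class
 WHERE relname = 'tbl';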
