Slow Counting
<div>{{Languages}}<br />
'''Note that the following article only applies to versions of PostgreSQL prior to 9.2. Index-only scans are now implemented.'''<br />
<br />
One operation that PostgreSQL is known to perform slowly is a full count of the rows in a table, typically using this SQL:<br />
<br />
<code><pre><br />
SELECT COUNT(*) FROM tbl<br />
</pre></code><br />
<br />
The reason why this is slow is related to the [[MVCC]] implementation in PostgreSQL. The fact that multiple transactions can see different states of the data means that there can be no straightforward way for "COUNT(*)" to summarize data across the whole table; PostgreSQL '''must''' walk through all rows, in some sense. This normally results in a sequential scan reading information about every row in the table. A good way to see what is going on with your query is to use EXPLAIN ANALYZE:<br />
<br />
<code><pre><br />
postgres=# EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl;<br />
QUERY PLAN <br />
-----------------------------------------------------------------------------------------------------------------------<br />
Aggregate (cost=4499.00..4499.01 rows=1 width=0) (actual time=465.588..465.591 rows=1 loops=1)<br />
-> Seq Scan on tbl (cost=0.00..4249.00 rows=100000 width=0) (actual time=0.011..239.212 rows=100000 loops=1)<br />
Total runtime: 465.642 ms<br />
(3 rows)<br />
</pre></code><br />
<br />
It is worth observing that it is only this precise form of aggregate that must be so pessimistic; if augmented with a "WHERE" clause like<br />
<br />
<code><pre><br />
SELECT COUNT(*) FROM tbl WHERE status = 'something'<br />
</pre></code><br />
<br />
PostgreSQL will take advantage of available indexes against the restricted field(s) to limit how many records must be counted, which can greatly accelerate such queries. PostgreSQL will still need to read the resulting rows to verify that they exist; other database systems may only need to reference the index in this situation.<br />
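For example, a sketch assuming a <code>status</code> column like the one above (the column and index names are illustrative, not from a real schema):<br />
<br />
<code><pre><br />
-- Index the column used in the WHERE clause (names are hypothetical)<br />
CREATE INDEX tbl_status_idx ON tbl (status);<br />
SELECT COUNT(*) FROM tbl WHERE status = 'something';<br />
-- EXPLAIN will now typically show an index or bitmap index scan<br />
-- instead of a full Seq Scan when the value is selective.<br />
</pre></code><br />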
<br />
== Estimating the row count ==<br />
<br />
When only an approximate count is needed, one PostgreSQL alternative is to use the reltuples field from the pg_class catalog table:<br />
<br />
<code><pre><br />
pgbench=# SELECT reltuples FROM pg_class WHERE relname = 'tbl';<br />
reltuples <br />
-----------<br />
250<br />
</pre></code><br />
<br />
Or, to avoid ambiguity, since tables with the same name can exist in multiple schemas in a database:<br />
<code><pre>SELECT reltuples FROM pg_class WHERE oid = 'my_schema.tbl'::regclass;</pre></code><br />
<br />
This presumes that ANALYZE has been run on the table often enough to keep these statistics up to date. If [http://www.postgresql.org/docs/current/interactive/routine-vacuuming.html#AUTOVACUUM autovacuum] is enabled, ANALYZE is run automatically.<br />
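Putting this together, a minimal sketch (assuming the table is named tbl in the current schema) that refreshes the statistics before reading the estimate:<br />
<br />
<code><pre><br />
ANALYZE tbl;  -- refresh the planner statistics if they may be stale<br />
SELECT reltuples::bigint AS estimate<br />
FROM pg_class<br />
WHERE oid = 'tbl'::regclass;<br />
</pre></code><br />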
<br />
Another popular approach is to use a trigger-based mechanism to count the rows in the table. One or both of these techniques are covered in the following:<br />
* [http://www.varlena.com/GeneralBits/120.php Counting Rows]<br />
* [http://www.varlena.com/GeneralBits/49.php Tracking the Row Count]<br />
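As a rough illustration of the trigger-based approach (all object names here are made up; the linked articles cover more complete variants, including handling concurrent writers):<br />
<br />
<code><pre><br />
-- Sketch only: a one-row side table holds the running count<br />
CREATE TABLE tbl_count (total bigint NOT NULL);<br />
INSERT INTO tbl_count VALUES (0);<br />
<br />
CREATE FUNCTION tbl_count_trig() RETURNS trigger AS $$<br />
BEGIN<br />
  IF TG_OP = 'INSERT' THEN<br />
    UPDATE tbl_count SET total = total + 1;<br />
  ELSIF TG_OP = 'DELETE' THEN<br />
    UPDATE tbl_count SET total = total - 1;<br />
  END IF;<br />
  RETURN NULL;  -- result of an AFTER trigger is ignored<br />
END;<br />
$$ LANGUAGE plpgsql;<br />
<br />
CREATE TRIGGER tbl_count_update<br />
AFTER INSERT OR DELETE ON tbl<br />
FOR EACH ROW EXECUTE PROCEDURE tbl_count_trig();<br />
</pre></code><br />
<br />
Reading the count then becomes a cheap single-row lookup (SELECT total FROM tbl_count), at the cost of extra work on every insert and delete.<br />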
<br />
* Source material: [[Why PostgreSQL Instead of MySQL: Comparing Reliability and Speed in 2007|Why PostgreSQL Instead of MySQL]] (which also discusses how this is different in MySQL)<br />
<br />
[[Category:FAQ]]<br />
[[Category:Performance]]</div>