
From PostgreSQL wiki

Critical Fix for pg_upgrade/pg_migrator Users 2011-04-08: Details

The following are the details of the pg_upgrade and (formerly) pg_migrator fix published to the announce email list on 2011-04-08 and included in the PostgreSQL 8.4.8 and 9.0.4 releases.

What can you tell me about this bug?

This is the announcement sent out to all users about this bug:

A bug has been discovered in all released versions of pg_upgrade and (formerly) pg_migrator. Anyone who has used pg_upgrade or pg_migrator should take the following corrective actions as soon as possible. You might also want to make a backup of the pg_clog directory if you do not already have a recent copy. If you fail to take action promptly, it might result in unexpected downtime.

This bug can cause queries to return the following error:

       ERROR: could not access status of transaction ######
       DETAIL: could not open file "pg_clog/####": No such file or directory

This error prevents access to very wide values stored in the database. First, immediately on reading this, make a file-level backup of the pg_clog directory in your PostgreSQL database data directory. Next, to prevent such failures, run the following psql script, as the superuser, in every upgraded database as soon as possible:

       -- This script fixes data in pre-8.4.8 and pre-PG 9.0.4
       -- servers that were upgraded by pg_upgrade and pg_migrator.
       -- Run the script using psql for every database in the cluster
       -- except 'template0', e.g.:
       --     psql -U postgres -a -f pg_upgrade_fix.sql dbname
       -- This must be run from a writable directory.
       -- Depending on the size and configuration of your database,
       -- this script might generate a lot of I/O and degrade database
       -- performance.  Users should execute this script during a low
       -- traffic period and monitor the database load.
       CREATE TEMPORARY TABLE pg_upgrade_fix AS
               SELECT 'VACUUM FREEZE pg_toast.' || c.relname || ';'
               FROM    pg_class c, pg_namespace n
               WHERE   c.relnamespace = n.oid AND
                       n.nspname = 'pg_toast' AND
                       c.relkind = 't'
               ORDER by c.oid;
       \copy pg_upgrade_fix TO 'pg_upgrade_tmp.sql'
       \i pg_upgrade_tmp.sql

A fix will be included in the upcoming Postgres releases 8.4.8 and 9.0.4. These releases will remove the need to run the above script after upgrades by correctly restoring all TOAST tables in the migrated databases. However, databases that were already upgraded with an earlier version of pg_upgrade still need the script run, even if the server has since been updated to 8.4.8 or 9.0.4.
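The script must be run once per database. A shell loop along the following lines can drive it; this is a dry-run sketch that only prints the psql commands it would execute, and the database names are placeholders (on a real cluster, generate the list with `psql -At -c "SELECT datname FROM pg_database WHERE datname <> 'template0'"`):

```shell
# Dry-run sketch: print one psql invocation per database.
# DATABASES is an example list; on a real cluster, produce it with:
#   psql -U postgres -At -c "SELECT datname FROM pg_database WHERE datname <> 'template0'"
DATABASES="postgres template1 appdb"
for db in $DATABASES; do
    echo psql -U postgres -a -f pg_upgrade_fix.sql "$db"
done
```

Removing the leading echo executes the commands for real; run it from a writable directory, since the script writes pg_upgrade_tmp.sql into the current directory.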

Who is affected by this bug?

If you upgraded your PostgreSQL installation using a version of pg_upgrade or pg_migrator released before PostgreSQL 8.4.8 and 9.0.4, you are probably affected by this bug.

Specifically, affected users had TOAST tables (out-of-line storage for wide values) containing very old records before the upgrade. However, since this is difficult to verify after the fact, the PostgreSQL Project recommends that all users who used pg_upgrade apply the cleanup script above.

Users in the affected group need to run the cleanup script as soon as possible to avoid being trapped by the bug, as detailed below.

How do I know if I am being affected by the bug?

In general, you will not see any problems immediately after the upgrade. Instead, weeks or months later, you will see error messages similar to this one, either in client sessions or in the server logs:

  ERROR: could not access status of transaction ######
  DETAIL: could not open file "pg_clog/####": No such file or directory

This error will prevent some queries from running.

Resolving the errors

If you are getting the above errors about missing clog files, you will need to restore the missing clog files from backups of the PostgreSQL data directory, if at all possible. If all the clog files you still have are lower-numbered than the one being requested, a "wrap-around" has occurred, and time may be short to implement the fix before more serious corruption happens. In that case, consider disabling access to the database to prevent the clog from advancing into the range covered by the requested files. If the clog advances back over the point where the errors were happening, records that previously caused errors may suddenly become visible, and other records may disappear (if the new clog marks their transactions as aborted).

Clog files hold transaction commit/abort status and rarely change once a newer clog file has been created.
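The segment file named in the error message can be derived from the transaction ID. As a rough sketch, assuming the default 8 kB block size: each clog byte stores four 2-bit transaction statuses, and each segment file holds 32 pages, so one segment covers 4 * 8192 * 32 = 1,048,576 transactions:

```shell
# Map a transaction ID to the pg_clog segment file holding its status.
# Assumes the default BLCKSZ of 8192: 4 statuses per byte * 8192 bytes
# per page * 32 pages per segment = 1048576 transactions per segment.
clog_segment_for_xid() {
    printf '%04X\n' $(( $1 / 1048576 ))
}

clog_segment_for_xid 1500000    # prints 0001
```

For example, the status of transaction 1500000 lives in pg_clog/0001.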

The process for restoring the clog files is:

   1. Stop the PostgreSQL server (it must not see a partially restored clog file).
   2. Modify your postgresql.conf and increase autovacuum_freeze_max_age to 500000000 (500 million).
   3. Restore as many clog files as possible, but only files that are not greater than the clog
      files already in pg_clog, and never overwrite existing files.
   4. Once the restore is complete, start the PostgreSQL server.
   5. Ideally, then run a CLUSTER-WIDE 'VACUUM FREEZE;' (in every database!).
      NOTE: This will create a *lot* of I/O on your system and may take some time to complete.
      It can be done from the command line with: vacuumdb -a -v -F
   6. Alternatively, run the script from above (in every database!).
      If you had to restore clog files, be prepared to restore them again: the table-by-table
      VACUUM FREEZE commands that the script issues may allow them to be removed too early.

  RUNNING VACUUM FREEZE IS NECESSARY! Do not forget this step, or the clog files may be removed again and the problem will recur.
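The restore step above (copying back only segments that are missing and not greater than what is already in pg_clog) can be sketched as follows, using throwaway demo directories in place of the real $PGDATA/pg_clog and your backup; all paths and segment names here are illustrative:

```shell
# Demo stand-ins for $PGDATA/pg_clog and the saved clog backup.
demo=$(mktemp -d)
mkdir -p "$demo/pg_clog" "$demo/backup"
touch "$demo/pg_clog/0002"                          # highest existing segment
touch "$demo/backup/0000" "$demo/backup/0001" "$demo/backup/0003"

# Restore only segments that are missing AND do not sort above the
# highest segment already present (never overwrite, never go past it).
newest=$(ls "$demo/pg_clog" | sort | tail -n 1)
for f in "$demo/backup"/*; do
    seg=$(basename "$f")
    if [ ! -e "$demo/pg_clog/$seg" ] &&
       [ "$(printf '%s\n' "$seg" "$newest" | sort | tail -n 1)" = "$newest" ]; then
        cp -a "$f" "$demo/pg_clog/$seg"             # restores 0000 and 0001
    fi
done
ls "$demo/pg_clog"                                  # 0003 is skipped
```

On a real system, run this only while the server is stopped, and substitute the actual data directory and backup paths.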

If you do not have backups of your pg_clog directory, fixing the issue at this stage is significantly more involved. If you are at this stage you will need help from hackers in the PostgreSQL community. Either request help on irc.freenode.net, channel #postgresql, or post to the pgsql-admin or pgsql-general mailing lists, or email bruce -at- postgresql.org.

Will I lose any data due to this error?

YES, if you are getting errors about missing clog files and you do not have a backup of those clog files. If you are not yet receiving errors due to missing clog files, the issue should be resolvable with only a bit of downtime. If you are getting errors but *do* have a backup of the clog files, it is *extremely important* that you address this as soon as possible, or you may end up losing data. If you have already run a cluster-wide 'VACUUM FREEZE;' (in every database; not typical), then you are already safe.

What is the underlying cause of this bug?

The cause is that pg_upgrade did not properly restore pg_class.relfrozenxid for TOAST tables. TOAST tables are used to store wide values that do not fit in the 8 kB heap pages; see TOAST Storage for details.

How can I fix my cluster if I already used pg_upgrade that had this bug?

The announcement above includes a script that must be run. It effectively marks all transaction IDs in TOAST tables as frozen, so they no longer rely on the pg_clog files for status. This is necessary because pg_class.relfrozenxid was not properly set for TOAST tables. Tuple freezing is a normal process performed by autovacuum, so the correction simply performs that process earlier than would usually be necessary.

What if I can't run the fix promptly?

If you cannot, copy the data/pg_clog directory contents to a safe backup in case they are recycled before you can run the script.
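A minimal sketch of such a backup, using a throwaway directory in place of the real data directory (substitute your actual $PGDATA and a safe, durable backup location):

```shell
# Stand-in for the real data directory; replace with your $PGDATA.
PGDATA=$(mktemp -d)
mkdir -p "$PGDATA/pg_clog"
touch "$PGDATA/pg_clog/0000" "$PGDATA/pg_clog/0001"   # example segments

# Copy the whole pg_clog directory, preserving modes and timestamps.
backup="$PGDATA/clog_backup_$(date +%Y%m%d)"
cp -a "$PGDATA/pg_clog" "$backup"
ls "$backup"
```

Keep the copy somewhere the server cannot touch it; the point is to preserve segments before the server recycles them.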

I am running the clean-up script, but it's causing too much I/O and my database is unresponsive. What do I do?

First, stop the script.

Then increase vacuum_cost_delay so that the script takes longer but generates less intense I/O. In the session where you will run the script, first execute the following:

   SET vacuum_cost_delay = 40;
   SET vacuum_cost_limit = 200;

Then re-run the script.

How was this bug fixed in Postgres 8.4.8 and 9.0.4?

The fix was to modify pg_dump to properly restore pg_class.relfrozenxid for TOAST tables.

Who discovered this bug?

This problem was reported by two users on irc.freenode.net #postgresql and was diagnosed by Andrew Gierth and Bruce Momjian.