Streaming Replication

Streaming Replication (SR) provides the capability to continuously ship and apply the WAL XLOG records to some number of standby servers in order to keep them current.

Project

SR is being developed for inclusion in PostgreSQL 8.5 by NTT OSS Center. The lead developer is Masao Fujii.

The feature is now committed to CVS and is included in PostgreSQL 8.5 Alpha4.

Repository

Usage

Users Overview

  • Log-shipping
    • XLOG records generated in the primary are periodically shipped to the standby via the network.
    • In the existing warm standby, only records in a filled file are shipped, which is referred to as file-based log-shipping. In SR, XLOG records in a partially-filled XLOG file are shipped too, implementing record-based log-shipping. This means the window for data loss in SR is usually smaller than in warm standby, unless the warm standby was also configured for record-based shipping (which is complicated to set up).
    • The contents of the XLOG files written to the standby are exactly the same as those on the primary. The XLOG files shipped can be used for normal recovery and PITR.
  • Multiple standbys
    • More than one standby can establish a connection to the primary for SR. XLOG records are concurrently shipped to all these standbys. The delay or death of one standby does not harm log-shipping to the other standbys.
    • The maximum number of standbys can be specified as a GUC variable.
  • Continuous recovery
    • The standby continuously replays XLOG records shipped without using pg_standby.
    • XLOG records shipped are replayed as soon as possible without waiting until XLOG file has been filled. The combination of Hot Standby and SR would make the latest data inserted into the primary visible in the standby almost immediately.
    • The standby periodically removes old XLOG files which are no longer needed for recovery, to prevent excessive disk usage.
  • Setup
    • The start of log-shipping does not interfere with any query processing on the primary.
    • The standby can be started in various conditions.
      • If there are XLOG files in the archive directory and restore_command is supplied, those files are replayed first. Then the standby requests from the primary the XLOG records following the last applied one. This prevents XLOG files already present on the standby from being shipped again. Similarly, XLOG files in pg_xlog are also replayed before log-shipping starts.
      • If there are no XLOG files on the standby, it requests the XLOG records following the starting XLOG location of recovery (the redo starting location).
  • Connection settings and authentication
    • The connection used for SR accepts the same settings as a normal connection (e.g., keepalive, pg_hba.conf).
  • Activation
    • The standby can keep waiting for activation for as long as the user likes. This prevents the standby from being brought up automatically by a recovery failure or a network outage.
  • Progress report
    • The primary and the standby report the progress of log-shipping in the ps display.
  • Graceful shutdown
    • When smart/fast shutdown is requested, the primary waits to exit until XLOG records have been sent to the standby, up to the shutdown checkpoint record.

Restrictions

  • Synchronous log-shipping
    • Currently SR supports only asynchronous log-shipping. The commit command might return a "success" to a client before the corresponding XLOG records are shipped to the standby.
  • Replication beyond timeline
    • A user has to take a fresh backup whenever making the old standby catch up (e.g., after a failover changes the timeline).
  • Clustering
    • Postgres doesn't provide any clustering feature.

How to Use

  • 1. Install postgres on the primary and standby servers as usual. This requires only configure, make and make install.
  • 2. Create the initial database cluster in the primary server as usual, using initdb.
  • 3. Set up connections and authentication so that the standby server can successfully connect to the replication pseudo-database of the primary.
$ emacs postgresql.conf

listen_addresses = '192.168.0.10'

$ emacs pg_hba.conf

# The standby server must have superuser access privileges.
host  replication  postgres  192.168.0.20/22  trust
  • 4. Enable XLOG archiving in the primary server because we need to make a base backup of it later.
$ emacs postgresql.conf

archive_mode    = on
archive_command = 'cp %p /path_to/archive/%f'
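
The archive directory has to exist and be writable by the server before archiving starts; assuming the placeholder path used above:
$ mkdir -p /path_to/archive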
  • 5. Set the maximum number of concurrent connections from the standby servers.
$ emacs postgresql.conf

max_wal_senders = 5
  • 6. Start postgres on the primary server.
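For example, with pg_ctl (assuming $PGDATA points to the primary's data directory):
$ pg_ctl -D $PGDATA start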
  • 7. Make a base backup of the primary server, load this data onto the standby.
$ psql -c "SELECT pg_start_backup('label', true)"
$ rsync -a ${PGDATA}/ .....
$ psql -c "SELECT pg_stop_backup()"
  • 8. Set up XLOG archiving, connections and authentication on the standby server as on the primary, so that the standby can work as a primary after failover.
  • 9. Create a recovery command file in the standby server; the following parameters are required for streaming replication.
$ emacs recovery.conf

# Specifies whether to start the server as a standby. In streaming replication,
# this parameter must be set to on.
standby_mode          = 'on'

# Specifies a connection string which is used for the standby server to connect
# with the primary.
primary_conninfo      = 'host=192.168.0.10 port=5432 user=postgres'

# Specifies a trigger file whose presence should cause streaming replication to
# end (i.e., failover).
trigger_file = '/path_to/trigger'
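
If the standby should also replay archived XLOG files before connecting to the primary (see the Setup notes above), a restore_command can be supplied as well; the line below is only a sketch, reusing the archive location from step 4.

# Optional: replay XLOG files from the archive before streaming starts.
restore_command = 'cp /path_to/archive/%f %p'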
  • 10. Start postgres in the standby server. It will start streaming replication.
  • 11. You can check the progress of streaming replication by using the ps command.
# The displayed LSNs indicate the byte position that the standby server has
# written up to in the xlogs.
[primary] $ ps -ef | grep sender
postgres  6879  6831  0 10:31 ?        00:00:00 postgres: wal sender process postgres 127.0.0.1(44663) streaming 0/2000000

[standby] $ ps -ef | grep receiver
postgres  6878  6872  1 10:31 ?        00:00:01 postgres: wal receiver process   streaming 0/2000000
  • How to do failover
    • Create the trigger file in the standby after the primary fails.
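With the trigger_file setting from step 9, this is simply:
[standby] $ touch /path_to/trigger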
  • How to stop the primary or the standby server
    • Shut it down as usual (pg_ctl stop). Note that the standby server, unlike the primary, cannot be stopped by a smart shutdown because it is in the recovery state.
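For example, a fast shutdown (assuming $PGDATA points to the server's data directory):
$ pg_ctl -D $PGDATA stop -m fast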
  • How to restart streaming replication after failover
    • Repeat the operations from step 7: make a fresh backup, apply the configuration, and start the original primary as the standby. The primary server does not need to be stopped during these operations.
  • How to restart streaming replication after the standby fails
    • Restart postgres in the standby server after eliminating the cause of failure.
  • How to disconnect the standby from the primary
    • Create the trigger file on the standby while the primary is running. The standby will then be brought up as a stand-alone server.
  • How to re-synchronize the stand-alone standby after isolation
    • Shut down the primary as usual, then repeat the operations from step 7.

Todo

v8.5

Future release

  • Synchronization capability
    • Introduce the synchronization mode which can control how long transaction commit waits for replication before the commit command returns a "success" to a client. The valid modes are async, recv, fsync and apply.
      • async doesn't make transaction commit wait for replication, i.e., asynchronous replication.
      • recv, fsync and apply make transaction commit wait for XLOG records to be received, fsynced and applied on the standby, respectively.
    • Change walsender to be able to read XLOG not only from the disk but also from shared memory.
    • Add a new parameter, replication_timeout, which is the maximum time to wait until XLOG records are replicated to the standby.
    • Add a new parameter, replication_timeout_action, to specify the reaction to replication_timeout.
  • Monitoring
    • Provide the capability to check the progress and gap of streaming replication via one query. A collaboration of HS and SR is necessary to provide that capability on the standby side.
    • Provide the capability to check via a query whether the specified replication is in progress. More detailed status information might also be necessary, e.g., whether the standby is catching up or has already gotten into sync.
    • Change the stats collector to collect the statistics information about replication, e.g., average delay of replication time.
    • Develop the tool to calculate the latest XLOG position from XLOG files. This is necessary to check the gap of replication after the server fails.
    • Also develop the tool to extract the user-readable contents from XLOG files. This is necessary to see the contents of the gap, and manually restore them.
  • Easy to Use
    • Introduce the parameters like:
      • replication_halt_timeout - replication will halt if no data has been sent for this much time.
      • replication_halt_segments - replication will halt if the number of WAL files in pg_xlog exceeds this threshold.
      • These parameters allow us to avoid disk overflow.
    • Add a new feature which also transfers the base backup via a direct connection between the primary and the standby.
    • Add new hooks such as walsender_hook and walreceiver_hook to cooperate with add-on programs for compression such as pglesslog.
    • Provide a graceful termination of replication via a query on the primary. On the standby, a trigger file mechanism already provides that capability.
    • Support replication beyond timeline. The timeline history files need to be shipped from the primary to the standby.
  • Robustness
    • Support keepalive in libpq. This is useful for a client and the standby to detect a failure of the primary immediately.
  • Miscellaneous
    • Standalone walreceiver tool, which connects to the primary and continuously receives and writes XLOG records, independently of the postgres server.
    • Cascade streaming replication. Allow walsender to send XLOG to another standby during recovery.
    • WAL archiving during recovery.