BDR Project
This page is a historical discussion of the BDR development project. Check the official BDR page for further information about Postgres-BDR®.
Design work began in late 2011 to explore ways of adding new features to PostgreSQL core to support a flexible new replication infrastructure, building upon and enhancing the existing streaming replication features added in 9.1-9.2. Initial design and project planning was done by Simon Riggs, CEO of 2ndQuadrant, who is also the current product lead. Additional development contributions came from the wider 2ndQuadrant team, along with reviews and input from other community developers.
An initial version of the design was presented at the PgCon2012CanadaInCoreReplicationMeeting. A presentation covering the reasoning behind the current design and a prototype of it, including preliminary performance results, is available here.
Project Overview and Plans
Project Aims
- To be included as part of the main PostgreSQL open source distribution
- Fast
- Reusable individual parts, usable by other projects (Slony, ...)
- Form the basis for easier sharding/write scalability
- Wide geographic distribution of replicated nodes
Aspects of Postgres-BDR®
BDR consists of a number of related features:
- AlwaysOn Availability: Provides up to 99.9999% (six 9s) availability for your PostgreSQL databases, using rapid switchover and rolling upgrades of both the database software and schemas.
- Worldwide Clusters: Supports geographically distributed clusters, includes geo-fencing capabilities, and is designed to minimize latency across nodes.
- Cloud Native: Designed from the ground up for cloud deployments. Using the self-healing Kubernetes operator, BDR can be deployed quickly in trusted configurations.
- Multi-Master Replication: Allows efficient replication of both DDL and DML with automatic conflict resolution, and offers Conflict-free Replicated Data Types (CRDTs); see the sketch below.
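As a hedged illustration of the CRDT feature, the sketch below shows a counter column declared with a conflict-free replicated data type, so that concurrent increments on different nodes merge instead of conflicting. The table and column names are invented, and the type name follows the BDR 3.x documentation (bdr.crdt_gcounter); it requires the BDR extension to be installed, and exact type names may vary between versions.

```sql
-- Hypothetical example: a grow-only counter column using a CRDT type,
-- so concurrent increments on different nodes merge rather than conflict.
CREATE TABLE page_hits (
    page_id integer PRIMARY KEY,
    hits    bdr.crdt_gcounter NOT NULL DEFAULT 0
);

-- On any node, an ordinary SQL increment; BDR merges the per-node counts.
UPDATE page_hits SET hits = hits + 1 WHERE page_id = 42;
```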
Note that these features aren't "clustering" in the sense that Oracle RAC uses the term. There is no distributed lock manager, global transaction coordinator, etc. The vision here is interconnected yet separate servers, allowing each server to run radically different workloads while still working together, even at global scale across large geographic separation.
Other Terminology
(Physical) streaming replication uses the terms Master and Standby, so we could speak of Master and Physical Standby, and use Master and Logical Standby to describe LLSR. That terminology breaks down once we consider that replication might be bi-directional, or could be reconfigured that way in the future.
Similarly, the terms Origin, Provider, and Subscriber only work when there is a single Origin.
Tuning
As a result of the architecture, there are few physical tuning parameters. That number may grow as the implementation matures, but not significantly.
There are no parameters for tuning transfer latency.
The only likely tunable is the amount of memory used to accumulate changes before they are sent downstream. It is similar in many ways to shared_buffers and should be increased on larger machines.
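As an illustration of this kind of knob, the sketch below sets the memory limit alongside the existing settings mentioned in this section. The logical_decoding_work_mem parameter is not part of this design; it is the setting that core PostgreSQL later adopted (in version 13) for exactly this purpose, and it is shown here only as an example.

```sql
-- Memory each logical decoding session may use to accumulate changes
-- before spilling or sending them downstream (core parameter since PG 13).
ALTER SYSTEM SET logical_decoding_work_mem = '256MB';

-- Existing settings referenced in this section, for comparison.
ALTER SYSTEM SET shared_buffers = '8GB';        -- takes effect after a restart
ALTER SYSTEM SET hot_standby_feedback = on;

SELECT pg_reload_conf();
```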
A variant of hot_standby_feedback could also be implemented, though it would likely need renaming.
The CRC check performed while reading WAL is not useful in this context, and since it can be a CPU bottleneck there will likely be an option to skip it for logical decoding.
Operation
BDR usage is described in the BDR User Guide.
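To give a flavour of basic operation, the sketch below sets up a two-node group using the SQL interface from the BDR 1.x User Guide. The node names and connection strings are placeholders, and exact function signatures vary between BDR versions, so treat this as an outline rather than a recipe.

```sql
-- On the first node: create a new BDR group.
SELECT bdr.bdr_group_create(
    local_node_name   := 'node1',
    node_external_dsn := 'host=node1 dbname=bdrdemo'
);

-- On the second node: join the existing group via node1.
SELECT bdr.bdr_group_join(
    local_node_name   := 'node2',
    node_external_dsn := 'host=node2 dbname=bdrdemo',
    join_using_dsn    := 'host=node1 dbname=bdrdemo'
);

-- Wait until the local node has finished joining and is ready.
SELECT bdr.bdr_node_join_wait_for_ready();
```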
Selective Replication (Table/Row-level filtering)
LLSR does not yet support selecting data at the table or row level, only at the database level. Supporting this is a design goal for the future.
DDL replication
DDL replication is supported using event triggers and DDL deparse, which is due to be submitted for inclusion in 9.5.
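BDR's own DDL replication machinery is internal, but the core facility it builds on can be illustrated with a plain event trigger. The sketch below (function and trigger names invented for the example; PostgreSQL 9.5+ for pg_event_trigger_ddl_commands()) simply logs each DDL command at ddl_command_end; a replication system would instead deparse and queue the command for replay on other nodes.

```sql
-- Illustrative only: observe DDL commands via an event trigger.
CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        RAISE NOTICE 'captured DDL: % on %', r.command_tag, r.object_identity;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER capture_ddl ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();
```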
BDR restricts some DDL. See BDR Command Restrictions.
Core changes
Logical changeset extraction
Merged into core PostgreSQL in 9.4 as the logical decoding infrastructure.
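For reference, the merged infrastructure exposes SQL-level functions for logical replication slots. The sketch below uses the test_decoding example output plugin shipped with 9.4; it assumes wal_level = logical and a free replication slot, and the slot and table names are made up for the example.

```sql
-- Create a logical replication slot using the test_decoding sample plugin.
SELECT * FROM pg_create_logical_replication_slot('bdr_demo_slot', 'test_decoding');

-- Make a change and read the decoded change stream back from the slot.
CREATE TABLE decoding_demo (id int PRIMARY KEY, note text);
INSERT INTO decoding_demo VALUES (1, 'hello');
SELECT * FROM pg_logical_slot_get_changes('bdr_demo_slot', NULL, NULL);

-- Drop the slot when done so it does not hold back WAL removal.
SELECT pg_drop_replication_slot('bdr_demo_slot');
```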