PG-EU 2015 Cluster Summit

From PostgreSQL wiki


This event was held on Saturday, October 31 from 9:00 - 13:00 at the Vienna Marriott Hotel as part of PostgreSQL Conference Europe 2015. The goal of the meeting was to discuss the possibility of built-in sharding with minimal backend code changes or additions. These email threads provide some background:

The meeting was divided into four parts (answers recorded after the meeting follow each question):

Desirability

  • History of external Postgres sharding projects, commercial and open source, e.g. Netezza, Greenplum, Postgres XC/XL, pg_shard
  • Is built-in sharding a desired goal? advantages/disadvantages

There is concern the FDW API, even if modified, might not be sufficient for all use-cases.

  • Is now the right time for built-in sharding? Comparison with external/built-in replication

There will probably always be a need for external solutions, just as there is now with replication. When adding features to support built-in sharding, we should, where possible, avoid blocking the use of those facilities by external sharding solutions.

  • Is a coordinator-based architecture acceptable?

Yes, but there should be plans for multi-coordinator, and no-coordinator architectures.

  • Is cross-worker-node result-set shipping necessary (e.g., as in pg_shard)?

Yes, perhaps built-in, perhaps only external projects.

  • What workloads are we targeting? Read-only, read-write? Short queries, long queries?

All of them, but perhaps we can't do them all with built-in technology. We will need to add sharding infrastructure and then see what workloads it solves.
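As background for the workload question, the one mechanic every sharding layer shares is routing a row to a worker node by its distribution key. A minimal hash-routing sketch in Python (the function and variable names are illustrative, not an actual Postgres API):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a distribution key to a shard number via a stable hash.

    Using a cryptographic hash keeps the placement stable across
    processes and platforms, unlike Python's built-in hash().
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Route some hypothetical customer IDs across 4 worker nodes.
placement = {cid: shard_for(cid, 4) for cid in ["c1001", "c1002", "c1003"]}
```

Note that simple modulo routing forces most rows to move when `num_shards` changes, which is one reason the add/remove-nodes question below matters for the design.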

  • How many clusters should the design target?

Could be several, dozens, or perhaps thousands.

  • Is this for high availability (HA)?

While users want HA, almost anything with a coordinator will be less reliable than streaming replication to a replica, so in general coordinator-based sharding is not a high-availability solution. A coordinator-less sharding design might enable HA, but no one has proposed a design for such a system; if implemented, it would require some kind of Paxos or leader-election protocol.
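To illustrate why a leader-election protocol comes up here: a coordinator-less cluster must still agree on which node acts on behalf of the group at any moment. The sketch below shows only the majority-quorum core of such a protocol, heavily simplified from Paxos/Raft (real protocols add terms, persistent logs, and timeouts); all names are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    node_id: int
    voted_for: Optional[int] = None  # each node grants at most one vote

def request_votes(candidate: Node, peers: List[Node]) -> bool:
    """Single-round majority vote: the candidate leads only if a strict
    majority of the cluster (peers plus itself) grants it a vote."""
    candidate.voted_for = candidate.node_id
    votes = 1  # the candidate votes for itself
    for peer in peers:
        if peer.voted_for is None:  # vote not yet spent this round
            peer.voted_for = candidate.node_id
            votes += 1
    cluster_size = len(peers) + 1
    return votes > cluster_size // 2
```

The strict-majority rule is what prevents two candidates from both winning in the same round: two disjoint majorities of one cluster cannot exist.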

  • Do we need to add/remove nodes easily?

Yes, and the design should consider this.

Design

  • Are existing Postgres features sufficiently developed to make built-in sharding a reasonable goal? FDW, parallelism

There is agreement that these facilities should be enhanced enough to enable benchmarking of a proof-of-concept built-in sharding implementation. We can then consider what workloads it handles well. Yes, this seems backwards, but we can only guess from Postgres XC/XL how well this will work.

  • Can existing Postgres features be enhanced to benefit non-sharding and sharding use-cases?

Yes, this is clearly the case.

  • What else is needed? atomic commits, atomic snapshots, partitioning API

Not discussed
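Though the item was not discussed, "atomic commits" here means committing one transaction across several worker nodes: either every shard commits or none does. A minimal two-phase-commit sketch in Python (the class and method names are hypothetical, not Postgres internals):

```python
from typing import List

class Participant:
    """One worker node in a two-phase commit. Prepared work is held
    durably until the coordinator says commit or abort."""

    def __init__(self, name: str, can_commit: bool = True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self) -> bool:
        # Phase 1: vote yes only if the local transaction can commit.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self) -> None:
        self.state = "committed"

    def abort(self) -> None:
        self.state = "aborted"

def two_phase_commit(participants: List[Participant]) -> bool:
    """Coordinator side: commit everywhere only if every node prepares."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()        # Phase 2: commit
        return True
    for p in participants:
        p.abort()             # Phase 2: roll back everywhere
    return False
```

The "atomic snapshots" item is the read-side counterpart: all nodes must also agree on which of these distributed commits are visible to a given query.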

  • Limitations of this approach? What will it not do?

Not discussed. I don't think we can know the answer until we have the facilities complete.

  • Are the limitations acceptable? Will built-in sharding be useful enough with these limitations?

We don't know.

  • One or multiple coordinators?

Multiple must be possible, though perhaps not in the initial implementation. Benchmarking of Postgres XC/XL shows that multiple coordinators are often required to get good results. However, built-in sharding might not have the same limitations as XC/XL, so again, we will not know until it can be tested.

Implementation

  • What things are missing from FDWs?

Not discussed, but the list is generally known.

  • What parallelism features are needed?

Not discussed

  • Are atomic commits and atomic snapshots sufficient?

Not discussed

  • Add node status registry?

There is agreement that a node status registry would help sharding and other use-cases.
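To make the idea concrete, a node registry tracks each worker's connection information, role, and liveness in one place so every part of the system sees the same cluster membership. A hypothetical in-memory sketch in Python (all names are illustrative; a real registry would live in a catalog and survive restarts):

```python
import time
from typing import Dict, List

class NodeRegistry:
    """Illustrative node status registry: connection info, role, and
    a heartbeat-based notion of which nodes are currently live."""

    def __init__(self, heartbeat_timeout: float = 5.0):
        self.timeout = heartbeat_timeout
        self.nodes: Dict[int, dict] = {}

    def register(self, node_id: int, dsn: str, role: str = "worker") -> None:
        self.nodes[node_id] = {"dsn": dsn, "role": role,
                               "last_seen": time.monotonic()}

    def heartbeat(self, node_id: int) -> None:
        self.nodes[node_id]["last_seen"] = time.monotonic()

    def live_nodes(self) -> List[int]:
        # A node is live if it has heartbeated within the timeout.
        now = time.monotonic()
        return [nid for nid, n in self.nodes.items()
                if now - n["last_seen"] <= self.timeout]
```

This is also the natural home for the concepts the Commentary section suggests merging: foreign server, replication slot, replication origin, and cluster name could all hang off one registry entry per node.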

Demos


  • pg_shard (Marco/Citus Data) [2]

Slides were discussed

Commentary & Concerns

From Simon Riggs, following preparation of minutes.

All attendees agreed that some form of clustering/sharding was desirable, though there was no agreement that this is the same thing as FDWs.

There is major concern that the FDW approach is completely unsuited to powering a sharding solution. No explanation has ever been given of how it would work, even though it is standard practice to post designs to Hackers before work commences, and no slides or discussion were provided at the meeting. The concern about FDWs is shared by many developers; since the question has never been discussed, the exact number who agree or disagree is unknown, but in this author's opinion it is likely to exceed 50% of experienced hackers. This was raised multiple times prior to the meeting, and nothing changed during the meeting.

Nobody has any issue with extending the FDW API, but that is a completely different thing from it being the right way to design sharding. The list of FDW deficiencies isn't generally known (at least to me), nor is it documented anywhere.

So while nobody wishes to block work on FDWs, various people expressed the wish for FDWs to not block other approaches.

The discussion of coordinators above seems incorrect: why would a coordinator make anything less robust? All or most systems have multiple or standby coordinators.

There was agreement on only one point: "we need a node registry". "Node status registry" wasn't mentioned; we discussed bringing together the concepts of foreign server, replication slot, replication origin, cluster name and replication connections into one concept, if possible.
