2013UnconfPluggableStorage

Galy Lee from Huawei talked about their need for a decoupled storage engine.

They have several needs for special storage engine properties and want MySQL-style pluggable storage engines. They tried to do this themselves in 9.1, but couldn't, because the interface wasn't clean and they couldn't get it to work.

Tom Lane: the first thing to talk about is why FDWs don't solve it. Who's going to store the data, asks Peter. We're abdicating responsibility, says Tom. If we go that route, we're pulling a MySQL. You could do it for some file-based mechanism where the FDW had complete control. And what about the FDW overhead? And what about transactional semantics?

Imagine a database with just FDWs and no native storage: how would it work? Peter's use case is the cloud, with horrible storage. Would PostgreSQL abdicate and just become a query broker?

Josh brought up: how do we create and manage FDW tables? And what about maintenance? And security, or changes to files?

How do you deal with indexes? FDWs need their own indexes. But what about GiST and GIN then?

What are we trying to replace with pluggable storage engines? We have a pluggable interface for indexes, but not for the heap. Frost brings up the lack of parallelism in FDWs.

Porting DML is hard; for example, we couldn't make triggers work.

Citus is using FDWs and storing their data in Hadoop. They had to trim a lot of SQL, though, and just not support it. They like the idea of having FDWs vs. Native Data Wrappers (NDWs). You need more guarantees for an NDW.

Use cases:

  • Column store
  • Telecom in-memory database with own MVCC mechanism and HA.
  • Analytics database
  • Streaming tables
  • Fractal index organized tables
  • PostgreSQL on immutable storage
  • In-memory database

What about transactional semantics for Huawei's database? Maybe FDW is better for them. They need special information for the planner, and FDWs would need remote JOIN support.

For an NDW, we need atomic commit and index methods. And do we want each FDW to reinvent the wheel in that area? The spec is that the FDW is responsible for returning paths.
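
To make that last point concrete: an FDW exposes a handler function that returns an FdwRoutine of callbacks, and its GetForeignPaths callback is where paths, with wrapper-supplied costs, are handed to the planner. A minimal sketch, assuming a hypothetical my_fdw module; the cost constants are illustrative, and create_foreignscan_path follows the 9.3-era signature, which has since grown extra parameters:

  #include "postgres.h"
  #include "fmgr.h"
  #include "foreign/fdwapi.h"
  #include "optimizer/pathnode.h"

  PG_MODULE_MAGIC;

  PG_FUNCTION_INFO_V1(my_fdw_handler);

  static void
  myGetForeignPaths(PlannerInfo *root, RelOptInfo *baserel, Oid foreigntableid)
  {
      /* The wrapper, not core, supplies the estimates; baserel->rows was
       * filled in earlier by our GetForeignRelSize callback (omitted). */
      Cost        startup_cost = 10.0;                        /* illustrative */
      Cost        total_cost = startup_cost + baserel->rows;  /* illustrative */

      add_path(baserel, (Path *)
               create_foreignscan_path(root, baserel,
                                       baserel->rows,
                                       startup_cost, total_cost,
                                       NIL,     /* no pathkeys */
                                       NULL,    /* no required outer rels */
                                       NIL));   /* no fdw_private data */
  }

  Datum
  my_fdw_handler(PG_FUNCTION_ARGS)
  {
      FdwRoutine *routine = makeNode(FdwRoutine);

      routine->GetForeignPaths = myGetForeignPaths;
      /* GetForeignRelSize, GetForeignPlan, the scan callbacks, etc. omitted */

      PG_RETURN_POINTER(routine);
  }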

Jan gave the case of compressed tables: storage engines should be lower level, like page-level. He also gave the example of switching his storage to a black hole engine for an overloaded table.

Issues with FDW as SE:

  • transactions -- can't be solved without 2PC (see the sketch after this list)
  • indexing -- can't use Postgres indexes
  • performance
  • backup -- would need hooks, flag for backup
  • management -- would need hooks
  • maintenance
  • replication
  • WAL/crash safety -- would require WAL API, replication API
  • data types? seems easy to resolve
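
On the transactions point above: core already offers transaction-event hooks, so a wrapper can piggyback on local commit, but it cannot truly vote in a distributed commit without 2PC. A sketch of the piggyback approach using RegisterXactCallback; the remote_* functions are hypothetical stand-ins for the wrapper's own store:

  #include "postgres.h"
  #include "fmgr.h"
  #include "access/xact.h"

  PG_MODULE_MAGIC;

  /* Hypothetical calls into the wrapper's remote store. */
  extern void remote_prepare(void);
  extern void remote_commit(void);
  extern void remote_rollback(void);

  static void
  my_fdw_xact_callback(XactEvent event, void *arg)
  {
      switch (event)
      {
          case XACT_EVENT_PRE_COMMIT:
              /* Last point where we may still error out and abort locally.
               * Without real 2PC, a crash between here and the local commit
               * record can still leave the two sides inconsistent. */
              remote_prepare();
              break;
          case XACT_EVENT_COMMIT:
              remote_commit();        /* too late to abort if this fails */
              break;
          case XACT_EVENT_ABORT:
              remote_rollback();
              break;
          default:
              break;
      }
  }

  void
  _PG_init(void)
  {
      RegisterXactCallback(my_fdw_xact_callback, NULL);
  }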

Questions for SE vs. FDW, a set of binary choices, each of which can go either way:

  • do your tuples look like tuples?
  • do your transactions look like transactions?
  • do your pages look like pages?
  • do your indexes look like indexes?

Configuring planning, costs, etc. on a per-foreign-table basis is very painful. It would be better to have defaults by type, or maybe at the server level. The wrapper is responsible for providing the costs.
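
The existing options machinery can already express that fallback: a wrapper can look for a cost knob on the foreign table first and fall back to the server's options. A sketch using the GetForeignTable/GetForeignServer catalog lookups; the option name and numeric handling are illustrative:

  #include "postgres.h"
  #include "commands/defrem.h"
  #include "foreign/foreign.h"
  #include "nodes/parsenodes.h"

  /*
   * Look a cost option up on the foreign table first, then fall back to
   * its server, then to a wrapper-provided default.
   */
  static double
  lookup_cost_option(Oid foreigntableid, const char *optname, double def)
  {
      ForeignTable   *table = GetForeignTable(foreigntableid);
      ForeignServer  *server = GetForeignServer(table->serverid);
      List           *options;
      ListCell       *lc;

      /* table options first, so they take precedence over server ones */
      options = list_concat(list_copy(table->options),
                            list_copy(server->options));

      foreach(lc, options)
      {
          DefElem    *d = (DefElem *) lfirst(lc);

          if (strcmp(d->defname, optname) == 0)
              return atof(defGetString(d));
      }
      return def;
  }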

Discussion of replacing COPY with FDWs.

Josh brought up the fractal indexes case, with different licensing.

We should have a generic WAL API which allows an extension to create WAL. The pages would need to look like pages, though, which limits you to specific types of storage.
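
A sketch of what such an API could look like; the call names follow the generic WAL facility PostgreSQL later gained (access/generic_xlog.h, in 9.6), and the page-modifying line is a placeholder for the extension's own layout code:

  #include "postgres.h"
  #include "access/generic_xlog.h"
  #include "storage/bufmgr.h"
  #include "storage/bufpage.h"
  #include "utils/rel.h"

  /*
   * WAL-log an arbitrary change to a page without writing a custom
   * resource manager.  The caller must hold an exclusive lock on buf.
   */
  static void
  log_page_change(Relation rel, Buffer buf, const char *data, int len)
  {
      GenericXLogState *state;
      Page        page;

      state = GenericXLogStart(rel);

      /* returns a working copy of the page; modify that, not the original */
      page = GenericXLogRegisterBuffer(state, buf, 0);

      /* placeholder for the extension's own page-layout code */
      memcpy(PageGetContents(page), data, len);

      /* computes a delta against the original page and emits the record */
      GenericXLogFinish(state);
  }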

We would also need a logical replication API so that FDWs could support logical replication.
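
For the shape such an API might take, the logical decoding output-plugin interface that later arrived in 9.4 is a reasonable model: the extension registers a set of callbacks. A sketch with placeholder callback bodies:

  #include "postgres.h"
  #include "fmgr.h"
  #include "replication/logical.h"
  #include "replication/output_plugin.h"

  PG_MODULE_MAGIC;

  static void
  my_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
  {
      /* e.g., write a BEGIN marker to the output stream */
  }

  static void
  my_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
            Relation relation, ReorderBufferChange *change)
  {
      /* e.g., serialize one insert/update/delete for the consumer */
  }

  static void
  my_commit(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
            XLogRecPtr commit_lsn)
  {
      /* e.g., write a COMMIT marker */
  }

  /* Entry point that core looks for when loading an output plugin. */
  void
  _PG_output_plugin_init(OutputPluginCallbacks *cb)
  {
      cb->begin_cb = my_begin;
      cb->change_cb = my_change;
      cb->commit_cb = my_commit;
  }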

So, are FDWs suitable or not?

We could use background workers with files for an in-memory database.
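
Background workers are new in 9.3, so this is plausible today: the module registers a worker at load time and the worker owns the in-memory store, persisting it to files itself. A registration sketch; my_worker_main and the library name are illustrative, and struct details vary slightly across releases:

  #include "postgres.h"
  #include "fmgr.h"
  #include "miscadmin.h"
  #include "postmaster/bgworker.h"

  PG_MODULE_MAGIC;

  PGDLLEXPORT void my_worker_main(Datum arg);

  /* Runs in its own process; owns the store and flushes it to files. */
  void
  my_worker_main(Datum arg)
  {
      BackgroundWorkerUnblockSignals();

      for (;;)
      {
          /* placeholder: serve the in-memory store and periodically
           * checkpoint it back to its data files */
          pg_usleep(1000000L);    /* a real worker would wait on a latch */
      }
  }

  /* Only effective when the module is in shared_preload_libraries. */
  void
  _PG_init(void)
  {
      BackgroundWorker worker;

      memset(&worker, 0, sizeof(worker));
      snprintf(worker.bgw_name, BGW_MAXLEN, "in-memory store worker");
      worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
      worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
      worker.bgw_restart_time = BGW_NEVER_RESTART;
      snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_inmem");
      snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");

      RegisterBackgroundWorker(&worker);
  }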

Much discussion ensued regarding FDW vs Native storage.

We need an API. FDW won't work for too many cases.