- 1 Development Process
- 2 Administration
- 3 Data Types
- 4 Functions
- 5 Multi-Language Support
- 6 Views and Rules
- 7 SQL Commands
- 8 Integrity Constraints
- 9 Server-Side Languages
- 10 Clients
- 11 Triggers
- 12 Inheritance
- 13 Indexes
- 14 Sorting
- 15 Cache Usage
- 16 Vacuum
- 17 Locking
- 18 Startup Time Improvements
- 19 Write-Ahead Log
- 20 Optimizer / Executor
- 21 Background Writer
- 22 Concurrent Use of Resources
- 23 TOAST
- 24 Monitoring
- 25 Miscellaneous Performance
- 26 Miscellaneous Other
- 27 Source Code
- 28 Documentation
- 29 Exotic Features
- 30 Features We Do Not Want
This list contains some known PostgreSQL bugs, some feature requests, and some things we are not even sure we want. Many of these items are hard, and some are perhaps impossible. If you would like to work on an item, please read the Developer FAQ first. There is also a development information page.
- - marks ordinary, incomplete items
- [E] - marks items that are easier to implement
- [D] - marks changes that are done, and will appear in the PostgreSQL 15 release.
Over time, it may become clear that a TODO item has become outdated or otherwise determined to be either too controversial or not worth the development effort. Such items should be retired to the Not Worth Doing page.
WARNING for Developers: Unfortunately this list does not contain all the information necessary for someone to start coding a feature. Some of these items might have become unnecessary since they were added; others might be desirable, but the implementation might be unclear. When selecting items listed below, be prepared to first discuss the value of the feature. Do not assume that you can select one, code it, and then expect it to be committed. Always discuss the design on the Hackers list before starting to code. The flow should be:
Desirability -> Design -> Implement -> Test -> Review -> Commit
- Check for unreferenced table files created by transactions that were in-progress when the server terminated abruptly
- Allow log_min_messages to be specified on a per-module basis
- This would allow administrators to see more detailed information from specific sections of the backend, e.g. checkpoints, autovacuum, etc. Another idea is to allow separate configuration files for each module, or allow arbitrary SET commands to be passed to them. See also Logging Brainstorm.
- Prevent query cancel packets from being replayed by an attacker, especially when using SSL
- Consider supporting incremental base backups
- [D] Allow pg_hba.conf to process include files
- Allow a database in tablespace t1 with tables created in tablespace t2 to be used as a template for a new database created with default tablespace t2
- Currently all objects in the default database tablespace must have default tablespace specifications. This is because new databases are created by copying directories. If you mix default tablespace tables and tablespace-specified tables in the same directory, creating a new database from such a mixed directory would create a new database with tables that had incorrect explicit tablespaces. To fix this would require modifying pg_class in the newly copied database, which we don't currently do.
- Allow reporting of which objects are in which tablespaces
- This item is difficult because a tablespace can contain objects from multiple databases. There is a server-side function that returns the databases which use a specific tablespace, so this requires a tool that will call that function and connect to each database to find the objects in each database for that tablespace.
- Allow WAL replay of CREATE TABLESPACE to work when the directory structure on the recovery computer is different from the original
- Allow tablespaces on RAM-based partitions for unlogged tables
- Allow toast tables to be moved to a different tablespace
- Testing pgstat via pg_regress is tricky and inefficient. Consider making a dedicated pgstat test-suite.
- Teach stats collector to differentiate between internal and leaf index pages
- Allow automatic selection of SSL client certificates from a certificate store
Standby server mode
- Prevent variables inherited from the server environment from being used for making streaming replication connections
- Add support for public SYNONYMs
- Consider a special data type for regular expressions
- Allow deleting enumerated values from an existing enumerated data type
- Add overlaps geometric operators that ignore point overlaps
Dates and Times
- Allow infinite intervals just like infinite timestamps
- Consider changing error to warning for strings larger than one megabyte
- Improve default parser, to more easily allow adding new tokens
- Add additional support functions
- Report errors returned by the XSLT library
- XML Canonical: Convert XML documents to canonical form to compare them. libxml2 has support for this.
- Add pretty-printed XML output option
- Parse a document and serialize it back in some indented form. libxml2 might support this.
- Allow XML shredding
- In some cases shredding could be the better option (if there is no need to keep XML documents in their entirety, e.g. if we have already developed tools that understand only relational data). This would be a separate module that implements the annotated schema decomposition technique, similar to DB2 and SQL Server functionality.
- Implement Boyer-Moore searching in LIKE queries
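As a sketch of the idea (not PostgreSQL's actual code), a Boyer-Moore-Horspool search skips ahead by more than one character at a time, which is what makes it attractive for matching the fixed substring of a LIKE '%needle%' pattern:

```python
def horspool_find(text: str, needle: str) -> int:
    """Return the index of the first occurrence of needle, or -1."""
    n, m = len(text), len(needle)
    if m == 0:
        return 0
    # For each character of the needle except the last, record how far
    # the window can shift when that character is aligned with the end.
    shift = {c: m - 1 - i for i, c in enumerate(needle[:-1])}
    pos = 0
    while pos + m <= n:
        if text[pos:pos + m] == needle:
            return pos
        # Characters absent from the needle allow a full-length skip.
        pos += shift.get(text[pos + m - 1], m)
    return -1
```

On mismatches against characters that never appear in the pattern, the window jumps by the full pattern length, so long patterns scan large strings with far fewer comparisons than a naive character-by-character loop.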
- Prevent malicious functions from being executed with the permissions of unsuspecting users
- Indexed functions are safe, so VACUUM and ANALYZE are safe too. Triggers, CHECK and DEFAULT expressions, and rules are still vulnerable.
- Fix /contrib/btree_gist's implementation of inet indexing
- Add NCHAR (as distinguished from ordinary varchar)
- Integrate collations with text search configurations
- Integrate collations with to_char() and related functions
- Support collation-sensitive equality and hashing functions
- Fix contrib/fuzzystrmatch to work with multibyte encodings
- Change memory allocation for multi-byte functions so memory is allocated inside conversion functions
- Currently we preallocate memory based on worst-case usage.
- Add ability to use case-insensitive regular expressions on multi-byte characters
- Currently it works for UTF-8, but not other multi-byte encodings
- Improve encoding of connection startup messages sent to the client
- Currently some authentication error messages are sent in the server encoding
- Windows: Cache MessageEncoding conversion for use outside transactions
Views and Rules
- Improve ability to modify views via ALTER TABLE
- Add CORRESPONDING BY to UNION/INTERSECT/EXCEPT
- Improve type determination of unknown (NULL or quoted literal) result columns for UNION/INTERSECT/EXCEPT
- Allow prepared transactions with temporary tables created and dropped in the same transaction, and when an ON COMMIT DELETE ROWS temporary table is accessed
- Allow LISTEN on patterns
- Add support for WITH RECURSIVE ... CYCLE
- Add DEFAULT .. AS OWNER so permission checks are done as the table owner
- This would be useful for SERIAL nextval() calls and CHECK constraints.
- Add comments on system tables/columns using the information in catalogs.sgml
- Ideally the information would be pulled from the SGML file automatically.
- Prevent the specification of conflicting transaction read/write options
- Have DISCARD PLANS discard plans cached by functions
- DISCARD ALL should do the same.
- Avoid multiple-evaluation of BETWEEN and IN arguments containing volatile expressions
- Have WITH CONSTRAINTS also create constraint indexes
- Move NOT NULL constraint information to pg_constraint
Currently NOT NULL constraints are stored in pg_attribute without any designation of their origins, e.g. primary keys. One manifest problem is that dropping a PRIMARY KEY constraint does not remove the NOT NULL constraint designation. Another issue is that we should probably force NOT NULL to be propagated from parent tables to children, just as CHECK constraints are. (But then does dropping PRIMARY KEY affect children?)
- Prevent ALTER TABLE DROP NOT NULL on child tables if parent column has it
- Prevent concurrent CREATE TABLE from sometimes returning a cryptic error message
- Fix CREATE OR REPLACE FUNCTION to not leave objects depending on the function in inconsistent state
- Allow temporary tables to exist as empty by default in all sessions
- Allow the creation of "distinct" types
- Consider analyzing temporary tables when they are first used in a query
- Autovacuum cannot analyze or vacuum temporary tables.
- Research self-referential UPDATEs that see inconsistent row versions in read-committed mode
- Improve performance of EvalPlanQual mechanism that rechecks already-updated rows
- This is related to the previous item, which questions whether it even has the right semantics
- Have ALTER TABLE RENAME of a SERIAL column rename the sequence
- Allow moving system tables to other tablespaces, where possible
- Currently non-global system tables must be in the default database tablespace. Global system tables can never be moved.
- Allow column display reordering by recording a display, storage, and permanent id for every column?
- Allow deactivating (and reactivating) indexes via ALTER TABLE
- Add ALTER OPERATOR ... RENAME
- needs to consider effects of changing operator precedence
- Automatically maintain clustering on a table
- This might require some background daemon to maintain clustering during periods of low usage. It might also require tables to be only partially filled for easier reorganization. Another idea would be to create a merged heap/index data file so an index lookup would automatically access the heap data too. A third idea would be to store heap rows in hashed groups, perhaps using a user-supplied hash function.
- Allow CLUSTER to be used on partial indexes
- Allow COPY to report error lines and continue
- This requires the use of a savepoint before each COPY line is processed, with ROLLBACK on COPY failure.
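The control flow could look roughly like this sketch, where set_savepoint, rollback, and insert_row are hypothetical stand-ins for the backend internals:

```python
def copy_with_error_report(lines, insert_row, set_savepoint, rollback):
    """Insert each COPY line; collect failures instead of aborting."""
    bad_lines = []
    for lineno, line in enumerate(lines, start=1):
        set_savepoint()              # savepoint before each line
        try:
            insert_row(line.split("\t"))
        except ValueError as err:
            rollback()               # undo only this line's work
            bad_lines.append((lineno, str(err)))
    return bad_lines
```

The per-line savepoint overhead is the obvious cost of this approach.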
- Allow COPY to report errors sooner
- Allow COPY FROM to create index entries in bulk
- Improve COPY performance
- Allow a stalled COPY to exit if the backend is terminated
- Allow COPY "text" format to output a header
- Have COPY FREEZE set PD_ALL_VISIBLE
- Allow dropping of a role that has connection rights
- Provide some guarantees about the behavior of cursors that invoke volatile functions
- Rationalize the discrepancy between settings that use values in bytes and SHOW that returns the object count
- Improve how ANALYZE computes in-doubt tuples
- Remove quadratic time in statistics sender when analyzing many tables
- Reduce memory use when analyzing many tables in a single command by making catcache and syscache flushable or bounded.
- Have EXPLAIN ANALYZE issue NOTICE messages when the estimated and actual row counts differ by a specified percentage
- Have EXPLAIN ANALYZE report rows as floating-point numbers
- Have EXPLAIN ANALYZE report buckets and memory usage for HashAggregate
- Support creation of user-defined window functions
- We have the ability to create new window functions written in C. Is it worth the effort to create an API that would let them be written in PL/pgsql, etc?
- Implement full support for window framing clauses
In addition to the clauses already implemented and described in the latest documentation, these clauses are not implemented yet.
- RANGE BETWEEN ... PRECEDING/FOLLOWING
- Investigate tuplestore performance issues
- The tuplestore_in_memory() thing is just a band-aid; we ought to try to solve it properly. tuplestore_advance seems like a weak spot as well.
- Teach planner to evaluate multiple windows in the optimal order
- Currently windows are always evaluated in the query-specified order.
- Change foreign key constraint for array -> element to mean element in array?
- Fix problem when cascading referential triggers make changes on cascaded tables, seeing the tables in an intermediate state
- Are ri_KeysEqual checks in the RI enforcement triggers still necessary?
- Run check constraints only when affected columns are changed
- Do not scan the table when a check constraint is added in the same command that adds the column
- Add more fine-grained specification of functions taking arbitrary data types
- Rethink query plan caching and timing of parse analysis within SQL-language functions
- They should work more like plpgsql functions do ...
- Allow listing of record column names, and access to record columns via variables, e.g. columns := r.(*), tval2 := r.(colname)
- Allow row and record variables to be set to NULL constants, and allow NULL tests on such variables
- Because a row is not scalar, do not allow assignment from NULL-valued scalars.
- Consider keeping separate cached copies when search_path changes
- Improve handling of NULL row values vs. NULL rows
- Improve PERFORM handling of WITH queries or document limitation
- Create a new restricted execution class that will allow passing function arguments in as locals. Passing them as globals means functions cannot be called recursively.
- Add a DB-API compliant interface on top of the SPI interface
- For functions returning a setof record with a composite type, cache the I/O functions for the composite type
- Split out pg_resetxlog output into pre- and post-sections
- Improve pg_rewind
- Move psql backslash database information into the backend, use mnemonic commands?
- This would allow non-psql clients to pull the same information out of the database as psql.
- Make psql's \d commands distinguish default privileges from no privileges
- ACL displays were visibly different for the two cases before we "improved" them by using array_to_string.
- Add a \set variable to control whether \s displays line numbers
- Another option is to add \# which lists line numbers, and allows command execution.
- Include the symbolic SQLSTATE name in verbose error reports
- Add option to wrap column values at whitespace boundaries, rather than chopping them at a fixed width.
- Currently, "wrapped" format chops values into fixed widths. Perhaps the word wrapping could use the same algorithm documented in the W3C specification.
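As an illustration of whitespace-boundary wrapping, Python's textwrap module (a stand-in here for whatever algorithm psql would adopt) does the greedy version:

```python
import textwrap

def wrap_cell(value: str, width: int) -> list:
    """Break a column value at whitespace; hard-break only words
    longer than the column width."""
    return textwrap.wrap(value, width=width,
                         break_long_words=True,
                         break_on_hyphens=False)
```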
- Add option to print advice for people familiar with other databases
- Fix FETCH_COUNT to handle SELECT ... INTO and WITH queries
- Prevent psql from sending remaining single-line multi-statement queries after reconnecting
- Improve line drawing characters
- Consider improving the continuation prompt
- Improve speed of tab completion by using LIKE
pg_dump / pg_restore
- [E] Dump security labels and comments on databases in a way that allows loading a dump into a differently named database
- [E] Add the full object name to the tag field, e.g. for operators we need '=(integer, integer)' instead of just '='.
- Avoid using platform-dependent names for locales in pg_dumpall output
- Using native locale names puts roadblocks in the way of porting a dump to another platform. One possible solution is to get CREATE DATABASE to accept some agreed-on set of locale names and fix them up to meet the platform's requirements.
- Preserve sparse storage of large objects over dump/restore
- Prevent PL/pgSQL comment from throwing an error in a non-superuser restore
- Delay REFRESH MATERIALIZED VIEW until dependent indexes are created
- Handle large object comments
- This is difficult to do because the large object doesn't exist when --schema-only is loaded.
- Migrate pg_statistic by dumping it out as a flat file, so analyze is not necessary
- Find cleaner way to start/stop dedicated servers for upgrades
- Desired changes that would prevent upgrades with pg_upgrade
- 32-bit page checksums
- Add metapage to GiST indexes
- Clean up hstore's internal representation
- Remove tuple infomask bit HEAP_MOVED_OFF and HEAP_MOVED_IN
- fix char() index trailing space handling
- Use non-collation-aware comparisons for GIN opclasses
- Document differences between ecpg and the SQL standard and information about the Informix-compatibility module.
- Provide a way to specify size of a bytea parameter
- Allow reuse of cursor name variables
- Add PQexecf() that allows complex parameter substitution
- Add SQLSTATE and severity to errors generated within libpq itself
- Add support for interface/ipaddress binding to libpq
- When receiving a FATAL error, remember it so that libpq doesn't profess ignorance about why the session was closed
- Pipelining support for libpq async API and an array-valued PQexecPrepared that uses it
- Improve storage of deferred trigger queue
- Right now all deferred trigger information is stored in backend memory. This could exhaust memory for very large trigger queues. This item involves dumping large queues into files, or doing some kind of join to process all the triggers, some bulk operation, or a bitmap.
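A sketch of the dump-to-files variant, with a hypothetical in-memory bound (an illustration only, not the backend's data structure):

```python
import pickle
import tempfile
from collections import deque

class SpillableQueue:
    """FIFO queue that keeps at most max_in_memory events in RAM and
    spills the oldest ones to a temporary file."""
    def __init__(self, max_in_memory=1000):
        self.max_in_memory = max_in_memory
        self.memory = deque()
        self.spill_file = None

    def push(self, event):
        if len(self.memory) >= self.max_in_memory:
            if self.spill_file is None:
                self.spill_file = tempfile.TemporaryFile()
            # Move the oldest in-memory event to disk to bound RAM use.
            pickle.dump(self.memory.popleft(), self.spill_file)
        self.memory.append(event)

    def drain(self):
        """Yield all events in insertion order: spilled ones first."""
        if self.spill_file is not None:
            self.spill_file.seek(0)
            while True:
                try:
                    yield pickle.load(self.spill_file)
                except EOFError:
                    break
        while self.memory:
            yield self.memory.popleft()
```

Because deferred triggers fire in queue order at commit, the spill file only ever needs sequential writes and one sequential read.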
- Allow triggers to be disabled in only the current session.
- This is currently possible by starting a multi-statement transaction, modifying the system tables, performing the desired SQL, restoring the system tables, and committing the transaction. ALTER TABLE ... TRIGGER requires a table lock so it is not ideal for this usage.
- With disabled triggers, allow pg_dump to use ALTER TABLE ADD FOREIGN KEY
- If the dump is known to be valid, allow foreign keys to be added without revalidating the data.
- When statement-level triggers are defined on a parent table, have them fire only on the parent table, and fire child table triggers only where appropriate
- Tighten trigger permission checks
- Allow BEFORE INSERT triggers on views
- Add database and transaction-level triggers
- Avoid requirement for AFTER trigger functions to return a value
- Allow creation of inline triggers
- Allow unique indexes across inherited tables (requires multi-table indexes)
- Postgres 11 allows unique indexes across partitions if the partition key is part of the index.
- Research whether ALTER TABLE / SET SCHEMA should work on inheritance hierarchies (and thus support ONLY)
- ALTER TABLE variants sometimes support recursion and sometimes not, but this is poorly/not documented, and the ONLY marker would then be silently ignored. Clarify the documentation, and reject ONLY if it is not supported.
- Prevent index uniqueness checks when UPDATE does not modify the column
- Uniqueness (index) checks are done when updating a column even if the column is not modified by the UPDATE. However, HOT already short-circuits this in common cases, so more work might not be helpful.
- Allow multiple indexes to be created concurrently, ideally via a single heap scan
- pg_restore allows parallel index builds, but it is done via subprocesses, and there is no SQL interface for this. CLUSTER could definitely benefit from this.
- Consider sorting entries before inserting into btree index
- Consider using "effective_io_concurrency" for index scans
- Currently only bitmap scans use this, which might be fine because most multi-row index scans use bitmap scans.
- Allow GIN indexes to be used for exclusion constraints
- Allow "loose" or "skip" scans on btree indexes in which the first column has low cardinality
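The idea can be sketched over a sorted list standing in for the index; each probe below would be a btree descent in the real thing, and the names are illustrative:

```python
import bisect

def skip_scan(index, b_target):
    """index is sorted on (a, b); find entries with b == b_target by
    probing once per distinct value of the leading column a, instead
    of scanning every entry."""
    results = []
    i, n = 0, len(index)
    while i < n:
        a = index[i][0]
        # Probe for (a, b_target) within the current a-group.
        j = bisect.bisect_left(index, (a, b_target), i)
        if j < n and index[j] == (a, b_target):
            results.append(index[j])
        # Skip directly to the first entry of the next a value.
        i = bisect.bisect_right(index, (a, float("inf")), i)
    return results
```

With a low-cardinality leading column this touches on the order of distinct(a) * log(n) entries rather than all n.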
- Make the planner's "special index operator" mechanism extensible
- Improve GIN performance
- Teach GIN cost estimation about "fast scans"
- Allow unlogged indexes
- Fix performance issues in contrib/seg and contrib/cube GiST support
- Add UNIQUE capability to hash indexes
- Allow multi-column hash indexes
- This requires all columns to be specified for a query to use the index.
- Write Ahead Logging for Hash Indexes
- Allow sorts of skinny tuples to use even more available memory.
- Now that it is not limited by MaxAllocSize, don't limit by INT_MAX either.
- Consider automatic caching of statements at various levels:
- Parsed query tree
- Query execute plan
- Query results
- Cached Query Plans (was: global prepared statements)
- PoC plpgsql - possibility to force custom or generic plan
- Cached/global query plans, autopreparation
- Consider allowing higher priority queries to have referenced shared buffer pages stay in memory longer
- Fix memory leak caused by negative catcache entries
- Consider having single-page pruning update the visibility map
- Re: visibility maps and heap_prune
- Allow VACUUM FULL and CLUSTER to update the visibility map
- Improve tracking of total relation tuple counts now that vacuum doesn't always scan the whole heap
- Bias FSM towards returning free space near the beginning of the heap file, in hopes that empty pages at the end can be truncated by VACUUM
- Add a way to compact tables without exclusive locking, similar to pre-9.0 VACUUM FULL
- Consider a more compact data representation for dead tuple locations within VACUUM
- Provide more information in order to improve user-side estimates of dead space bloat in relations
- Reduce the number of table scans performed by vacuum
- Vacuum GIN indexes in physical order rather than logical order
- Avoid creation of the free space map for small tables
- Issue log message to suggest VACUUM FULL if a table is nearly empty?
- Prevent long-lived temporary tables from causing frozen-xid advancement starvation
- The problem is that autovacuum cannot vacuum them to set frozen xids; only the session that created them can.
- Prevent autovacuum from running if an old transaction is still running from the last vacuum
- Have autoanalyze of parent tables occur when child tables are modified
- Allow visibility map all-visible bits to be set even when an auto-ANALYZE is running
- Improve autoanalyze thresholds for small tables
- Fix problem when multiple subtransactions of the same outer transaction hold different types of locks, and one subtransaction aborts
- Improve deadlock detection when a page cleaning lock conflicts with a shared buffer that is pinned
- Detect deadlocks involving LockBufferForCleanup()
- Allow finer control over who is cancelled in a deadlock
Startup Time Improvements
- Allow backends to change their database without restart
- This allows for faster server startup.
- Eliminate need to write full pages to WAL before page modification
Currently, to protect against partial disk page writes, we write full page images to WAL before they are modified so we can correct any partial page writes during recovery. These pages can also be eliminated from point-in-time archive files.
- Re: Index Scans become Seq Scans after VACUUM ANALYSE
- WIP double writes
- double writes
- Double-write with Fast Checksums
- double writes using "double-write buffer" approach
- When full page writes are off, write CRC to WAL and check file system blocks on recovery
- If CRC check fails during recovery, remember the page in case a later CRC for that page properly matches. The difficulty is that hint bits are not WAL logged, meaning a valid page might not match the earlier CRC.
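The remember-and-clear behavior could look like this sketch, with zlib's CRC-32 standing in for whatever checksum WAL would carry:

```python
import zlib

def verify_recovered_pages(pages):
    """pages is a sequence of (page_id, data, stored_crc) in replay
    order. Pages whose CRC fails are remembered as suspect; a later
    copy of the same page with a matching CRC clears the suspicion."""
    suspect = set()
    for page_id, data, stored_crc in pages:
        if zlib.crc32(data) & 0xFFFFFFFF == stored_crc:
            suspect.discard(page_id)
        else:
            suspect.add(page_id)
    return suspect
```

Hint-bit updates that are not WAL-logged are exactly what defeats the naive version of this check: a perfectly valid page can fail an earlier CRC.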
- Write full pages during file system write and not when the page is modified in the buffer cache
- This allows most full page writes to happen in the background writer. It might cause problems for applying WAL on recovery into a partially-written page, but later the full page will be replaced from WAL.
- Allow WAL information to recover corrupted pg_controldata
- Speed WAL recovery by allowing more than one page to be prefetched
- This should be done utilizing the same infrastructure used for prefetching in general to avoid introducing complex error-prone code in WAL replay.
- Improve WAL concurrency by increasing lock granularity
- Have resource managers report the duration of their status changes
- Close deleted WAL files held open in *nix by long-lived read-only backends
Optimizer / Executor
- Improve ability to display optimizer analysis using OPTIMIZER_DEBUG
- Log statements where the optimizer row estimates were dramatically different from the number of rows actually found?
- Consider compressed annealing to search for query plans
- This might replace GEQO.
- Allow single batch hash joins to preserve outer pathkeys
- Avoid building the same hash table more than once during the same query
- Consider having the background writer update the transaction status hint bits before writing out the page
- Implementing this requires the background writer to have access to system catalogs and the transaction status log.
- Consider adding buffers the background writer finds reusable to the free list
- Automatically tune bgwriter_delay based on activity rather than using a fixed interval
- Consider whether increasing BM_MAX_USAGE_COUNT improves performance
- Test to see if calling PreallocXlogFiles() from the background writer will help with WAL segment creation latency
Concurrent Use of Resources
- Do async I/O for faster random read-ahead of data
Async I/O allows multiple I/O requests to be sent to the disk with results coming back asynchronously.
- Asynchronous I/O Support
- Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1
- There's random access and then there's random access
- Bitmap index scan preread using posix_fadvise (Was: There's random access and then there's random access)
- SMP scalability improvements
- Allow user configuration of TOAST thresholds
- Reduce unnecessary cases of deTOASTing
- Reduce costs of repeat de-TOASTing of values
- Have pg_stat_activity display query strings in the correct client encoding
- Allow reporting of stalls due to wal_buffer wrap-around
- Restructure pg_stat_database columns tup_returned and tup_fetched to return meaningful values
- Improve handling of pg_stat_statements handling of bind "IN" variables
- Rather than consider mmap()-ing in 8k pages, consider mmap()'ing entire files into a backend?
- Doing I/O to large tables would consume a lot of address space or require frequent mapping/unmapping. Extending the file also causes mapping problems that might require mapping only individual pages, leading to thousands of mappings. Another problem is that there is no way to _prevent_ I/O to disk from the dirty shared buffers so changes could hit disk before WAL is written.
- Allow configuration of backend priorities via the operating system
- Though backend priorities make priority inversion during lock waits possible, research shows that this is not a huge problem.
- Consider if CommandCounterIncrement() can avoid its AcceptInvalidationMessages() call
- Consider Cartesian joins when both relations are needed to form an indexscan qualification for a third relation
- Consider not storing a NULL bitmap on disk if all the NULLs are trailing
- Sort large UPDATE/DELETEs so it is done in heap order
- Add auto-tuning of work_mem
- Consider decreasing the I/O caused by updating tuple hint bits
- Hint Bits and Write I/O
- Re: [HACKERS] Hint Bits and Write I/O
- Avoid reading in b-tree pages when replaying vacuum records in hot standby mode
- Restructure truncation logic to be more resistant to failure
- This also involves not writing dirty buffers for a truncated or dropped relation
- Enhance foreign data wrappers, parallelism, partitioning, and perhaps add a global snapshot/transaction manager to allow creation of a proof-of-concept built-in sharding solution
- Ideally these enhancements and new facilities will be available to external sharding solutions as well.
- Deal with encoding issues for filenames in the server filesystem
- Provide schema name and other fields available from SQL GET DIAGNOSTICS in error reports
- Use sa_mask to close race conditions between signal handlers
- Allow pg_export_snapshot() to run on hot standby servers
- This will allow parallel pg_dump on such servers.
- Provide a way to enumerate and unregister background workers
- Right now the only way to unregister bgworkers is from within the worker with proc_exit(0), or by registering them with BGW_NEVER_RESTART
- Rationalize division of labor between initdb and bootstrap
- Allow creation of universal binaries for Darwin
- Consider GnuTLS if OpenSSL license becomes a problem
- Consider making NAMEDATALEN more configurable
- There is demand for making 128 the default, but there are also concerns about storage and memory usage and performance. So a rearchitecting to make the storage variable-length might be preferred.
- Research use of signals and sleep wake ups
- Consider simplifying how memory context resets handle child contexts
- Implement the non-threaded Avahi service discovery protocol
- Reduce data row alignment requirements on some 64-bit systems
- Restructure TOAST internal storage format for greater flexibility
- Consider removing the attribute options cache
- Restructure /contrib section
- Improve signal handling
- Fix MSVC NLS support, like for to_char()
- Fix global namespace issues when using multiple terminal server sessions
- Change from the current autoconf/gmake build system to cmake
- Improve consistency of path separator usage
- Fix cross-compiling on Windows
- Reduce file statistics overhead on directory reads
- Fix hang with long file paths
Wire Protocol Changes / v4 Protocol
- Ensure the client can determine the encoding of messages sent early in the handshake
- Let the client indicate character encoding of database names, user names, passwords, and of pre-auth error messages returned by the server
- Send numeric version to clients in fixed header
- Mark result columns as known-not-null when possible
- Use compression
- Specify and implement wire protocol compression. If SSL transparent compression is used, we would hopefully avoid the overhead of key negotiation and encryption when SSL is configured only for compression. Note that compression is being removed from TLS 1.3, so we really need to do it ourselves.
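A toy framing of the idea; the 'Z' message type, length word, and choice of zlib are all arbitrary here, and picking them for real is the open design work this item describes:

```python
import struct
import zlib

def compress_message(payload: bytes) -> bytes:
    """Wrap a payload as: type byte, 4-byte length (covering the
    length word and body, not the type byte), zlib-compressed body."""
    body = zlib.compress(payload)
    return b"Z" + struct.pack("!I", 4 + len(body)) + body

def decompress_message(message: bytes) -> bytes:
    assert message[0:1] == b"Z"
    (length,) = struct.unpack("!I", message[1:5])
    return zlib.decompress(message[5:1 + length])
```

This mirrors the existing v3 message layout (type byte plus self-inclusive length), which is one reason a compressed-message type is a plausible shape for the feature.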
- Update clients to use data types, typmod, schema.table.column names of result sets using new statement protocol
- Set protocol for wire format negotiation
- Make sure upgrading to a 4.1 protocol version will actually work smoothly
- Allow multi-state authentication (e.g. try client peer, fall back to md5)
- Allow re-authentication
- Let the client request re-authentication as a different user mid-session, for connection pools that pass through the handshake.
- Identify the affected object in CommandComplete message?
- Allow negotiation of encryption, STARTTLS style, rather than forcing client to decide on SSL or !SSL before connecting
- Permit lazy fetches of large values, at least out-of-line TOASTED values
- Add session-level whitelisting of types for binary-mode transfer
- Send client the xid when it is allocated
- This lets the client later ask the server "did this commit or not?" after an indeterminate result due to a crash or connection loss
- Report xlog position in commit message
- Help enable client-side failover by providing a token clients can use to see if a commit has replayed to replicas yet
- Changes to make cancellations more reliable and more secure
- Clarify semantics of statement_timeout in extended query protocol
- Batched and pipelined queries have unexpected behaviour with statement_timeout. Client needs to be able to specify statement boundary with protocol message.
- Create a more efficient way to handle out-of-line parameters
- Separate transaction delineation from protocol error recovery (in v3 both are managed via the same Sync message)
- Provide a manpage for postgresql.conf
- Document support for N' ' national character string literals, if it matches the SQL standard
- Add pre-parsing phase that converts non-ISO syntax to supported syntax
- This could allow SQL written for other databases to run without modification.
- Add features of Oracle-style packages
- A package would be a schema with session-local variables, public/private functions, and initialization functions. It is also possible to implement these capabilities in any schema and not use a separate "packages" syntax at all.
- Consider allowing control of upper/lower case folding of unquoted identifiers
- Bringing PostgreSQL towards the standard regarding case folding
- Re: [SQL] Case Preservation disregarding case sensitivity?
- TODO Item: Consider allowing control of upper/lower case folding of unquoted, identifiers
- Identifier case folding notes
- Cluster wide option to control symbol case folding
- Add autonomous transactions
- Give query progress indication
- Rethink our type system
Features We Do Not Want
The following features have been discussed ad nauseam on the PostgreSQL mailing lists, and the consensus has been that the project is not interested in them. As such, if you are going to bring them up as potential features, you will want to be familiar with all of the arguments against these features which have been made over the years. If you decide to work on such features anyway, you should be aware that you face a higher-than-normal barrier to get the Project to accept them.
- Rewrite the code in a different language (not wanted)
- All backends running as threads in a single process (not wanted)
- This eliminates the process protection we get from the current setup. Thread creation is usually the same overhead as process creation on modern systems, so it seems unwise to use a pure threaded model, and MySQL and DB2 have demonstrated that threads introduce as many issues as they solve. Threading specific operations such as I/O, seq scans, and connection management has been discussed and will probably be implemented to enable specific performance features. Moving to a threaded engine would also require halting all other work on PostgreSQL for one to two years.
- "Oracle-style" optimizer hints (not wanted)
- Optimizer hints, as implemented in Oracle and other RDBMSes, are used to work around problems in the optimizer and introduce upgrade and maintenance issues. We would rather have such problems reported and fixed. We have discussed a more sophisticated system of per-class cost adjustment instead, but a specification remains to be developed. See Optimizer Hints Discussion for further information.
- Embedded server (not wanted)
- While PostgreSQL clients run fine in limited-resource environments, the server requires multiple processes and a stable pool of resources to run reliably and efficiently. Stripping down the PostgreSQL server to run in the same process address space as the client application would add too much complexity and too many failure cases. Besides, there are several very mature embedded SQL databases already available.
- Obfuscated function source code (not wanted)
- Obfuscating function source code has minimal protective benefits because anyone with super-user access can find a way to view the code. At the same time, it would greatly complicate backups and other administrative tasks. To prevent non-super-users from viewing function source code, remove SELECT permission on pg_proc.
- Indeterminate behavior for the GROUP BY clause (not wanted)
- At least one other database product allows specification of a subset of the result columns which GROUP BY would need to be able to provide predictable results; the server is free to return any value from the group. This is not viewed as a desirable feature. PostgreSQL 9.1 allows result columns that are not referenced by GROUP BY if a primary key for the same table is referenced in GROUP BY.
- On-disk bitmap indexes (not wanted)
- The rigidity of on-disk bitmap indexes, and the existence of GIN and in-memory bitmaps make this undesirable.