What's new in PostgreSQL 9.2

From PostgreSQL wiki

Revision as of 10:25, 19 July 2012 by Sternocera (Talk | contribs)


This document showcases many of the latest developments in PostgreSQL 9.2, compared with the previous major release, PostgreSQL 9.1. There are many improvements in this release, so this wiki page covers the more important changes in detail. The full list of changes is itemised in the Release Notes.

This page is incomplete!


Major new features

Index-only scans

In PostgreSQL, indexes carry no "visibility" information. This means that when you access a record through an index, PostgreSQL has to visit the actual tuple in the table to be sure it is visible to you: the tuple the index points to may simply be an old version of the record you are looking for.

This can be a big performance problem: the index is mostly ordered, so accessing its entries is quite efficient, while the table records may be scattered all over the place (that is one reason why PostgreSQL has a CLUSTER command, but that's another story). In 9.2, PostgreSQL will use an "Index Only Scan" when possible, and not access the record itself if it doesn't need to.

There is still no visibility information in the index itself. To make this work, PostgreSQL uses the visibility map, which records whether the whole content of a (usually) 8 kB page is visible to all transactions or not. When the index entry points to a tuple contained in an "all visible" page, PostgreSQL won't have to access the tuple: it can build the result directly from the index. Of course, all the columns requested by the query must be in the index.

The visibility map is maintained by VACUUM (which sets the all-visible bit) and by the backends doing SQL write work (which clear it).

Here is an example.

 create table demo_ios (col1 float, col2 float, col3 text);

We'll fill this table with random data, so that the rows are "scattered" on disk. We'll insert 100 million records, to get a big recordset that doesn't fit in memory (this is a 4 GB RAM machine). This is an ideal case, built for this demo; the gains won't be that big in real life.

 insert into demo_ios select generate_series(1,100000000),random(), 'mynotsolongstring';
 
 select pg_size_pretty(pg_total_relation_size('demo_ios'));
  pg_size_pretty 
 ----------------
  6512 MB

Let's pretend that the query is this:

 SELECT col1,col2 FROM demo_ios where col2 BETWEEN 0.02 AND 0.03

In order to use an index only scan on this, we need an index on col2,col1 (col2 first, as it is used in the WHERE clause).

 CREATE index idx_demo_ios on demo_ios(col2,col1);

We run VACUUM so that the visibility map is up to date:

 VACUUM demo_ios;

All the timings below were measured with cold OS and PostgreSQL caches (that's where the gains are, as the purpose of Index Only Scans is to reduce I/O).

Let's first try without Index Only Scans:

 set enable_indexonlyscan to off;
 
 explain (analyze,buffers) select col1,col2 from demo_ios where col2 between 0.01 and 0.02;
                                                                QUERY PLAN                                                               
 ----------------------------------------------------------------------------------------------------------------------------------------
  Bitmap Heap Scan on demo_ios  (cost=25643.01..916484.44 rows=993633 width=16) (actual time=763.391..362963.899 rows=1000392 loops=1)
    Recheck Cond: ((col2 >= 0.01::double precision) AND (col2 <= 0.02::double precision))
    Rows Removed by Index Recheck: 68098621
    Buffers: shared hit=2 read=587779
    ->  Bitmap Index Scan on idx_demo_ios  (cost=0.00..25394.60 rows=993633 width=0) (actual time=759.011..759.011 rows=1000392 loops=1)
          Index Cond: ((col2 >= 0.01::double precision) AND (col2 <= 0.02::double precision))
          Buffers: shared hit=2 read=3835
  Total runtime: 364390.127 ms


With Index Only Scans:

 explain (analyze,buffers) select col1,col2 from demo_ios where col2 between 0.01 and 0.02;
                                                                   QUERY PLAN                                                                   
 -----------------------------------------------------------------------------------------------------------------------------------------------
  Index Only Scan using idx_demo_ios on demo_ios  (cost=0.00..35330.93 rows=993633 width=16) (actual time=58.100..3250.589 rows=1000392 loops=1)
    Index Cond: ((col2 >= 0.01::double precision) AND (col2 <= 0.02::double precision))
    Heap Fetches: 0
    Buffers: shared hit=923073 read=3848
  Total runtime: 4297.405 ms


As nothing is free, there are a few things to note:

  • Adding indexes for index only scans obviously adds indexes to your table. So updates will be slower.
  • You may index columns that weren't indexed before, so there will be fewer opportunities for HOT updates.
  • Gains will probably be smaller in real life situations.

This required making visibility map changes crash-safe, so visibility map bit changes are now WAL-logged.

Replication improvements

Streaming Replication gets even more polished with this release. One of the main remaining gripes about streaming replication was that all the slaves had to be connected to the same, unique master, consuming its resources.

Moreover, in case of a failover, it was very complicated to reconnect all the remaining slaves to the newly promoted master.

To be on the safe side, it was easier to re-synchronize the slaves with the new master from scratch, meaning that during the failover only one server was active, and under heavy load, as it was used to rebuild all the slaves.

  • With 9.2, a slave can also be a replication master, allowing for cascading replication.

Let's build this. We start with an already working 9.2 database.

We set it up for replication:

postgresql.conf:

 wal_level=hot_standby #(could be archive too)
 max_wal_senders=5
 hot_standby=on

You'll probably also want to activate archiving in production; it won't be done here.
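For reference, archiving could look like something like this in postgresql.conf (the archive directory is only an illustration, adapt the command to your environment):

 archive_mode = on
 archive_command = 'cp %p /mnt/server/archivedir/%f'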

pg_hba.conf (do not use trust in production):

 host   replication replication_user          0.0.0.0/0                      md5

Create the user:

 create user replication_user replication password 'secret';

Clone the database:

 pg_basebackup -h localhost -U replication_user -D data2
 Password:

We now have a brand new cluster in the data2 directory. We change its port so that it can start alongside the master (postgresql.conf):

 port=5433

We add a recovery.conf to tell it how to stream from the master database:

 standby_mode = on
 primary_conninfo = 'host=localhost port=5432 user=replication_user password=secret'

We start the slave:

 pg_ctl -D data2 start
 server starting
 LOG:  database system was interrupted; last known up at 2012-07-03 17:58:09 CEST
 LOG:  creating missing WAL directory "pg_xlog/archive_status"
 LOG:  entering standby mode
 LOG:  streaming replication successfully connected to primary
 LOG:  redo starts at 0/9D000020
 LOG:  consistent recovery state reached at 0/9D0000B8
 LOG:  database system is ready to accept read only connections

Now, let's add a second slave, which will use this slave:


 pg_basebackup -h localhost -U replication_user -D data3 -p 5433
 Password: 

We edit data3's postgresql.conf to change the port:

 port=5434

We modify the recovery.conf to stream from the slave:

 standby_mode = on
 primary_conninfo = 'host=localhost port=5433 user=replication_user password=secret'

We start the cluster:

 pg_ctl -D data3 start
 server starting
 LOG:  database system was interrupted while in recovery at log time 2012-07-03 17:58:09 CEST
 HINT:  If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
 LOG:  creating missing WAL directory "pg_xlog/archive_status"
 LOG:  entering standby mode
 LOG:  streaming replication successfully connected to primary
 LOG:  redo starts at 0/9D000020
 LOG:  consistent recovery state reached at 0/9E000000
 LOG:  database system is ready to accept read only connections

Now, everything modified on the master cluster gets streamed to the first slave, and from there to the second slave. This second replication has to be monitored from the first slave (the master knows nothing about it).


  • As you may have noticed from the example, pg_basebackup now works from slaves.
  • There is another use case that wasn't covered: what if a user doesn't care for having a full-fledged slave, and only wants to stream the WAL files to another location, to benefit from the reduced data loss without the burden of maintaining a slave?

pg_receivexlog is provided for exactly this purpose: it poses as a PostgreSQL slave, but only stores the log files in a directory as they are streamed:

 pg_receivexlog -D /tmp/new_logs -h localhost -U replication_user

will connect to the master (or a slave), and start creating files:

 ls /tmp/new_logs/
 00000001000000000000009E.partial

The files have the full segment size, so they can be used for a normal recovery of the database. This is similar to an archive_command, but with a much smaller granularity.

  • synchronous_commit has a new value: remote_write. It can be used when there is a synchronous slave (synchronous_standby_names is set), and means that the master doesn't have to wait for the slave to have written the data to disk, only for the slave to have acknowledged receiving it. With this setting, data is protected from a crash of the master, but could still be lost if the slave crashed at the same time (i.e. before having written the in-flight data to disk). As this is quite a remote possibility, some people will be interested in this compromise.
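On the master, this compromise could be set up with something like this in postgresql.conf (slave1 is a made-up application_name that the slave would set in its primary_conninfo):

 synchronous_standby_names = 'slave1'
 synchronous_commit = remote_write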



JSON datatype

The JSON datatype is meant for storing JSON-structured data. (More info: depesz blog)
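As a minimal sketch (the events table is made up for this illustration): the json type validates its input at insert time, and 9.2 also provides row_to_json() and array_to_json() to produce JSON from relational data:

 CREATE TABLE events (id serial, properties json);
 INSERT INTO events (properties) VALUES ('{"browser":"firefox","lang":"fr"}');  -- ok
 INSERT INTO events (properties) VALUES ('{"browser":');                        -- rejected:
 ERROR:  invalid input syntax for type json
 SELECT row_to_json(events) FROM events;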

Range Types

RangeTypes are added. (More info: [1])
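For a quick, illustrative taste (the reservation table is made up; the btree_gist extension is needed to mix the scalar room column into the GiST index):

 SELECT int4range(10, 20) @> 15;   -- does the range contain 15? yes
 -- exclusion constraint: no two reservations of the same room may overlap
 CREATE EXTENSION btree_gist;
 CREATE TABLE reservation (
     room   int,
     during tsrange,
     EXCLUDE USING gist (room WITH =, during WITH &&)
 );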

Performance improvements

This version brings performance improvements in a very large range of domains (non-exhaustive list):

  • The most visible will probably be Index Only Scans, already introduced in this document.
  • The contention on several big locks has been significantly reduced, leading to better multi-processor scalability, mostly on machines with 32 cores or more.
  • The performance of in-memory sorts has been improved by up to 25% in some situations, thanks to specialized sort functions.
  • An idle PostgreSQL server now makes fewer wakeups, leading to lower power consumption. This is especially useful in virtualized and embedded environments.
  • COPY has been improved: it generates less WAL volume and takes fewer locks on table pages.
  • The system can now track I/O durations.

This last one deserves a little explanation, as it can be a little tricky. Tracking I/O durations means repeatedly asking the operating system for the current time. Depending on the operating system and the hardware, this can be quite cheap or extremely costly. The most important factor is where the system gets its time from: it could be read directly from the processor (TSC), from dedicated hardware such as the HPET, or through an ACPI call. The cost of getting the time can vary by a factor of a thousand between these sources.

If you are interested in this timing data, you should first check whether your system supports it without too much of a performance hit. PostgreSQL provides the pg_test_timing tool for this:

$ pg_test_timing 
Testing timing overhead for 3 seconds.
Per loop time including overhead: 28.02 nsec
Histogram of timing durations:
   < usec:      count   percent
       32:         41  0.00004%
       16:       1405  0.00131%
        8:        200  0.00019%
        4:        388  0.00036%
        2:    2982558  2.78523%
        1:  104100166 97.21287%

Here, everything is good: getting the time costs around 28 nanoseconds, with a very small variation. Anything under 100 nanoseconds should be fine for production. If you get higher values, you may still find a way to tune your system; check the documentation.

Anyway, here is the data you'll be able to collect if your system is ready for this:

First, you'll get per-database statistics, which will now give accurate information about which database is doing most I/O:

=# select * from pg_stat_database where datname = 'mydb';
-[ RECORD 1 ]--+------------------------------
datid          | 16384
datname        | mydb
numbackends    | 1
xact_commit    | 270
xact_rollback  | 2
blks_read      | 1961
blks_hit       | 17944
tup_returned   | 269035
tup_fetched    | 8850
tup_inserted   | 16
tup_updated    | 4
tup_deleted    | 45
conflicts      | 0
temp_files     | 0
temp_bytes     | 0
deadlocks      | 0
blk_read_time  | 583.774
blk_write_time | 0
stats_reset    | 2012-07-03 17:18:54.796817+02

We see here that mydb has only consumed 583.774 milliseconds of read time.

EXPLAIN benefits from this too:

=# explain (analyze,buffers) select count(*) from mots ;
                                                   QUERY PLAN                                                   
----------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1669.95..1669.96 rows=1 width=0) (actual time=21.943..21.943 rows=1 loops=1)
   Buffers: shared read=493
   I/O Timings: read=2.578
   ->  Seq Scan on mots  (cost=0.00..1434.56 rows=94156 width=0) (actual time=0.059..12.933 rows=94156 loops=1)
         Buffers: shared read=493
         I/O Timings: read=2.578
 Total runtime: 22.059 ms

We now have separate information about the time taken to retrieve data from the operating system. Obviously, here, the data was in the operating system's cache (2.6 milliseconds to read 493 blocks).

And last, if you have enabled pg_stat_statements:

select * from pg_stat_statements where query ~ 'words';
-[ RECORD 1 ]-------+---------------------------
userid              | 10
dbid                | 16384
query               | select count(*) from words;
calls               | 2
total_time          | 78.332
rows                | 2
shared_blks_hit     | 0
shared_blks_read    | 986
shared_blks_dirtied | 0
shared_blks_written | 0
local_blks_hit      | 0
local_blks_read     | 0
local_blks_dirtied  | 0
local_blks_written  | 0
temp_blks_read      | 0
temp_blks_written   | 0
blk_read_time       | 58.427
blk_write_time      | 0
  • As with every version, the optimizer has received its share of improvements:
    • Prepared statements used to be optimized once, without any knowledge of the parameters' values. With 9.2, the planner will use plans specific to the parameter values sent (the query is planned at execution time), unless the query is executed several times and the planner decides that the generic plan is not much more expensive than the specific plans.
    • A new feature has been added: parameterized paths. Simply put, a sub-part of a query plan can use parameters it receives from a parent node. This fixes several bad plans that could occur, especially when the optimizer couldn't reorder joins to put nested loops where it wanted to.

This example comes straight from the developers' mailing list:

CREATE TABLE a (
    a_id serial PRIMARY KEY NOT NULL,
    b_id integer
);
CREATE INDEX a__b_id ON a USING btree (b_id);


CREATE TABLE b (
    b_id serial NOT NULL,
    c_id integer
);
CREATE INDEX b__c_id ON b USING btree (c_id);


CREATE TABLE c (
    c_id serial PRIMARY KEY NOT NULL,
    value integer UNIQUE
);

INSERT INTO b (b_id, c_id)
    SELECT g.i, g.i FROM generate_series(1, 50000) g(i);

INSERT INTO a(b_id)
    SELECT g.i FROM generate_series(1, 50000) g(i);

INSERT INTO c(c_id,value)
    VALUES (1,1);

So we have a referencing b, and b referencing c.

Here is an example of a query working badly with PostgreSQL 9.1:

EXPLAIN ANALYZE SELECT 1                                                                           
FROM                                    
    c
WHERE
    EXISTS (
        SELECT *   
        FROM a
            JOIN b USING (b_id)
        WHERE b.c_id = c.c_id)
    AND c.value = 1;
                                                      QUERY PLAN                                                       
-----------------------------------------------------------------------------------------------------------------------
 Nested Loop Semi Join  (cost=1347.00..3702.27 rows=1 width=0) (actual time=13.799..13.802 rows=1 loops=1)
   Join Filter: (c.c_id = b.c_id)
   ->  Index Scan using c_value_key on c  (cost=0.00..8.27 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=1)
         Index Cond: (value = 1)
   ->  Hash Join  (cost=1347.00..3069.00 rows=50000 width=4) (actual time=13.788..13.788 rows=1 loops=1)
         Hash Cond: (a.b_id = b.b_id)
         ->  Seq Scan on a  (cost=0.00..722.00 rows=50000 width=4) (actual time=0.007..0.007 rows=1 loops=1)
         ->  Hash  (cost=722.00..722.00 rows=50000 width=8) (actual time=13.760..13.760 rows=50000 loops=1)
               Buckets: 8192  Batches: 1  Memory Usage: 1954kB
               ->  Seq Scan on b  (cost=0.00..722.00 rows=50000 width=8) (actual time=0.008..5.702 rows=50000 loops=1)
 Total runtime: 13.842 ms

Not that bad: 13 milliseconds. Still, we are doing sequential scans on a and b, when common sense tells us that c.value = 1 should be used to filter rows more aggressively.

Here's what 9.2 does with this query:

                                                      QUERY PLAN     
----------------------------------------------------------------------------------------------------------------------------
 Nested Loop Semi Join  (cost=0.00..16.97 rows=1 width=0) (actual time=0.035..0.037 rows=1 loops=1)
   ->  Index Scan using c_value_key on c  (cost=0.00..8.27 rows=1 width=4) (actual time=0.007..0.009 rows=1 loops=1)
         Index Cond: (value = 1)
   ->  Nested Loop  (cost=0.00..8.69 rows=1 width=4) (actual time=0.025..0.025 rows=1 loops=1)
         ->  Index Scan using b__c_id on b  (cost=0.00..8.33 rows=1 width=8) (actual time=0.007..0.007 rows=1 loops=1)
               Index Cond: (c_id = c.c_id)
         ->  Index Only Scan using a__b_id on a  (cost=0.00..0.35 rows=1 width=4) (actual time=0.014..0.014 rows=1 loops=1)
               Index Cond: (b_id = b.b_id)
 Total runtime: 0.089 ms

The "parameterized path" is:

   ->  Nested Loop  (cost=0.00..8.69 rows=1 width=4) (actual time=0.025..0.025 rows=1 loops=1)
         ->  Index Scan using b__c_id on b  (cost=0.00..8.33 rows=1 width=8) (actual time=0.007..0.007 rows=1 loops=1)
               Index Cond: (c_id = c.c_id)
         ->  Index Only Scan using a__b_id on a  (cost=0.00..0.35 rows=1 width=4) (actual time=0.014..0.014 rows=1 loops=1)
               Index Cond: (b_id = b.b_id)

This part of the plan depends on a parent node (c_id = c.c_id): it is executed each time with a different parameter coming from the parent node.

This plan is of course much faster, as there is no need to fully scan a, nor to fully scan and hash b.


SP-GIST

TODO

pg_stat_statements

This contrib module has received a lot of improvements in this version:

  • Queries are normalized: queries that are identical except for their constant values are considered the same, as long as their post-parse analysis query trees (that is, the internal representation of the query before rule expansion) are the same. This also implies that differences that are not semantically essential to the query, such as variations in whitespace or alias names, or the use of one particular syntax over another equivalent one, will not differentiate queries.
=#select * from words where word= 'foo';
 word 
------
(0 rows)

=# select * from words where word= 'bar';
 word 
------
 bar

=#select * from pg_stat_statements where query like '%words where%';
-[ RECORD 1 ]-------+-----------------------------------
userid              | 10
dbid                | 16384
query               | select * from words where word= ?;
calls               | 2
total_time          | 142.314
rows                | 1
shared_blks_hit     | 3
shared_blks_read    | 5
shared_blks_dirtied | 0
shared_blks_written | 0
local_blks_hit      | 0
local_blks_read     | 0
local_blks_dirtied  | 0
local_blks_written  | 0
temp_blks_read      | 0
temp_blks_written   | 0
blk_read_time       | 142.165
blk_write_time      | 0

The two queries are shown as one in pg_stat_statements.

  • For prepared statements, the execution part (the EXECUTE statement) is now charged to the PREPARE statement. This makes the module easier to use, and avoids the double counting that occurred with PostgreSQL 9.1.
  • pg_stat_statements displays timings in milliseconds, to be consistent with other system views.

Explain improvements

  • Timing can now be disabled with EXPLAIN (ANALYZE on, TIMING off), leading to lower overhead on platforms where getting the current time is expensive.
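For example (mots is the table used earlier in this document; the per-node "actual time" figures disappear from the output, while row counts and the total runtime are still reported):

 EXPLAIN (ANALYZE on, TIMING off) SELECT count(*) FROM mots;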


  • EXPLAIN ANALYZE now reports the number of rows rejected by filter steps (Marko Tiikkaja).

Backward compatibility

These changes may incur regressions in your applications.

Ensure that xpath() escapes special characters in string values

Before 9.2:

SELECT (XPATH('/*/text()', '<root><</root>'))[1];
 xpath 
-------
 <

A bare '<' isn't valid XML.

With 9.2:

SELECT (XPATH('/*/text()', '<root><</root>'))[1];
 xpath 
-------
 &lt;

Remove hstore's => operator

Up to 9.1, one could use the => operator to create an hstore. hstore is a contrib module, providing a datatype used to store key/value pairs in a column.

In 9.1:

=# SELECT 'a'=>'b';
 ?column? 
----------
 "a"=>"b"
(1 row)

=# SELECT pg_typeof('a'=>'b');
 pg_typeof 
-----------
 hstore
(1 row)

With 9.2:

SELECT 'a'=>'b';
ERROR:  operator does not exist: unknown => unknown
LINE 1: SELECT 'a'=>'b';
                  ^
HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.

This doesn't mean one cannot use '=>' in hstore values; it just isn't an operator anymore:

=# select hstore('a=>b');
  hstore  
----------
 "a"=>"b"
(1 row)

=# select hstore('a','b');
  hstore  
----------
 "a"=>"b"
(1 row)

are still two valid ways to input an hstore.

"=>" is removed as an operator as it is a reserved keyword in SQL.


Have pg_relation_size() and friends return NULL if the object does not exist

A relation could be dropped by a concurrent session while another session was running pg_relation_size() on it, leading to an SQL error. Now the function merely returns NULL for that relation.
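As a sketch, passing an OID that matches no relation now simply yields NULL instead of an error (999999 is assumed here to be an unused OID):

 SELECT pg_relation_size(999999);   -- 9.1: ERROR, 9.2: NULL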


Remove the spclocation field from pg_tablespace

The spclocation field recorded the location of the tablespace, as given in the CREATE or ALTER TABLESPACE command. So it could be wrong: somebody just had to shut down the cluster, move the tablespace's directory, re-create the symlink in pg_tblspc, and forget to update the spclocation field. The cluster would still run, as spclocation wasn't actually used.

So this field has been removed. To get the tablespace's location, use pg_tablespace_location():

=# select *, pg_tablespace_location(oid) as spclocation from pg_tablespace;
  spcname   | spcowner | spcacl | spcoptions |  spclocation   
------------+----------+--------+------------+----------------
 pg_default |       10 |        |            | 
 pg_global  |       10 |        |            | 
 tmptblspc  |       10 |        |            | /tmp/tmptblspc

Have EXTRACT of a non-timezone-aware value measure the epoch from local midnight, not UTC midnight

With PostgreSQL 9.1:

=#SELECT extract(epoch from '2012-07-02 00:00:00'::timestamp);
 date_part  
------------
 1341180000
(1 row)

=# SELECT extract(epoch from '2012-07-02 00:00:00'::timestamptz);
 date_part  
------------
 1341180000
(1 row)

There is no difference in behaviour between a timestamp with or without time zone.

With 9.2:

=#SELECT extract(epoch from '2012-07-02 00:00:00'::timestamp);
 date_part  
------------
 1341187200
(1 row)

=# SELECT extract(epoch from '2012-07-02 00:00:00'::timestamptz);
 date_part  
------------
 1341180000
(1 row)

When the timestamp has no time zone, the epoch is now calculated relative to "local midnight", meaning January 1st, 1970 at midnight, local time.


Fix to_date() and to_timestamp() to wrap incomplete dates toward 2020

The wrapping was not consistent between 2-digit and 3-digit years: 2-digit years always chose the year closest to 2020, while 3-digit years mapped 100 to 999 onto 1100 to 1999, and 000 to 099 onto 2000 to 2099.

Now PostgreSQL chooses the year closest to 2020 for both 2-digit and 3-digit years.

With 9.1:

=# SELECT to_date('200-07-02','YYY-MM-DD');
  to_date   
------------
 1200-07-02

With 9.2:

SELECT to_date('200-07-02','YYY-MM-DD');
  to_date   
------------
 2200-07-02

pg_stat_activity's definition has changed

The view pg_stat_activity has changed. It's not backward compatible, but let's see what this new definition brings us:

  • current_query disappears and is replaced by two columns:
    • state: what the session is currently doing (running a query, idle, idle in transaction, ...)
    • query: the last run (or still running) query
  • The column procpid is renamed to pid, to be consistent with other system views

The benefit is mostly for tracking "idle in transaction" sessions. Until now, all we could know was that such a session was idle in transaction, meaning it had started a transaction, maybe done some operations, but not yet committed. If the session stayed in this state for a while, there was no way of knowing how it got into it.

Here is an example:

-[ RECORD 1 ]----+---------------------------------
datid            | 16384
datname          | postgres
pid              | 20804
usesysid         | 10
usename          | postgres
application_name | psql
client_addr      | 
client_hostname  | 
client_port      | -1
backend_start    | 2012-07-02 15:02:51.146427+02
xact_start       | 2012-07-02 15:15:28.386865+02
query_start      | 2012-07-02 15:15:30.410834+02
state_change     | 2012-07-02 15:15:30.411287+02
waiting          | f
state            | idle in transaction
query            | DELETE FROM test;

With PostgreSQL 9.1, all we would have seen is "idle in transaction".

As this change was backward-incompatible anyway, the opportunity was taken to rename procpid to pid, for consistency with other system views.

Change all SQL-level statistics timing values to float8-stored milliseconds

pg_stat_user_functions.total_time, pg_stat_user_functions.self_time, pg_stat_xact_user_functions.total_time, pg_stat_xact_user_functions.self_time, and pg_stat_statements.total_time (contrib) are now in milliseconds, to be consistent with the rest of the timing values.

postgresql.conf parameters changes

  • silent_mode has been removed. Use pg_ctl -l postmaster.log instead
  • wal_sender_delay has been removed. It is no longer needed
  • custom_variable_classes has been removed. All "classes" are now accepted without declaration
  • ssl_ca_file, ssl_cert_file, ssl_crl_file and ssl_key_file have been added, so the locations of the SSL files can now be configured