9.1 Reference (2)
clusterdb — cluster a PostgreSQL database
Name
clusterdb -- cluster a PostgreSQL database
Synopsis
clusterdb [connection-option...] [--verbose | -v] [--table | -t table] [dbname]
clusterdb [connection-option...] [--verbose | -v] [--all | -a]
Description
clusterdb is a utility for reclustering tables in a PostgreSQL database. It finds tables that have previously been clustered, and clusters them again on the same index that was last used. Tables that have never been clustered are not affected.
clusterdb is a wrapper around the SQL command CLUSTER. There is no effective difference between clustering databases via this utility and via other methods for accessing the server.
Options
clusterdb accepts the following command-line arguments:
-a
--all
Cluster all databases.
[-d] dbname
[--dbname=]dbname
Specifies the name of the database to be clustered. If this is not specified and -a (or --all) is not used, the database name is read from the environment variable PGDATABASE. If that is not set, the user name specified for the connection is used.
-e
--echo
Echo the commands that clusterdb generates and sends to the server.
-q
--quiet
Do not display progress messages.
-t table
--table=table
Cluster the named table only.
-v
--verbose
Print detailed information while processing.
-V
--version
Print the clusterdb version and exit.
-?
--help
Show help about clusterdb command line arguments, and exit.
clusterdb also accepts the following command-line arguments for connection parameters:
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-U username
--username=username
User name to connect as.
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W
--password
Force clusterdb to prompt for a password before connecting to a database.
Environment
PGDATABASE PGHOST PGPORT PGUSER
Default connection parameters.
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
In case of difficulty, see CLUSTER and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Examples
To cluster the database test:
$ clusterdb test
To cluster a single table foo in a database named xyzzy:
$ clusterdb --table foo xyzzy
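As a further sketch (the host name and port here are assumptions chosen for the example), the connection options and --verbose can be combined on one command line:
$ clusterdb -h eden -p 5000 --verbose --table foo xyzzy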
createdb — create a new PostgreSQL database
Name
createdb -- create a new PostgreSQL database
Synopsis
createdb [connection-option...] [option...] [dbname] [description]
Description
createdb creates a new PostgreSQL database.
Normally, the database user who executes this command becomes the owner of the new database. However, a different owner can be specified via the -O option, if the executing user has appropriate privileges.
createdb is a wrapper around the SQL command CREATE DATABASE. There is no effective difference between creating databases via this utility and via other methods for accessing the server.
Options
createdb accepts the following command-line arguments:
dbname
Specifies the name of the database to be created. The name must be unique among all PostgreSQL databases in this installation. The default is to create a database with the same name as the current system user.
description
Specifies a comment to be associated with the newly created database.
-D tablespace
--tablespace=tablespace
Specifies the default tablespace for the database.
-e
--echo
Echo the commands that createdb generates and sends to the server.
-E encoding
--encoding=encoding
Specifies the character encoding scheme to be used in this database. The character sets supported by the PostgreSQL server are described in Section 22.3.1.
-l locale
--locale=locale
Specifies the locale to be used in this database. This is equivalent to specifying both --lc-collate and --lc-ctype.
--lc-collate=locale
Specifies the LC_COLLATE setting to be used in this database.
--lc-ctype=locale
Specifies the LC_CTYPE setting to be used in this database.
-O owner
--owner=owner
Specifies the database user who will own the new database.
-T template
--template=template
Specifies the template database from which to build this database.
-V --version
Print the createdb version and exit.
-?
--help
Show help about createdb command line arguments, and exit.
The options -h, -p, -U, -W, and -e are passed on literally to psql. The options -O, -D, -T, and -E are converted into options for the underlying SQL command CREATE DATABASE; see that command for more information.
createdb also accepts the following command-line arguments for connection parameters:
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-U username
--username=username
User name to connect as.
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W
--password
Force createdb to prompt for a password before connecting to a database.
Environment
PGDATABASE
If set, the name of the database to create, unless overridden on the command line.
PGHOST
PGPORT
PGUSER
Default connection parameters. PGUSER also determines the name of the database to create, if it is not specified on the command line or by PGDATABASE.
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
In case of difficulty, see CREATE DATABASE and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Examples
To create the database demo using the default database server:
$ createdb demo
To create the database demo using the server on host eden, port 5000, with the LATIN1 encoding scheme, and showing the underlying command:
$ createdb -p 5000 -h eden -E LATIN1 -e demo
CREATE DATABASE demo ENCODING 'LATIN1';
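As an additional sketch (the owner name admin and database name demo2 are hypothetical), a database can be created from template0 with an explicit owner and a comment:
$ createdb -T template0 -O admin demo2 "demonstration database"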
createlang — define a new PostgreSQL procedural language
Name
createlang -- define a new PostgreSQL procedural language
Synopsis
createlang [connection-option...] langname [dbname]
createlang [connection-option...] --list | -l dbname
Description
createlang is a utility for adding a new programming language to a PostgreSQL database. createlang can handle all the languages supplied in the default PostgreSQL distribution, but not languages provided by other parties.
Although backend programming languages can be added directly using several SQL commands, it is recommended to use createlang because it performs a number of checks and is much easier to use. See CREATE LANGUAGE for additional information.
Caution
createlang is deprecated and may be removed in a future PostgreSQL release. Direct use of the CREATE EXTENSION command is recommended instead.
Options
createlang accepts the following command-line arguments:
langname
Specifies the name of the procedural programming language to be defined.
[-d] dbname
[--dbname=]dbname
Specifies the database to which the language should be added. The default is to use the database with the same name as the current system user.
-e
--echo
Display SQL commands as they are executed.
-l
--list
Show a list of already installed languages in the target database.
-V
--version
Print the createlang version and exit.
-? --help
Show help about createlang command line arguments, and exit.
createlang also accepts the following command-line arguments for connection parameters:
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-U username
--username=username
User name to connect as.
-w --no-password
Never issue a password prompt.
-W --password
Force a password prompt.
Environment
PGDATABASE
PGHOST
PGPORT
PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
Most error messages are self-explanatory. If not, run createlang with the --echo option and see under the respective SQL command for details.
Notes
Use droplang to remove a language.
Examples
To install the language pltcl into the database template1:
$ createlang pltcl template1
Note that installing the language into template1 will cause it to be automatically installed into subsequently-created databases as well.
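To verify the result, the --list form shown in the synopsis can be used against the same database:
$ createlang -l template1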
createuser — define a new PostgreSQL user account
Name
createuser -- define a new PostgreSQL user account
Synopsis
createuser [connection-option...] [option...] [username]
Description
createuser creates a new PostgreSQL user. Only superusers and users with CREATEROLE privilege can create new PostgreSQL users, so createuser must be invoked by someone who can connect as a PostgreSQL superuser or as a user with CREATEROLE privilege.
Being a superuser also implies the ability to bypass access permission checks within the database, so superuser status should not be granted lightly.
createuser is a wrapper around the SQL command CREATE ROLE. There is no effective difference between creating users via this utility and via other methods for accessing the server.
Options
createuser accepts the following command-line arguments:
username
Specifies the name of the PostgreSQL user to be created. This name must be unique within this PostgreSQL installation.
-c number
--connection-limit=number
Set a maximum number of connections for the new user. The default is to set no limit.
-d
--createdb
The new user will be allowed to create databases.
-D
--no-createdb
The new user will not be allowed to create databases.
-e
--echo
Echo the commands that createuser generates and sends to the server.
-E
--encrypted
Encrypts the user's password stored in the database. If not specified, the default password behavior is used.
-i
--inherit
The new role will automatically inherit privileges of roles it is a member of. This is the default.
-I
--no-inherit
The new role will not automatically inherit privileges of roles it is a member of.
-l
--login
The new user will be allowed to log in. This is the default.
-L
--no-login
The new user will not be allowed to log in.
-N
--unencrypted
Does not encrypt the user's password stored in the database. If not specified, the default password behavior is used.
-P
--pwprompt
If given, createuser will issue a prompt for the password of the new user. This is not necessary if you do not plan on using password authentication.
-r
--createrole
The new user will be allowed to create new roles (that is, this user will have CREATEROLE privilege).
-R
--no-createrole
The new user will not be allowed to create new roles.
-s
--superuser
The new user will be a superuser.
-S
--no-superuser
The new user will not be a superuser.
-V
--version
Print the createuser version and exit.
-?
--help
Show help about createuser command line arguments, and exit.
You will be prompted for a name and other missing information if it is not specified on the command line.
createuser also accepts the following command-line arguments for connection parameters:
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-U username
--username=username
User name to connect as (not the user name to create).
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W
--password
Force createuser to prompt for a password (for connecting to the server, not for the password of the new user).
This option is never essential, since createuser will automatically prompt for a password if the server demands password authentication. However, createuser will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Environment
PGHOST
PGPORT
PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
In case of difficulty, see CREATE ROLE and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Examples
To create a user joe on the default database server:
$ createuser joe
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
To create the same user joe using the server on host eden, port 5000, avoiding the prompts and taking a look at the underlying command:
$ createuser -h eden -p 5000 -S -D -R -e joe
CREATE ROLE joe NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;
To create the user joe as a superuser, and assign a password immediately:
$ createuser -P -s -e joe
Enter password for new role: xyzzy
Enter it again: xyzzy
CREATE ROLE joe PASSWORD 'md5b5f5ba1a423792b526f799ae4eb3d59e' SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;
In the above example, the new password isn't actually echoed when typed, but we show what was typed for clarity. As you see, the password is encrypted before it is sent to the server. If the option --unencrypted is used, the password will appear in the echoed command (and possibly also in the server log and elsewhere), so you don't want to use -e in that case, if anyone else can see your screen.
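For unattended scripts, the connection and role options described above can be combined so that no prompts at all are issued; a minimal sketch with a hypothetical role name batch_user:
$ createuser -w -S -D -R -c 10 batch_user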
dropdb — remove a PostgreSQL database
Name
dropdb -- remove a PostgreSQL database
Synopsis
dropdb [connection-option...] [option...] dbname
Description
dropdb destroys an existing PostgreSQL database. The user who executes this command must be a database superuser or the owner of the database.
dropdb is a wrapper around the SQL command DROP DATABASE. There is no effective difference between dropping databases via this utility and via other methods for accessing the server.
Options
dropdb accepts the following command-line arguments:
dbname
Specifies the name of the database to be removed.
-e --echo
Echo the commands that dropdb generates and sends to the server.
-i --interactive
Issues a verification prompt before doing anything destructive.
-V --version
Print the dropdb version and exit.
-? --help
Show help about dropdb command line arguments, and exit.
dropdb also accepts the following command-line arguments for connection parameters:
-h host --host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-p port --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-U username --username=username
User name to connect as.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force dropdb to prompt for a password before connecting to a database.
This option is never essential, since dropdb will automatically prompt for a password if the server demands password authentication. However, dropdb will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Environment
PGHOST PGPORT PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
In case of difficulty, see DROP DATABASE and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Examples
To destroy the database demo on the default database server:
$ dropdb demo
To destroy the database demo using the server on host eden, port 5000, with verification and a peek at the underlying command:
$ dropdb -p 5000 -h eden -i -e demo
Database "demo" will be permanently deleted.
Are you sure? (y/n) y
DROP DATABASE demo;
droplang — remove a PostgreSQL procedural language
Name
droplang -- remove a PostgreSQL procedural language
Synopsis
droplang [connection-option...] langname [dbname]
droplang [connection-option...] --list | -l dbname
Description
droplang is a utility for removing an existing procedural language from a PostgreSQL database.
droplang is just a wrapper around the DROP EXTENSION SQL command.
Caution
droplang is deprecated and may be removed in a future PostgreSQL release. Direct use of the DROP EXTENSION command is recommended instead.
Options
droplang accepts the following command line arguments:
langname
Specifies the name of the procedural language to be removed.
[-d] dbname [--dbname=]dbname
Specifies from which database the language should be removed. The default is to use the database with the same name as the current system user.
-e --echo
Display SQL commands as they are executed.
-l --list
Show a list of already installed languages in the target database.
-V --version
Print the droplang version and exit.
-? --help
Show help about droplang command line arguments, and exit.
droplang also accepts the following command line arguments for connection parameters:
-h host --host=host
Specifies the host name of the machine on which the server is running. If host begins with a slash, it is used as the directory for the Unix domain socket.
-p port --port=port
Specifies the Internet TCP/IP port or local Unix domain socket file extension on which the server is listening for connections.
-U username --username=username
User name to connect as.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force droplang to prompt for a password before connecting to a database.
This option is never essential, since droplang will automatically prompt for a password if the server demands password authentication. However, droplang will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Environment
PGDATABASE PGHOST PGPORT PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
Most error messages are self-explanatory. If not, run droplang with the --echo option and see under the respective SQL command for details. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Notes
Use createlang to add a language.
Examples
To remove the language pltcl:
$ droplang pltcl dbname
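To see the SQL that droplang actually runs (dbname here again stands for a database of your choice), add --echo:
$ droplang --echo pltcl dbname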
dropuser — remove a PostgreSQL user account
Name
dropuser -- remove a PostgreSQL user account
Synopsis
dropuser [connection-option...] [option...] [username]
Description
dropuser removes an existing PostgreSQL user. Only superusers and users with the CREATEROLE privilege can remove PostgreSQL users. (To remove a superuser, you must yourself be a superuser.)
dropuser is a wrapper around the SQL command DROP ROLE. There is no effective difference between dropping users via this utility and via other methods for accessing the server.
Options
dropuser accepts the following command-line arguments:
username
Specifies the name of the PostgreSQL user to be removed. You will be prompted for a name if none is specified on the command line.
-e --echo
Echo the commands that dropuser generates and sends to the server.
-i --interactive
Prompt for confirmation before actually removing the user.
-V --version
Print the dropuser version and exit.
-? --help
Show help about dropuser command line arguments, and exit.
dropuser also accepts the following command-line arguments for connection parameters:
-h host --host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-p port --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-U username --username=username
User name to connect as (not the user name to drop).
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force dropuser to prompt for a password before connecting to a database.
This option is never essential, since dropuser will automatically prompt for a password if the server demands password authentication. However, dropuser will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Environment
PGHOST PGPORT PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
In case of difficulty, see DROP ROLE and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Examples
To remove user joe from the default database server:
$ dropuser joe
To remove user joe using the server on host eden, port 5000, with verification and a peek at the underlying command:
$ dropuser -p 5000 -h eden -i -e joe
Role "joe" will be permanently removed.
Are you sure? (y/n) y
DROP ROLE joe;
ecpg — embedded SQL C preprocessor
Name
ecpg -- embedded SQL C preprocessor
Synopsis
ecpg [option...] file...
Description
ecpg is the embedded SQL preprocessor for C programs. It converts C programs with embedded SQL statements to normal C code by replacing the SQL invocations with special function calls. The output files can then be processed with any C compiler tool chain.
ecpg will convert each input file given on the command line to the corresponding C output file. Input files preferably have the extension .pgc, in which case the extension will be replaced by .c to determine the output file name. If the extension of the input file is not .pgc, then the output file name is computed by appending .c to the full file name. The output file name can also be overridden using the -o option.
This reference page does not describe the embedded SQL language. See Chapter 33 for more information on that topic.
Options
ecpg accepts the following command-line arguments:
-c
Automatically generate certain C code from SQL code. Currently, this works for EXEC SQL TYPE.
-C mode
Set a compatibility mode. mode can be INFORMIX or INFORMIX_SE.
-D symbol
Define a C preprocessor symbol.
-i
Parse system include files as well.
-I directory
Specify an additional include path, used to find files included via EXEC SQL INCLUDE. Defaults are . (current directory), /usr/local/include, the PostgreSQL include directory which is defined at compile time (default: /usr/local/pgsql/include), and /usr/include, in that order.
-o filename
Specifies that ecpg should write all its output to the given filename.
-r option
Selects run-time behavior. option can be one of the following:
no_indicator
Do not use indicators but instead use special values to represent null values. Historically there have been databases using this approach.
prepare
Prepare all statements before using them. Libecpg will keep a cache of prepared statements and reuse a statement if it gets executed again. If the cache runs full, libecpg will free the least used statement.
questionmarks
Allow question mark as placeholder for compatibility reasons. This used to be the default long ago.
-t
Turn on autocommit of transactions. In this mode, each SQL command is automatically committed unless it is inside an explicit transaction block. In the default mode, commands are committed only when EXEC SQL COMMIT is issued.
-v
Print additional information including the version and the "include" path.
--version
Print the ecpg version and exit.
--help
Show help about ecpg command line arguments, and exit.
Notes
When compiling the preprocessed C code files, the compiler needs to be able to find the ECPG header files in the PostgreSQL include directory. Therefore, you might have to use the -I option when invoking the compiler (e.g., -I/usr/local/pgsql/include).
Programs using C code with embedded SQL have to be linked against the libecpg library, for example using the linker options -L/usr/local/pgsql/lib -lecpg.
The value of either of these directories that is appropriate for the installation can be found out using pg_config.
Examples
If you have an embedded SQL C source file named prog1.pgc, you can create an executable program using the following sequence of commands:
ecpg prog1.pgc
cc -I/usr/local/pgsql/include -c prog1.c
cc -o prog1 prog1.o -L/usr/local/pgsql/lib -lecpg
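As the Notes above suggest, pg_config can be used instead of hard-coding the installation paths; a sketch of the same build using command substitution (assuming a POSIX shell):
ecpg prog1.pgc
cc -I$(pg_config --includedir) -c prog1.c
cc -o prog1 prog1.o -L$(pg_config --libdir) -lecpg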
pg_basebackup — take a base backup of a PostgreSQL cluster
Name
pg_basebackup -- take a base backup of a PostgreSQL cluster
Synopsis
pg_basebackup [option...]
Description
pg_basebackup is used to take base backups of a running PostgreSQL database cluster. These are taken without affecting other clients to the database, and can be used both for point-in-time recovery (see Section 24.3) and as the starting point for a log-shipping or streaming-replication standby server (see Section 25.2).
pg_basebackup makes a binary copy of the database cluster files, while making sure the system is put in and out of backup mode automatically. Backups are always taken of the entire database cluster; it is not possible to back up individual databases or database objects. For individual database backups, a tool such as pg_dump must be used.
The backup is made over a regular PostgreSQL connection, and uses the replication protocol. The connection must be made with a user having REPLICATION permissions (see Section 20.2), and the user must be granted explicit permissions in pg_hba.conf. The server must also be configured with max_wal_senders set high enough to leave at least one session available for the backup.
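A minimal sketch of the server-side settings this paragraph refers to (the role name backupuser and the client network are assumptions chosen for the example):
# postgresql.conf: leave enough walsender sessions available for the backup
max_wal_senders = 2
# pg_hba.conf: allow backupuser to use the replication protocol from the given network
host    replication    backupuser    192.168.0.0/24    md5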
There can be multiple pg_basebackups running at the same time, but it is better from a performance point of view to take only one backup, and copy the result.
Options
The following command-line options control the location and format of the output.
-D directory --pgdata=directory
Directory to write the output to.
When the backup is in tar mode, and the directory is specified as - (dash), the tar file will be written to stdout.
This parameter is required.
-F format --format=format
Selects the format for the output. format can be one of the following:
p plain
Write the output as plain files, with the same layout as the current data directory and tablespaces. When the cluster has no additional tablespaces, the whole database will be placed in the target directory. If the cluster contains additional tablespaces, the main data directory will be placed in the target directory, but all other tablespaces will be placed in the same absolute path as they have on the server.
This is the default format.
t tar
Write the output as tar files in the target directory. The main data directory will be written to a file named base.tar, and all other tablespaces will be named after the tablespace OID.
If the value - (dash) is specified as target directory, the tar contents will be written to standard output, suitable for piping to for example gzip. This is only possible if the cluster has no additional tablespaces.
-x --xlog
Includes the required transaction log files (WAL files) in the backup. This will include all transaction logs generated during the backup. If this option is specified, it is possible to start a postmaster directly in the extracted directory without the need to consult the log archive, thus making this a completely standalone backup.
Note: The transaction log files are collected at the end of the backup. Therefore, it is necessary for the wal_keep_segments parameter to be set high enough that the log is not removed before the end of the backup. If the log has been rotated when it's time to transfer it, the backup will fail and be unusable.
-z --gzip
Enables gzip compression of tar file output, with the default compression level. Compression is only available when using the tar format.
-Z level --compress=level
Enables gzip compression of tar file output, and specifies the compression level (1 through 9, 9 being best compression). Compression is only available when using the tar format.
The following command-line options control the generation of the backup and the running of the program.
-c fast|spread --checkpoint=fast|spread
Sets checkpoint mode to fast or spread (default).
-l label --label=label
Sets the label for the backup. If none is specified, a default value of pg_basebackup base backup will be used.
-P --progress
Enables progress reporting. Turning this on will deliver an approximate progress report during the backup. Since the database may change during the backup, this is only an approximation and may not end at exactly 100%. In particular, when WAL log is included in the backup, the total amount of data cannot be estimated in advance, and in this case the estimated target size will increase once it passes the total estimate without WAL.
When this is enabled, the backup will start by enumerating the size of the entire database, and then go back and send the actual contents. This may make the backup take slightly longer, and in particular it will take longer before the first data is sent.
-v --verbose
Enables verbose mode. Will output some extra steps during startup and shutdown, as well as show the exact file name that is currently being processed if progress reporting is also enabled.
The following command-line options control the database connection parameters.
-h host --host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST environment variable, if set, else a Unix domain socket connection is attempted.
-p port --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-U username --username=username
User name to connect as.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force pg_basebackup to prompt for a password before connecting to a database.
This option is never essential, since pg_basebackup will automatically prompt for a password if the server demands password authentication. However, pg_basebackup will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Other, less commonly used, parameters are also available:
-V --version
Print the pg_basebackup version and exit.
-? --help
Show help about pg_basebackup command line arguments, and exit.
Environment
This utility, like most other PostgreSQL utilities, uses the environment variables supported by libpq (see Section 31.13).
Notes
The backup will include all files in the data directory and tablespaces, including the configuration files and any additional files placed in the directory by third parties. Only regular files and directories are allowed in the data directory, no symbolic links or special device files.
The way PostgreSQL manages tablespaces, the path for all additional tablespaces must be identical whenever a backup is restored. The main data directory, however, is relocatable to any location.
Examples
To create a base backup of the server at mydbserver and store it in the local directory /usr/local/pgsql/data:
$ pg_basebackup -h mydbserver -D /usr/local/pgsql/data
To create a backup of the local server with one compressed tar file for each tablespace, and store it in the directory backup, showing a progress report while running:
$ pg_basebackup -D backup -Ft -z -P
To create a backup of a single-tablespace local database and compress this with bzip2:
$ pg_basebackup -D - -Ft | bzip2 > backup.tar.bz2
(This command will fail if there are multiple tablespaces in the database.)
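Another sketch combining options described above: a plain-format copy of the remote server that also includes the required WAL files (-x), so the result can be started stand-alone; the target directory is illustrative:
$ pg_basebackup -h mydbserver -D /usr/local/pgsql/standalone -x -P -v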
pg_config — retrieve information about the installed version of PostgreSQL
Name
pg_config -- retrieve information about the installed version of PostgreSQL
Synopsis
pg_config [option...]
Description
The pg_config utility prints configuration parameters of the currently installed version of PostgreSQL. It is intended, for example, to be used by software packages that want to interface to PostgreSQL to facilitate finding the required header files and libraries.
Options
To use pg_config, supply one or more of the following options:
--bindir
Print the location of user executables. Use this, for example, to find the psql program. This is normally also the location where the pg_config program resides.
--docdir
Print the location of documentation files.
--htmldir
Print the location of HTML documentation files.
--includedir
Print the location of C header files of the client interfaces.
--pkgincludedir
Print the location of other C header files.
--includedir-server
Print the location of C header files for server programming.
--libdir
Print the location of object code libraries.
--pkglibdir
Print the location of dynamically loadable modules, or where the server would search for them. (Other architecture-dependent data files might also be installed in this directory.)
--localedir
Print the location of locale support files. (This will be an empty string if locale support was not configured when PostgreSQL was built.)
--mandir
Print the location of manual pages.
--sharedir
Print the location of architecture-independent support files.
--sysconfdir
Print the location of system-wide configuration files.
--pgxs
Print the location of extension makefiles.
--configure
Print the options that were given to the configure script when PostgreSQL was configured for building. This can be used to reproduce the identical configuration, or to find out with what options a binary package was built. (Note however that binary packages often contain vendor-specific custom patches.) See also the examples below.
--cc
Print the value of the CC variable that was used for building PostgreSQL. This shows the C compiler used.
--cppflags
Print the value of the CPPFLAGS variable that was used for building PostgreSQL. This shows C compiler switches needed at preprocessing time (typically, -I switches).
--cflags
Print the value of the CFLAGS variable that was used for building PostgreSQL. This shows C compiler switches.
--cflags_sl
Print the value of the CFLAGS_SL variable that was used for building PostgreSQL. This shows extra C compiler switches used for building shared libraries.
--ldflags
Print the value of the LDFLAGS variable that was used for building PostgreSQL. This shows linker switches.
--ldflags_ex
Print the value of the LDFLAGS_EX variable that was used for building PostgreSQL. This shows linker switches used for building executables only.
--ldflags_sl
Print the value of the LDFLAGS_SL variable that was used for building PostgreSQL. This shows linker switches used for building shared libraries only.
--libs
Print the value of the LIBS variable that was used for building PostgreSQL. This normally contains -l switches for external libraries linked into PostgreSQL.
--version
Print the version of PostgreSQL.
If more than one option is given, the information is printed in that order, one item per line. If no options are given, all available information is printed, with labels.
Notes
The option --includedir-server was added in PostgreSQL 7.2. In prior releases, the server include files were installed in the same location as the client headers, which could be queried with the option --includedir. To make your package handle both cases, try the newer option first and test the exit status to see whether it succeeded.
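A minimal shell sketch of that fallback, assuming a POSIX shell (the variable name INCDIR is arbitrary):
INCDIR=$(pg_config --includedir-server 2>/dev/null) || INCDIR=$(pg_config --includedir)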
The options --docdir, --pkgincludedir, --localedir, --mandir, --sharedir, --sysconfdir, --cc, --cppflags, --cflags, --cflags_sl, --ldflags, --ldflags_sl, and --libs were added in PostgreSQL 8.1. The option --htmldir was added in PostgreSQL 8.4. The option --ldflags_ex was added in PostgreSQL 9.0.
In releases prior to PostgreSQL 7.1, before pg_config came to be, a method for finding the equivalent configuration information did not exist.
Example
To reproduce the build configuration of the current PostgreSQL installation, run the following command:
eval ./configure `pg_config --configure`
The output of pg_config --configure contains shell quotation marks so arguments with spaces are represented correctly. Therefore, using eval is required for proper results.
pg_dump — extract a PostgreSQL database into a script file or other archive file
Name
pg_dump -- extract a PostgreSQL database into a script file or other archive file
Synopsis
pg_dump [connection-option...] [option...] [dbname]
Description
pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers).
Dumps can be output in script or archive file formats. Script dumps are plain-text files containing the SQL commands required to reconstruct the database to the state it was in at the time it was saved. To restore from such a script, feed it to psql. Script files can be used to reconstruct the database even on other machines and other architectures; with some modifications, even on other SQL database products.
The alternative archive file formats must be used with pg_restore to rebuild the database. They allow pg_restore to be selective about what is restored, or even to reorder the items prior to being restored. The archive file formats are designed to be portable across architectures.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to back up an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file format is the "custom" format (-Fc). It allows for selection and reordering of all archived items, and is compressed by default.
While running pg_dump, one should examine the output for any warnings (printed on standard error), especially in light of the limitations listed below.
Options
The following command-line options control the content and format of the output.
dbname
Specifies the name of the database to be dumped. If this is not specified, the environment variable PGDATABASE is used. If that is not set, the user name specified for the connection is used.
-a --data-only
Dump only the data, not the schema (data definitions).
This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.
-b --blobs
Include large objects in the dump. This is the default behavior except when --schema, --table, or --schema-only is specified, so the -b switch is only useful to add large objects to selective dumps.
-c --clean
Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Restore might generate some harmless errors.)
This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.
-C --create
Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database you connect to before running the script.)
This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.
-E encoding --encoding=encoding
Create the dump in the specified character set encoding. By default, the dump is created in the database encoding. (Another way to get the same result is to set the PGCLIENTENCODING environment variable to the desired dump encoding.)
-f file --file=file
Send output to the specified file. This parameter can be omitted for file based output formats, in which case the standard output is used. It must be given for the directory output format however, where it specifies the target directory instead of a file. In this case the directory is created by pg_dump and must not exist before.
-F format --format=format
Selects the format of the output. format can be one of the following:
p plain
Output a plain-text SQL script file (the default).
c custom
Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.
d directory
Output a directory-format archive suitable for input into pg_restore. This will create a directory with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore can read. A directory format archive can be manipulated with standard Unix tools; for example, files in an uncompressed archive can be compressed with the gzip tool. This format is compressed by default.
t tar
Output a tar-format archive suitable for input into pg_restore. The tar-format is compatible with the directory-format; extracting a tar-format archive produces a valid directory-format archive. However, the tar-format does not support compression and has a limit of 8 GB on the size of individual tables. Also, the relative order of table data items cannot be changed during restore.
-i --ignore-version
A deprecated option that is now ignored.
-n schema --schema=schema
Dump only schemas matching schema; this selects both the schema itself, and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by writing multiple -n switches. Also, the schema parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see Patterns), so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see Examples.
Note: When -n is specified, pg_dump makes no attempt to dump any other database objects that the selected schema(s) might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully restored by themselves into a clean database.
Note: Non-schema objects such as blobs are not dumped when -n is specified. You can add blobs back to the dump with the --blobs switch.
-N schema --exclude-schema=schema
Do not dump any schemas matching the schema pattern. The pattern is interpreted according to the same rules as for -n. -N can be given more than once to exclude schemas matching any of several patterns.
When both -n and -N are given, the behavior is to dump just the schemas that match at least one -n switch but no -N switches. If -N appears without -n, then schemas matching -N are excluded from what is otherwise a normal dump.
-o --oids
Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application references the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used.
-O --no-owner
Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O.
This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.
-R --no-reconnect
This option is obsolete but still accepted for backwards compatibility.
-s --schema-only
Dump only the object definitions (schema), not data.
-S username --superuser=username
Specify the superuser user name to use when disabling triggers. This is only relevant if --disable-triggers is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.)
-t table --table=table
Dump only tables (or views or sequences or foreign tables) matching table. Multiple tables can be selected by writing multiple -t switches. Also, the table parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see Patterns), so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see Examples.
The -n and -N switches have no effect when -t is used, because tables selected by -t will be dumped regardless of those switches, and non-table objects will not be dumped.
Note: When -t is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.
Note: The behavior of the -t switch is not entirely upward compatible with pre-8.2 PostgreSQL versions. Formerly, writing -t tab would dump all tables named tab, but now it just dumps whichever one is visible in your default search path. To get the old behavior you can write -t '*.tab'. Also, you must write something like -t sch.tab to select a table in a particular schema, rather than the old locution of -n sch -t tab.
-T table --exclude-table=table
Do not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for -t. -T can be given more than once to exclude tables matching any of several patterns.
When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump.
-v --verbose
Specifies verbose mode. This will cause pg_dump to output detailed object comments and start/stop times to the dump file, and progress messages to standard error.
-V --version
Print the pg_dump version and exit.
-x --no-privileges --no-acl
Prevent dumping of access privileges (grant/revoke commands).
-Z 0..9 --compress=0..9
Specify the compression level to use. Zero means no compression. For the custom archive format, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all.
--binary-upgrade
This option is for use by in-place upgrade utilities. Its use for other purposes is not recommended or supported. The behavior of the option may change in future releases without notice.
--column-inserts --attribute-inserts
Dump data as INSERT commands with explicit column names (INSERT INTO table (column, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.
--disable-dollar-quoting
This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.
--disable-triggers
This option is only relevant when creating a data-only dump. It instructs pg_dump to include commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data reload.
Presently, the commands emitted for --disable-triggers must be done as superuser. So, you should also specify a superuser name with -S, or preferably be careful to start the resulting script as a superuser.
This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore might fail altogether if you have rearranged column order. The --column-inserts option is safe against column order changes, though even slower.
--lock-wait-timeout=timeout
Do not wait forever to acquire shared table locks at the beginning of the dump. Instead fail if unable to lock a table within the specified timeout. The timeout may be specified in any of the formats accepted by SET statement_timeout. (Allowed values vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions since 7.3. This option is ignored when dumping from a pre-7.3 server.)
--no-security-labels
Do not dump security labels.
--no-tablespaces
Do not output commands to select tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore.
This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.
--no-unlogged-table-data
Do not dump the contents of unlogged tables. This option has no effect on whether or not the table definitions (schema) are dumped; it only suppresses dumping the table data. Data in unlogged tables is always excluded when dumping from a standby server.
--quote-all-identifiers
Force quoting of all identifiers. This may be useful when dumping a database for migration to a future version that may have introduced additional keywords.
--serializable-deferrable
Use a serializable transaction for the dump, to ensure that the snapshot used is consistent with later database states; but do this by waiting for a point in the transaction stream at which no anomalies can be present, so that there isn't a risk of the dump failing or causing other transactions to roll back with a serialization_failure. See Chapter 13 for more information about transaction isolation and concurrency control.
This option is not beneficial for a dump which is intended only for disaster recovery. It could be useful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database continues to be updated. Without it the dump may reflect a state which is not consistent with any serial execution of the transactions eventually committed. For example, if batch processing techniques are used, a batch may show as closed in the dump without all of the items which are in the batch appearing.
This option will make no difference if there are no read-write transactions active when pg_dump is started. If read-write transactions are active, the start of the dump may be delayed for an indeterminate length of time. Once running, performance with or without the switch is the same.
--use-set-session-authorization
Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly. Also, a dump using SET SESSION AUTHORIZATION will certainly require superuser privileges to restore correctly, whereas ALTER OWNER requires lesser privileges.
-? --help
Show help about pg_dump command line arguments, and exit.
The following command-line options control the database connection parameters.
-h host --host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST environment variable, if set, else a Unix domain socket connection is attempted.
-p port --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-U username --username=username
User name to connect as.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force pg_dump to prompt for a password before connecting to a database.
This option is never essential, since pg_dump will automatically prompt for a password if the server demands password authentication. However, pg_dump will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
--role=rolename
Specifies a role name to be used to create the dump. This option causes pg_dump to issue a SET ROLE rolename command after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by pg_dump, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
Environment
PGDATABASE PGHOST PGOPTIONS PGPORT PGUSER
Default connection parameters.
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
pg_dump internally executes SELECT statements. If you have problems running pg_dump, make sure you are able to select information from the database using, for example, psql. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
The database activity of pg_dump is normally collected by the statistics collector. If this is undesirable, you can set parameter track_counts to false via PGOPTIONS or the ALTER USER command.
Notes
If your database cluster has any local additions to the template1 database, be careful to restore the output of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from template0 not template1, for example:
CREATE DATABASE foo WITH TEMPLATE template0;
When a data-only dump is chosen and the option --disable-triggers is used, pg_dump emits commands to disable triggers on user tables before inserting the data, and then commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs might be left in the wrong state.
Members of tar archives are limited to a size less than 8 GB. (This is an inherent limitation of the tar file format.) Therefore this format cannot be used if the textual representation of any one table exceeds that size. The total size of a tar archive and any of the other output formats is not limited, except possibly by the operating system.
The dump file produced by pg_dump does not contain the statistics used by the optimizer to make query planning decisions. Therefore, it is wise to run ANALYZE after restoring from a dump file to ensure optimal performance; see Section 23.1.3 and Section 23.1.5 for more information. The dump file also does not contain any ALTER DATABASE ... SET commands; these settings are dumped by pg_dumpall, along with database users and other installation-wide settings.
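For example, after restoring into a database named newdb (the name used in the examples below), the statistics could be regenerated with:
$ psql -d newdb -c 'ANALYZE'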
Because pg_dump is used to transfer data to newer versions of PostgreSQL, the output of pg_dump can be expected to load into PostgreSQL server versions newer than pg_dump's version. pg_dump can also dump from PostgreSQL servers older than its own version. (Currently, servers back to version 7.0 are supported.) However, pg_dump cannot dump from PostgreSQL servers newer than its own major version; it will refuse to even try, rather than risk making an invalid dump. Also, it is not guaranteed that pg_dump's output can be loaded into a server of an older major version — not even if the dump was taken from a server of that version. Loading a dump file into an older server may require manual editing of the dump file to remove syntax not understood by the older server.
Examples
To dump a database called mydb into a SQL-script file:
$ pg_dump mydb > db.sql
To reload such a script into a (freshly created) database named newdb:
$ psql -d newdb -f db.sql
To dump a database into a custom-format archive file:
$ pg_dump -Fc mydb > db.dump
To dump a database into a directory-format archive:
$ pg_dump -Fd mydb -f dumpdir
To reload an archive file into a (freshly created) database named newdb:
$ pg_restore -d newdb db.dump
To dump a single table named mytab:
$ pg_dump -t mytab mydb > db.sql
To dump all tables whose names start with emp in the detroit schema, except for the table named employee_log:
$ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql
To dump all schemas whose names start with east or west and end in gsm, excluding any schemas whose names contain the word test:
$ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql
The same, using regular expression notation to consolidate the switches:
$ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql
To dump all database objects except for tables whose names begin with ts_:
$ pg_dump -T 'ts_*' mydb > db.sql
To specify an upper-case or mixed-case name in -t and related switches, you need to double-quote the name; else it will be folded to lower case (see Patterns). But double quotes are special to the shell, so in turn they must be quoted. Thus, to dump a single table with a mixed-case name, you need something like
$ pg_dump -t '"MixedCaseName"' mydb > mytab.sql
pg_dumpall — extract a PostgreSQL database cluster into a script file
Name
pg_dumpall -- extract a PostgreSQL database cluster into a script file
Synopsis
pg_dumpall [connection-option...] [option...]
Description
pg_dumpall is a utility for writing out ("dumping") all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to psql to restore the databases. It does this by calling pg_dump for each database in a cluster. pg_dumpall also dumps global objects that are common to all databases. (pg_dump does not save these objects.) This currently includes information about database users and groups, tablespaces, and properties such as access permissions that apply to databases as a whole.
Since pg_dumpall reads tables from all databases you will most likely have to connect as a database superuser in order to produce a complete dump. Also you will need superuser privileges to execute the saved script in order to be allowed to add users and groups, and to create databases.
The SQL script will be written to the standard output. Use the -f/--file option or shell operators to redirect it into a file.
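A minimal sketch of that redirection, together with the usual way such a script is fed back to psql (the file name db.out and the maintenance database postgres are illustrative):
$ pg_dumpall > db.out
$ psql -f db.out postgres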
pg_dumpall needs to connect several times to the PostgreSQL server (once per database). If you use password authentication it will ask for a password each time. It is convenient to have a ~/.pgpass file in such cases. See Section 31.14 for more information.
Options
The following command-line options control the content and format of the output.
-a --data-only
Dump only the data, not the schema (data definitions).
-c --clean
Include SQL commands to clean (drop) databases before recreating them. DROP commands for roles and tablespaces are added as well.
-f filename --file=filename
Send output to the specified file. If this is omitted, the standard output is used.
-g --globals-only
Dump only global objects (roles and tablespaces), no databases.
-i --ignore-version
A deprecated option that is now ignored.
-o --oids
Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application references the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used.
-O --no-owner
Do not output commands to set ownership of objects to match the original database. By default, pg_dumpall issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created schema elements. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O.
-r --roles-only
Dump only roles, no databases or tablespaces.
-s --schema-only
Dump only the object definitions (schema), not data.
-S username --superuser=username
Specify the superuser user name to use when disabling triggers. This is only relevant if --disable-triggers is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.)
-t --tablespaces-only
Dump only tablespaces, no databases or roles.
-v --verbose
Specifies verbose mode. This will cause pg_dumpall to output start/stop times to the dump file, and progress messages to standard error. It will also enable verbose output in pg_dump.
-V --version
Print the pg_dumpall version and exit.
-x --no-privileges --no-acl
Prevent dumping of access privileges (grant/revoke commands).
--binary-upgrade
This option is for use by in-place upgrade utilities. Its use for other purposes is not recommended or supported. The behavior of the option may change in future releases without notice.
--column-inserts --attribute-inserts
Dump data as INSERT commands with explicit column names (INSERT INTO table (column, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases.
--disable-dollar-quoting
This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.
--disable-triggers
This option is only relevant when creating a data-only dump. It instructs pg_dumpall to include commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data reload.
Presently, the commands emitted for --disable-triggers must be done as superuser. So, you should also specify a superuser name with -S, or preferably be careful to start the resulting script as a superuser.
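For example, a data-only dump that disables triggers during reload might be produced like this (the superuser name postgres and the output file name are placeholders; adjust them for your installation):
$ pg_dumpall --data-only --disable-triggers -S postgres > data.sql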
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. Note that the restore might fail altogether if you have rearranged column order. The --column-inserts option is safer, though even slower.
--lock-wait-timeout=timeout
Do not wait forever to acquire shared table locks at the beginning of the dump. Instead, fail if unable to lock a table within the specified timeout. The timeout may be specified in any of the formats accepted by SET statement_timeout. Allowed values vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions since 7.3. This option is ignored when dumping from a pre-7.3 server.
--no-security-labels
Do not dump security labels.
--no-tablespaces
Do not output commands to create tablespaces nor select tablespaces for objects. With this option, all objects will be created in whichever tablespace is the default during restore.
--no-unlogged-table-data
Do not dump the contents of unlogged tables. This option has no effect on whether or not the table definitions (schema) are dumped; it only suppresses dumping the table data.
--quote-all-identifiers
Force quoting of all identifiers. This may be useful when dumping a database for migration to a future version that may have introduced additional keywords.
--use-set-session-authorization
Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards compatible, but depending on the history of the objects in the dump, might not restore properly.
-? --help
Show help about pg_dumpall command line arguments, and exit.
The following command-line options control the database connection parameters.
-h host --host=host
Specifies the host name of the machine on which the database server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST environment variable, if set, else a Unix domain socket connection is attempted.
-l dbname --database=dbname
Specifies the name of the database to connect to to dump global objects and discover what other databases should be dumped. If not specified, the postgres database will be used, and if that does not exist, template1 will be used.
-p port --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-U username --username=username
User name to connect as.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force pg_dumpall to prompt for a password before connecting to a database.
This option is never essential, since pg_dumpall will automatically prompt for a password if the server demands password authentication. However, pg_dumpall will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Note that the password prompt will occur again for each database to be dumped. Usually, it's better to set up a ~/.pgpass file than to rely on manual password entry.
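A ~/.pgpass entry has the form hostname:port:database:username:password; the concrete values below are placeholders only, and the file must be readable only by its owner (for example, chmod 0600 ~/.pgpass), or libpq will ignore it:
localhost:5432:*:postgres:mysecret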
--role=rolename
Specifies a role name to be used to create the dump. This option causes pg_dumpall to issue a SET ROLE rolename command after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by pg_dumpall, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
Environment
PGHOST PGOPTIONS PGPORT PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Notes
Since pg_dumpall calls pg_dump internally, some diagnostic messages will refer to pg_dump.
Once restored, it is wise to run ANALYZE on each database so the optimizer has useful statistics. You can also run vacuumdb -a -z to analyze all databases.
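For example, assuming the restoring user may connect to every database in the cluster:
$ vacuumdb -a -z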
pg_dumpall requires all needed tablespace directories to exist before the restore; otherwise, database creation will fail for databases in non-default locations.
Examples
To dump all databases:
$ pg_dumpall > db.out
To reload database(s) from this file, you can use:
$ psql -f db.out postgres
(It is not important to which database you connect here since the script file created by pg_dumpall will contain the appropriate commands to create and connect to the saved databases.)
pg_restore — restore a PostgreSQL database from an archive file created by pg_dump
Name
pg_restore -- restore a PostgreSQL database from an archive file created by pg_dump
Synopsis
pg_restore [connection-option...] [option...] [filename]
Description
pg_restore is a utility for restoring a PostgreSQL database from an archive created by pg_dump in one of the non-plain-text formats. It will issue the commands necessary to reconstruct the database to the state it was in at the time it was saved. The archive files also allow pg_restore to be selective about what is restored, or even to reorder the items prior to being restored. The archive files are designed to be portable across architectures.
pg_restore can operate in two modes. If a database name is specified, pg_restore connects to that database and restores archive contents directly into the database. Otherwise, a script containing the SQL commands necessary to rebuild the database is created and written to a file or standard output. This script output is equivalent to the plain text output format of pg_dump. Some of the options controlling the output are therefore analogous to pg_dump options.
Obviously, pg_restore cannot restore information that is not present in the archive file. For instance, if the archive was made using the "dump data as INSERT commands" option, pg_restore will not be able to load the data using COPY statements.
Options
pg_restore accepts the following command line arguments.
filename
Specifies the location of the archive file (or directory, for a directory-format archive) to be restored. If not specified, the standard input is used.
-a --data-only
Restore only the data, not the schema (data definitions).
-c --clean
Clean (drop) database objects before recreating them.
-C --create
Create the database before restoring into it. (When this option is used, the database named with -d is used only to issue the initial CREATE DATABASE command. All data is restored into the database name that appears in the archive.)
-d dbname --dbname=dbname
Connect to database dbname and restore directly into the database.
-e --exit-on-error
Exit if an error is encountered while sending SQL commands to the database. The default is to continue and to display a count of errors at the end of the restoration.
-f filename --file=filename
Specify output file for generated script, or for the listing when used with -l. Default is the standard output.
-F format --format=format
Specify format of the archive. It is not necessary to specify the format, since pg_restore will determine the format automatically. If specified, it can be one of the following:
c custom
The archive is in the custom format of pg_dump.
d directory
The archive is a directory archive.
t tar
The archive is a tar archive.
-i --ignore-version
A deprecated option that is now ignored.
-I index --index=index
Restore definition of named index only.
-j number-of-jobs --jobs=number-of-jobs
Run the most time-consuming parts of pg_restore — those which load data, create indexes, or create constraints — using multiple concurrent jobs. This option can dramatically reduce the time to restore a large database to a server running on a multiprocessor machine.
Each job is one process or one thread, depending on the operating system, and uses a separate connection to the server.
The optimal value for this option depends on the hardware setup of the server, of the client, and of the network. Factors include the number of CPU cores and the disk setup. A good place to start is the number of CPU cores on the server, but values larger than that can also lead to faster restore times in many cases. Of course, values that are too high will lead to decreased performance because of thrashing.
Only the custom archive format is supported with this option. The input file must be a regular file (not, for example, a pipe). This option is ignored when emitting a script rather than connecting directly to a database server. Also, multiple jobs cannot be used together with the option --single-transaction.
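For example, restoring the custom-format archive db.dump into newdb with eight parallel jobs might look like this (eight is only a starting point; tune it to your hardware):
$ pg_restore -j 8 -d newdb db.dump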
-l --list
List the contents of the archive. The output of this operation can be used as input to the -L option. Note that if filtering switches such as -n or -t are used with -l, they will restrict the items listed.
-L list-file --use-list=list-file
Restore only those archive elements that are listed in list-file, and restore them in the order they appear in the file. Note that if filtering switches such as -n or -t are used with -L, they will further restrict the items restored.
list-file is normally created by editing the output of a previous -l operation. Lines can be moved or removed, and can also be commented out by placing a semicolon (;) at the start of the line. See below for examples.
-n namespace --schema=schema
Restore only objects that are in the named schema. This can be combined with the -t option to restore just a specific table.
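As a sketch, restoring a single table from the public schema (the table name mytable is hypothetical):
$ pg_restore -n public -t mytable -d newdb db.dump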
-O --no-owner
Do not output commands to set ownership of objects to match the original database. By default, pg_restore issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created schema elements. These statements will fail unless the initial connection to the database is made by a superuser (or the same user that owns all of the objects in the script). With -O, any user name can be used for the initial connection, and this user will own all the created objects.
-P function-name(argtype [, ...]) --function=function-name(argtype [, ...])
Restore the named function only. Be careful to spell the function name and arguments exactly as they appear in the dump file's table of contents.
-R --no-reconnect
This option is obsolete but still accepted for backwards compatibility.
-s --schema-only
Restore only the schema (data definitions), not the data (table contents). Current sequence values will not be restored, either. (Do not confuse this with the --schema option, which uses the word "schema" in a different meaning.)
-S username --superuser=username
Specify the superuser user name to use when disabling triggers. This is only relevant if --disable-triggers is used.
-t table --table=table
Restore definition and/or data of named table only. This can be combined with the -n option to specify a schema.
-T trigger --trigger=trigger
Restore named trigger only.
-v --verbose
Specifies verbose mode.
-V --version
Print the pg_restore version and exit.
-x --no-privileges --no-acl
Prevent restoration of access privileges (grant/revoke commands).
-1 --single-transaction
Execute the restore as a single transaction (that is, wrap the emitted commands in BEGIN/COMMIT). This ensures that either all the commands complete successfully, or no changes are applied. This option implies --exit-on-error.
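For example, to restore db.dump into newdb and have nothing applied if any command fails:
$ pg_restore -1 -d newdb db.dump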
--disable-triggers
This option is only relevant when performing a data-only restore. It instructs pg_restore to execute commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data reload.
Presently, the commands emitted for --disable-triggers must be done as superuser. So, you should also specify a superuser name with -S, or preferably run pg_restore as a PostgreSQL superuser.
--no-data-for-failed-tables
By default, table data is restored even if the creation command for the table failed (e.g., because it already exists). With this option, data for such a table is skipped. This behavior is useful if the target database already contains the desired table contents. For example, auxiliary tables for PostgreSQL extensions such as PostGIS might already be loaded in the target database; specifying this option prevents duplicate or obsolete data from being loaded into them.
This option is effective only when restoring directly into a database, not when producing SQL script output.
--no-security-labels
Do not output commands to restore security labels, even if the archive contains them.
--no-tablespaces
Do not output commands to select tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore.
--use-set-session-authorization
Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly.
-? --help
Show help about pg_restore command line arguments, and exit.
pg_restore also accepts the following command line arguments for connection parameters:
-h host --host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST environment variable, if set, else a Unix domain socket connection is attempted.
-p port --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-U username --username=username
User name to connect as.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force pg_restore to prompt for a password before connecting to a database.
This option is never essential, since pg_restore will automatically prompt for a password if the server demands password authentication. However, pg_restore will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
--role=rolename
Specifies a role name to be used to perform the restore. This option causes pg_restore to issue a SET ROLE rolename command after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by pg_restore, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows restores to be performed without violating the policy.
Environment
PGHOST PGOPTIONS PGPORT PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
When a direct database connection is specified using the -d option, pg_restore internally executes SQL statements. If you have problems running pg_restore, make sure you are able to select information from the database using, for example, psql. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Notes
If your installation has any local additions to the template1 database, be careful to load the output of pg_restore into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from template0 not template1, for example:
CREATE DATABASE foo WITH TEMPLATE template0;
The limitations of pg_restore are detailed below.
When restoring data to a pre-existing table and the option --disable-triggers is used, pg_restore emits commands to disable triggers on user tables before inserting the data, then emits commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs might be left in the wrong state.
pg_restore cannot restore large objects selectively; for instance, only those for a specific table. If an archive contains large objects, then all large objects will be restored, or none of them if they are excluded via -L, -t, or other options.
See also the pg_dump documentation for details on limitations of pg_dump.
Once restored, it is wise to run ANALYZE on each restored table so the optimizer has useful statistics; see Section 23.1.3 and Section 23.1.5 for more information.
Examples
Assume we have dumped a database called mydb into a custom-format dump file:
$ pg_dump -Fc mydb > db.dump
To drop the database and recreate it from the dump:
$ dropdb mydb
$ pg_restore -C -d postgres db.dump
The database named in the -d switch can be any database existing in the cluster; pg_restore only uses it to issue the CREATE DATABASE command for mydb. With -C, data is always restored into the database name that appears in the dump file.
To reload the dump into a new database called newdb:
$ createdb -T template0 newdb
$ pg_restore -d newdb db.dump
Notice we don't use -C, and instead connect directly to the database to be restored into. Also note that we clone the new database from template0 not template1, to ensure it is initially empty.
To reorder database items, it is first necessary to dump the table of contents of the archive:
$ pg_restore -l db.dump > db.list
The listing file consists of a header and one line for each item, e.g.:
;
; Archive created at Mon Sep 14 13:55:39 2009
;     dbname: DBDEMOS
;     TOC Entries: 81
;     Compression: 9
;     Dump Version: 1.10-0
;     Format: CUSTOM
;     Integer: 4 bytes
;     Offset: 8 bytes
;     Dumped from database version: 8.3.5
;     Dumped by pg_dump version: 8.3.8
;
;
; Selected TOC Entries:
;
3; 2615 2200 SCHEMA - public pasha
1861; 0 0 COMMENT - SCHEMA public pasha
1862; 0 0 ACL - public pasha
317; 1247 17715 TYPE public composite pasha
319; 1247 25899 DOMAIN public domain0 pasha
Semicolons start a comment, and the numbers at the start of lines refer to the internal archive ID assigned to each item.
Lines in the file can be commented out, deleted, and reordered. For example:
10; 145433 TABLE map_resolutions postgres
;2; 145344 TABLE species postgres
;4; 145359 TABLE nt_header postgres
6; 145402 TABLE species_records postgres
;8; 145416 TABLE ss_old postgres
could be used as input to pg_restore and would only restore items 10 and 6, in that order:
$ pg_restore -L db.list db.dump
psql — PostgreSQL interactive terminal
Name
psql -- PostgreSQL interactive terminal
Synopsis
psql [option...] [dbname [username]]
Description
psql is a terminal-based front-end to PostgreSQL. It enables you to type in queries interactively, issue them to PostgreSQL, and see the query results. Alternatively, input can be from a file. In addition, it provides a number of meta-commands and various shell-like features to facilitate writing scripts and automating a wide variety of tasks.
Options
-a --echo-all
Print all input lines to standard output as they are read. This is more useful for script processing than interactive mode. This is equivalent to setting the variable ECHO to all.
-A --no-align
Switches to unaligned output mode. (The default output mode is otherwise aligned.)
-c command --command=command
Specifies that psql is to execute one command string, command, and then exit. This is useful in shell scripts. Start-up files (psqlrc and ~/.psqlrc) are ignored with this option.
command must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command. Thus you cannot mix SQL and psql meta-commands with this option. To achieve that, you could pipe the string into psql, like this: echo '\x \\ SELECT * FROM foo;' | psql. (\\ is the separator meta-command.)
If the command string contains multiple SQL commands, they are processed in a single transaction, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. This is different from the behavior when the same string is fed to psql's standard input. Also, only the result of the last SQL command is returned.
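As an illustrative sketch, the two statements below run in one transaction, so if the index creation fails the table creation is rolled back as well (the names t1 and t1_a are hypothetical):
$ psql -c "CREATE TABLE t1 (a int); CREATE UNIQUE INDEX t1_a ON t1 (a);" mydb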
-d dbname --dbname=dbname
Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line.
If this parameter contains an = sign, it is treated as a conninfo string. See Section 31.1 for more information.
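For example, a conninfo string can be passed through -d (the host name and database name below are placeholders):
$ psql -d "host=db.example.com port=5432 dbname=mydb sslmode=require"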
-e --echo-queries
Copy all SQL commands sent to the server to standard output as well. This is equivalent to setting the variable ECHO to queries.
-E --echo-hidden
Echo the actual queries generated by \d and other backslash commands. You can use this to study psql's internal operations. This is equivalent to setting the variable ECHO_HIDDEN from within psql.
-f filename --file=filename
Use the file filename as the source of commands instead of reading commands interactively. After the file is processed, psql terminates. This is in many ways equivalent to the internal command \i.
If filename is - (hyphen), then standard input is read.
Using this option is subtly different from writing psql < filename. In general, both will do what you expect, but using -f enables some nice features such as error messages with line numbers. There is also a slight chance that using this option will reduce the start-up overhead. On the other hand, the variant using the shell's input redirection is (in theory) guaranteed to yield exactly the same output you would have received had you entered everything by hand.
-F separator --field-separator=separator
Use separator as the field separator for unaligned output. This is equivalent to \pset fieldsep or \f.
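Combined with -A and -t, this makes it easy to produce simple comma-separated output; the query, table, and file names below are illustrative only:
$ psql -A -t -F ',' -c 'SELECT id, name FROM customers' mydb > customers.csv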
-h hostname --host=hostname
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix-domain socket.
-H --html
Turn on HTML tabular output. This is equivalent to \pset format html or the \H command.
-l --list
List all available databases, then exit. Other non-connection options are ignored. This is similar to the internal command \list.
-L filename --log-file=filename
Write all query output into file filename, in addition to the normal output destination.
-n --no-readline
Do not use readline for line editing and do not use the history. This can be useful to turn off tab expansion when cutting and pasting.
-o filename --output=filename
Put all query output into file filename. This is equivalent to the command \o.
-p port --port=port
Specifies the TCP port or the local Unix-domain socket file extension on which the server is listening for connections. Defaults to the value of the PGPORT environment variable or, if not set, to the port specified at compile time, usually 5432.
-P assignment --pset=assignment
Specifies printing options, in the style of \pset. Note that here you have to separate name and value with an equal sign instead of a space. For example, to set the output format to LaTeX, you could write -P format=latex.
-q --quiet
Specifies that psql should do its work quietly. By default, it prints welcome messages and various informational output. If this option is used, none of this happens. This is useful with the -c option. Within psql you can also set the QUIET variable to achieve the same effect.
-R separator --record-separator=separator
Use separator as the record separator for unaligned output. This is equivalent to the \pset recordsep command.
-s --single-step
Run in single-step mode. That means the user is prompted before each command is sent to the server, with the option to cancel execution as well. Use this to debug scripts.
-S --single-line
Runs in single-line mode where a newline terminates an SQL command, as a semicolon does.
Note: This mode is provided for those who insist on it, but you are not necessarily encouraged to use it. In particular, if you mix SQL and meta-commands on a line the order of execution might not always be clear to the inexperienced user.
-t --tuples-only
Turn off printing of column names and result row count footers, etc. This is equivalent to the \t command.
-T table_options --table-attr=table_options
Specifies options to be placed within the HTML table tag. See \pset for details.
-U username --username=username
Connect to the database as the user username instead of the default. (You must have permission to do so, of course.)
-v assignment --set=assignment --variable=assignment
Perform a variable assignment, like the \set internal command. Note that you must separate name and value, if any, by an equal sign on the command line. To unset a variable, leave off the equal sign. To just set a variable without a value, use the equal sign but leave off the value. These assignments are done during a very early stage of start-up, so variables reserved for internal purposes might get overwritten later.
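As a sketch, a variable set on the command line can be referenced from a script (the file and table names are assumptions); report.sql might contain a line such as SELECT count(*) FROM :tbl;
$ psql -v tbl=my_table -f report.sql mydb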
-V --version
Print the psql version and exit.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
Note that this option will remain set for the entire session, and so it affects uses of the meta-command \connect as well as the initial connection attempt.
-W --password
Force psql to prompt for a password before connecting to a database.
This option is never essential, since psql will automatically prompt for a password if the server demands password authentication. However, psql will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Note that this option will remain set for the entire session, and so it affects uses of the meta-command \connect as well as the initial connection attempt.
-x --expanded
Turn on the expanded table formatting mode. This is equivalent to the \x command.
-X --no-psqlrc
Do not read the start-up file (neither the system-wide psqlrc file nor the user's ~/.psqlrc file).
-1 --single-transaction
When psql executes a script with the -f option, adding this option wraps BEGIN/COMMIT around the script to execute it as a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.
If the script itself uses BEGIN, COMMIT, or ROLLBACK, this option will not have the desired effects. Also, if the script contains any command that cannot be executed inside a transaction block, specifying this option will cause that command (and hence the whole transaction) to fail.
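For example, to load a script atomically (the script name is a placeholder):
$ psql -1 -f populate.sql mydb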
-? --help
Show help about psql command line arguments, and exit.
Exit Status
psql returns 0 to the shell if it finished normally, 1 if a fatal error of its own occurs (e.g. out of memory, file not found), 2 if the connection to the server went bad and the session was not interactive, and 3 if an error occurred in a script and the variable ON_ERROR_STOP was set.
Usage
Connecting to a Database
psql is a regular PostgreSQL client application. In order to connect to a database you need to know the name of your target database, the host name and port number of the server, and what user name you want to connect as. psql can be told about those parameters via command line options, namely -d, -h, -p, and -U respectively. If an argument is found that does not belong to any option it will be interpreted as the database name (or the user name, if the database name is already given). Not all of these options are required; there are useful defaults. If you omit the host name, psql will connect via a Unix-domain socket to a server on the local host, or via TCP/IP to localhost on machines that don't have Unix-domain sockets. The default port number is determined at compile time. Since the database server uses the same default, you will not have to specify the port in most cases. The default user name is your Unix user name, as is the default database name. Note that you cannot just connect to any database under any user name. Your database administrator should have informed you about your access rights.
When the defaults aren't quite right, you can save yourself some typing by setting the environment variables PGDATABASE, PGHOST, PGPORT and/or PGUSER to appropriate values. (For additional environment variables, see Section 31.13.) It is also convenient to have a ~/.pgpass file to avoid regularly having to type in passwords. See Section 31.14 for more information.
An alternative way to specify connection parameters is in a conninfo string, which is used instead of a database name. This mechanism gives you very wide control over the connection. For example:
$ psql "service=myservice sslmode=require"
This way you can also use LDAP for connection parameter lookup as described in Section 31.16. See Section 31.1 for more information on all the available connection options.
If the connection could not be made for any reason (e.g., insufficient privileges, server is not running on the targeted host, etc.), psql will return an error and terminate.
If at least one of standard input or standard output is a terminal, then psql sets the client encoding to "auto", which will detect the appropriate client encoding from the locale settings (LC_CTYPE environment variable on Unix systems). If this doesn't work out as expected, the client encoding can be overridden using the environment variable PGCLIENTENCODING.
Entering SQL Commands
In normal operation, psql provides a prompt with the name of the database to which psql is currently connected, followed by the string =>. For example:
$ psql testdb
psql (9.1.9)
Type "help" for help.
testdb=>
At the prompt, the user can type in SQL commands. Ordinarily, input lines are sent to the server when a command-terminating semicolon is reached. An end of line does not terminate a command. Thus commands can be spread over several lines for clarity. If the command was sent and executed without error, the results of the command are displayed on the screen.
Whenever a command is executed, psql also polls for asynchronous notification events generated by LISTEN and NOTIFY.
Meta-Commands
Anything you enter in psql that begins with an unquoted backslash is a psql meta-command that is processed by psql itself. These commands make psql more useful for administration or scripting. Meta-commands are often called slash or backslash commands.
The format of a psql command is the backslash, followed immediately by a command verb, then any arguments. The arguments are separated from the command verb and each other by any number of whitespace characters.
To include whitespace into an argument you can quote it with a single quote. To include a single quote into such an argument, use two single quotes. Anything contained in single quotes is furthermore subject to C-like substitutions for \n (new line), \t (tab), \digits (octal), and \xdigits (hexadecimal).
If an unquoted argument begins with a colon (:), it is taken as a psql variable and the value of the variable is used as the argument instead. If the variable name is surrounded by single quotes (e.g. :'var'), it will be escaped as an SQL literal and the result will be used as the argument. If the variable name is surrounded by double quotes, it will be escaped as an SQL identifier and the result will be used as the argument.
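A brief interactive sketch of identifier-style interpolation (the table name is hypothetical):
testdb=> \set tbl my_table
testdb=> SELECT count(*) FROM :"tbl";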
Arguments that are enclosed in backquotes (`) are taken as a command line that is passed to the shell. The output of the command (with any trailing newline removed) is taken as the argument value. The above escape sequences also apply in backquotes.
Some commands take an SQL identifier (such as a table name) as argument. These arguments follow the syntax rules of SQL: Unquoted letters are forced to lowercase, while double quotes (") protect letters from case conversion and allow incorporation of whitespace into the identifier. Within double quotes, paired double quotes reduce to a single double quote in the resulting name. For example, FOO"BAR"BAZ is interpreted as fooBARbaz, and "A weird"" name" becomes A weird" name.
Parsing for arguments stops at the end of the line, or when another unquoted backslash is found. An unquoted backslash is taken as the beginning of a new meta-command. The special sequence \\ (two backslashes) marks the end of arguments and continues parsing SQL commands, if any. That way SQL and psql commands can be freely mixed on a line. But in any case, the arguments of a meta-command cannot continue beyond the end of the line.
The following meta-commands are defined:
\a
If the current table output format is unaligned, it is switched to aligned. If it is not unaligned, it is set to unaligned. This command is kept for backwards compatibility. See \pset for a more general solution.
\c or \connect [ dbname [ username ] [ host ] [ port ] ]
Establishes a new connection to a PostgreSQL server. If the new connection is successfully made, the previous connection is closed. If any of dbname, username, host or port are omitted or specified as -, the value of that parameter from the previous connection is used. If there is no previous connection, the libpq default for the parameter's value is used.
If the connection attempt failed (wrong user name, access denied, etc.), the previous connection will only be kept if psql is in interactive mode. When executing a non-interactive script, processing will immediately stop with an error. This distinction was chosen as a user convenience against typos on the one hand, and a safety mechanism that scripts are not accidentally acting on the wrong database on the other hand.
\C [ title ]
Sets the title of any tables being printed as the result of a query or unset any such title. This command is equivalent to \pset title title. (The name of this command derives from "caption", as it was previously only used to set the caption in an HTML table.)
\cd [ directory ]
Changes the current working directory to directory. Without argument, changes to the current user's home directory.
Tip: To print your current working directory, use \! pwd.
\conninfo
Outputs information about the current database connection.
\copy { table [ ( column_list ) ] | ( query ) } { from | to } { filename | stdin | stdout | pstdin | pstdout } [ with ] [ binary ] [ oids ] [ delimiter [ as ] 'character' ] [ null [ as ] 'string' ] [ csv [ header ] [ quote [ as ] 'character' ] [ escape [ as ] 'character' ] [ force quote column_list | * ] [ force not null column_list ] ]
Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.
The syntax of the command is similar to that of the SQL COPY command. Note that, because of this, special parsing rules apply to the \copy command. In particular, the variable substitution rules and backslash escapes do not apply.
\copy ... from stdin | to stdout reads/writes based on the command input and output respectively. All rows are read from the same source that issued the command, continuing until \. is read or the stream reaches EOF. Output is sent to the same place as command output. To read/write from psql's standard input or output, use pstdin or pstdout. This option is useful for populating tables in-line within a SQL script file.
Tip: This operation is not as efficient as the SQL COPY command because all data must pass through the client/server connection. For large amounts of data the SQL command might be preferable.
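A minimal sketch of in-line population from a script, assuming a two-column table named mytable and an explicit '|' delimiter:
\copy mytable from stdin with delimiter '|'
1|alice
2|bob
\.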
\copyright
Shows the copyright and distribution terms of PostgreSQL.
\d[S+] [ pattern ]
For each relation (table, view, index, sequence, or foreign table) or composite type matching the pattern, show all columns, their types, the tablespace (if not the default) and any special attributes such as NOT NULL or defaults. Associated indexes, constraints, rules, and triggers are also shown. For foreign tables, the associated foreign server is shown as well. ("Matching the pattern" is defined in Patterns below.)
The command form \d+ is identical, except that more information is displayed: any comments associated with the columns of the table are shown, as is the presence of OIDs in the table, the view definition if the relation is a view, and the generic options if the relation is a foreign table.
By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
Note: If \d is used without a pattern argument, it is equivalent to \dtvsE which will show a list of all visible tables, views, sequences and foreign tables. This is purely a convenience measure.
\da[S] [ pattern ]
Lists aggregate functions, together with their return type and the data types they operate on. If pattern is specified, only aggregates whose names match the pattern are shown. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
\db[+] [ pattern ]
Lists tablespaces. If pattern is specified, only tablespaces whose names match the pattern are shown. If + is appended to the command name, each object is listed with its associated permissions.
\dc[S] [ pattern ]
Lists conversions between character-set encodings. If pattern is specified, only conversions whose names match the pattern are listed. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
\dC [ pattern ]
Lists type casts. If pattern is specified, only casts whose source or target types match the pattern are listed.
\dd[S] [ pattern ]
Shows the descriptions of objects matching the pattern, or of all visible objects if no argument is given. But in either case, only objects that have a description are listed. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects. "Object" covers aggregates, functions, operators, types, relations (tables, views, indexes, sequences), large objects, rules, and triggers. For example:
=> \dd version
                     Object descriptions
   Schema   |  Name   |  Object  |        Description
------------+---------+----------+---------------------------
 pg_catalog | version | function | PostgreSQL version string
(1 row)
Descriptions for objects can be created with the COMMENT SQL command.
\ddp [ pattern ]
Lists default access privilege settings. An entry is shown for each role (and schema, if applicable) for which the default privilege settings have been changed from the built-in defaults. If pattern is specified, only entries whose role name or schema name matches the pattern are listed.
The ALTER DEFAULT PRIVILEGES command is used to set default access privileges. The meaning of the privilege display is explained under GRANT.
\dD[S] [ pattern ]
Lists domains. If pattern is specified, only domains whose names match the pattern are shown. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
\dE[S+] [ pattern ] \di[S+] [ pattern ] \ds[S+] [ pattern ] \dt[S+] [ pattern ] \dv[S+] [ pattern ]
In this group of commands, the letters E, i, s, t, and v stand for foreign table, index, sequence, table, and view, respectively. You can specify any or all of these letters, in any order, to obtain a listing of objects of these types. For example, \dit lists indexes and tables. If + is appended to the command name, each object is listed with its physical size on disk and its associated description, if any. If pattern is specified, only objects whose names match the pattern are listed. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
\des[+] [ pattern ]
Lists foreign servers (mnemonic: "external servers"). If pattern is specified, only those servers whose name matches the pattern are listed. If the form \des+ is used, a full description of each server is shown, including the server's ACL, type, version, and options.
\det[+] [ pattern ]
Lists foreign tables (mnemonic: "external tables"). If pattern is specified, only entries whose table name or schema name matches the pattern are listed. If the form \det+ is used, generic options are also displayed.
\deu[+] [ pattern ]
Lists user mappings (mnemonic: "external users"). If pattern is specified, only those mappings whose user names match the pattern are listed. If the form \deu+ is used, additional information about each mapping is shown.
Caution
\deu+ might also display the user name and password of the remote user, so care should be taken not to disclose them.
\dew[+] [ pattern ]
Lists foreign-data wrappers (mnemonic: "external wrappers"). If pattern is specified, only those foreign-data wrappers whose name matches the pattern are listed. If the form \dew+ is used, the ACL and options of the foreign-data wrapper are also shown.
\df[antwS+] [ pattern ]
Lists functions, together with their arguments, return types, and function types, which are classified as "agg" (aggregate), "normal", "trigger", or "window". To display only functions of specific type(s), add the corresponding letters a, n, t, or w to the command. If pattern is specified, only functions whose names match the pattern are shown. If the form \df+ is used, additional information about each function, including volatility, language, source code and description, is shown. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
Tip: To look up functions taking arguments or returning values of a specific type, use your pager's search capability to scroll through the \df output.
\dF[+] [ pattern ]
Lists text search configurations. If pattern is specified, only configurations whose names match the pattern are shown. If the form \dF+ is used, a full description of each configuration is shown, including the underlying text search parser and the dictionary list for each parser token type.
\dFd[+] [ pattern ]
Lists text search dictionaries. If pattern is specified, only dictionaries whose names match the pattern are shown. If the form \dFd+ is used, additional information is shown about each selected dictionary, including the underlying text search template and the option values.
\dFp[+] [ pattern ]
Lists text search parsers. If pattern is specified, only parsers whose names match the pattern are shown. If the form \dFp+ is used, a full description of each parser is shown, including the underlying functions and the list of recognized token types.
\dFt[+] [ pattern ]
Lists text search templates. If pattern is specified, only templates whose names match the pattern are shown. If the form \dFt+ is used, additional information is shown about each template, including the underlying function names.
\dg[+] [ pattern ]
Lists database roles. If pattern is specified, only those roles whose names match the pattern are listed. (This command is now effectively the same as \du). If the form \dg+ is used, additional information is shown about each role, including the comment for each role.
\dl
This is an alias for \lo_list, which shows a list of large objects.
\dL[S+] [ pattern ]
Lists procedural languages. If pattern is specified, only languages whose names match the pattern are listed. By default, only user-created languages are shown; supply the S modifier to include system objects. If + is appended to the command name, each language is listed with its call handler, validator, access privileges, and whether it is a system object.
\dn[S+] [ pattern ]
Lists schemas (namespaces). If pattern is specified, only schemas whose names match the pattern are listed. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects. If + is appended to the command name, each object is listed with its associated permissions and description, if any.
\do[S] [ pattern ]
Lists operators with their operand and return types. If pattern is specified, only operators whose names match the pattern are listed. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
\dO[S+] [ pattern ]
Lists collations. If pattern is specified, only collations whose names match the pattern are listed. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects. If + is appended to the command name, each collation is listed with its associated description, if any. Note that only collations usable with the current database's encoding are shown, so the results may vary in different databases of the same installation.
\dp [ pattern ]
Lists tables, views and sequences with their associated access privileges. If pattern is specified, only tables, views and sequences whose names match the pattern are listed.
The GRANT and REVOKE commands are used to set access privileges. The meaning of the privilege display is explained under GRANT.
\drds [ role-pattern [ database-pattern ] ]
Lists defined configuration settings. These settings can be role-specific, database-specific, or both. role-pattern and database-pattern are used to select specific roles and databases to list, respectively. If omitted, or if * is specified, all settings are listed, including those not role-specific or database-specific, respectively.
The ALTER ROLE and ALTER DATABASE commands are used to define per-role and per-database configuration settings.
\dT[S+] [ pattern ]
Lists data types. If pattern is specified, only types whose names match the pattern are listed. If + is appended to the command name, each type is listed with its internal name and size, as well as its allowed values if it is an enum type. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
\du[+] [ pattern ]
Lists database roles. If pattern is specified, only those roles whose names match the pattern are listed. If the form \du+ is used, additional information is shown about each role, including the comment for each role.
\dx[+] [ pattern ]
Lists installed extensions. If pattern is specified, only those extensions whose names match the pattern are listed. If the form \dx+ is used, all the objects belonging to each matching extension are listed.
\e or \edit [ filename ] [ line_number ]
If filename is specified, the file is edited; after the editor exits, its content is copied back to the query buffer. If no filename is given, the current query buffer is copied to a temporary file which is then edited in the same fashion.
The new query buffer is then re-parsed according to the normal rules of psql, where the whole buffer is treated as a single line. (Thus you cannot make scripts this way. Use \i for that.) This means that if the query ends with (or contains) a semicolon, it is immediately executed. Otherwise it will merely wait in the query buffer; type semicolon or \g to send it, or \r to cancel.
If a line number is specified, psql will position the cursor on the specified line of the file or query buffer. Note that if a single all-digits argument is given, psql assumes it is a line number, not a file name.
Tip: See under Environment for how to configure and customize your editor.
\echo text [ ... ]
Prints the arguments to the standard output, separated by one space and followed by a newline. This can be useful to intersperse information in the output of scripts. For example:
=> \echo `date`
Tue Oct 26 21:40:57 CEST 1999
If the first argument is an unquoted -n the trailing newline is not written.
Tip: If you use the \o command to redirect your query output you might wish to use \qecho instead of this command.
\ef [ function_description [ line_number ] ]
This command fetches and edits the definition of the named function, in the form of a CREATE OR REPLACE FUNCTION command. Editing is done in the same way as for \edit. After the editor exits, the updated command waits in the query buffer; type semicolon or \g to send it, or \r to cancel.
The target function can be specified by name alone, or by name and arguments, for example foo(integer, text). The argument types must be given if there is more than one function of the same name.
If no function is specified, a blank CREATE FUNCTION template is presented for editing.
If a line number is specified, psql will position the cursor on the specified line of the function body. (Note that the function body typically does not begin on the first line of the file.)
Tip: See under Environment for how to configure and customize your editor.
\encoding [ encoding ]
Sets the client character set encoding. Without an argument, this command shows the current encoding.
\f [ string ]
Sets the field separator for unaligned query output. The default is the vertical bar (|). See also \pset for a generic way of setting output options.
\g [ { filename | |command } ]
Sends the current query input buffer to the server and optionally stores the query's output in filename or pipes the output into a separate Unix shell executing command. A bare \g is virtually equivalent to a semicolon. A \g with argument is a "one-shot" alternative to the \o command.
\h or \help [ command ]
Gives syntax help on the specified SQL command. If command is not specified, then psql will list all the commands for which syntax help is available. If command is an asterisk (*), then syntax help on all SQL commands is shown.
Note: To simplify typing, commands that consists of several words do not have to be quoted. Thus it is fine to type \help alter table.
\H
Turns on HTML query output format. If the HTML format is already on, it is switched back to the default aligned text format. This command is for compatibility and convenience, but see \pset about setting other output options.
\i filename
Reads input from the file filename and executes it as though it had been typed on the keyboard.
Note: If you want to see the lines on the screen as they are read you must set the variable ECHO to all.
\l (or \list) \l+ (or \list+)
List the names, owners, character set encodings, and access privileges of all the databases in the server. If + is appended to the command name, database sizes, default tablespaces, and descriptions are also displayed. (Size information is only available for databases that the current user can connect to.)
\lo_export loid filename
Reads the large object with OID loid from the database and writes it to filename. Note that this is subtly different from the server function lo_export, which acts with the permissions of the user that the database server runs as and on the server's file system.
Tip: Use \lo_list to find out the large object's OID.
\lo_import filename [ comment ]
Stores the file into a PostgreSQL large object. Optionally, it associates the given comment with the object. Example:
foo=> \lo_import '/home/peter/pictures/photo.xcf' 'a picture of me'
lo_import 152801
The response indicates that the large object received object ID 152801, which can be used to access the newly-created large object in the future. For the sake of readability, it is recommended to always associate a human-readable comment with every object. Both OIDs and comments can be viewed with the \lo_list command.
Note that this command is subtly different from the server-side lo_import because it acts as the local user on the local file system, rather than the server's user and file system.
\lo_list
Shows a list of all PostgreSQL large objects currently stored in the database, along with any comments provided for them.
\lo_unlink loid
Deletes the large object with OID loid from the database.
Tip: Use \lo_list to find out the large object's OID.
\o [ {filename | |command} ]
Saves future query results to the file filename or pipes future results into a separate Unix shell to execute command. If no arguments are specified, the query output will be reset to the standard output.
"Query results" includes all tables, command responses, and notices obtained from the database server, as well as output of various backslash commands that query the database (such as \d), but not error messages.
Tip: To intersperse text output in between query results, use \qecho.
\p
Print the current query buffer to the standard output.
\password [ username ]
Changes the password of the specified user (by default, the current user). This command prompts for the new password, encrypts it, and sends it to the server as an ALTER ROLE command. This makes sure that the new password does not appear in cleartext in the command history, the server log, or elsewhere.
\prompt [ text ] name
Prompts the user to set variable name. An optional prompt, text, can be specified. (For multiword prompts, use single quotes.)
By default, \prompt uses the terminal for input and output. However, if the -f command line switch is used, \prompt uses standard input and standard output.
\pset option [ value ]
This command sets options affecting the output of query result tables. option indicates which option is to be set. The semantics of value vary depending on the selected option. For some options, omitting value causes the option to be toggled or unset, as described under the particular option. If no such behavior is mentioned, then omitting value just results in the current setting being displayed.
Adjustable printing options are:
border
The value must be a number. In general, the higher the number the more borders and lines the tables will have, but this depends on the particular format. In HTML format, this will translate directly into the border=... attribute; in the other formats only values 0 (no border), 1 (internal dividing lines), and 2 (table frame) make sense.
columns
Sets the target width for the wrapped format, and also the width limit for determining whether output is wide enough to require the pager. Zero (the default) causes the target width to be controlled by the environment variable COLUMNS, or the detected screen width if COLUMNS is not set. In addition, if columns is zero then the wrapped format only affects screen output. If columns is nonzero then file and pipe output is wrapped to that width as well.
expanded (or x)
If value is specified it must be either on or off which will enable or disable expanded mode. If value is omitted the command toggles between regular and expanded mode. When expanded mode is enabled, query results are displayed in two columns, with the column name on the left and the data on the right. This mode is useful if the data wouldn't fit on the screen in the normal "horizontal" mode.
fieldsep
Specifies the field separator to be used in unaligned output format. That way one can create, for example, tab- or comma-separated output, which other programs might prefer. To set a tab as field separator, type \pset fieldsep '\t'. The default field separator is '|' (a vertical bar).
footer
If value is specified it must be either on or off which will enable or disable display of the table footer (the (n rows) count). If value is omitted the command toggles footer display on or off.
format
Sets the output format to one of unaligned, aligned, wrapped, html, latex, or troff-ms. Unique abbreviations are allowed. (That would mean one letter is enough.)
unaligned format writes all columns of a row on one line, separated by the currently active field separator. This is useful for creating output that might be intended to be read in by other programs (for example, tab-separated or comma-separated format).
aligned format is the standard, human-readable, nicely formatted text output; this is the default.
wrapped format is like aligned but wraps wide data values across lines to make the output fit in the target column width. The target width is determined as described under the columns option. Note that psql will not attempt to wrap column header titles; therefore, wrapped format behaves the same as aligned if the total width needed for column headers exceeds the target.
The html, latex, and troff-ms formats put out tables that are intended to be included in documents using the respective mark-up language. They are not complete documents! (This might not be so dramatic in HTML, but in LaTeX you must have a complete document wrapper.)
linestyle
Sets the border line drawing style to one of ascii, old-ascii or unicode. Unique abbreviations are allowed. (That would mean one letter is enough.) The default setting is ascii. This option only affects the aligned and wrapped output formats.
ascii style uses plain ASCII characters. Newlines in data are shown using a + symbol in the right-hand margin. When the wrapped format wraps data from one line to the next without a newline character, a dot (.) is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line.
old-ascii style uses plain ASCII characters, using the formatting style used in PostgreSQL 8.4 and earlier. Newlines in data are shown using a : symbol in place of the left-hand column separator. When the data is wrapped from one line to the next without a newline character, a ; symbol is used in place of the left-hand column separator.
unicode style uses Unicode box-drawing characters. Newlines in data are shown using a carriage return symbol in the right-hand margin. When the data is wrapped from one line to the next without a newline character, an ellipsis symbol is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line.
When the border setting is greater than zero, this option also determines the characters with which the border lines are drawn. Plain ASCII characters work everywhere, but Unicode characters look nicer on displays that recognize them.
null
Sets the string to be printed in place of a null value. The default is to print nothing, which can easily be mistaken for an empty string. For example, one might prefer \pset null '(null)'.
numericlocale
If value is specified it must be either on or off which will enable or disable display of a locale-specific character to separate groups of digits to the left of the decimal marker. If value is omitted the command toggles between regular and locale-specific numeric output.
pager
Controls use of a pager program for query and psql help output. If the environment variable PAGER is set, the output is piped to the specified program. Otherwise a platform-dependent default (such as more) is used.
When the pager option is off, the pager program is not used. When the pager option is on, the pager is used when appropriate, i.e., when the output is to a terminal and will not fit on the screen. The pager option can also be set to always, which causes the pager to be used for all terminal output regardless of whether it fits on the screen. \pset pager without a value toggles pager use on and off.
recordsep
Specifies the record (line) separator to use in unaligned output format. The default is a newline character.
tableattr (or T)
Specifies attributes to be placed inside the HTML table tag in html output format. This could for example be cellpadding or bgcolor. Note that you probably don't want to specify border here, as that is already taken care of by \pset border. If no value is given, the table attributes are unset.
title
Sets the table title for any subsequently printed tables. This can be used to give your output descriptive tags. If no value is given, the title is unset.
tuples_only (or t)
If value is specified it must be either on or off which will enable or disable tuples-only mode. If value is omitted the command toggles between regular and tuples-only output. Regular output includes extra information such as column headers, titles, and various footers. In tuples-only mode, only actual table data is shown.
Illustrations of how these different formats look can be seen in the Examples section.
Tip: There are various shortcut commands for \pset. See \a, \C, \H, \t, \T, and \x.
Note: It is an error to call \pset without any arguments. In the future this case might show the current status of all printing options.
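As a further illustration, the following commands change how null values and wide rows are displayed; the status messages shown are typical but may vary slightly between versions:
testdb=> \pset null '(null)'
Null display is "(null)".
testdb=> \pset expanded
Expanded display is on.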
\q or \quit
Quits the psql program. In a script file, only execution of that script is terminated.
\qecho text [ ... ]
This command is identical to \echo except that the output will be written to the query output channel, as set by \o.
\r
Resets (clears) the query buffer.
\s [ filename ]
Print or save the command line history to filename. If filename is omitted, the history is written to the standard output. This option is only available if psql is configured to use the GNU Readline library.
\set [ name [ value [ ... ] ] ]
Sets the internal variable name to value or, if more than one value is given, to the concatenation of all of them. If no second argument is given, the variable is just set with no value. To unset a variable, use the \unset command.
Valid variable names can contain letters, digits, and underscores. See the section Variables below for details. Variable names are case-sensitive.
Although you are welcome to set any variable to anything you want, psql treats several variables as special. They are documented in the section about variables.
Note: This command is totally separate from the SQL command SET.
\sf[+] function_description
This command fetches and shows the definition of the named function, in the form of a CREATE OR REPLACE FUNCTION command. The definition is printed to the current query output channel, as set by \o.
The target function can be specified by name alone, or by name and arguments, for example foo(integer, text). The argument types must be given if there is more than one function of the same name.
If + is appended to the command name, then the output lines are numbered, with the first line of the function body being line 1.
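For instance, assuming a user-defined function named add_one(integer) exists (the name is hypothetical), its definition could be inspected with numbered lines like this:
testdb=> \sf+ add_one(integer)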
\t
Toggles the display of output column name headings and row count footer. This command is equivalent to \pset tuples_only and is provided for convenience.
\T table_options
Specifies attributes to be placed within the table tag in HTML output format. This command is equivalent to \pset tableattr table_options.
\timing [ on | off ]
Without parameter, toggles a display of how long each SQL statement takes, in milliseconds. With parameter, sets same.
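For example (the reported time is of course only illustrative):
testdb=> \timing on
Timing is on.
testdb=> SELECT count(*) FROM my_table;
 count
-------
     4
(1 row)
Time: 0.581 ms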
\w filename
\w |command
Outputs the current query buffer to the file filename or pipes it to the Unix command command.
\x
Toggles expanded table formatting mode. As such it is equivalent to \pset expanded.
\z [ pattern ]
Lists tables, views and sequences with their associated access privileges. If a pattern is specified, only tables, views and sequences whose names match the pattern are listed.
This is an alias for \dp ("display privileges").
\! [ command ]
Escapes to a separate Unix shell or executes the Unix command command. The arguments are not further interpreted; the shell will see them as-is.
\?
Shows help information about the backslash commands.
Patterns
The various \d commands accept a pattern parameter to specify the object name(s) to be displayed. In the simplest case, a pattern is just the exact name of the object. The characters within a pattern are normally folded to lower case, just as in SQL names; for example, \dt FOO will display the table named foo. As in SQL names, placing double quotes around a pattern stops folding to lower case. Should you need to include an actual double quote character in a pattern, write it as a pair of double quotes within a double-quote sequence; again this is in accord with the rules for SQL quoted identifiers. For example, \dt "FOO""BAR" will display the table named FOO"BAR (not foo"bar). Unlike the normal rules for SQL names, you can put double quotes around just part of a pattern, for instance \dt FOO"FOO"BAR will display the table named fooFOObar.
Whenever the pattern parameter is omitted completely, the \d commands display all objects that are visible in the current schema search path — this is equivalent to using * as the pattern. (An object is said to be visible if its containing schema is in the search path and no object of the same kind and name appears earlier in the search path. This is equivalent to the statement that the object can be referenced by name without explicit schema qualification.) To see all objects in the database regardless of visibility, use *.* as the pattern.
Within a pattern, * matches any sequence of characters (including no characters) and ? matches any single character. (This notation is comparable to Unix shell file name patterns.) For example, \dt int* displays tables whose names begin with int. But within double quotes, * and ? lose these special meanings and are just matched literally.
A pattern that contains a dot (.) is interpreted as a schema name pattern followed by an object name pattern. For example, \dt foo*.*bar* displays all tables whose table name includes bar that are in schemas whose schema name starts with foo. When no dot appears, then the pattern matches only objects that are visible in the current schema search path. Again, a dot within double quotes loses its special meaning and is matched literally.
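For example (the schema and table names here are only illustrative):
testdb=> \dt emp*
testdb=> \dt public.*
testdb=> \dt "FOO"
The first command lists the visible tables whose names begin with emp, the second lists every table in the schema public, and the third lists only the table named exactly FOO.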
Advanced users can use regular-expression notations such as character classes, for example [0-9] to match any digit. All regular expression special characters work as specified in Section 9.7.3, except for . which is taken as a separator as mentioned above, * which is translated to the regular-expression notation .*, ? which is translated to ., and $ which is matched literally. You can emulate these pattern characters at need by writing ? for ., (R+|) for R*, or (R|) for R?. $ is not needed as a regular-expression character since the pattern must match the whole name, unlike the usual interpretation of regular expressions (in other words, $ is automatically appended to your pattern). Write * at the beginning and/or end if you don't wish the pattern to be anchored. Note that within double quotes, all regular expression special characters lose their special meanings and are matched literally. Also, the regular expression special characters are matched literally in operator name patterns (i.e., the argument of \do).
Advanced Features
Variables
psql provides variable substitution features similar to common Unix command shells. Variables are simply name/value pairs, where the value can be any string of any length. To set variables, use the psql meta-command \set:
testdb=> \set foo bar
sets the variable foo to the value bar. To retrieve the content of the variable, precede the name with a colon and use it as the argument of any slash command:
testdb=> \echo :foo
bar
Note: The arguments of \set are subject to the same substitution rules as with other commands. Thus you can construct interesting references such as \set :foo 'something' and get "soft links" or "variable variables" of Perl or PHP fame, respectively. Unfortunately (or fortunately?), there is no way to do anything useful with these constructs. On the other hand, \set bar :foo is a perfectly valid way to copy a variable.
If you call \set without a second argument, the variable is set, with an empty string as value. To unset (or delete) a variable, use the command \unset.
psql's internal variable names can consist of letters, numbers, and underscores in any order and any number of them. A number of these variables are treated specially by psql. They indicate certain option settings that can be changed at run time by altering the value of the variable or that represent some state of the application. Although you can use these variables for any other purpose, this is not recommended, as the program behavior might grow really strange really quickly. By convention, all specially treated variables consist of all upper-case letters (and possibly numbers and underscores). To ensure maximum compatibility in the future, avoid using such variable names for your own purposes. A list of all specially treated variables follows.
AUTOCOMMIT
When on (the default), each SQL command is automatically committed upon successful completion. To postpone commit in this mode, you must enter a BEGIN or START TRANSACTION SQL command. When off or unset, SQL commands are not committed until you explicitly issue COMMIT or END. The autocommit-off mode works by issuing an implicit BEGIN for you, just before any command that is not already in a transaction block and is not itself a BEGIN or other transaction-control command, nor a command that cannot be executed inside a transaction block (such as VACUUM).
Note: In autocommit-off mode, you must explicitly abandon any failed transaction by entering ABORT or ROLLBACK. Also keep in mind that if you exit the session without committing, your work will be lost.
Note: The autocommit-on mode is PostgreSQL's traditional behavior, but autocommit-off is closer to the SQL spec. If you prefer autocommit-off, you might wish to set it in the system-wide psqlrc file or your ~/.psqlrc file.
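A minimal sketch of working in autocommit-off mode (the table and values are taken from the examples later in this page):
testdb=> \set AUTOCOMMIT off
testdb=> UPDATE my_table SET second = 'uno' WHERE first = 1;
testdb=> COMMIT;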
DBNAME
The name of the database you are currently connected to. This is set every time you connect to a database (including program start-up), but can be unset.
ECHO
If set to all, all lines entered from the keyboard or from a script are written to the standard output before they are parsed or executed. To select this behavior on program start-up, use the switch -a. If set to queries, psql merely prints all queries as they are sent to the server. The switch for this is -e.
ECHO_HIDDEN
When this variable is set and a backslash command queries the database, the query is first shown. This way you can study the PostgreSQL internals and provide similar functionality in your own programs. (To select this behavior on program start-up, use the switch -E.) If you set the variable to the value noexec, the queries are just shown but are not actually sent to the server and executed.
ENCODING
The current client character set encoding.
FETCH_COUNT
If this variable is set to an integer value > 0, the results of SELECT queries are fetched and displayed in groups of that many rows, rather than the default behavior of collecting the entire result set before display. Therefore only a limited amount of memory is used, regardless of the size of the result set. Settings of 100 to 1000 are commonly used when enabling this feature. Keep in mind that when using this feature, a query might fail after having already displayed some rows.
Tip: Although you can use any output format with this feature, the default aligned format tends to look bad because each group of FETCH_COUNT rows will be formatted separately, leading to varying column widths across the row groups. The other output formats work better.
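For example, to fetch results in batches of 500 rows using a format that copes well with varying widths (the table name is only illustrative):
testdb=> \set FETCH_COUNT 500
testdb=> \pset format unaligned
testdb=> SELECT * FROM some_large_table;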
HISTCONTROL
If this variable is set to ignorespace, lines which begin with a space are not entered into the history list. If set to a value of ignoredups, lines matching the previous history line are not entered. A value of ignoreboth combines the two options. If unset, or if set to any other value than those above, all lines read in interactive mode are saved on the history list.
Note: This feature was shamelessly plagiarized from Bash.
HISTFILE
The file name that will be used to store the history list. The default value is ~/.psql_history. For example, putting:
\set HISTFILE ~/.psql_history- :DBNAME
in ~/.psqlrc will cause psql to maintain a separate history for each database.
Note: This feature was shamelessly plagiarized from Bash.
HISTSIZE
The number of commands to store in the command history. The default value is 500.
Note: This feature was shamelessly plagiarized from Bash.
HOST
The database server host you are currently connected to. This is set every time you connect to a database (including program start-up), but can be unset.
IGNOREEOF
If unset, sending an EOF character (usually Control+D) to an interactive session of psql will terminate the application. If set to a numeric value, that many EOF characters are ignored before the application terminates. If the variable is set but has no numeric value, the default is 10.
Note: This feature was shamelessly plagiarized from Bash.
LASTOID
The value of the last affected OID, as returned from an INSERT or \lo_import command. This variable is only guaranteed to be valid until after the result of the next SQL command has been displayed.
ON_ERROR_ROLLBACK
When on, if a statement in a transaction block generates an error, the error is ignored and the transaction continues. When interactive, such errors are only ignored in interactive sessions, and not when reading script files. When off (the default), a statement in a transaction block that generates an error aborts the entire transaction. The on_error_rollback-on mode works by issuing an implicit SAVEPOINT for you, just before each command that is in a transaction block, and rolls back to the savepoint on error.
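For example, to ignore errors only when working interactively:
testdb=> \set ON_ERROR_ROLLBACK interactive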
ON_ERROR_STOP
By default, command processing continues after an error. When this variable is set, it will instead stop immediately. In interactive mode, psql will return to the command prompt; otherwise, psql will exit, returning error code 3 to distinguish this case from fatal error conditions, which are reported using error code 1. In either case, any currently running scripts (the top-level script, if any, and any other scripts which it may have invoked) will be terminated immediately. If the top-level command string contained multiple SQL commands, processing will stop with the current command.
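For example, a script run from the shell can be made to stop at the first error like this (the script name is only illustrative):
$ psql -v ON_ERROR_STOP=1 -f migrate.sql testdb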
PORT
The database server port to which you are currently connected. This is set every time you connect to a database (including program start-up), but can be unset.
PROMPT1 PROMPT2 PROMPT3
These specify what the prompts psql issues should look like. See Prompting below.
QUIET
This variable is equivalent to the command line option -q. It is probably not too useful in interactive mode.
SINGLELINE
This variable is equivalent to the command line option -S.
SINGLESTEP
This variable is equivalent to the command line option -s.
USER
The database user you are currently connected as. This is set every time you connect to a database (including program start-up), but can be unset.
VERBOSITY
This variable can be set to the values default, verbose, or terse to control the verbosity of error reports.
SQL Interpolation
An additional useful feature of psql variables is that you can substitute ("interpolate") them into regular SQL statements. psql provides special facilities for ensuring that values used as SQL literals and identifiers are properly escaped. The syntax for interpolating a value without any special escaping is again to prepend the variable name with a colon (:):
testdb=> \set foo 'my_table'
testdb=> SELECT * FROM :foo;
would then query the table my_table. Note that this may be unsafe: the value of the variable is copied literally, so it can even contain unbalanced quotes or backslash commands. You must make sure that it makes sense where you put it.
When a value is to be used as an SQL literal or identifier, it is safest to arrange for it to be escaped. To escape the value of a variable as an SQL literal, write a colon followed by the variable name in single quotes. To escape the value of a variable as an SQL identifier, write a colon followed by the variable name in double quotes. The previous example would be more safely written this way:
testdb=> \set foo 'my_table' testdb=> SELECT * FROM :"foo";
Variable interpolation will not be performed into quoted SQL entities.
One possible use of this mechanism is to copy the contents of a file into a table column. First load the file into a variable and then proceed as above:
testdb=> \set content `cat my_file.txt`
testdb=> INSERT INTO my_table VALUES (:'content');
(Note that this still won't work if my_file.txt contains NUL bytes. psql does not support embedded NUL bytes in variable values.)
Since colons can legally appear in SQL commands, an apparent attempt at interpolation (such as :name, :'name', or :"name") is not changed unless the named variable is currently set. In any case, you can escape a colon with a backslash to protect it from substitution. (The colon syntax for variables is standard SQL for embedded query languages, such as ECPG. The colon syntax for array slices and type casts is a PostgreSQL extension, hence the conflict. The colon syntax for escaping a variable's value as an SQL literal or identifier is a psql extension.)
Prompting
The prompts psql issues can be customized to your preference. The three variables PROMPT1, PROMPT2, and PROMPT3 contain strings and special escape sequences that describe the appearance of the prompt. Prompt 1 is the normal prompt that is issued when psql requests a new command. Prompt 2 is issued when more input is expected during command input because the command was not terminated with a semicolon or a quote was not closed. Prompt 3 is issued when you run an SQL COPY command and you are expected to type in the row values on the terminal.
The value of the selected prompt variable is printed literally, except where a percent sign (%) is encountered. Depending on the next character, certain other text is substituted instead. Defined substitutions are:
%M
The full host name (with domain name) of the database server, or [local] if the connection is over a Unix domain socket, or [local:/dir/name], if the Unix domain socket is not at the compiled in default location.
%m
The host name of the database server, truncated at the first dot, or [local] if the connection is over a Unix domain socket.
%>
The port number at which the database server is listening.
%n
The database session user name. (The expansion of this value might change during a database session as the result of the command SET SESSION AUTHORIZATION.)
%/
The name of the current database.
%~
Like %/, but the output is ~ (tilde) if the database is your default database.
%#
If the session user is a database superuser, then a #, otherwise a >. (The expansion of this value might change during a database session as the result of the command SET SESSION AUTHORIZATION.)
%R
In prompt 1 normally =, but ^ if in single-line mode, and ! if the session is disconnected from the database (which can happen if \connect fails). In prompt 2 the sequence is replaced by -, *, a single quote, a double quote, or a dollar sign, depending on whether psql expects more input because the command wasn't terminated yet, because you are inside a /* ... */ comment, or because you are inside a quoted or dollar-escaped string. In prompt 3 the sequence doesn't produce anything.
%x
Transaction status: an empty string when not in a transaction block, or * when in a transaction block, or ! when in a failed transaction block, or ? when the transaction state is indeterminate (for example, because there is no connection).
%digits
The character with the indicated octal code is substituted.
%:name:
The value of the psql variable name. See the section Variables for details.
%`command`
The output of command, similar to ordinary "back-tick" substitution.
%[ ... %]
Prompts can contain terminal control characters which, for example, change the color, background, or style of the prompt text, or change the title of the terminal window. In order for the line editing features of Readline to work properly, these non-printing control characters must be designated as invisible by surrounding them with %[ and %]. Multiple pairs of these can occur within the prompt. For example:
testdb=> \set PROMPT1 '%[%033[1;33;40m%]%n@%/%R%[%033[0m%]%# '
results in a boldfaced (1;) yellow-on-black (33;40) prompt on VT100-compatible, color-capable terminals.
To insert a percent sign into your prompt, write %%. The default prompts are '%/%R%# ' for prompts 1 and 2, and '>> ' for prompt 3.
Note: This feature was shamelessly plagiarized from tcsh.
Command-Line Editing
psql supports the Readline library for convenient line editing and retrieval. The command history is automatically saved when psql exits and is reloaded when psql starts up. Tab-completion is also supported, although the completion logic makes no claim to be an SQL parser. If for some reason you do not like the tab completion, you can turn it off by putting this in a file named .inputrc in your home directory:
$if psql
set disable-completion on
$endif
(This is not a psql but a Readline feature. Read its documentation for further details.)
Environment
COLUMNS
If \pset columns is zero, controls the width for the wrapped format and width for determining if wide output requires the pager.
PAGER
If the query results do not fit on the screen, they are piped through this command. Typical values are more or less. The default is platform-dependent. The use of the pager can be disabled by using the \pset command.
PGDATABASE PGHOST PGPORT PGUSER
Default connection parameters (see Section 31.13).
PSQL_EDITOR EDITOR VISUAL
Editor used by the \e and \ef commands. The variables are examined in the order listed; the first that is set is used.
The built-in default editors are vi on Unix systems and notepad.exe on Windows systems.
PSQL_EDITOR_LINENUMBER_ARG
When \e or \ef is used with a line number argument, this variable specifies the command-line argument used to pass the starting line number to the user's editor. For editors such as Emacs or vi, this is a plus sign. Include a trailing space in the value of the variable if there needs to be space between the option name and the line number. Examples:
PSQL_EDITOR_LINENUMBER_ARG='+'
PSQL_EDITOR_LINENUMBER_ARG='--line '
The default is + on Unix systems (corresponding to the default editor vi, and useful for many other common editors); but there is no default on Windows systems.
SHELL
Command executed by the \! command.
TMPDIR
Directory for storing temporary files. The default is /tmp.
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Files
Unless it is passed an -X or -c option, psql attempts to read and execute commands from the system-wide psqlrc file and the user's ~/.psqlrc file before starting up. (On Windows, the user's startup file is named %APPDATA%\postgresql\psqlrc.conf.) See PREFIX/share/psqlrc.sample for information on setting up the system-wide file. It could be used to set up the client or the server to taste (using the \set and SET commands).
Both the system-wide psqlrc file and the user's ~/.psqlrc file can be made version-specific by appending a dash and the PostgreSQL release number, for example ~/.psqlrc-9.1.9. A matching version-specific file will be read in preference to a non-version-specific file.
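For example, a ~/.psqlrc file might contain a few personal defaults such as (all settings here are only illustrative):
\set HISTSIZE 2000
\pset null '(null)'
\timing on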
The command-line history is stored in the file ~/.psql_history, or %APPDATA%\postgresql\psql_history on Windows.
Notes
In an earlier life psql allowed the first argument of a single-letter backslash command to start directly after the command, without intervening whitespace. As of PostgreSQL 8.4 this is no longer allowed.
psql is only guaranteed to work smoothly with servers of the same version. That does not mean other combinations will fail outright, but subtle and not-so-subtle problems might come up. Backslash commands are particularly likely to fail if the server is of a newer version than psql itself. However, backslash commands of the \d family should work with servers of versions back to 7.4, though not necessarily with servers newer than psql itself.
Notes for Windows Users
psql is built as a "console application". Since the Windows console windows use a different encoding than the rest of the system, you must take special care when using 8-bit characters within psql. If psql detects a problematic console code page, it will warn you at startup. To change the console code page, two things are necessary:
Set the code page by entering cmd.exe /c chcp 1252. (1252 is a code page that is appropriate for German; replace it with your value.) If you are using Cygwin, you can put this command in /etc/profile.
Set the console font to Lucida Console, because the raster font does not work with the ANSI code page.
Examples
The first example shows how to spread a command over several lines of input. Notice the changing prompt:
testdb=> CREATE TABLE my_table (
testdb(> first integer not null default 0,
testdb(> second text)
testdb-> ;
CREATE TABLE
Now look at the table definition again:
testdb=> \d my_table
Table "my_table" Attribute | Type | Modifier
+---------+--------------------
first | integer | not null default 0 second | text |
Now we change the prompt to something more interesting:
testdb=> \set PROMPT1 '%n@%m %~%R%# '
peter@localhost testdb=>
Let's assume you have filled the table with data and want to take a look at it:
peter@localhost testdb=> SELECT * FROM my_table;
 first | second
-------+--------
     1 | one
     2 | two
     3 | three
     4 | four
(4 rows)
You can display tables in different ways by using the \pset command:
peter@localhost testdb=> \pset border 2
Border style is 2.
peter@localhost testdb=> SELECT * FROM my_table;
+-------+--------+
| first | second |
+-------+--------+
|     1 | one    |
|     2 | two    |
|     3 | three  |
|     4 | four   |
+-------+--------+
(4 rows)
peter@localhost testdb=> \pset border 0
Border style is 0.
peter@localhost testdb=> SELECT * FROM my_table;
first second
----- ------
    1 one
    2 two
    3 three
    4 four
(4 rows)
peter@localhost testdb=> \pset border 1
Border style is 1.
peter@localhost testdb=> \pset format unaligned
Output format is unaligned.
peter@localhost testdb=> \pset fieldsep ","
Field separator is ",".
peter@localhost testdb=> \pset tuples_only
Showing only tuples.
peter@localhost testdb=> SELECT second, first FROM my_table;
one,1
two,2
three,3
four,4
Alternatively, use the short commands:
peter@localhost testdb=> \a \t \x
Output format is aligned.
Tuples only is off.
Expanded display is on.
peter@localhost testdb=> SELECT * FROM my_table;
-[ RECORD 1 ]-
first  | 1
second | one
-[ RECORD 2 ]-
first  | 2
second | two
-[ RECORD 3 ]-
first  | 3
second | three
-[ RECORD 4 ]-
first  | 4
second | four
reindexdb — reindex a PostgreSQL database
Name
reindexdb -- reindex a PostgreSQL database
Synopsis
reindexdb [connection-option...] [--table | -t table ] [--index | -i index ] [dbname]
reindexdb [connection-option...] [--all | -a]
reindexdb [connection-option...] [--system | -s] [dbname]
Description
reindexdb is a utility for rebuilding indexes in a PostgreSQL database.
reindexdb is a wrapper around the SQL command REINDEX. There is no effective difference between reindexing databases via this utility and via other methods for accessing the server.
Options
reindexdb accepts the following command-line arguments:
-a --all
Reindex all databases.
[-d] dbname [--dbname=]dbname
Specifies the name of the database to be reindexed. If this is not specified and -a (or --all) is not used, the database name is read from the environment variable PGDATABASE. If that is not set, the user name specified for the connection is used.
-e --echo
Echo the commands that reindexdb generates and sends to the server.
-i index --index=index
Recreate index only.
-q --quiet
Do not display progress messages.
-s --system
Reindex database's system catalogs.
-t table --table=table
Reindex table only.
-V --version
Print the reindexdb version and exit.
-? --help
Show help about reindexdb command line arguments, and exit.
reindexdb also accepts the following command-line arguments for connection parameters:
-h host --host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-p port --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-U username --username=username
User name to connect as.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force reindexdb to prompt for a password before connecting to a database.
This option is never essential, since reindexdb will automatically prompt for a password if the server demands password authentication. However, reindexdb will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Environment
PGDATABASE PGHOST PGPORT PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
In case of difficulty, see REINDEX and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Notes
reindexdb might need to connect several times to the PostgreSQL server, asking for a password each time. It is convenient to have a ~/.pgpass file in such cases. See Section 31.14 for more information.
Examples
To reindex the database test:
$ reindexdb test
To reindex the table foo and the index bar in a database named abcd:
$ reindexdb --table foo --index bar abcd
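To reindex only the system catalogs of the database test (a further illustrative invocation using the --system option described above):
$ reindexdb --system test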
vacuumdb — garbage-collect and analyze a PostgreSQL database
Name
vacuumdb -- garbage-collect and analyze a PostgreSQL database
Synopsis
vacuumdb [connection-option...] [--full | -f] [--freeze | -F] [--verbose | -v] [--analyze | -z] [--analyze-only | -Z] [--table | -t table [( column [,...] )] ] [dbname]
vacuumdb [connection-option...] [--full | -f] [--freeze | -F] [--verbose | -v] [--analyze | -z] [--analyze-only | -Z] [--all | -a]
Description
vacuumdb is a utility for cleaning a PostgreSQL database. vacuumdb will also generate internal statistics used by the PostgreSQL query optimizer.
vacuumdb is a wrapper around the SQL command VACUUM. There is no effective difference between vacuuming and analyzing databases via this utility and via other methods for accessing the server.
Options
vacuumdb accepts the following command-line arguments:
-a --all
Vacuum all databases.
[-d] dbname [--dbname=]dbname
Specifies the name of the database to be cleaned or analyzed. If this is not specified and -a (or --all) is not used, the database name is read from the environment variable PGDATABASE. If that is not set, the user name specified for the connection is used.
-e --echo
Echo the commands that vacuumdb generates and sends to the server.
-f --full
Perform "full" vacuuming.
-F --freeze
Aggressively "freeze" tuples.
-q --quiet
Do not display progress messages.
-t table [ (column [,...]) ] --table=table [ (column [,...]) ]
Clean or analyze table only. Column names can be specified only in conjunction with the --analyze or --analyze-only options.
Tip: If you specify columns, you probably have to escape the parentheses from the shell. (See examples below.)
-v --verbose
Print detailed information during processing.
-V --version
Print the vacuumdb version and exit.
-z --analyze
Also calculate statistics for use by the optimizer.
-Z --analyze-only
Only calculate statistics for use by the optimizer (no vacuum).
-? --help
Show help about vacuumdb command line arguments, and exit.
vacuumdb also accepts the following command-line arguments for connection parameters:
-h host --host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-p port --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-U username --username=username
User name to connect as.
-w --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W --password
Force vacuumdb to prompt for a password before connecting to a database.
This option is never essential, since vacuumdb will automatically prompt for a password if the server demands password authentication. However, vacuumdb will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Environment
PGDATABASE PGHOST PGPORT PGUSER
Default connection parameters
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.13).
Diagnostics
In case of difficulty, see VACUUM and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
Notes
vacuumdb might need to connect several times to the PostgreSQL server, asking for a password each time. It is convenient to have a ~/.pgpass file in such cases. See Section 31.14 for more information.
Examples
To clean the database test:
$ vacuumdb test
To clean and analyze for the optimizer a database named bigdb:
$ vacuumdb --analyze bigdb
To clean a single table foo in a database named xyzzy, and analyze a single column bar of the table for the optimizer:
$ vacuumdb --analyze --verbose --table 'foo(bar)' xyzzy
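To perform a full vacuum of all databases, printing progress as it goes (an illustrative combination of the options described above):
$ vacuumdb --full --verbose --all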