From PostgreSQL wiki
Here is an example of a generic trigger function used for recording changes to tables into an audit log table. It will record the old and new records, the table affected, the user who made the change, and a timestamp for each change.
Please see the notes following the trigger source for information on time zones of timestamps, on trigger limitations, on how to record the value changes as json, etc.
A more powerful audit trigger, for PostgreSQL 9.1 and above only, is available at the Audit Trigger for PostgreSQL 9.1plus snippet page.
-- create a schema named "audit"
CREATE SCHEMA audit;
REVOKE CREATE ON SCHEMA audit FROM public;

CREATE TABLE audit.logged_actions (
    schema_name text NOT NULL,
    table_name text NOT NULL,
    user_name text,
    action_tstamp timestamp with time zone NOT NULL DEFAULT current_timestamp,
    action text NOT NULL CHECK (action IN ('I','D','U')),
    original_data text,
    new_data text,
    query text
) WITH (fillfactor=100);

REVOKE ALL ON audit.logged_actions FROM public;

-- You may wish to use different permissions; this lets anybody
-- see the full audit data. In Pg 9.0 and above you can use column
-- permissions for fine-grained control.
GRANT SELECT ON audit.logged_actions TO public;

CREATE INDEX logged_actions_schema_table_idx
ON audit.logged_actions(((schema_name||'.'||table_name)::text));

CREATE INDEX logged_actions_action_tstamp_idx
ON audit.logged_actions(action_tstamp);

CREATE INDEX logged_actions_action_idx
ON audit.logged_actions(action);

--
-- Now, define the actual trigger function:
--
CREATE OR REPLACE FUNCTION audit.if_modified_func() RETURNS trigger AS $body$
DECLARE
    v_old_data TEXT;
    v_new_data TEXT;
BEGIN
    /*  If this is actually for real auditing (where you need to log EVERY
        action), then you would need to use something like dblink or plperl
        that could log outside the transaction, regardless of whether the
        transaction committed or rolled back.
    */

    /* This dance with casting the NEW and OLD values to a ROW is not
       necessary in pg 9.0+ */

    IF (TG_OP = 'UPDATE') THEN
        v_old_data := ROW(OLD.*);
        v_new_data := ROW(NEW.*);
        INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,original_data,new_data,query)
        VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_old_data,v_new_data,current_query());
        RETURN NEW;
    ELSIF (TG_OP = 'DELETE') THEN
        v_old_data := ROW(OLD.*);
        INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,original_data,query)
        VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_old_data,current_query());
        RETURN OLD;
    ELSIF (TG_OP = 'INSERT') THEN
        v_new_data := ROW(NEW.*);
        INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,new_data,query)
        VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_new_data,current_query());
        RETURN NEW;
    ELSE
        RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - Other action occurred: %, at %',TG_OP,now();
        RETURN NULL;
    END IF;

EXCEPTION
    WHEN data_exception THEN
        RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [DATA EXCEPTION] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
        RETURN NULL;
    WHEN unique_violation THEN
        RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [UNIQUE] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
        RETURN NULL;
    WHEN OTHERS THEN
        RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [OTHER] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
        RETURN NULL;
END;
$body$
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = pg_catalog, audit;

--
-- To add this trigger to a table, use:
--
-- CREATE TRIGGER tablename_audit
-- AFTER INSERT OR UPDATE OR DELETE ON tablename
-- FOR EACH ROW EXECUTE PROCEDURE audit.if_modified_func();
--
Not for in-table auditing
This is not a trigger for setting "last updated" and "who updated" columns in your tables. That should be done by a separate BEFORE INSERT OR UPDATE OR DELETE ... FOR EACH ROW trigger. You might want to log a history on tables that don't have built-in audit time/who columns, or add time/who data to tables you don't want full audit logs of, so this separation makes sense. More importantly, setting time/who data requires a BEFORE trigger, but this audit logging trigger must run as an AFTER trigger to make sure it captures the final state of the row after other triggers' effects.
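Such a separate in-table trigger might look like the following sketch. The column names `last_updated` and `last_updated_by`, the function name `stamp_updated`, and the table name `tablename` are all assumptions for illustration; adapt them to your schema.

```sql
-- Sketch: a complementary BEFORE trigger for in-table "who/when" columns,
-- assuming the audited table has last_updated and last_updated_by columns.
CREATE OR REPLACE FUNCTION stamp_updated() RETURNS trigger AS $$
BEGIN
    NEW.last_updated := current_timestamp;  -- transaction start time
    NEW.last_updated_by := session_user;    -- the logged-in role
    RETURN NEW;  -- a BEFORE trigger must return the (modified) row
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tablename_stamp_updated
BEFORE INSERT OR UPDATE ON tablename
FOR EACH ROW EXECUTE PROCEDURE stamp_updated();
```

Because it runs BEFORE the row is written, it can modify NEW; the audit logging trigger above must instead run AFTER, so it sees the final row.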
There's a limit to what you can do with triggers for auditing in Pg. However, most things you can't audit with a trigger can still be traced in the system logs, and, more importantly, can and should be outside the permissions of normal users.
If your production app logs into the database with a role that lacks superuser, CREATEUSER, and CREATEDB rights, does not own the tables, and has only the minimum rights required GRANTed to it, you'll be fine.
Limitation: Cannot audit SELECT
This audit trigger does not capture SELECT activity. The only way to audit SELECT in PostgreSQL is via the system logs, as SELECT triggers are not supported and SELECT auditing wouldn't work without autonomous transactions in triggers to prevent loss of audit data on rollback.
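If you do need a record of SELECT activity, the server's statement logging can provide it. This is a sketch of one possible `postgresql.conf` setting, not part of the trigger above; the verbosity and prefix shown are assumptions, so consult the logging documentation before using them in production.

```sql
-- postgresql.conf sketch: log every statement, including SELECTs.
-- log_statement = 'all'
-- log_line_prefix = '%m %u %d '   -- timestamp, user name, database name
```

Logging 'all' statements can be very verbose on a busy server; the logs, unlike this audit table, are also outside the reach of ordinary database roles.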
Limitation: Cannot audit system tables
You can't use an audit trigger like this on system catalog tables, so you can't audit CREATE ROLE or CREATE TABLE etc with it.
Since a production app role should never have CREATEUSER or CREATEDB rights and should generally not have the CREATE right on any schema except temporary table schema, that won't be an issue in practice.
Limitation: Cannot audit DDL
PostgreSQL doesn't have hooks to fire triggers on DDL (ALTER TABLE statements, etc).
Since only the table owner or a superuser can execute DDL against a table, this won't be a problem for production audit logging.
Limitation: Can't reliably audit superuser or owner
This should be obvious, but seems to get overlooked: You can't use in-database auditing to securely track the actions of a superuser, the role that owns the audited table, or the role that owns the audit table. Audit logging will work but can be tampered with trivially by these users.
Your production app should run with an unprivileged role and it should not own any of the tables it works with. Create them using a different role and GRANT the production app role only the rights it needs.
Details on how timestamps work
The action_tstamp column defaults to current_timestamp, which is fixed at the start of the transaction, so every row logged by audit.if_modified_func() within a single transaction carries the same timestamp. If you want finer granularity, you can add a column recording statement_timestamp(), the time the current top-level statement from the client started, or even clock_timestamp(), the real wall-clock time at the instant the log record is inserted.
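A minimal sketch of such an extension, assuming you want both extra granularities (the column names here are illustrative, not part of the original schema):

```sql
-- Sketch: additional timestamp columns with different granularities.
ALTER TABLE audit.logged_actions
    ADD COLUMN statement_tstamp timestamp with time zone
        DEFAULT statement_timestamp(),   -- start of the top-level statement
    ADD COLUMN wallclock_tstamp timestamp with time zone
        DEFAULT clock_timestamp();       -- actual time of the log INSERT
```

Because both new columns have defaults, the trigger function itself does not need to change; the defaults fire on each INSERT into audit.logged_actions.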
Note that timestamp with time zone times are always stored in UTC and are displayed in local time by default. You can control display using AT TIME ZONE in queries, or with SET timezone = 'UTC' as a per-session GUC. See the Pg docs.
Auditing values as JSON
For PostgreSQL 9.2, or 9.1 with the fantastic json_91 addon, you can log the old and new values in the table as structured json instead of flat text, giving you much more power to query your audit history. Just change the types of v_old_data, v_new_data, original_data and new_data from TEXT to json, then replace ROW(OLD.*) and ROW(NEW.*) with row_to_json(OLD) and row_to_json(NEW) respectively.
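The changes amount to the following fragments (shown against the definitions above; this assumes a fresh install rather than an in-place migration, since existing flat-text rows won't cast to json):

```sql
-- In the audit.logged_actions definition, use json instead of text:
--     original_data json,
--     new_data json,

-- In the trigger function, declare the variables as json:
--     v_old_data json;
--     v_new_data json;

-- and build them with row_to_json() instead of a ROW() cast:
--     v_old_data := row_to_json(OLD);
--     v_new_data := row_to_json(NEW);
```

You can then query individual fields of the audited rows, e.g. `new_data->>'x'`, instead of parsing flat record text.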
Alternately, you can use the enhanced audit trigger for Pg 9.1 to audit hstore values.
Making the trigger more flexible
If you want to omit the full query text, or omit the row data, for changes to some tables but not others, that's not hard to do. PostgreSQL triggers can take parameters that are specified in CREATE TRIGGER. These are passed to the function each time that particular trigger instance from that CREATE TRIGGER statement fires.
For example, if you define a trigger mytrigger(log_query_text boolean, log_values boolean) then in a CREATE TRIGGER statement you can say EXECUTE PROCEDURE mytrigger(false, true) and those params will get passed to the trigger function invocation. The trigger can use IF or CASE statements to change its behaviour based on the parameters.
It's fairly trivial to extend the above trigger function so you can turn query text logging, values logging, etc on and off for different invocations.
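A sketch of the parameter handling: trigger arguments always arrive as text in the TG_ARGV[] array, so they must be cast back to boolean inside the function. The function name `if_modified_flexible` and the two-argument convention are assumptions for illustration, not part of the trigger above.

```sql
-- Sketch: a parameterized variant of the audit trigger function.
CREATE OR REPLACE FUNCTION audit.if_modified_flexible() RETURNS trigger AS $$
DECLARE
    log_query_text boolean := TG_ARGV[0]::boolean;  -- first CREATE TRIGGER arg
    log_values boolean := TG_ARGV[1]::boolean;      -- second CREATE TRIGGER arg
BEGIN
    IF log_values THEN
        NULL;  -- record ROW(OLD.*) / ROW(NEW.*) as in if_modified_func
    END IF;
    IF log_query_text THEN
        NULL;  -- record current_query() as in if_modified_func
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER ... FOR EACH ROW
END;
$$ LANGUAGE plpgsql;

-- Invocation: don't log query text, do log values, for this table only.
-- CREATE TRIGGER t_audit
-- AFTER INSERT OR UPDATE OR DELETE ON t
-- FOR EACH ROW EXECUTE PROCEDURE audit.if_modified_flexible('false', 'true');
```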
Demo / example code
-- Tested with a table named "t"
DROP TABLE IF EXISTS t;
CREATE TABLE t (x int NOT NULL PRIMARY KEY, y text);

-- needs to be applied to all tables that we want to monitor
-- this is a test trigger to show how we can audit all changes
-- to the relevant tables, including inserts
CREATE TRIGGER t_if_modified_trg
AFTER INSERT OR UPDATE OR DELETE ON t
FOR EACH ROW EXECUTE PROCEDURE audit.if_modified_func();

-- Some sample updates, deletes, and inserts to illustrate the points
SELECT * FROM t;
SELECT * FROM audit.logged_actions;

INSERT INTO t (x,y) VALUES (1,'asdf'),(2,'werwer'),(3,NULL);
SELECT * FROM t;
SELECT * FROM audit.logged_actions;

-- You may have noticed that the times output in the prior query are in your local time.
-- They're stored as UTC, but Pg is converting them for display according to the 'timezone' GUC.
SHOW timezone;
-- See?
SET timezone = 'UTC';
SELECT * FROM audit.logged_actions;
RESET timezone;

-- Another way to achieve the same effect:
SELECT *, action_tstamp AT TIME ZONE 'UTC' AS action_tstamp_utc
FROM audit.logged_actions;

UPDATE t SET y='eeeeee' WHERE x=2;
SELECT * FROM t;
SELECT * FROM audit.logged_actions;

UPDATE t SET y='yuyuyuy' WHERE x=3;
SELECT * FROM t;
SELECT * FROM audit.logged_actions;

DELETE FROM t WHERE x=1;
SELECT * FROM t;
SELECT * FROM audit.logged_actions;

-- should be a pk violation
UPDATE t SET x=4 WHERE x=2;
SELECT * FROM t;
SELECT * FROM audit.logged_actions;
Original code by bricklen, modified by ringerc to separate in-table auditing from audit logging