Known Buildfarm Test Failures
Investigated test failures
001_emergency_vacuum.pl fails waiting for datfrozenxid to advance
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-05-19+06%3A33%3A34 - HEAD
(dodo is a slow armv7l machine)
# +++ tap install-check in src/test/modules/xid_wraparound +++
# poll_query_until timed out executing this query:
#
# SELECT NOT EXISTS (
# SELECT *
# FROM pg_database
# WHERE age(datfrozenxid) > current_setting('autovacuum_freeze_max_age')::int)
#
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 29 just after 1.
t/001_emergency_vacuum.pl .. Dubious, test returned 29 (wstat 7424, 0x1d00)
All 1 subtests passed
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-11%2011%3A30%3A18 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-12%2011%3A32%3A08 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2013%3A34%3A34 - HEAD
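The wait that times out above is a plain poll loop; below is a minimal sketch of the pattern, assuming the in-tree TAP framework from src/test/perl (node name and test body are illustrative, not the actual test). The give-up time is PostgreSQL::Test::Utils::timeout_default, which is read from the PG_TEST_TIMEOUT_DEFAULT environment variable, so raising that variable in a slow animal's environment is the usual first mitigation.

# Sketch only: mirrors the wait that times out on dodo.
use PostgreSQL::Test::Cluster;
use PostgreSQL::Test::Utils;
use Test::More;

my $node = PostgreSQL::Test::Cluster->new('main');
$node->init;
$node->start;

# poll_query_until() retries the query until it returns 't' (the default
# expected value) or timeout_default seconds elapse.
$node->poll_query_until('postgres', q{
	SELECT NOT EXISTS (
		SELECT * FROM pg_database
		WHERE age(datfrozenxid) > current_setting('autovacuum_freeze_max_age')::int)})
  or die "timed out waiting for datfrozenxid to advance";

$node->stop;
done_testing();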
027_stream_regress.pl fails waiting for the standby because of an incorrect CRC in WAL
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-05-02%2006%3A40%3A36 - HEAD
(dodo is an armv7l machine that uses SLICING_BY_8_CRC32C and has wal_consistency_checking enabled)
# poll_query_until timed out executing this query:
# SELECT '2/8E09BD70' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby_1', 'walreceiver')
# expecting this output:
# t
# last actual query output:
#
# with stderr:
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 29 just after 2.
[17:19:00] t/027_stream_regress.pl ............... Dubious, test returned 29 (wstat 7424, 0x1d00)
All 2 subtests passed
---
027_stream_regress_standby_1.log:
2024-05-02 17:08:18.579 ACST [3404:205] LOG: restartpoint starting: wal
2024-05-02 17:08:18.401 ACST [3406:7] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:08:18.579 ACST [3407:2] FATAL: terminating walreceiver process due to administrator command
2024-05-02 17:08:18.579 ACST [3406:8] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
...
2024-05-02 17:19:00.093 ACST [3406:2604] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:19:00.093 ACST [3406:2605] LOG: waiting for WAL to become available at 0/F1002000
2024-05-02 17:19:00.594 ACST [3406:2606] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:19:00.594 ACST [3406:2607] LOG: waiting for WAL to become available at 0/F1002000
2024-05-02 17:19:00.758 ACST [3403:4] LOG: received immediate shutdown request
2024-05-02 17:19:00.785 ACST [3403:5] LOG: database system is shut down
WAL record CRC calculated incorrectly because of underlying buffer modification
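For illustration, here is a toy Perl sketch of the suspected mechanism (not PostgreSQL code; it assumes the CPAN module Digest::CRC, and CRC-32 stands in for the CRC-32C actually used for WAL): if the buffer changes between checksum computation and verification, replay reports exactly the mismatch seen above.

use Digest::CRC qw(crc32);

my $record = "wal record payload";
my $crc    = crc32($record);       # checksum computed over the buffer...

substr($record, 0, 4, "oops");     # ...which is then modified underneath

print "incorrect resource manager data checksum\n"
	if crc32($record) != $crc;     # what the standby detects on replay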
020_archive_status.pl fails waiting for updated statistics because send() returned EAGAIN
(morepork is running on OpenBSD 6.9)
# poll_query_until timed out executing this query:
# SELECT archived_count FROM pg_stat_archiver
# expecting this output:
# 1
# last actual query output:
# 0
# with stderr:
# Looks like your test exited with 29 just after 4.
[23:01:41] t/020_archive_status.pl .............. Dubious, test returned 29 (wstat 7424, 0x1d00)
Failed 12/16 subtests
---
020_archive_status_master.log:
2024-04-30 22:57:27.931 CEST [83115:1] LOG: archive command failed with exit code 1
2024-04-30 22:57:27.931 CEST [83115:2] DETAIL: The failed archive command was: cp "pg_wal/000000010000000000000001_does_not_exist" "000000010000000000000001_does_not_exist"
...
2024-04-30 22:57:28.070 CEST [47962:2] [unknown] LOG: connection authorized: user=pgbf database=postgres application_name=020_archive_status.pl
2024-04-30 22:57:28.072 CEST [47962:3] 020_archive_status.pl LOG: statement: SELECT archived_count FROM pg_stat_archiver
2024-04-30 22:57:28.073 CEST [83115:3] LOG: could not send to statistics collector: Resource temporarily unavailable
Non-systematic handling of EINTR/EAGAIN/EWOULDBLOCK
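The retry pattern at issue, sketched in plain Perl rather than the server's C code ($sock and $buf are hypothetical): transient errno values should lead to another attempt instead of a dropped statistics message.

use Errno qw(EINTR EAGAIN EWOULDBLOCK);

sub send_with_retry
{
	my ($sock, $buf) = @_;
	while (1)
	{
		my $sent = send($sock, $buf, 0);
		return $sent if defined $sent;
		# Transient conditions: try again (a production version would
		# wait for writability rather than spin).
		next if $! == EINTR || $! == EAGAIN || $! == EWOULDBLOCK;
		die "could not send to statistics collector: $!\n";
	}
}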
031_recovery_conflict.pl fails to detect an expected lock acquisition
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-03-18+23%3A43%3A00 - HEAD
[23:48:52.521](9.831s) ok 13 - startup deadlock: cursor holding conflicting pin, also waiting for lock, established
[23:55:13.749](381.228s) # poll_query_until timed out executing this query:
#
# SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;
#
# expecting this output:
# waiting
# last actual query output:
#
# with stderr:
[23:55:13.763](0.013s) not ok 14 - startup deadlock: lock acquisition is waiting
[23:55:13.763](0.001s) # Failed test 'startup deadlock: lock acquisition is waiting'
# at /home/bf/bf-build/adder/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 261.
Waiting for replication conn standby's replay_lsn to pass 0/3450000 on primary
done
---
031_recovery_conflict_standby.log
2024-03-18 23:48:52.526 UTC [3138907][client backend][1/2:0] LOG: statement: SELECT * FROM test_recovery_conflict_table2;
2024-03-18 23:48:52.690 UTC [3139905][not initialized][:0] LOG: connection received: host=[local]
2024-03-18 23:48:52.692 UTC [3139905][client backend][2/1:0] LOG: connection authenticated: user="bf" method=trust (/home/bf/bf-build/adder/HEAD/pgsql.build/testrun/recovery/031_recovery_conflict/data/t_031_recovery_conflict_standby_data/pgdata/pg_hba.conf:117)
2024-03-18 23:48:52.692 UTC [3139905][client backend][2/1:0] LOG: connection authorized: user=bf database=postgres application_name=031_recovery_conflict.pl
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] LOG: recovery still waiting after 10.099 ms: recovery conflict on buffer pin
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] CONTEXT: WAL redo at 0/342CCC0 for Heap2/PRUNE: ...
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] ERROR: canceling statement due to conflict with recovery at character 15
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] DETAIL: User transaction caused buffer deadlock with recovery.
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] STATEMENT: SELECT * FROM test_recovery_conflict_table2;
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] LOG: recovery finished waiting after 10.633 ms: recovery conflict on buffer pin
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] CONTEXT: WAL redo at 0/342CCC0 for Heap2/PRUNE: ...
2024-03-18 23:48:53.769 UTC [3139905][client backend][2/2:0] LOG: statement: SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;
Test 031_recovery_conflict.pl is not immune to autovacuum
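The usual hardening for tests like this is to keep autovacuum out of the picture entirely; a sketch using the standard TAP framework (node name illustrative):

use PostgreSQL::Test::Cluster;

my $node_primary = PostgreSQL::Test::Cluster->new('primary');
$node_primary->init(allows_streaming => 1);
# An autovacuum worker can take or block the very relation lock the test
# polls pg_locks for; disabling autovacuum makes the expected state
# deterministic.
$node_primary->append_conf('postgresql.conf', 'autovacuum = off');
$node_primary->start;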
031_recovery_conflict.pl fails when a conflict is counted twice
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-05-15+23%3A03%3A30 - HEAD
(olingo builds postgres with -O1 and address sanitizer)
[23:12:02.127](0.166s) not ok 6 - snapshot conflict: stats show conflict on standby
[23:12:02.130](0.003s) # Failed test 'snapshot conflict: stats show conflict on standby'
# at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 332.
[23:12:02.130](0.000s) # got: '2'
# expected: '1'
...
[23:12:06.848](1.291s) not ok 17 - 5 recovery conflicts shown in pg_stat_database
[23:12:06.887](0.040s) # Failed test '5 recovery conflicts shown in pg_stat_database'
# at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 286.
[23:12:06.887](0.000s) # got: '6'
# expected: '5'
Waiting for replication conn standby's replay_lsn to pass 0/3459160 on primary
done
---
031_recovery_conflict_standby.log:
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] DETAIL: User query might have needed to see row versions that must be removed.
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] LOG: could not send data to client: Broken pipe
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] DETAIL: User query might have needed to see row versions that must be removed.
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Test 031_recovery_conflict fails when a conflict counted twice
001_rep_changes.pl fails due to publisher stuck on shutdown
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-05-16%2014%3A22%3A38 - HEAD
[14:33:02.374](0.333s) ok 23 - update works with dropped subscriber column
### Stopping node "publisher" using mode fast
# Running: pg_ctl -D /home/bf/bf-build/adder/HEAD/pgsql.build/testrun/subscription/001_rep_changes/data/t_001_rep_changes_publisher_data/pgdata -m fast stop
waiting for server to shut down.. ... ... ... .. failed
pg_ctl: server does not shut down
# pg_ctl stop failed: 256
# Postmaster PID for node "publisher" is 2222549
[14:39:04.375](362.001s) Bail out! pg_ctl stop failed
---
001_rep_changes_publisher.log
2024-05-16 14:33:02.907 UTC [2238704][client backend][4/22:0] LOG: statement: DELETE FROM tab_rep
2024-05-16 14:33:02.925 UTC [2238704][client backend][:0] LOG: disconnection: session time: 0:00:00.078 user=bf database=postgres host=[local]
2024-05-16 14:33:02.939 UTC [2222549][postmaster][:0] LOG: received fast shutdown request
2024-05-16 14:33:03.000 UTC [2222549][postmaster][:0] LOG: aborting any active transactions
2024-05-16 14:33:03.049 UTC [2222549][postmaster][:0] LOG: background worker "logical replication launcher" (PID 2223110) exited with exit code 1
2024-05-16 14:33:03.062 UTC [2222901][checkpointer][:0] LOG: shutting down
2024-05-16 14:39:04.377 UTC [2222549][postmaster][:0] LOG: received immediate shutdown request
2024-05-16 14:39:04.382 UTC [2222549][postmaster][:0] LOG: database system is shut down
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-04-24%2014%3A38%3A35 - HEAD
035_standby_logical_decoding.pl also fails on restart of the standby (which acts as a publisher in that test):
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-04-17%2014%3A21%3A00 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-04-06%2016%3A28%3A38 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-11%2009%3A54%3A09 - HEAD
001_rep_changes.pl fails due to publisher stuck on shutdown
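When chasing this kind of hang, one hypothetical diagnostic step (not part of the test) is to capture what is still running just before the stop, so a stuck fast shutdown leaves evidence in the TAP log:

use Test::More;

# $node_publisher is the node the test is about to stop.
my $running = $node_publisher->safe_psql('postgres',
	'SELECT pid, backend_type, state, wait_event FROM pg_stat_activity');
note "activity before shutdown:\n$running";
$node_publisher->stop('fast');    # the step that times out above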
deadlock-parallel.spec fails due to timeout on jit-enabled animals
(canebrake builds with --with-llvm and runs with jit=1, jit_above_cost=0)
--- /home/bf/bf-build/canebrake/REL_15_STABLE/pgsql/src/test/isolation/expected/deadlock-parallel.out	2023-10-18 23:57:32.904930097 +0000
+++ /home/bf/bf-build/canebrake/REL_15_STABLE/pgsql.build/src/test/isolation/output_iso/results/deadlock-parallel.out	2024-04-03 00:02:29.290675485 +0000
@@ -46,23 +46,15 @@
 1
(1 row)

+isolationtester: canceling step d2a1 after 300 seconds
 step d2a1: <... completed>
- sum
------
-10000
-(1 row)
-
-lock_share
----------- 
- 1
-(1 row)
-
+ERROR: canceling statement due to user request
 step e1c: COMMIT;
-step d2c: COMMIT;
 step e2l: <... completed>
 lock_excl
---------
 1
(1 row)
+step d2c: COMMIT;
 step e2c: COMMIT;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2023-10-26%2001%3A00%3A24 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2024-03-04%2017%3A56%3A29 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2024-03-10%2023%3A32%3A33 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2024-03-05%2011%3A11%3A25 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2024-02-20%2003%3A50%3A49 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2024-06-30%2022%3A08%3A36 - REL_14_STABLE
(petalura, pogona, and xenodermus also build with --with-llvm and run with jit=1, jit_above_cost=0)
Recent 027_streaming_regress.pl hangs \ deadlock-parallel failures
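To approximate these animals' configuration locally, one option is to force JIT for every query via PGOPTIONS before running the isolation suite (a sketch; it requires a --with-llvm build):

# Every statement then goes through JIT compilation, which is what can
# stretch a normally quick deadlock-parallel permutation past
# isolationtester's 300-second step timeout.
$ENV{PGOPTIONS} = '-c jit=on -c jit_above_cost=0 -c jit_inline_above_cost=0 -c jit_optimize_above_cost=0';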
027_stream_regress.pl fails with timeout when waiting for catchup
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-22%2021%3A55%3A00 - HEAD
Similar failures at the same time:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-05-22%2021%3A55%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-05-22%2021%3A54%3A50 - HEAD
Waiting for replication conn standby_1's replay_lsn to pass 0/15218DE8 on primary
[22:07:20.266](292.694s) # poll_query_until timed out executing this query:
# SELECT '0/15218DE8' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby_1', 'walreceiver')
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
timed out waiting for catchup at /home/bf/bf-build/skink-master/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 103.
---
027_stream_regress_standby_1.log
2024-05-22 21:57:39.199 UTC [598721][postmaster][:0] LOG: starting PostgreSQL 17beta1 on x86_64-linux, compiled by gcc-13.2.0, 64-bit
...
2024-05-22 22:05:07.888 UTC [599624][checkpointer][:0] LOG: restartpoint starting: time
2024-05-22 22:05:21.622 UTC [599624][checkpointer][:0] LOG: restartpoint complete: wrote 31 buffers (24.2%); 0 WAL file(s) added, 0 removed, 4 recycled; write=11.770 s, sync=0.890 s, total=13.735 s; sync files=423, longest=0.111 s, average=0.003 s; distance=65698 kB, estimate=67034 kB; lsn=0/126ACA48, redo lsn=0/1202D1B8
2024-05-22 22:05:21.622 UTC [599624][checkpointer][:0] LOG: recovery restart point at 0/1202D1B8
2024-05-22 22:05:21.622 UTC [599624][checkpointer][:0] DETAIL: Last completed transaction was at log time 2024-05-22 22:01:29.448409+00.
2024-05-22 22:07:20.336 UTC [601831][walreceiver][:0] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
2024-05-22 22:07:20.908 UTC [598721][postmaster][:0] LOG: received immediate shutdown request
2024-05-22 22:07:21.251 UTC [598721][postmaster][:0] LOG: database system is shut down
Other occurrences after 2024-04:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-05-07%2018%3A59%3A24 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-05-07%2021%3A04%3A07 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-19%2010%3A18%3A35 - REL_15_STABLE (dodo is a slow armv7l machine)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-21%2018%3A31%3A11 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2016%3A50%3A59 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-06-21%2007%3A38%3A10 - HEAD
Recent 027_streaming_regress.pl hangs \ Test concurrency reducing
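The failing wait corresponds to the framework's wait_for_catchup; a sketch of the equivalent call (node handle hypothetical):

# Polls pg_stat_replication on the primary until standby_1's replay_lsn
# passes the primary's current write position, i.e. the same condition as
# the timed-out query above.
$node_primary->wait_for_catchup('standby_1', 'replay',
	$node_primary->lsn('write'));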
027_stream_regress.pl fails because some index-only scan (IOS) plans for queries in create_index.sql changed
# Failed test 'regression tests pass'
# at t/027_stream_regress.pl line 92.
# got: '256'
# expected: '0'
# Looks like you failed 1 test of 6.
[07:07:42] t/027_stream_regress.pl .............. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/6 subtests
---
regress_log_027_stream_regress:
...
not ok 66 + create_index 27509 ms
...
----
diff -U3 /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/regress/expected/create_index.out /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/recovery/tmp_check/results/create_index.out
--- /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/regress/expected/create_index.out	2023-07-08 15:26:29.000000000 +0000
+++ /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/recovery/tmp_check/results/create_index.out	2024-03-17 06:59:01.000000000 +0000
@@ -1916,11 +1916,15 @@
 SELECT unique1 FROM tenk1
 WHERE unique1 IN (1,42,7) ORDER BY unique1;
- QUERY PLAN
-------------------------------------------------------- 
- Index Only Scan using tenk1_unique1 on tenk1
-   Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
-(2 rows)
+ QUERY PLAN
+-------------------------------------------------------------------
+ Sort
+   Sort Key: unique1
+   ->  Bitmap Heap Scan on tenk1
+         Recheck Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
+         ->  Bitmap Index Scan on tenk1_unique1
+               Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
+(6 rows)
 
 SELECT unique1 FROM tenk1
 WHERE unique1 IN (1,42,7)
@@ -1936,12 +1940,13 @@
 SELECT thousand, tenthous FROM tenk1
 WHERE thousand < 2 AND tenthous IN (1001,3000)
 ORDER BY thousand;
- QUERY PLAN
------------------------------------------------------- 
- Index Only Scan using tenk1_thous_tenthous on tenk1
-   Index Cond: (thousand < 2)
-   Filter: (tenthous = ANY ('{1001,3000}'::integer[]))
-(3 rows)
+ QUERY PLAN
+--------------------------------------------------------------------------------------
+ Sort
+   Sort Key: thousand
+   ->  Index Only Scan using tenk1_thous_tenthous on tenk1
+         Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))
+(4 rows)
 
 SELECT thousand, tenthous FROM tenk1
 WHERE thousand < 2 AND tenthous IN (1001,3000)
002_pg_upgrade.pl also fails because some index-only scan plans for queries in create_index.sql changed
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-01-02+07%3A09%3A09 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2023-11-15+06%3A16%3A15 - HEAD
To what extent should tests rely on VACUUM ANALYZE? \ create_index failures
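The plan flips because an index-only scan only looks cheap when most heap pages are all-visible, and that in turn depends on a VACUUM having run and completed beforehand; a sketch of the state one would inspect (node handle hypothetical):

use Test::More;

# If relallvisible is far below relpages, the planner prices heap fetches
# into the index-only scan and prefers the bitmap plan from the diff above.
my $vis = $node->safe_psql('regression',
	q{SELECT relallvisible, relpages FROM pg_class WHERE relname = 'tenk1'});
note "tenk1 visibility: $vis";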
regress-running/regress fails on skink due to timeout
(skink is a Valgrind animal)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-04%2002%3A44%3A19 - HEAD
1/1 postgresql:regress-running / regress-running/regress TIMEOUT 3000.06s killed by signal 15 SIGTERM
---
inst/logfile ends with:
2024-06-04 03:39:24.664 UTC [3905755][client backend][5/1787:16793] ERROR: column "c2" of relation "test_add_column" already exists
2024-06-04 03:39:24.664 UTC [3905755][client backend][5/1787:16793] STATEMENT: ALTER TABLE test_add_column ADD COLUMN c2 integer, -- fail because c2 already exists
	ADD COLUMN c3 integer primary key;
2024-06-04 03:39:30.815 UTC [3905755][client backend][5/0:0] LOG: could not send data to client: Broken pipe
2024-06-04 03:39:30.816 UTC [3905755][client backend][5/0:0] FATAL: connection to client lost
Other occurrences after 2024-04-01:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-04-06%2021%3A59%3A00 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-04-25%2005%3A34%3A50 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-02%2021%3A38%3A23 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-06%2014%3A01%3A29 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-22%2022%3A25%3A15 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-24%2002%3A22%3A26 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-04%2022%3A04%3A09 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-05%2023%3A50%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-11%2021%3A59%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-24%2014%3A00%3A33 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-27%2000%3A46%3A49 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-27%2006%3A20%3A44 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-27%2019%3A09%3A56 - HEAD
Recent 027_streaming_regress.pl hangs \ meson TIMEOUT on skink
ssl tests (001_ssltests.pl, 002_scram.pl, 003_sslinfo.pl) fail due to TCP port conflict
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-06-04+11%3A20%3A07 - HEAD
287/295 postgresql:ssl / ssl/001_ssltests ERROR 6.18s (exit status 255 or signal 127 SIGinvalid)
---
001_ssltests_primary.log
2024-06-04 11:30:40.227 UTC [3373644][postmaster][:0] LOG: starting PostgreSQL 17beta1 on x86_64-linux, compiled by clang-13.0.1-11, 64-bit
2024-06-04 11:30:40.231 UTC [3373644][postmaster][:0] LOG: listening on Unix socket "/tmp/tUmT8ItNQ2/.s.PGSQL.60362"
...
2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] LOG: starting PostgreSQL 17beta1 on x86_64-linux, compiled by clang-13.0.1-11, 64-bit
2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] LOG: could not bind IPv4 address "127.0.0.1": Address already in use
2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] HINT: Is another postmaster already running on port 60362? If not, wait a few seconds and retry.
2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] WARNING: could not create listen socket for "127.0.0.1"
2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] FATAL: could not create any TCP/IP sockets
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-12%2023%3A15%3A50 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-21%2000%3A35%3A23 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-03-27%2011%3A15%3A31 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-04-16%2016%3A10%3A45 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-08%2011%3A19%3A42 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-11%2022%3A23%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-03-17%2023%3A03%3A50 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-20%2009%3A21%3A30 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-20%2016%3A53%3A27 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-04-07%2012%3A25%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-04-08%2019%3A50%3A13 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-04-19%2021%3A24%3A30 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-04-22%2006%3A17%3A13 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-04-29%2023%3A27%3A15 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-30%2000%3A24%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-06-04%2011%3A20%3A07 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-06-13%2019%3A10%3A35 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-06-16%2017%3A55%3A34 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-06-17%2019%3A47%3A41 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-06-20%2007%3A09%3A26 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-06-25%2021%3A55%3A23 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-06-26%2021%3A45%3A06 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-06-30%2018%3A18%3A07 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-06-30%2022%3A58%3A10 - HEAD
ssl tests fail due to TCP port conflict
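The window is inherent in probe-then-release port selection; a toy sketch of the race using the core IO::Socket::INET module (port number illustrative):

use IO::Socket::INET;

# Bind to check that the candidate port is free...
my $probe = IO::Socket::INET->new(
	LocalAddr => '127.0.0.1',
	LocalPort => 60362,        # hypothetical candidate
	Proto     => 'tcp',
	Listen    => 1) or die "port busy: $!";
$probe->close;
# ...then release it.  Nothing stops a concurrently starting test from
# binding the same port right here, before our postmaster calls bind().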
035_standby_logical_decoding.pl fails due to missing activeslot invalidation
(drongo takes longer than 500 seconds to run the 035 test)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-06+12%3A36%3A11 - HEAD
40/287 postgresql:recovery / recovery/035_standby_logical_decoding ERROR 2193.29s (exit status 255 or signal 127 SIGinvalid)
---
regress_log_035_standby_logical_decoding
[13:55:13.725](34.411s) ok 25 - inactiveslot slot invalidation is logged with vacuum on pg_class
[13:55:13.727](0.002s) not ok 26 - activeslot slot invalidation is logged with vacuum on pg_class
[13:55:13.728](0.001s) # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'
# at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 229.
[14:27:42.995](1949.267s) # poll_query_until timed out executing this query:
# select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
[14:27:42.999](0.004s) not ok 27 - confl_active_logicalslot updated
[14:27:43.000](0.001s) # Failed test 'confl_active_logicalslot updated'
# at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 235.
Timed out waiting confl_active_logicalslot to be updated at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 235.
---
035_standby_logical_decoding_standby.log:
2024-06-06 13:55:07.715 UTC [9172:7] LOG: invalidating obsolete replication slot "row_removal_inactiveslot"
2024-06-06 13:55:07.715 UTC [9172:8] DETAIL: The slot conflicted with xid horizon 754.
2024-06-06 13:55:07.715 UTC [9172:9] CONTEXT: WAL redo at 0/4020A80 for Heap2/PRUNE_ON_ACCESS: snapshotConflictHorizon: 754, isCatalogRel: T, nplans: 0, nredirected: 0, ndead: 1, nunused: 0, dead: [48]; blkref #0: rel 1663/16384/2610, blk 0
2024-06-06 13:55:14.372 UTC [7532:1] [unknown] LOG: connection received: host=127.0.0.1 port=55328
2024-06-06 13:55:14.381 UTC [7532:2] [unknown] LOG: connection authenticated: identity="EC2AMAZ-P7KGG90\\pgrunner" method=sspi (C:/prog/bf/root/HEAD/pgsql.build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_hba.conf:2)
2024-06-06 13:55:14.381 UTC [7532:3] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=035_standby_logical_decoding.pl
2024-06-06 13:55:14.443 UTC [7532:4] 035_standby_logical_decoding.pl LOG: statement: select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'
2024-06-06 13:55:14.452 UTC [7532:5] 035_standby_logical_decoding.pl LOG: disconnection: session time: 0:00:00.090 user=pgrunner database=postgres host=127.0.0.1 port=55328
# (there is no `invalidating obsolete replication slot "row_removal_activeslot"` message)
...
2024-06-06 14:27:42.675 UTC [4032:4] 035_standby_logical_decoding.pl LOG: statement: select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'
2024-06-06 14:27:42.681 UTC [4032:5] 035_standby_logical_decoding.pl LOG: disconnection: session time: 0:00:00.080 user=pgrunner database=postgres host=127.0.0.1 port=58713
2024-06-06 14:27:43.095 UTC [7892:2] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
Isolation tests fail on hamerkop with "too many clients" errors
(hamerkop is a Windows animal that had gssapi enabled on v16 and older, as of 2024-06-08)
test skip-locked ... ok 530 ms
test skip-locked-2 ... ok 401 ms
test skip-locked-3 ... FAILED (test process exited with exit code 1) 364 ms
test skip-locked-4 ... FAILED (test process exited with exit code 1) 197 ms
---
diff -w -U3 c:/build-farm-local/buildroot/REL_13_STABLE/pgsql.build/src/test/isolation/expected/skip-locked-3.out c:/build-farm-local/buildroot/REL_13_STABLE/pgsql.build/src/test/isolation/results/skip-locked-3.out
--- c:/build-farm-local/buildroot/REL_13_STABLE/pgsql.build/src/test/isolation/expected/skip-locked-3.out	2024-06-07 23:17:47 +0900
+++ c:/build-farm-local/buildroot/REL_13_STABLE/pgsql.build/src/test/isolation/results/skip-locked-3.out	2024-06-08 00:01:27 +0900
@@ -1,25 +1,3 @@
 Parsed test spec with 3 sessions
-
-starting permutation: s1a s2a s3a s1b s2b s3b
...
+Connection 3 failed: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information: Credential cache is empty
+FATAL: sorry, too many clients already
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-08%2015%3A28%3A48 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-12%2014%3A25%3A09 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-25%2014%3A33%3A17 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-28%2014%3A24%3A51 - REL_13_STABLE
Why is citext/regress failing on hamerkop? \ "sorry, too many clients already" failures
xversion-upgrade-XXX fails due to pg_ctl timeout
REL9_5_STABLE-ctl4.log
waiting for server to shut down........................................................................................................................... failed
pg_ctl: server does not shut down
Test runs also fail at the stopdb-C-x stage:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-06-08+01%3A41%3A41 - HEAD
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-03-06%2023%3A42%3A23 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-04-02%2019%3A05%3A04 - REL_15_STABLE
The xversion-upgrade test fails to stop server
A partial fix: The xversion-upgrade test fails to stop server \ PGCTLTIMEOUT increased on crake
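The partial fix amounts to giving pg_ctl more patience: pg_ctl honours the PGCTLTIMEOUT environment variable, so the whole sketch is one line in the animal's environment (value illustrative):

$ENV{PGCTLTIMEOUT} = 360;    # seconds; pg_ctl's built-in default is 60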
subscriber tests fail due to an assertion failure in SnapBuildInitialSnapshot()
Bailout called. Further testing stopped: pg_ctl stop failed
t/031_column_list.pl ............... ok
---
031_column_list_publisher.log
2024-05-16 00:23:24.522 UTC [1882382][walsender][5/22:0] LOG: received replication command: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
2024-05-16 00:23:24.522 UTC [1882382][walsender][5/22:0] STATEMENT: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] LOG: logical decoding found consistent point at 0/164A088
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] DETAIL: There are no running transactions.
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] STATEMENT: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
TRAP: FailedAssertion("TransactionIdPrecedesOrEquals(safeXid, snap->xmin)", File: "/home/bf/bf-build/skink/REL_15_STABLE/pgsql.build/../pgsql/src/backend/replication/logical/snapbuild.c", Line: 614, PID: 756819)
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] ERROR: cannot use different column lists for table "public.test_mix_1" in different publications
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] CONTEXT: slot "sub1", output plugin "pgoutput", in the change callback, associated LSN 0/163B860
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] STATEMENT: START_REPLICATION SLOT "sub1" LOGICAL 0/0 (proto_version '3', publication_names '"pub_mix_1","pub_mix_2"')
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(ExceptionalCondition+0x92)[0x6bc2db]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(SnapBuildInitialSnapshot+0x1fd)[0x521e82]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x430bb1)[0x538bb1]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(exec_replication_command+0x3c9)[0x53ac9a]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(PostgresMain+0x748)[0x58f8f1]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3efabb)[0x4f7abb]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3f1bba)[0x4f9bba]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3f1dc8)[0x4f9dc8]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(PostmasterMain+0x1133)[0x4fb36b]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(main+0x210)[0x448be9]
/lib/x86_64-linux-gnu/libc.so.6(+0x27b8a)[0x4cd0b8a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x4cd0c45]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(_start+0x21)[0x1d2b71]
2024-05-09 07:11:55.588 UTC [747458][postmaster][:0] LOG: server process (PID 756819) was terminated by signal 6: Aborted
2024-05-09 07:11:55.588 UTC [747458][postmaster][:0] DETAIL: Failed process was running: CREATE_REPLICATION_SLOT "pg_16586_sync_16580_7366892877332646335" LOGICAL pgoutput (SNAPSHOT 'use')
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2024-02-09%2012%3A46%3A37 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-09%2003%3A48%3A10 - REL_15_STABLE
Assertion failure in SnapBuildInitialSnapshot()
Upgrade tests fail on Windows due to pg_upgrade_output.d/ not removed
2/242 postgresql:pg_upgrade / pg_upgrade/004_subscription ERROR 98.04s exit status 1
---
regress_log_004_subscription
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/PGSQL~1.BUI/TMP_IN~1/tools/nmsys64/home/pgrunner/bf/root/HEAD/inst/bin/vacuumdb --all --analyze-in-stages
Running this script will delete the old cluster's data files:
delete_old_cluster.bat
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log": Directory not empty
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667": Directory not empty
pg_upgrade: warning: could not stat file "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log/pg_upgrade_internal.log": No such file or directory
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log": Directory not empty
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667": Directory not empty
[06:09:33.510](34.360s) ok 8 - run of pg_upgrade for old instance when the subscription tables are in init/ready state
[06:09:33.510](0.000s) not ok 9 - pg_upgrade_output.d/ removed after successful pg_upgrade
[06:09:33.511](0.001s) # Failed test 'pg_upgrade_output.d/ removed after successful pg_upgrade'
# at C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/004_subscription.pl line 265.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-06-13%2011%3A03%3A07 - HEAD
pg_upgrade test failure \ the output directory remains after successful upgrade
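A common Windows-side workaround for directories in a delete-pending state, sketched with the core File::Path module (path illustrative; whatever fix pg_upgrade itself adopts may differ):

use File::Path qw(rmtree);
use Time::HiRes qw(usleep);

my $dir = 'pg_upgrade_output.d/20240613T060900.667';    # hypothetical
for (1 .. 10)
{
	last if !-d $dir;    # directory is gone, we are done
	rmtree($dir);
	usleep(100_000);     # give lingering handles 100 ms to close
}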
The postgres_fdw test fails due to an unexpected warning on canceling a statement
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-06-20+09%3A52%3A04 - HEAD
70/70 postgresql:postgres_fdw-running / postgres_fdw-running/regress ERROR 278.67s exit status 1
---
postgres_fdw-running/regress/regression.diffs
--- /home/bf/bf-build/olingo/HEAD/pgsql/contrib/postgres_fdw/expected/postgres_fdw.out	2024-06-07 10:43:46.591500366 +0000
+++ /home/bf/bf-build/olingo/HEAD/pgsql.build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out	2024-06-20 10:13:52.926459374 +0000
@@ -2775,6 +2775,7 @@
 SET LOCAL statement_timeout = '10ms';
 select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
 ERROR: canceling statement due to statement timeout
+WARNING: could not get result of cancel request due to timeout
 COMMIT;
 -- ====================================================================
 -- Check that userid to use when querying the remote table is correctly
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-04-02%2023%3A58%3A01 - HEAD
postgres_fdw-running/regress fails due to an unexpected warning
008_fsm_truncation failing on dodo in v14- due to slow fsync
### Starting node "standby"
# Running: pg_ctl -D /media/pi/250gb/proj/bf2/v17/buildroot/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_008_fsm_truncation_standby_data/pgdata -l /media/pi/250gb/proj/bf2/v17/buildroot/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/log/008_fsm_truncation_standby.log -o --cluster-name=standby start
waiting for server to start........................................................................................................................... stopped waiting
pg_ctl: server did not start in time
# pg_ctl start failed; logfile:
---
008_fsm_truncation_standby.log:
2024-06-19 21:27:30.843 ACST [13244:1] LOG: starting PostgreSQL 14.12 on armv7l-unknown-linux-gnueabihf, compiled by gcc (GCC) 15.0.0 20240617 (experimental), 32-bit
2024-06-19 21:27:31.768 ACST [13244:2] LOG: listening on Unix socket "/media/pi/250gb/proj/bf2/v17/buildroot/tmp/vLgcHgvc7O/.s.PGSQL.50013"
2024-06-19 21:27:35.055 ACST [13246:1] LOG: database system was interrupted; last known up at 2024-06-19 21:26:55 ACST
2024-06-19 21:29:38.320 ACST [13244:3] LOG: received immediate shutdown request
2024-06-19 21:29:42.130 ACST [13244:4] LOG: database system is shut down
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2018%3A30%3A10 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-19%2010%3A15%3A32 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2009%3A15%3A18 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-21%2018%3A30%3A34 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2017%3A10%3A12 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2009%3A15%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-21%2018%3A30%3A59 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2017%3A00%3A28 - REL_14_STABLE
recoveryCheck/008_fsm_truncation is failing on dodo in v14- (due to slow fsync?)
002_limits.pl also exited abnormally with status 29 just after test 2
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20+07%3A18%3A46 - HEAD
# +++ tap install-check in src/test/modules/xid_wraparound +++
t/001_emergency_vacuum.pl .. ok
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 29 just after 2.
t/002_limits.pl ............ Dubious, test returned 29 (wstat 7424, 0x1d00)
All 2 subtests passed
t/003_wraparounds.pl ....... ok

Test Summary Report
-------------------
t/002_limits.pl (Wstat: 7424 Tests: 2 Failed: 0)
Non-zero exit status: 29
Parse errors: No plan found in TAP output
Files=3, Tests=10, 4235 wallclock secs ( 0.10 usr 0.13 sys + 18.05 cusr 12.76 csys = 31.04 CPU)
Result: FAIL
Miscellaneous test failures in v14- on Windows due to "Permission denied" errors
============== shutting down postmaster ==============
pg_ctl: could not open PID file "C:/tools/nmsys64/home/pgrunner/bf/root/REL_14_STABLE/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid": Permission denied
stat() vs ERROR_DELETE_PENDING, round N + 1 \ pushing fix e2f0f8ed2 to v15+
002_pg_upgrade.pl fails on mereswine due to a backend being killed during execution of infinite_recurse.sql
(mereswine is an armv7 machine)
# Failed test 'regression tests pass'
# at t/002_pg_upgrade.pl line 160.
# got: '256'
# expected: '0'
# Failed test 'dump before running pg_upgrade'
# at t/002_pg_upgrade.pl line 208.
---
regress_log_002_pg_upgrade
errors ... ok 12770 ms
infinite_recurse ... FAILED (test process exited with exit code 2) 31232 ms
test sanity_check ... FAILED (test process exited with exit code 2) 370 ms
---
002_pg_upgrade_old_node.log
2024-06-26 02:49:06.742 PDT [29121:4] LOG: server process (PID 30908) was terminated by signal 9: Killed
2024-06-26 02:49:06.742 PDT [29121:5] DETAIL: Failed process was running: select infinite_recurse();
infinite_recurse hitting OOM condition on mereswine
040_pg_createsubscriber.pl fails when the flushed position lags behind during pg_sync_replication_slots()
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-06-28+04%3A42%3A48 - HEAD
163/295 postgresql:pg_basebackup / pg_basebackup/040_pg_createsubscriber ERROR 26.03s exit status 29
...
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 29 just after 23.
---
regress_log_040_pg_createsubscriber
[04:46:20.321](0.703s) ok 23 - standby contains unmet conditions on node S
### Restarting node "node_p"
# Running: pg_ctl -w -D /home/bf/bf-build/piculet/HEAD/pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/data/t_040_pg_createsubscriber_node_p_data/pgdata -l /home/bf/bf-build/piculet/HEAD/pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/log/040_pg_createsubscriber_node_p.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "node_p" is 415642
### Starting node "node_s"
# Running: pg_ctl -w -D /home/bf/bf-build/piculet/HEAD/pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/data/t_040_pg_createsubscriber_node_s_data/pgdata -l /home/bf/bf-build/piculet/HEAD/pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/log/040_pg_createsubscriber_node_s.log -o --cluster-name=node_s start
waiting for server to start.... done
server started
# Postmaster PID for node "node_s" is 416482
error running SQL: 'psql:<stdin>:1: ERROR: skipping slot synchronization as the received slot sync LSN 0/30047F0 for slot "failover_slot" is ahead of the standby position 0/3004708'
while running 'psql -XAtq -d port=51506 host=/tmp/pqWohdD5Qj dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'SELECT pg_sync_replication_slots()' at /home/bf/bf-build/piculet/HEAD/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm line 2126.
New instability of the 040_pg_createsubscriber test
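A hedged stabilization matching the error above: make sure the standby has flushed past the primary's position before invoking pg_sync_replication_slots() (sketch; node handles follow the names in the log):

# Wait until node_s has flushed everything node_p has written, so the
# received slot-sync LSN cannot be ahead of the standby's position.
$node_p->wait_for_catchup($node_s, 'flush', $node_p->lsn('insert'));
$node_s->safe_psql('postgres', 'SELECT pg_sync_replication_slots()');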
Fixed Test Failures
partition_split.sql contains queries producing unstable results
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jackdaw&dt=2024-05-24+15%3A58%3A17 - HEAD
--- /home/debian/17-jackdaw/buildroot/HEAD/pgsql.build/src/test/regress/expected/partition_split.out	2024-05-24 15:58:23.929113215 +0000
+++ /home/debian/17-jackdaw/buildroot/HEAD/pgsql.build/src/test/regress/results/partition_split.out	2024-05-24 16:05:58.286542479 +0000
@@ -637,15 +637,15 @@
 SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid = 'sales_feb2022'::regclass::oid;
 pg_get_constraintdef | conname | conkey
 ---------------------------------------------------------------------+---------------------------------+--------
- CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}
  FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id) | sales_range_salesperson_id_fkey | {1}
+ CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}
 (2 rows)
 
 SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid = 'sales_mar2022'::regclass::oid;
 pg_get_constraintdef | conname | conkey
 ---------------------------------------------------------------------+---------------------------------+--------
- CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}
  FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id) | sales_range_salesperson_id_fkey | {1}
+ CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}
 (2 rows)
 
 SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid = 'sales_apr2022'::regclass::oid;
Add SPLIT PARTITION/MERGE PARTITIONS commands \ the test unstable results
Provide deterministic order for catalog queries in partition_split.sql
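The nature of that fix, sketched as the stabilized query (node handle hypothetical): catalog scans have no inherent row order, so the test has to impose one.

my $constraints = $node->safe_psql('regression', q{
	SELECT pg_get_constraintdef(oid), conname, conkey
	FROM pg_constraint
	WHERE conrelid = 'sales_feb2022'::regclass
	ORDER BY conname;    -- the added ordering makes the output stable
});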
026_overwrite_contrecord.pl and 033_replay_tsp_drops.pl trigger Assert("ItemIdIsNormal(lpp)")
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-06-04+03%3A27%3A47 - HEAD
29/295 postgresql:recovery / recovery/026_overwrite_contrecord ERROR 39.55s exit status 32
---
026_overwrite_contrecord_standby.log
TRAP: failed Assert("ItemIdIsNormal(lpp)"), File: "../pgsql/src/backend/access/heap/heapam.c", Line: 1002, PID: 3740958
postgres: standby: bf postgres [local] startup(ExceptionalCondition+0x81)[0x56c60bf9]
postgres: standby: bf postgres [local] startup(+0xf776e)[0x5667276e]
postgres: standby: bf postgres [local] startup(heap_getnextslot+0x40)[0x56672ee1]
postgres: standby: bf postgres [local] startup(+0x11c218)[0x56697218]
postgres: standby: bf postgres [local] startup(systable_getnext+0xfa)[0x56697c1a]
postgres: standby: bf postgres [local] startup(+0x6d29c7)[0x56c4d9c7]
postgres: standby: bf postgres [local] startup(+0x6d372c)[0x56c4e72c]
postgres: standby: bf postgres [local] startup(+0x6d8288)[0x56c53288]
postgres: standby: bf postgres [local] startup(RelationCacheInitializePhase3+0x149)[0x56c52d71]
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-03+03%3A32%3A18 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-04-04+15%3A38%3A16 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-05-07+04%3A00%3A08 - HEAD
Assert in heapgettup_pagemode() fails due to underlying buffer change
Hot standby queries see transient all-zeros pages
040_pg_createsubscriber.pl fails due to error: could not obtain replication slot information
stderr:
# Failed test 'run pg_createsubscriber --dry-run on node S'
# at /home/bf/bf-build/culicidae/HEAD/pgsql/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl line 264.
# Looks like you failed 1 test of 31.
---
regress_log_040_pg_createsubscriber
[00:38:25.368](1.063s) ok 23 - standby contains unmet conditions on node S
...
# Running: pg_createsubscriber --verbose --dry-run ...
...
pg_createsubscriber: checking settings on publisher
2024-06-06 00:38:26.895 UTC [4001283][client backend][:0] LOG: disconnection: session time: 0:00:00.028 user=bf database=pg1 host=[local]
pg_createsubscriber: error: could not obtain replication slot information: got 0 rows, expected 1 row
...
pg_createsubscriber: server was stopped
[00:38:27.352](1.984s) not ok 24 - run pg_createsubscriber --dry-run on node S
[00:38:27.355](0.004s) # Failed test 'run pg_createsubscriber --dry-run on node S'
# at /home/bf/bf-build/culicidae/HEAD/pgsql/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl line 264.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-06-12%2000%3A58%3A32 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-06-17%2008%3A00%3A02 - HEAD
speed up a logical replica setup \ analysis for two failures
pg_createsubscriber: Remove replication slot check on primary
plperl.sql failing in v15- on caiman with a newer Perl version
(caiman is running on Fedora Rawhide with Perl 5.40)
--- /repos/build-farm-17/REL_13_STABLE/pgsql.build/src/pl/plperl/expected/plperl.out	2024-06-23 21:35:07.618704257 -0300
+++ /repos/build-farm-17/REL_13_STABLE/pgsql.build/src/pl/plperl/results/plperl.out	2024-06-23 21:59:11.425256754 -0300
@@ -706,7 +706,8 @@
 CONTEXT: PL/Perl anonymous code block
 -- check that eval is allowed and eval'd restricted ops are caught
 DO $$ eval q{chdir '.';}; warn "Caught: $@"; $$ LANGUAGE plperl;
-WARNING: Caught: 'chdir' trapped by operation mask at line 1.
+ERROR: 'eval hints' trapped by operation mask at line 1.
+CONTEXT: PL/Perl anonymous code block
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2001%3A34%3A23 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-25%2001%3A21%3A03 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-23%2003%3A51%3A53 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A59%3A30 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-25%2000%3A48%3A40 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-23%2003%3A50%3A53 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A35%3A06 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-23%2003%3A49%3A53 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A15%3A16 - REL_12_STABLE
Buildfarm animal caiman showing a plperl test issue with newer Perl versions
Remove redundant perl version checks
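A toy reproduction of the incompatibility using the core Safe module, on which plperl's trusted mode is built (the permitted opcode list here is illustrative, not plperl's exact mask): older Perls only trap the chdir inside the string eval, while Perl 5.40 trips the mask on the eval itself with 'eval hints'.

use Safe;

my $safe = Safe->new;
# Allow string eval on top of the default restrictive opset.
$safe->permit(qw(entereval caller));

# The same body the regression test runs in a DO block:
$safe->reval(q{ eval q{chdir '.';}; warn "Caught: $@"; });
print "trapped by operation mask: $@" if $@;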
inplace-inval.spec fails on prion and trilobite when checking relhasindex
(prion runs tests with -DCATCACHE_FORCE_RELEASE, trilobite with -DCLOBBER_CACHE_ALWAYS)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A38%3A03 - HEAD
not ok 40 - inplace-inval 141 ms
regression.diffs
---
--- /home/ec2-user/bf/root/HEAD/pgsql/src/test/isolation/expected/inplace-inval.out	2024-06-28 02:38:07.965133814 +0000
+++ /home/ec2-user/bf/root/HEAD/pgsql.build/src/test/isolation/output_iso/results/inplace-inval.out	2024-06-28 04:50:14.086521986 +0000
@@ -14,7 +14,7 @@
 relhasindex
 -----------
-f
+t
(1 row)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A33%3A04 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A34%3A03 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A35%3A03 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A36%3A03 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A37%3A03 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2004%3A53%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2006%3A03%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2012%3A53%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2014%3A13%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2015%3A33%3A03 - HEAD
Remove configuration-dependent output from new inplace-inval test.