Known Buildfarm Test Failures
Investigated test failures
027_stream_regress.pl fails to wait for standby because of incorrect CRC in WAL
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-05-02%2006%3A40%3A36 - HEAD
(dodo is an armv7l machine that uses SLICING_BY_8_CRC32C and has wal_consistency_checking enabled)
# poll_query_until timed out executing this query:
# SELECT '2/8E09BD70' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby_1', 'walreceiver')
# expecting this output:
# t
# last actual query output:
#
# with stderr:
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 29 just after 2.
[17:19:00] t/027_stream_regress.pl ...............
Dubious, test returned 29 (wstat 7424, 0x1d00)
All 2 subtests passed
--- 027_stream_regress_standby_1.log:
2024-05-02 17:08:18.579 ACST [3404:205] LOG: restartpoint starting: wal
2024-05-02 17:08:18.401 ACST [3406:7] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:08:18.579 ACST [3407:2] FATAL: terminating walreceiver process due to administrator command
2024-05-02 17:08:18.579 ACST [3406:8] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
...
2024-05-02 17:19:00.093 ACST [3406:2604] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:19:00.093 ACST [3406:2605] LOG: waiting for WAL to become available at 0/F1002000
2024-05-02 17:19:00.594 ACST [3406:2606] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:19:00.594 ACST [3406:2607] LOG: waiting for WAL to become available at 0/F1002000
2024-05-02 17:19:00.758 ACST [3403:4] LOG: received immediate shutdown request
2024-05-02 17:19:00.785 ACST [3403:5] LOG: database system is shut down
WAL record CRC calculated incorrectly because of underlying buffer modification
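The diagnosed mechanism (per the thread referenced above) is that the buffer backing a WAL record was modified after the record's CRC had been computed, so the checksum stored in WAL no longer matches the record data and the standby rejects the record at replay. A toy, self-contained illustration of that failure mode is sketched below; it uses a generic bitwise CRC-32 rather than PostgreSQL's CRC-32C code, and every name in it is hypothetical:

/* Toy illustration (not PostgreSQL code): if the buffer a CRC was computed
 * over is modified before the data is written out, a reader that recomputes
 * the CRC over the written bytes sees a mismatch, which is what the standby
 * reports as "incorrect resource manager data checksum". */
#include <stdint.h>
#include <stdio.h>

/* Simple bitwise CRC-32 (reflected, polynomial 0xEDB88320); illustrative only. */
static uint32_t
crc32_buf(const unsigned char *buf, size_t len)
{
	uint32_t	crc = 0xFFFFFFFF;

	for (size_t i = 0; i < len; i++)
	{
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++)
			crc = (crc >> 1) ^ (0xEDB88320 & (0 - (crc & 1)));
	}
	return crc ^ 0xFFFFFFFF;
}

int
main(void)
{
	unsigned char record[64] = "some WAL record payload";
	uint32_t	stored_crc = crc32_buf(record, sizeof(record));	/* "writer" computes the CRC */

	record[10] ^= 0x01;			/* buffer changes underneath after the CRC was taken */

	/* "reader" (e.g. a standby replaying the record) recomputes and compares */
	if (crc32_buf(record, sizeof(record)) != stored_crc)
		printf("CRC mismatch: record would be rejected at replay\n");
	return 0;
}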
020_archive_status.pl fails to wait for updated statistics because send() returned EAGAIN
(morepork is running on OpenBSD 6.9)
# poll_query_until timed out executing this query:
# SELECT archived_count FROM pg_stat_archiver
# expecting this output:
# 1
# last actual query output:
# 0
# with stderr:
# Looks like your test exited with 29 just after 4.
[23:01:41] t/020_archive_status.pl ..............
Dubious, test returned 29 (wstat 7424, 0x1d00)
Failed 12/16 subtests
--- 020_archive_status_master.log:
2024-04-30 22:57:27.931 CEST [83115:1] LOG: archive command failed with exit code 1
2024-04-30 22:57:27.931 CEST [83115:2] DETAIL: The failed archive command was: cp "pg_wal/000000010000000000000001_does_not_exist" "000000010000000000000001_does_not_exist"
...
2024-04-30 22:57:28.070 CEST [47962:2] [unknown] LOG: connection authorized: user=pgbf database=postgres application_name=020_archive_status.pl
2024-04-30 22:57:28.072 CEST [47962:3] 020_archive_status.pl LOG: statement: SELECT archived_count FROM pg_stat_archiver
2024-04-30 22:57:28.073 CEST [83115:3] LOG: could not send to statistics collector: Resource temporarily unavailable
Non-systematic handling of EINTR/EAGAIN/EWOULDBLOCK
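The thread referenced above is about making this handling systematic: roughly, retry the call on EINTR and, for a non-blocking socket, wait until it becomes writable on EAGAIN/EWOULDBLOCK instead of treating those as hard errors. A minimal generic sketch of that pattern (send_with_retry is a hypothetical helper, not the actual statistics-collector code):

/* Hypothetical sketch: retry send() on EINTR and wait out EAGAIN/EWOULDBLOCK
 * on a non-blocking socket instead of treating them as hard errors. */
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

static ssize_t
send_with_retry(int sock, const void *buf, size_t len)
{
	for (;;)
	{
		ssize_t		rc = send(sock, buf, len, 0);

		if (rc >= 0)
			return rc;			/* success (possibly a partial send) */

		if (errno == EINTR)
			continue;			/* interrupted by a signal: just retry */

		if (errno == EAGAIN || errno == EWOULDBLOCK)
		{
			/* socket buffer is full: wait until it is writable, then retry */
			struct pollfd pfd = {.fd = sock, .events = POLLOUT};

			if (poll(&pfd, 1, -1) < 0 && errno != EINTR)
				return -1;
			continue;
		}

		return -1;				/* any other error is reported to the caller */
	}
}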
031_recovery_conflict.pl fails to detect an expected lock acquisition
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-03-18%2023%3A43%3A00 - HEAD
[23:48:52.521](9.831s) ok 13 - startup deadlock: cursor holding conflicting pin, also waiting for lock, established
[23:55:13.749](381.228s) # poll_query_until timed out executing this query:
#
# SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;
#
# expecting this output:
# waiting
# last actual query output:
#
# with stderr:
[23:55:13.763](0.013s) not ok 14 - startup deadlock: lock acquisition is waiting
[23:55:13.763](0.001s) # Failed test 'startup deadlock: lock acquisition is waiting'
# at /home/bf/bf-build/adder/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 261.
Waiting for replication conn standby's replay_lsn to pass 0/3450000 on primary
done
--- 031_recovery_conflict_standby.log
2024-03-18 23:48:52.526 UTC [3138907][client backend][1/2:0] LOG: statement: SELECT * FROM test_recovery_conflict_table2;
2024-03-18 23:48:52.690 UTC [3139905][not initialized][:0] LOG: connection received: host=[local]
2024-03-18 23:48:52.692 UTC [3139905][client backend][2/1:0] LOG: connection authenticated: user="bf" method=trust (/home/bf/bf-build/adder/HEAD/pgsql.build/testrun/recovery/031_recovery_conflict/data/t_031_recovery_conflict_standby_data/pgdata/pg_hba.conf:117)
2024-03-18 23:48:52.692 UTC [3139905][client backend][2/1:0] LOG: connection authorized: user=bf database=postgres application_name=031_recovery_conflict.pl
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] LOG: recovery still waiting after 10.099 ms: recovery conflict on buffer pin
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] CONTEXT: WAL redo at 0/342CCC0 for Heap2/PRUNE: ...
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] ERROR: canceling statement due to conflict with recovery at character 15
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] DETAIL: User transaction caused buffer deadlock with recovery.
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] STATEMENT: SELECT * FROM test_recovery_conflict_table2;
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] LOG: recovery finished waiting after 10.633 ms: recovery conflict on buffer pin
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] CONTEXT: WAL redo at 0/342CCC0 for Heap2/PRUNE: ...
2024-03-18 23:48:53.769 UTC [3139905][client backend][2/2:0] LOG: statement: SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-14%2016%3A39%3A49 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-26%2005%3A49%3A14 - master
Test 031_recovery_conflict.pl is not immune to autovacuum
031_recovery_conflict.pl fails when a conflict is counted twice
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-05-15%2023%3A03%3A30 - HEAD
(olingo builds postgres with -O1 and address sanitizer)
[23:12:02.127](0.166s) not ok 6 - snapshot conflict: stats show conflict on standby
[23:12:02.130](0.003s) # Failed test 'snapshot conflict: stats show conflict on standby'
# at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 332.
[23:12:02.130](0.000s) # got: '2'
# expected: '1'
...
[23:12:06.848](1.291s) not ok 17 - 5 recovery conflicts shown in pg_stat_database
[23:12:06.887](0.040s) # Failed test '5 recovery conflicts shown in pg_stat_database'
# at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 286.
[23:12:06.887](0.000s) # got: '6'
# expected: '5'
Waiting for replication conn standby's replay_lsn to pass 0/3459160 on primary
done
--- 031_recovery_conflict_standby.log:
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] DETAIL: User query might have needed to see row versions that must be removed.
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] LOG: could not send data to client: Broken pipe
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] DETAIL: User query might have needed to see row versions that must be removed.
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Test 031_recovery_conflict fails when a conflict counted twice
001_rep_changes.pl fails due to publisher stuck on shutdown
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-05-16%2014%3A22%3A38 - HEAD
[14:33:02.374](0.333s) ok 23 - update works with dropped subscriber column
### Stopping node "publisher" using mode fast
# Running: pg_ctl -D /home/bf/bf-build/adder/HEAD/pgsql.build/testrun/subscription/001_rep_changes/data/t_001_rep_changes_publisher_data/pgdata -m fast stop
waiting for server to shut down.. ... ... ... .. failed
pg_ctl: server does not shut down
# pg_ctl stop failed: 256
# Postmaster PID for node "publisher" is 2222549
[14:39:04.375](362.001s) Bail out! pg_ctl stop failed
--- 001_rep_changes_publisher.log
2024-05-16 14:33:02.907 UTC [2238704][client backend][4/22:0] LOG: statement: DELETE FROM tab_rep
2024-05-16 14:33:02.925 UTC [2238704][client backend][:0] LOG: disconnection: session time: 0:00:00.078 user=bf database=postgres host=[local]
2024-05-16 14:33:02.939 UTC [2222549][postmaster][:0] LOG: received fast shutdown request
2024-05-16 14:33:03.000 UTC [2222549][postmaster][:0] LOG: aborting any active transactions
2024-05-16 14:33:03.049 UTC [2222549][postmaster][:0] LOG: background worker "logical replication launcher" (PID 2223110) exited with exit code 1
2024-05-16 14:33:03.062 UTC [2222901][checkpointer][:0] LOG: shutting down
2024-05-16 14:39:04.377 UTC [2222549][postmaster][:0] LOG: received immediate shutdown request
2024-05-16 14:39:04.382 UTC [2222549][postmaster][:0] LOG: database system is shut down
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-04-24%2014%3A38%3A35 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-07-16%2022%3A45%3A10 - REL_17_STABLE
035_standby_logical_decoding.pl also fails on restart of the standby (which acts as the publisher in the test):
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-04-17%2014%3A21%3A00 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-04-06%2016%3A28%3A38 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-11%2009%3A54%3A09 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-09%2003%3A46%3A44 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-10-09%2009%3A54%3A31 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-11-21%2006%3A25%3A02 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-11-27%2016%3A54%3A24 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-12-18%2003%3A32%3A12 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-18%2003%3A34%3A04 - REL_17_STABLE
001_rep_changes.pl fails due to publisher stuck on shutdown
deadlock-parallel.spec fails due to timeout on jit-enabled animals
(canebrake is built --with-llvm and runs with jit=1, jit_above_cost=0)
--- /home/bf/bf-build/canebrake/REL_15_STABLE/pgsql/src/test/isolation/expected/deadlock-parallel.out 2023-10-18 23:57:32.904930097 +0000
+++ /home/bf/bf-build/canebrake/REL_15_STABLE/pgsql.build/src/test/isolation/output_iso/results/deadlock-parallel.out 2024-04-03 00:02:29.290675485 +0000
@@ -46,23 +46,15 @@
1
(1 row)
+isolationtester: canceling step d2a1 after 300 seconds
step d2a1: <... completed>
- sum
------
-10000
-(1 row)
-
-lock_share
-----------
- 1
-(1 row)
-
+ERROR: canceling statement due to user request
step e1c: COMMIT;
-step d2c: COMMIT;
step e2l: <... completed>
lock_excl
---------
1
(1 row)
+step d2c: COMMIT;
step e2c: COMMIT;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2023-10-26%2001%3A00%3A24 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2024-03-04%2017%3A56%3A29 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2024-03-10%2023%3A32%3A33 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2024-03-05%2011%3A11%3A25 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2024-02-20%2003%3A50%3A49 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2024-06-30%2022%3A08%3A36 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2024-07-26%2009%3A31%3A33 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=urutu&dt=2024-07-29%2018%3A14%3A41 - HEAD
(petalura, pogona, xenodermus, and urutu are also built --with-llvm and run with jit=1, jit_above_cost=0)
Recent 027_streaming_regress.pl hangs \ deadlock-parallel failures
027_stream_regress.pl fails with timeout when waiting for catchup
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-22%2021%3A55%3A00 - HEAD
Similar failures at the same time:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-05-22%2021%3A55%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-05-22%2021%3A54%3A50 - HEAD
Waiting for replication conn standby_1's replay_lsn to pass 0/15218DE8 on primary
[22:07:20.266](292.694s) # poll_query_until timed out executing this query:
# SELECT '0/15218DE8' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby_1', 'walreceiver')
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
timed out waiting for catchup at /home/bf/bf-build/skink-master/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 103.
--- 027_stream_regress_standby_1.log
2024-05-22 21:57:39.199 UTC [598721][postmaster][:0] LOG: starting PostgreSQL 17beta1 on x86_64-linux, compiled by gcc-13.2.0, 64-bit
...
2024-05-22 22:05:07.888 UTC [599624][checkpointer][:0] LOG: restartpoint starting: time
2024-05-22 22:05:21.622 UTC [599624][checkpointer][:0] LOG: restartpoint complete: wrote 31 buffers (24.2%); 0 WAL file(s) added, 0 removed, 4 recycled; write=11.770 s, sync=0.890 s, total=13.735 s; sync files=423, longest=0.111 s, average=0.003 s; distance=65698 kB, estimate=67034 kB; lsn=0/126ACA48, redo lsn=0/1202D1B8
2024-05-22 22:05:21.622 UTC [599624][checkpointer][:0] LOG: recovery restart point at 0/1202D1B8
2024-05-22 22:05:21.622 UTC [599624][checkpointer][:0] DETAIL: Last completed transaction was at log time 2024-05-22 22:01:29.448409+00.
2024-05-22 22:07:20.336 UTC [601831][walreceiver][:0] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
2024-05-22 22:07:20.908 UTC [598721][postmaster][:0] LOG: received immediate shutdown request
2024-05-22 22:07:21.251 UTC [598721][postmaster][:0] LOG: database system is shut down
Other occurrences after 2024-04:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-05-07%2018%3A59%3A24 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-05-07%2021%3A04%3A07 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-19%2010%3A18%3A35 - REL_15_STABLE (dodo is a slow armv7l machine)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-21%2018%3A31%3A11 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2016%3A50%3A59 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-06-21%2007%3A38%3A10 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-07-04%2011%3A36%3A32 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-07-04%2011%3A35%3A43 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-07-04%2011%3A36%3A19 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-07-04%2011%3A36%3A34 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-04%2011%3A36%3A44 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-06%2005%3A41%3A24 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-07-08%2009%3A48%3A23 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-07-09%2019%3A39%3A42 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-09%2019%3A39%3A45 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-07-09%2019%3A39%3A53 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-07-10%2014%3A04%3A30 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-10%2014%3A04%3A31 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-07-11%2017%3A21%3A37 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-15%2019%3A13%3A07 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-07-17%2008%3A48%3A32 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-07-17%2008%3A47%3A41 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-07-17%2008%3A48%3A25 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-22%2014%3A20%3A09 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-07-26%2007%3A44%3A27 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-26%2009%3A20%3A43 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-07-26%2022%3A25%3A21 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-07-26%2022%3A25%3A34 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-07-26%2022%3A25%3A50 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-29%2013%3A20%3A01 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-07-29%2013%3A19%3A14 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-29%2016%3A18%3A17 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-07-29%2016%3A18%3A15 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-07-30%2010%3A28%3A57 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-07-30%2010%3A28%3A56 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-31%2000%3A36%3A18 - REL_17_STABLE
Recent 027_streaming_regress.pl hangs \ Test concurrency reducing
027_stream_regress.pl fails on crake with timeout when waiting for catchup
150/263 postgresql:recovery / recovery/027_stream_regress ERROR 1246.17s exit status 29
--- regress_log_027_stream_regress
[11:24:44.119](225.205s) # poll_query_until timed out executing this query:
# SELECT '2/791D9828' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby_1', 'walreceiver')
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
timed out waiting for catchup at /home/andrew/bf/root/REL_16_STABLE/pgsql/src/test/recovery/t/027_stream_regress.pl line 100.
--- 027_stream_regress_standby_1.log
2024-07-17 11:24:13.363 EDT [2024-07-17 11:04:06 EDT 1365647:393] LOG: restartpoint starting: wal
2024-07-17 11:24:22.384 EDT [2024-07-17 11:04:06 EDT 1365647:394] LOG: restartpoint complete: wrote 92 buffers (71.9%); 0 WAL file(s) added, 1 removed, 3 recycled; write=9.021 s, sync=0.001 s, total=9.022 s; sync files=0, longest=0.000 s, average=0.000 s; distance=63581 kB, estimate=67348 kB; lsn=1/B4A99C78, redo lsn=1/B1053BC8
2024-07-17 11:24:22.384 EDT [2024-07-17 11:04:06 EDT 1365647:395] LOG: recovery restart point at 1/B1053BC8
2024-07-17 11:24:22.384 EDT [2024-07-17 11:04:06 EDT 1365647:396] DETAIL: Last completed transaction was at log time 2024-07-17 11:11:58.69292-04.
2024-07-17 11:24:44.260 EDT [2024-07-17 11:04:06 EDT 1365651:2] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-09%2021%3A37%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-15%2005%3A18%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-19%2004%3A30%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-19%2004%3A29%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-19%2017%3A44%3A10 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-23%2000%3A36%3A59 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-23%2008%3A07%3A08 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-24%2004%3A29%3A23 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-24%2011%3A42%3A19 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-24%2023%3A39%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-24%2012%3A04%3A29 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2014%3A07%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2020%3A18%3A05 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-26%2011%3A15%3A58 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-26%2016%3A12%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-29%2016%3A23%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-31%2014%3A16%3A48 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-31%2013%3A57%3A32 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-31%2013%3A35%3A04 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-02%2019%3A57%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-07%2019%3A00%3A33 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-09%2002%3A05%3A44 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-09%2021%3A12%3A02 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-10%2008%3A22%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-10%2018%3A47%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-10%2022%3A23%3A46 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-11%2019%3A47%3A02 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-19%2022%3A57%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-04%2021%3A42%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-11%2007%3A47%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-13%2022%3A45%3A47 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-16%2018%3A25%3A07 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-18%2003%3A11%3A48 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-25%2000%3A58%3A56 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-27%2021%3A56%3A30 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-30%2017%3A02%3A17 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-02%2018%3A33%3A52 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-04%2014%3A42%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-08%2011%3A41%3A20 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2003%3A52%3A51 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-21%2018%3A02%3A43 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-25%2012%3A32%3A35 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-25%2006%3A33%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-12-07%2020%3A47%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-12-18%2000%3A06%3A26 - master
Recent 027_streaming_regress.pl hangs \ crake is failing due to other reasons
027_stream_regress.pl fails because some index-only scan (IOS) plans for queries in create_index.sql changed
# Failed test 'regression tests pass'
# at t/027_stream_regress.pl line 92.
# got: '256'
# expected: '0'
# Looks like you failed 1 test of 6.
[07:07:42] t/027_stream_regress.pl ..............
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/6 subtests
--- regress_log_027_stream_regress:
...
not ok 66 + create_index 27509 ms
...
----
diff -U3 /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/regress/expected/create_index.out /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/recovery/tmp_check/results/create_index.out
--- /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/regress/expected/create_index.out 2023-07-08 15:26:29.000000000 +0000
+++ /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/recovery/tmp_check/results/create_index.out 2024-03-17 06:59:01.000000000 +0000
@@ -1916,11 +1916,15 @@
SELECT unique1 FROM tenk1
WHERE unique1 IN (1,42,7)
ORDER BY unique1;
- QUERY PLAN
- -------------------------------------------------------
- Index Only Scan using tenk1_unique1 on tenk1
- Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
- (2 rows)
+ QUERY PLAN
+ -------------------------------------------------------------------
+ Sort
+ Sort Key: unique1
+ -> Bitmap Heap Scan on tenk1
+ Recheck Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
+ -> Bitmap Index Scan on tenk1_unique1
+ Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
+ (6 rows)
SELECT unique1 FROM tenk1
WHERE unique1 IN (1,42,7)
@@ -1936,12 +1940,13 @@
SELECT thousand, tenthous FROM tenk1
WHERE thousand < 2 AND tenthous IN (1001,3000)
ORDER BY thousand;
- QUERY PLAN
- -------------------------------------------------------
- Index Only Scan using tenk1_thous_tenthous on tenk1
- Index Cond: (thousand < 2)
- Filter: (tenthous = ANY ('{1001,3000}'::integer[]))
- (3 rows)
+ QUERY PLAN
+ --------------------------------------------------------------------------------------
+ Sort
+ Sort Key: thousand
+ -> Index Only Scan using tenk1_thous_tenthous on tenk1
+ Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))
+ (4 rows)
SELECT thousand, tenthous FROM tenk1
WHERE thousand < 2 AND tenthous IN (1001,3000)
002_pg_upgrade.pl also fails because some IOS plans for queries in create_index.sql changed
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-01-02%2007%3A09%3A09 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2023-11-15%2006%3A16%3A15 - HEAD
To what extent should tests rely on VACUUM ANALYZE? \ create_index failures
regress-running/regress fails on skink due to timeout
(skink is a Valgrind animal)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-04%2002%3A44%3A19 - HEAD
1/1 postgresql:regress-running / regress-running/regress TIMEOUT 3000.06s killed by signal 15 SIGTERM
--- inst/logfile ends with:
2024-06-04 03:39:24.664 UTC [3905755][client backend][5/1787:16793] ERROR: column "c2" of relation "test_add_column" already exists
2024-06-04 03:39:24.664 UTC [3905755][client backend][5/1787:16793] STATEMENT: ALTER TABLE test_add_column
	ADD COLUMN c2 integer, -- fail because c2 already exists
	ADD COLUMN c3 integer primary key;
2024-06-04 03:39:30.815 UTC [3905755][client backend][5/0:0] LOG: could not send data to client: Broken pipe
2024-06-04 03:39:30.816 UTC [3905755][client backend][5/0:0] FATAL: connection to client lost
Other occurrences after 2024-04-01:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-04-06%2021%3A59%3A00 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-04-25%2005%3A34%3A50 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-02%2021%3A38%3A23 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-06%2014%3A01%3A29 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-22%2022%3A25%3A15 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-24%2002%3A22%3A26 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-04%2022%3A04%3A09 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-05%2023%3A50%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-11%2021%3A59%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-24%2014%3A00%3A33 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-27%2000%3A46%3A49 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-27%2006%3A20%3A44 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-27%2019%3A09%3A56 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-28%2002%3A27%3A55 - HEAD
Recent 027_streaming_regress.pl hangs \ meson TIMEOUT on skink
035_standby_logical_decoding.pl fails due to missing activeslot invalidation
(drongo takes more than 500 seconds to run the 035 test)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-06%2012%3A36%3A11 - HEAD
40/287 postgresql:recovery / recovery/035_standby_logical_decoding ERROR 2193.29s (exit status 255 or signal 127 SIGinvalid)
--- regress_log_035_standby_logical_decoding
[13:55:13.725](34.411s) ok 25 - inactiveslot slot invalidation is logged with vacuum on pg_class
[13:55:13.727](0.002s) not ok 26 - activeslot slot invalidation is logged with vacuum on pg_class
[13:55:13.728](0.001s) # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'
# at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 229.
[14:27:42.995](1949.267s) # poll_query_until timed out executing this query:
# select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
[14:27:42.999](0.004s) not ok 27 - confl_active_logicalslot updated
[14:27:43.000](0.001s) # Failed test 'confl_active_logicalslot updated'
# at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 235.
Timed out waiting confl_active_logicalslot to be updated at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 235.
--- 035_standby_logical_decoding_standby.log:
2024-06-06 13:55:07.715 UTC [9172:7] LOG: invalidating obsolete replication slot "row_removal_inactiveslot"
2024-06-06 13:55:07.715 UTC [9172:8] DETAIL: The slot conflicted with xid horizon 754.
2024-06-06 13:55:07.715 UTC [9172:9] CONTEXT: WAL redo at 0/4020A80 for Heap2/PRUNE_ON_ACCESS: snapshotConflictHorizon: 754, isCatalogRel: T, nplans: 0, nredirected: 0, ndead: 1, nunused: 0, dead: [48]; blkref #0: rel 1663/16384/2610, blk 0
2024-06-06 13:55:14.372 UTC [7532:1] [unknown] LOG: connection received: host=127.0.0.1 port=55328
2024-06-06 13:55:14.381 UTC [7532:2] [unknown] LOG: connection authenticated: identity="EC2AMAZ-P7KGG90\\pgrunner" method=sspi (C:/prog/bf/root/HEAD/pgsql.build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_hba.conf:2)
2024-06-06 13:55:14.381 UTC [7532:3] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=035_standby_logical_decoding.pl
2024-06-06 13:55:14.443 UTC [7532:4] 035_standby_logical_decoding.pl LOG: statement: select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'
2024-06-06 13:55:14.452 UTC [7532:5] 035_standby_logical_decoding.pl LOG: disconnection: session time: 0:00:00.090 user=pgrunner database=postgres host=127.0.0.1 port=55328
# (there is no `invalidating obsolete replication slot "row_removal_activeslot"` message)
...
2024-06-06 14:27:42.675 UTC [4032:4] 035_standby_logical_decoding.pl LOG: statement: select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'
2024-06-06 14:27:42.681 UTC [4032:5] 035_standby_logical_decoding.pl LOG: disconnection: session time: 0:00:00.080 user=pgrunner database=postgres host=127.0.0.1 port=58713
2024-06-06 14:27:43.095 UTC [7892:2] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-03%2018%3A41%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-05%2017%3A54%3A44 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-17%2010%3A27%3A22 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-23%2010%3A18%3A47 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-26%2003%3A21%3A30 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-31%2005%3A11%3A07 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-01%2008%3A05%3A08 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-02%2010%3A34%3A45 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-15%2023%3A34%3A18 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-21%2004%3A19%3A21 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-21%2009%3A11%3A08 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-04%2021%3A41%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-06%2004%3A19%3A35 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-13%2019%3A02%3A48 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-16%2017%3A00%3A15 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-18%2014%3A32%3A39 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-04%2015%3A48%3A11 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-17%2004%3A26%3A00 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-18%2021%3A30%3A51 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-29%2010%3A52%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-31%2008%3A07%3A11 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-05%2011%3A11%3A28 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-22%2008%3A58%3A52 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-29%2022%3A23%3A47 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-04%2020%3A33%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-20%2017%3A00%3A28 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-24%2021%3A07%3A27 - master
xversion-upgrade-XXX fails due to pg_ctl timeout
REL9_5_STABLE-ctl4.log
waiting for server to shut down........................................................................................................................... failed
pg_ctl: server does not shut down
Test runs also fail at the stopdb-C-x stage
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-06-08%2001%3A41%3A41 - HEAD
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-03-06%2023%3A42%3A23 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-04-02%2019%3A05%3A04 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-04-27%2015%3A08%3A10 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-06-13%2017%3A58%3A28 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=habu&dt=2024-08-05%2003%3A11%3A29 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-08-13%2002%3A04%3A07 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-08-12%2015%3A42%3A48 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-08-20%2003%3A02%3A27 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-08-20%2002%3A04%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-08-23%2015%3A09%3A02 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-10-30%2008%3A50%3A01 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-10-30%2006%3A39%3A15 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-10-30%2005%3A06%3A23 - REL_15_STABLE
The xversion-upgrade test fails to stop server
A partial fix: The xversion-upgrade test fails to stop server \ PGCTLTIMEOUT increased on crake
subscriber tests fail due to an assertion failure in SnapBuildInitialSnapshot()
Bailout called. Further testing stopped: pg_ctl stop failed
t/031_column_list.pl ............... ok
--- 031_column_list_publisher.log
2024-05-16 00:23:24.522 UTC [1882382][walsender][5/22:0] LOG: received replication command: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
2024-05-16 00:23:24.522 UTC [1882382][walsender][5/22:0] STATEMENT: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] LOG: logical decoding found consistent point at 0/164A088
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] DETAIL: There are no running transactions.
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] STATEMENT: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
TRAP: FailedAssertion("TransactionIdPrecedesOrEquals(safeXid, snap->xmin)", File: "/home/bf/bf-build/skink/REL_15_STABLE/pgsql.build/../pgsql/src/backend/replication/logical/snapbuild.c", Line: 614, PID: 756819)
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] ERROR: cannot use different column lists for table "public.test_mix_1" in different publications
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] CONTEXT: slot "sub1", output plugin "pgoutput", in the change callback, associated LSN 0/163B860
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] STATEMENT: START_REPLICATION SLOT "sub1" LOGICAL 0/0 (proto_version '3', publication_names '"pub_mix_1","pub_mix_2"')
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(ExceptionalCondition+0x92)[0x6bc2db]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(SnapBuildInitialSnapshot+0x1fd)[0x521e82]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x430bb1)[0x538bb1]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(exec_replication_command+0x3c9)[0x53ac9a]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(PostgresMain+0x748)[0x58f8f1]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3efabb)[0x4f7abb]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3f1bba)[0x4f9bba]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3f1dc8)[0x4f9dc8]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(PostmasterMain+0x1133)[0x4fb36b]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(main+0x210)[0x448be9]
/lib/x86_64-linux-gnu/libc.so.6(+0x27b8a)[0x4cd0b8a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x4cd0c45]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(_start+0x21)[0x1d2b71]
2024-05-09 07:11:55.588 UTC [747458][postmaster][:0] LOG: server process (PID 756819) was terminated by signal 6: Aborted
2024-05-09 07:11:55.588 UTC [747458][postmaster][:0] DETAIL: Failed process was running: CREATE_REPLICATION_SLOT "pg_16586_sync_16580_7366892877332646335" LOGICAL pgoutput (SNAPSHOT 'use')
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2024-02-09%2012%3A46%3A37 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-09%2003%3A48%3A10 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-09-14%2013%3A22%3A59 - REL_15_STABLE
Assertion failure in SnapBuildInitialSnapshot()
Upgrade tests fail on Windows because pg_upgrade_output.d/ is not removed
2/242 postgresql:pg_upgrade / pg_upgrade/004_subscription ERROR 98.04s exit status 1
--- regress_log_004_subscription
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/PGSQL~1.BUI/TMP_IN~1/tools/nmsys64/home/pgrunner/bf/root/HEAD/inst/bin/vacuumdb --all --analyze-in-stages
Running this script will delete the old cluster's data files:
delete_old_cluster.bat
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log": Directory not empty
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667": Directory not empty
pg_upgrade: warning: could not stat file "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log/pg_upgrade_internal.log": No such file or directory
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log": Directory not empty
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667": Directory not empty
[06:09:33.510](34.360s) ok 8 - run of pg_upgrade for old instance when the subscription tables are in init/ready state
[06:09:33.510](0.000s) not ok 9 - pg_upgrade_output.d/ removed after successful pg_upgrade
[06:09:33.511](0.001s) # Failed test 'pg_upgrade_output.d/ removed after successful pg_upgrade'
# at C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/004_subscription.pl line 265.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-06-13%2011%3A03%3A07 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-30%2008%3A41%3A20 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-11%2006%3A16%3A10 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-17%2002%3A19%3A56 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-04%2013%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-04%2020%3A03%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-20%2012%3A29%3A06 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-27%2008%3A00%3A08 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-12-19%2010%3A06%3A05 - REL_17_STABLE
The buildfarm's check-pg_upgrade step also fails when removing data.old
c:\\build-farm-local\\buildroot\\REL_12_STABLE\\pgsql.build\\src\\bin\\pg_upgrade>RMDIR /s/q "c:\\build-farm-local\\buildroot\\REL_12_STABLE\\pgsql.build\\src\\bin\\pg_upgrade\\tmp_check\\data.old"
\203f\203B\203\214\203N\203g\203\212\202\252\213\363\202\305\202\315\202\240\202\350\202\334\202\271\202\361\201B
---
The last line is the Japanese message ディレクトリが空ではありません。 ("Directory not empty") encoded in SJIS.
pg_upgrade test failure \ the output directory remains after successful upgrade
008_fsm_truncation fails on dodo in v14- due to slow fsync
### Starting node "standby"
# Running: pg_ctl -D /media/pi/250gb/proj/bf2/v17/buildroot/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_008_fsm_truncation_standby_data/pgdata -l /media/pi/250gb/proj/bf2/v17/buildroot/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/log/008_fsm_truncation_standby.log -o --cluster-name=standby start
waiting for server to start........................................................................................................................... stopped waiting
pg_ctl: server did not start in time
# pg_ctl start failed; logfile:
--- 008_fsm_truncation_standby.log:
2024-06-19 21:27:30.843 ACST [13244:1] LOG: starting PostgreSQL 14.12 on armv7l-unknown-linux-gnueabihf, compiled by gcc (GCC) 15.0.0 20240617 (experimental), 32-bit
2024-06-19 21:27:31.768 ACST [13244:2] LOG: listening on Unix socket "/media/pi/250gb/proj/bf2/v17/buildroot/tmp/vLgcHgvc7O/.s.PGSQL.50013"
2024-06-19 21:27:35.055 ACST [13246:1] LOG: database system was interrupted; last known up at 2024-06-19 21:26:55 ACST
2024-06-19 21:29:38.320 ACST [13244:3] LOG: received immediate shutdown request
2024-06-19 21:29:42.130 ACST [13244:4] LOG: database system is shut down
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2018%3A30%3A10 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-01%2017%3A20%3A27 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-19%2010%3A15%3A32 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2009%3A15%3A18 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-21%2018%3A30%3A34 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2017%3A10%3A12 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-01%2012%3A10%3A12 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2009%3A15%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-21%2018%3A30%3A59 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2017%3A00%3A28 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-01%2013%3A01%3A00 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-02%2005%3A00%3A36 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-02%2018%3A00%3A15 - REL_14_STABLE
recoveryCheck/008_fsm_truncation is failing on dodo in v14- (due to slow fsync?)
002_limits.pl also exits abnormally with status 29 just after test 2
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2007%3A18%3A46 - HEAD
# +++ tap install-check in src/test/modules/xid_wraparound +++
t/001_emergency_vacuum.pl .. ok
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 29 just after 2.
t/002_limits.pl ............ Dubious, test returned 29 (wstat 7424, 0x1d00)
All 2 subtests passed
t/003_wraparounds.pl ....... ok

Test Summary Report
-------------------
t/002_limits.pl (Wstat: 7424 Tests: 2 Failed: 0)
  Non-zero exit status: 29
  Parse errors: No plan found in TAP output
Files=3, Tests=10, 4235 wallclock secs ( 0.10 usr 0.13 sys + 18.05 cusr 12.76 csys = 31.04 CPU)
Result: FAIL
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-02%2016%3A33%3A17 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-17%2014%3A57%3A13 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-17%2014%3A58%3A03 - HEAD
Miscellaneous test failures in v14- on Windows due to "Permission denied" errors
============== shutting down postmaster ==============
pg_ctl: could not open PID file "C:/tools/nmsys64/home/pgrunner/bf/root/REL_14_STABLE/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid": Permission denied
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-07-10%2002%3A27%3A04 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-07-10%2002%3A09%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-08-08%2001%3A11%3A00 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-08-08%2001%3A31%3A42 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-06%2011%3A03%3A06 - REL_12_STABLE
stat() vs ERROR_DELETE_PENDING, round N + 1 \ pushing fix e2f0f8ed2 to v15+
002_pg_upgrade.pl/check fails on mereswine due to a backend killed during execution of infinite_recurse.sql
(mereswine is an armv7 machine)
# Failed test 'regression tests pass'
# at t/002_pg_upgrade.pl line 160.
# got: '256'
# expected: '0'
# Failed test 'dump before running pg_upgrade'
# at t/002_pg_upgrade.pl line 208.
--- regress_log_002_pg_upgrade
errors ... ok 12770 ms
infinite_recurse ... FAILED (test process exited with exit code 2) 31232 ms
test sanity_check ... FAILED (test process exited with exit code 2) 370 ms
--- 002_pg_upgrade_old_node.log
2024-06-26 02:49:06.742 PDT [29121:4] LOG: server process (PID 30908) was terminated by signal 9: Killed
2024-06-26 02:49:06.742 PDT [29121:5] DETAIL: Failed process was running: select infinite_recurse();
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2024-07-03%2002%3A10%3A35 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2024-08-23%2002%3A10%3A26 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2024-11-29%2003%3A10%3A27 - REL_16_STABLE
infinite_recurse hitting OOM condititon on mereswine
Miscellaneous tests fail on Windows because the connection is closed before the final error message is received
# Failed test 'certificate authorization fails with revoked client cert with server-side CRL directory: matches'
# at t/001_ssltests.pl line 742.
# 'psql: error: connection to server at "127.0.0.1", port 57497 failed: server closed the connection unexpectedly
# This probably means the server terminated abnormally
# before or while processing the request.
# server closed the connection unexpectedly
# This probably means the server terminated abnormally
# before or while processing the request.'
# doesn't match '(?^:SSL error: ssl[a-z0-9/]* alert certificate revoked)'
# Looks like you failed 1 test of 180.
[16:08:45] t/001_ssltests.pl ..
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/180 subtests
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-08-31%2007%3A54%3A58 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-28%2019%3A42%3A52 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-11%2001%3A24%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-29%2001%3A23%3A21 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-07%2006%3A09%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-14%2012%3A25%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-15%2020%3A38%3A19 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-17%2011%3A03%3A16 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-12-11%2005%3A48%3A37 - master
Why is src/test/modules/committs/t/002_standby.pl flaky? \ A new attempt to fix this mess
The 031_recovery_conflict.pl test might fail due to late flushing of pgstat entries
23/296 postgresql:recovery / recovery/031_recovery_conflict ERROR 11.55s exit status 1
--- regress_log_031_recovery_conflict
[07:58:53.979](0.255s) ok 11 - tablespace conflict: logfile contains terminated connection due to recovery conflict
[07:58:54.058](0.080s) not ok 12 - tablespace conflict: stats show conflict on standby
[07:58:54.059](0.000s) # Failed test 'tablespace conflict: stats show conflict on standby'
# at /home/bf/bf-build/rorqual/REL_17_STABLE/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 332.
[07:58:54.059](0.000s) # got: '0'
# expected: '1'
The 031_recovery_conflict.pl test might fail due to late pgstat entries flushing
005_opclass_damage.pl fails on Windows animals due to timeout
180/244 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage TIMEOUT 3001.43s exit status 1
--- regress_log_005_opclass_damage
[05:57:13.802](1835.196s) ok 1 - pg_amcheck all schemas, tables and indexes reports no corruption: exit code 0
[05:57:13.802](0.000s) ok 2 - pg_amcheck all schemas, tables and indexes reports no corruption: no stderr
[05:57:13.803](0.001s) ok 3 - pg_amcheck all schemas, tables and indexes reports no corruption: matches
# Running: pg_amcheck -p 10642 postgres
[06:05:32.533](498.730s) ok 4 - pg_amcheck all schemas, tables and indexes reports fickleidx corruption status (got 2 vs expected 2)
[06:05:32.533](0.000s) ok 5 - pg_amcheck all schemas, tables and indexes reports fickleidx corruption stdout /(?^:item order invariant violated for index "fickleidx")/
# Running: pg_amcheck --checkunique -p 10642 postgres
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-25%2016%3A02%3A33 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-26%2001%3A06%3A33 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-26%2006%3A13%3A01 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-26%2010%3A31%3A00 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-27%2021%3A36%3A58 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-27%2009%3A45%3A44 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-28%2004%3A45%3A07 - HEAD
fairywren timeout failures on the pg_amcheck/005_opclass_damage test
culicidae fails to restart the server due to an incorrect checksum in the control file
(culicidae tests EXEC_BACKEND)
001_auth_node.log
2024-07-24 04:19:28.403 UTC [1018014][postmaster][:0] LOG: starting PostgreSQL 16.3 on x86_64-linux, compiled by gcc-13.3.0, 64-bit
2024-07-24 04:19:28.427 UTC [1018014][postmaster][:0] LOG: listening on Unix socket "/tmp/U3Osq_FaO8/.s.PGSQL.12427"
2024-07-24 04:19:29.036 UTC [1018564][startup][:0] LOG: database system was shut down at 2024-07-24 04:19:27 UTC
2024-07-24 04:19:29.038 UTC [1018562][not initialized][:0] FATAL: incorrect checksum in control file
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-11-07%2006%3A21%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-12-15%2020%3A29%3A49 - REL_16_STABLE
culicidae also fails a regression test due to an incorrect checksum
▶ 1/1 + partition_prune 3736 ms FAIL
--- inst/logfile
2024-08-17 01:25:31.254 UTC [2841385][client backend][43/184:0] LOG: connection authorized: user=buildfarm database=regression application_name=pg_regress/partition_prune
...
2024-08-17 01:25:33.676 UTC [2842326][not initialized][:0] FATAL: incorrect checksum in control file
...
2024-08-17 01:25:33.683 UTC [2841385][client backend][43/553:0] ERROR: parallel worker failed to initialize
2024-08-17 01:25:33.683 UTC [2841385][client backend][43/553:0] HINT: More details may be available in the server log.
2024-08-17 01:25:33.683 UTC [2841385][client backend][43/553:0] CONTEXT: PL/pgSQL function explain_parallel_append(text) line 5 at FOR over EXECUTE statement
2024-08-17 01:25:33.683 UTC [2841385][client backend][43/553:0] STATEMENT: select explain_parallel_append('select avg(ab.a) from ab inner join lprt_a a on ab.a = a.a where a.a in(1, 0, 0)');
race condition when writing pg_control \ the issue in question apparently happened in the wild
stats.sql is failing sporadically in v14- on POWER/aarch64 animals
test stats ... FAILED 469155 ms ... --- /home/nm/farm/gcc64/REL_14_STABLE/pgsql.build/src/test/regress/expected/stats.out 2022-03-30 01:18:17.000000000 +0000 +++ /home/nm/farm/gcc64/REL_14_STABLE/pgsql.build/src/test/regress/results/stats.out 2024-07-30 09:49:39.000000000 +0000 @@ -165,11 +165,11 @@ WHERE relname like 'trunc_stats_test%' order by relname; relname | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup -------------------+-----------+-----------+-----------+------------+------------ - trunc_stats_test | 3 | 0 | 0 | 0 | 0 - trunc_stats_test1 | 4 | 2 | 1 | 1 | 0 - trunc_stats_test2 | 1 | 0 | 0 | 1 | 0 - trunc_stats_test3 | 4 | 0 | 0 | 2 | 2 - trunc_stats_test4 | 2 | 0 | 0 | 0 | 2 + trunc_stats_test | 0 | 0 | 0 | 0 | 0 + trunc_stats_test1 | 0 | 0 | 0 | 0 | 0 + trunc_stats_test2 | 0 | 0 | 0 | 0 | 0 + trunc_stats_test3 | 0 | 0 | 0 | 0 | 0 + trunc_stats_test4 | 0 | 0 | 0 | 0 | 0 ... --- inst/logfile 2024-07-30 09:25:11.225 UTC [63307946:1] LOG: using stale statistics instead of current ones because stats collector is not responding 2024-07-30 09:25:11.345 UTC [11206724:559] pg_regress/create_index LOG: using stale statistics instead of current ones because stats collector is not responding ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-03-29%2005%3A27%3A09 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-03-19%2002%3A09%3A07 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-08-02%2002%3A04%3A10 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chimaera&dt=2023-09-28%2011%3A08%3A08 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chimaera&dt=2024-08-13%2011%3A29%3A27 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-09-19%2003%3A34%3A21 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2024-09-27%2008%3A51%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=blackneck&dt=2024-10-30%2009%3A08%3A05 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-12-20%2005%3A33%3A31 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fritillary&dt=2024-12-22%2003%3A21%3A59 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-12-24%2007%3A17%3A19 - REL_13_STABLE
The stats.sql test is failing sporadically in v14- on POWER7/AIX 7.1 buildfarm animals
pg_ctl stop/start fails on Windows due to inconsistent check for postmaster.pid existence
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-19%2017%3A32%3A54 - HEAD
... pg_createsubscriber: stopping the subscriber 2024-08-19 18:02:47.608 UTC [6988:4] LOG: received fast shutdown request 2024-08-19 18:02:47.608 UTC [6988:5] LOG: aborting any active transactions 2024-08-19 18:02:47.612 UTC [5884:2] FATAL: terminating walreceiver process due to administrator command 2024-08-19 18:02:47.705 UTC [7036:1] LOG: shutting down pg_createsubscriber: server was stopped ... [18:02:47.900](2.828s) ok 29 - run pg_createsubscriber without --databases ... pg_createsubscriber: starting the standby with command-line options pg_createsubscriber: pg_ctl command is: ... 2024-08-19 18:02:48.163 UTC [5284:1] FATAL: could not create lock file "postmaster.pid": File exists pg_createsubscriber: server was started pg_createsubscriber: checking settings on subscriber 2024-08-19 18:02:48.484 UTC [6988:6] LOG: database system is shut down
DELETE PENDING strikes back, via pg_ctl stop/start
pg_ctl stop fails on Cygwin due to DELETE PENDING state of postmaster.pid
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-08-22%2009%3A52%3A46 - HEAD
waiting for server to shut down........pg_ctl: could not open PID file "data-C/postmaster.pid": Permission denied
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-11-11%2011%3A26%3A06 - master
DELETE PENDING strikes back, via pg_ctl stop/start \ a lorikeet failure
dblink.sql (and postgres_fdw.sql) fail on Windows due to the cancel packet not being sent
40/67 postgresql:dblink-running / dblink-running/regress ERROR 32.97s exit status 1 --- pgsql.build/testrun/dblink-running/regress/regression.diffs SELECT dblink_cancel_query('dtest1'); - dblink_cancel_query ---------------------- - OK + dblink_cancel_query +-------------------------- + cancel request timed out (1 row)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-11%2022%3A42%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-27%2018%3A34%3A52 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-02%2007%3A59%3A27 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-12-19%2011%3A00%3A15 - master
Add non-blocking version of PQcancel \ the dblink test failed on drongo
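For context, a minimal sketch of the kind of asynchronous-query-plus-cancel sequence the dblink test exercises (the connection string below is illustrative, not taken from the test); on the failing runs the cancel request timed out instead of returning OK:
CREATE EXTENSION IF NOT EXISTS dblink;
-- sketch only: open a named connection, start a long query asynchronously,
-- then cancel it; dblink_cancel_query is expected to return 'OK'
SELECT dblink_connect('dtest1', 'dbname=contrib_regression');
SELECT dblink_send_query('dtest1', 'SELECT pg_sleep(10)');
SELECT dblink_cancel_query('dtest1');
SELECT dblink_disconnect('dtest1');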
timeouts.spec failed because a statement was cancelled for an unexpected reason
257/260 postgresql:isolation / isolation/isolation ERROR 79.90s exit status 1 --- pgsql.build/testrun/isolation/isolation/regression.diffs --- /home/bf/bf-build/mylodon/REL_16_STABLE/pgsql/src/test/isolation/expected/timeouts.out 2023-06-30 00:57:49.207140401 +0000 +++ /home/bf/bf-build/mylodon/REL_16_STABLE/pgsql.build/testrun/isolation/isolation/results/timeouts.out 2024-08-30 23:06:07.610042527 +0000 @@ -78,4 +78,4 @@ step slto: SET lock_timeout = '10s'; SET statement_timeout = '10ms'; step update: DELETE FROM accounts WHERE accountid = 'checking'; <waiting ...> step update: <... completed> -ERROR: canceling statement due to statement timeout +ERROR: canceling statement due to user request
Add non-blocking version of PQcancel \ mylodon failed due to reason discussed upthread
002_archiving.pl fails on Windows due to the promote request not being received in time
(drongo is a Windows animal)
6/289 postgresql:recovery / recovery/002_archiving ERROR 626.63s (exit status 255 or signal 127 SIGinvalid) --- regress_log_002_archiving [17:11:11.519](0.001s) ok 3 - recovery_end_command not executed yet ### Promoting node "standby" # Running: pg_ctl -D C:\\prog\\bf\\root\\REL_17_STABLE\\pgsql.build/testrun/recovery/002_archiving\\data/t_002_archiving_standby_data/pgdata -l C:\\prog\\bf\\root\\REL_17_STABLE\\pgsql.build/testrun/recovery/002_archiving\\log/002_archiving_standby.log promote waiting for server to promote....................................................................................................................................................................................... stopped waiting pg_ctl: server did not promote in time [17:20:06.095](534.576s) Bail out! command "pg_ctl -D C:\\prog\\bf\\root\\REL_17_STABLE\\pgsql.build/testrun/recovery/002_archiving\\data/t_002_archiving_standby_data/pgdata -l C:\\prog\\bf\\root\\REL_17_STABLE\\pgsql.build/testrun/recovery/002_archiving\\log/002_archiving_standby.log promote" exited with value 1 --- 002_archiving_standby.log 2024-09-29 17:11:10.319 UTC [6408:3] LOG: recovery restart point at 0/3028BF8 2024-09-29 17:11:10.319 UTC [6408:4] DETAIL: Last completed transaction was at log time 2024-09-29 17:10:57.783965+00. The system cannot find the file specified. 2024-09-29 17:11:10.719 UTC [7440:5] 002_archiving.pl LOG: disconnection: session time: 0:00:00.488 user=pgrunner database=postgres host=127.0.0.1 port=62549 The system cannot find the file specified. The system cannot find the file specified. ... The system cannot find the file specified. 2024-09-29 17:20:08.237 UTC [6684:4] LOG: received immediate shutdown request The system cannot find the file specified. ...
(there is no "received promote request" message)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-28%2001%3A06%3A00 - REL_16_STABLE
promote request not received timely on slow Windows machines
019_replslot_limit.pl fails due to a walsender stuck sending FATAL to a frozen walreceiver
297/297 postgresql:recovery / recovery/019_replslot_limit ERROR 306.28s exit status 29 --- regress_log_019_replslot_limit [12:56:34.033](0.228s) ok 19 - walsender termination logged [13:00:57.133](263.100s) # poll_query_until timed out executing this query: # SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3' # expecting this output: # lost # last actual query output: # unreserved # with stderr: timed out waiting for slot to be lost at /home/bf/bf-build/francolin/REL_17_STABLE/pgsql/src/test/recovery/t/019_replslot_limit.pl line 388. --- 019_replslot_limit_primary3.log 2024-10-03 12:56:34.041 UTC [673987] standby_3 FATAL: terminating connection due to administrator command 2024-10-03 12:56:34.041 UTC [673987] standby_3 STATEMENT: START_REPLICATION SLOT "rep3" 0/800000 TIMELINE 1 2024-10-03 12:56:34.066 UTC [674545] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3' 2024-10-03 12:56:34.238 UTC [674628] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3' ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2023-04-05%2017%3A47%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-02-04%2001%3A53%3A44 - master
027_stream_regress.pl failed on drongo due to walreceiver not reconnecting after primary restart
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-14%2010%3A08%3A17 - master
(drongo is a Windows animal)
166/294 postgresql:recovery / recovery/027_stream_regress ERROR 871.81s exit status 25 --- regress_log_027_stream_regress Waiting for replication conn standby_1's replay_lsn to pass 0/158C8B98 on primary [10:41:32.115](661.161s) # poll_query_until timed out executing this query: # SELECT '0/158C8B98' <= replay_lsn AND state = 'streaming' # FROM pg_catalog.pg_stat_replication # WHERE application_name IN ('standby_1', 'walreceiver') # expecting this output: # t # last actual query output: # --- 027_stream_regress_standby_1.log 2024-10-14 10:30:28.483 UTC [4320:12] 027_stream_regress.pl LOG: disconnection: session time: 0:00:03.793 user=pgrunner database=postgres host=127.0.0.1 port=61748 2024-10-14 10:30:31.442 UTC [8468:2] LOG: replication terminated by primary server 2024-10-14 10:30:31.442 UTC [8468:3] DETAIL: End of WAL reached on timeline 1 at 0/158C8B98. 2024-10-14 10:30:31.442 UTC [8468:4] FATAL: could not send end-of-streaming message to primary: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. no COPY in progress 2024-10-14 10:30:31.443 UTC [5452:7] LOG: invalid resource manager ID 101 at 0/158C8B98 2024-10-14 10:35:06.986 UTC [8648:21] LOG: restartpoint starting: time 2024-10-14 10:35:06.991 UTC [8648:22] LOG: restartpoint complete: wrote 0 buffers (0.0%), wrote 1 SLRU buffers; 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s, sync=0.001 s, total=0.005 s; sync files=0, longest=0.000 s, average=0.000 s; distance=15336 kB, estimate=69375 kB; lsn=0/158C8B20, redo lsn=0/158C8B20 2024-10-14 10:35:06.991 UTC [8648:23] LOG: recovery restart point at 0/158C8B20 2024-10-14 10:35:06.991 UTC [8648:24] DETAIL: Last completed transaction was at log time 2024-10-14 10:30:24.820804+00. 2024-10-14 10:41:32.510 UTC [4220:4] LOG: received immediate shutdown request
Also 001_rep_changes.pl failed on fairywren due to walreceiver not reconnecting after primary restart
+++ tap check in src/test/subscription +++ # poll_query_until timed out executing this query: # SELECT '0/1534000' <= replay_lsn AND state = 'streaming' # FROM pg_catalog.pg_stat_replication # WHERE application_name IN ('tap_sub', 'walreceiver') # expecting this output: # t # last actual query output: # # with stderr: # Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 25 just after 23. [16:07:39] t/001_rep_changes.pl ............... Dubious, test returned 25 (wstat 6400, 0x1900) --- pgsql.build/src/test/subscription/tmp_check/log/001_rep_changes_publisher.log 2024-11-15 16:00:58.066 UTC [8716:3] 001_rep_changes.pl LOG: statement: DELETE FROM tab_rep 2024-11-15 16:00:58.068 UTC [8716:4] 001_rep_changes.pl LOG: disconnection: session time: 0:00:00.010 user=pgrunner database=postgres host=[local] 2024-11-15 16:00:58.109 UTC [3628:4] LOG: received fast shutdown request 2024-11-15 16:00:58.109 UTC [3628:5] LOG: aborting any active transactions 2024-11-15 16:00:58.121 UTC [3628:6] LOG: background worker "logical replication launcher" (PID 8756) exited with exit code 1 2024-11-15 16:00:58.121 UTC [6480:1] LOG: shutting down 2024-11-15 16:00:58.392 UTC [7740:14] tap_sub LOG: disconnection: session time: 0:00:00.682 user=pgrunner database=postgres host=[local] 2024-11-15 16:00:58.421 UTC [6480:2] LOG: checkpoint starting: shutdown immediate 2024-11-15 16:00:58.477 UTC [6480:3] LOG: checkpoint complete: wrote 9 buffers (7.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.057 s; sync files=0, longest=0.000 s, average=0.000 s; distance=617 kB, estimate=617 kB 2024-11-15 16:00:58.486 UTC [3628:7] LOG: database system is shut down 2024-11-15 16:00:58.741 UTC [8864:1] LOG: starting PostgreSQL 15.9 on x86_64-w64-mingw32, compiled by gcc.exe (Rev3, Built by MSYS2 project) 14.1.0, 64-bit --- pgsql.build/src/test/subscription/tmp_check/log/001_rep_changes_subscriber.log 2024-11-15 16:00:57.692 UTC [5512:1] LOG: logical replication apply worker for subscription "tap_sub" has started 2024-11-15 16:00:58.394 UTC [5512:2] LOG: data stream from publisher has ended 2024-11-15 16:00:58.394 UTC [5512:3] ERROR: could not send end-of-streaming message to primary: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. no COPY in progress 2024-11-15 16:00:58.405 UTC [4848:9] LOG: background worker "logical replication worker" (PID 5512) exited with exit code 1 2024-11-15 16:00:58.483 UTC [2204:1] LOG: logical replication apply worker for subscription "tap_sub" has started 2024-11-15 16:05:33.567 UTC [5260:1] LOG: checkpoint starting: time
Also 021_twophase.pl failed on fairywren due to walreceiver not reconnecting after primary restart
[14:23:23.860](1.196s) ok 9 - Rows inserted via 2PC are visible on the subscriber ### Stopping node "publisher" using mode immediate # Running: pg_ctl -D C:\\tools\\xmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/subscription/021_twophase/data/t_021_twophase_publisher_data/pgdata -m immediate stop waiting for server to shut down.... done server stopped # No postmaster PID for node "publisher" ### Starting node "publisher" # Running: pg_ctl -w -D C:\\tools\\xmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/subscription/021_twophase/data/t_021_twophase_publisher_data/pgdata -l C:\\tools\\xmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/subscription/021_twophase/log/021_twophase_publisher.log -o --cluster-name=publisher start waiting for server to start.... done server started # Postmaster PID for node "publisher" is 8896 Waiting for replication conn tap_sub's replay_lsn to pass 0/178D688 on publisher [14:31:05.104](461.244s) # poll_query_until timed out executing this query: # SELECT '0/178D688' <= replay_lsn AND state = 'streaming' # FROM pg_catalog.pg_stat_replication # WHERE application_name IN ('tap_sub', 'walreceiver') # expecting this output: # t # last actual query output: # # with stderr: [14:31:05.241](0.137s) # Last pg_stat_replication contents: timed out waiting for catchup at C:/tools/xmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/test/subscription/t/021_twophase.pl line 242. --- pgsql.build/testrun/subscription/021_twophase/log/021_twophase_subscriber.log 2024-12-25 14:23:24.064 UTC [4168:2] ERROR: could not receive data from WAL stream: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. 2024-12-25 14:23:24.115 UTC [6164:1] LOG: logical replication apply worker for subscription "tap_sub" has started 2024-12-25 14:23:24.120 UTC [5256:4] LOG: background worker "logical replication apply worker" (PID 4168) exited with exit code 1 2024-12-25 14:28:23.097 UTC [276:4] LOG: checkpoint starting: time 2024-12-25 14:28:23.430 UTC [276:5] LOG: 1 two-phase state file was written for a long-running prepared transaction 2024-12-25 14:28:23.431 UTC [276:6] LOG: checkpoint complete: wrote 3 buffers (0.0%), wrote 1 SLRU buffers; 0 WAL file(s) added, 0 removed, 0 recycled; write=0.326 s, sync=0.001 s, total=0.334 s; sync files=0, longest=0.000 s, average=0.000 s; distance=8 kB, estimate=8 kB; lsn=0/178C418, redo lsn=0/178C3F8 2024-12-25 14:31:05.415 UTC [5256:5] LOG: received immediate shutdown request
WaitEventSetWaitBlock() can still hang on Windows due to connection reset
pageinspect/page.sql fails in v14 because the requested freeze does not happen
============== creating database "contrib_regression" ============== ... test page ... FAILED 401 ms ... --- pgsql.build/contrib/pageinspect/regression.diffs --- C:/prog/bf/root/REL_14_STABLE/pgsql.build/contrib/pageinspect/expected/page.out 2024-09-14 14:59:50.899122300 +0000 +++ C:/prog/bf/root/REL_14_STABLE/pgsql.build/contrib/pageinspect/results/page.out 2024-11-09 05:16:52.027703100 +0000 @@ -93,8 +93,8 @@ FROM heap_page_items(get_raw_page('test1', 0)), LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2); t_infomask | t_infomask2 | raw_flags | combined_flags -------------+-------------+-----------------------------------------------------------+-------------------- - 2816 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID} | {HEAP_XMIN_FROZEN} +------------+-------------+-----------------------------------------+---------------- + 2304 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID} | {} (1 row)
Revert "Prevent instability in contrib/pageinspect's regression test."
deadlock-soft.sql is failing on newest Fedora
not ok 24 - deadlock-soft 741086 ms --- pgsql.build/src/test/isolation/output_iso/regression.diffs --- /repos/client-code-REL_18/REL_16_STABLE/pgsql.build/src/test/isolation/expected/deadlock-soft.out 2024-11-11 13:02:04.188815923 -0300 +++ /repos/client-code-REL_18/REL_16_STABLE/pgsql.build/src/test/isolation/output_iso/results/deadlock-soft.out 2024-11-11 13:26:13.849527129 -0300 @@ -7,11 +7,15 @@ step e2l: LOCK TABLE a2 IN ACCESS EXCLUSIVE MODE; <waiting ...> step d1a2: LOCK TABLE a2 IN ACCESS SHARE MODE; <waiting ...> step d2a1: LOCK TABLE a1 IN ACCESS SHARE MODE; <waiting ...> +isolationtester: canceling step d1a2 after 360 seconds step d1a2: <... completed> +ERROR: canceling statement due to user request +step d2a1: <... completed> step d1c: COMMIT; +isolationtester: canceling step e1l after 360 seconds step e1l: <... completed> +ERROR: canceling statement due to user request step e1c: COMMIT; -step d2a1: <... completed> step d2c: COMMIT;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-11-16%2004%3A24%3A24 - master
deadlock-soft isolation test is failing on newest Fedora
ssl tests can still fail due to a TCP port conflict
296/305 postgresql:subscription / subscription/100_bugs OK 26.74s 14 subtests passed \342\226\266 297/305 pg_ctl restart failed ERROR 297/305 postgresql:ssl / ssl/002_scram ERROR 5.96s (exit status 255 or signal 127 SIGinvalid) ... --- regress_log_002_scram ### Restarting node "primary" # Running: pg_ctl -w -D /home/bf/bf-build/culicidae/HEAD/pgsql.build/testrun/ssl/002_scram/data/t_002_scram_primary_data/pgdata -l /home/bf/bf-build/culicidae/HEAD/pgsql.build/testrun/ssl/002_scram/log/002_scram_primary.log restart waiting for server to shut down..... done server stopped waiting for server to start.... stopped waiting pg_ctl: could not start server Examine the log output. # pg_ctl restart failed; see logfile for details: /home/bf/bf-build/culicidae/HEAD/pgsql.build/testrun/ssl/002_scram/log/002_scram_primary.log # No postmaster PID for node "primary" [20:57:30.208](5.688s) Bail out! pg_ctl restart failed --- 002_scram_primary.log 2024-11-21 20:57:28.783 UTC [4067616][postmaster][:0] LOG: received fast shutdown request 2024-11-21 20:57:28.803 UTC [4067616][postmaster][:0] LOG: aborting any active transactions 2024-11-21 20:57:28.818 UTC [4067616][postmaster][:0] LOG: background worker "logical replication launcher" (PID 4067783) exited with exit code 1 2024-11-21 20:57:28.825 UTC [4067730][checkpointer][:0] LOG: shutting down 2024-11-21 20:57:28.835 UTC [4067730][checkpointer][:0] LOG: checkpoint starting: shutdown immediate 2024-11-21 20:57:30.050 UTC [4067730][checkpointer][:0] LOG: checkpoint complete: wrote 5713 buffers (34.9%), wrote 3 SLRU buffers; 0 WAL file(s) added, 0 removed, 3 recycled; write=0.964 s, sync=0.103 s, total=1.220 s; sync files=1797, longest=0.030 s, average=0.001 s; distance=46011 kB, estimate=46011 kB; lsn=0/4474998, redo lsn=0/4474998 2024-11-21 20:57:30.094 UTC [4067616][postmaster][:0] LOG: database system is shut down 2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] LOG: starting PostgreSQL 18devel on x86_64-linux, compiled by gcc-14.2.0, 64-bit 2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] LOG: could not bind IPv4 address "127.0.0.1": Address already in use 2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] HINT: Is another postmaster already running on port 32301? If not, wait a few seconds and retry. 2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] WARNING: could not create listen socket for "127.0.0.1" 2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] FATAL: could not create any TCP/IP sockets
ssl tests fail due to TCP port conflict \ substantially reduce buildfarm failures
Parallel tests publication and subscription might fail due to concurrent tuple update
# parallel group (2 tests): subscription publication not ok 157 + publication 2251 ms ok 158 + subscription 415 ms --- /home/fedora/17-desman/buildroot/REL_16_STABLE/pgsql.build/src/test/regress/expected/publication.out 2024-12-09 18:34:02.939762233 +0000 +++ /home/fedora/17-desman/buildroot/REL_16_STABLE/pgsql.build/src/test/regress/results/publication.out 2024-12-09 18:44:48.582958859 +0000 @@ -1193,23 +1193,29 @@ ERROR: permission denied for database regression SET ROLE regress_publication_user; GRANT CREATE ON DATABASE regression TO regress_publication_user2; +ERROR: tuple concurrently updated SET ROLE regress_publication_user2; SET client_min_messages = 'ERROR'; CREATE PUBLICATION testpub2; -- ok +ERROR: permission denied for database regression --- pgsql.build/src/test/regress/log/postmaster.log 2024-12-09 18:44:46.753 UTC [1345157:903] pg_regress/publication STATEMENT: CREATE PUBLICATION testpub2; 2024-12-09 18:44:46.753 UTC [1345158:287] pg_regress/subscription LOG: statement: REVOKE CREATE ON DATABASE REGRESSION FROM regress_subscription_user3; 2024-12-09 18:44:46.754 UTC [1345157:904] pg_regress/publication LOG: statement: SET ROLE regress_publication_user; 2024-12-09 18:44:46.754 UTC [1345157:905] pg_regress/publication LOG: statement: GRANT CREATE ON DATABASE regression TO regress_publication_user2; 2024-12-09 18:44:46.754 UTC [1345157:906] pg_regress/publication ERROR: tuple concurrently updated 2024-12-09 18:44:46.754 UTC [1345157:907] pg_regress/publication STATEMENT: GRANT CREATE ON DATABASE regression TO regress_publication_user2;
Parallel tests publication and subscription might fail due to concurrent tuple update
019_replslot_limit.pl might fail due to checkpoint skipped
[12:27:41.437](0.024s) ok 18 - have walreceiver pid 637143 [12:30:42.564](181.127s) not ok 19 - walsender termination logged [12:30:42.564](0.000s) [12:30:42.564](0.000s) # Failed test 'walsender termination logged' # at t/019_replslot_limit.pl line 382. --- 019_replslot_limit_primary3.log: 2024-12-13 12:27:40.912 ACDT [637093:7] LOG: checkpoint starting: wal ... 2024-12-13 12:27:41.461 ACDT [637182:4] 019_replslot_limit.pl LOG: statement: SELECT pg_logical_emit_message(false, '', 'foo'); 2024-12-13 12:27:41.462 ACDT [637182:5] 019_replslot_limit.pl LOG: statement: SELECT pg_switch_wal(); 2024-12-13 12:27:41.463 ACDT [637182:6] 019_replslot_limit.pl LOG: disconnection: session time: 0:00:00.003 user=postgres database=postgres host=[local] 2024-12-13 12:27:41.668 ACDT [637093:8] LOG: checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 1 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.756 s; sync files=0, longest=0.000 s, average=0.000 s; distance=1024 kB, estimate=1024 kB; lsn=0/900060, redo lsn=0/700028 2024-12-13 12:27:41.668 ACDT [637093:9] LOG: checkpoints are occurring too frequently (1 second apart) 2024-12-13 12:27:41.668 ACDT [637093:10] HINT: Consider increasing the configuration parameter "max_wal_size". 2024-12-13 12:30:42.565 ACDT [637144:10] standby_3 LOG: terminating walsender process due to replication timeout 2024-12-13 12:30:42.565 ACDT [637144:11] standby_3 STATEMENT: START_REPLICATION SLOT "rep3" 0/700000 TIMELINE 1
019_replslot_limit.pl might fail due to checkpoint skipped
tablespace.sql is unstable due to lack of ORDER BY (in v15-)
# Failed test 'regression tests pass' # at t/027_stream_regress.pl line 81. # got: '256' # expected: '0' # Looks like you failed 1 test of 8. [17:36:30] t/027_stream_regress.pl .............. --- pgsql.build/src/test/recovery/tmp_check/log/regress_log_027_stream_regress test tablespace ... FAILED 47555 ms diff -U3 /home/nm/farm/xlc32/REL_15_STABLE/pgsql.build/src/test/regress/expected/tablespace.out /home/nm/farm/xlc32/REL_15_STABLE/pgsql.build/src/test/recovery/tmp_check/results/tablespace.out --- /home/nm/farm/xlc32/REL_15_STABLE/pgsql.build/src/test/regress/expected/tablespace.out 2024-11-26 05:26:30.000000000 +0000 +++ /home/nm/farm/xlc32/REL_15_STABLE/pgsql.build/src/test/recovery/tmp_check/results/tablespace.out 2024-12-25 17:13:47.000000000 +0000 @@ -334,9 +334,9 @@ where c.reltablespace = t.oid AND c.relname LIKE 'part%_idx'; relname | spcname -------------+------------------ + part_a_idx | regress_tblspace part1_a_idx | regress_tblspace part2_a_idx | regress_tblspace - part_a_idx | regress_tblspace (3 rows)
Unstable regression test "tablespace" / Add ORDER BY to stabilize tablespace test for partitioned index
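The query visible in the diff above has no ORDER BY, so its row order depends on the underlying scan. A minimal sketch of the stabilized form, with the FROM clause reconstructed from the diff and the ordering column assumed to be relname (per the fix referenced above):
-- sketch only: join reconstructed from the failing diff; ordering column assumed
SELECT c.relname, t.spcname
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_tablespace t ON c.reltablespace = t.oid
WHERE c.relname LIKE 'part%_idx'
ORDER BY c.relname;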
Fixed Test Failures
partition_split.sql contains queries producing unstable results
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jackdaw&dt=2024-05-24%2015%3A58%3A17 - HEAD
--- /home/debian/17-jackdaw/buildroot/HEAD/pgsql.build/src/test/regress/expected/partition_split.out 2024-05-24 15:58:23.929113215 +0000 +++ /home/debian/17-jackdaw/buildroot/HEAD/pgsql.build/src/test/regress/results/partition_split.out 2024-05-24 16:05:58.286542479 +0000 @@ -637,15 +637,15 @@ SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid = 'sales_feb2022'::regclass::oid; pg_get_constraintdef | conname | conkey ---------------------------------------------------------------------+---------------------------------+-------- - CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2} FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id) | sales_range_salesperson_id_fkey | {1} + CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2} (2 rows) SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid = 'sales_mar2022'::regclass::oid; pg_get_constraintdef | conname | conkey ---------------------------------------------------------------------+---------------------------------+-------- - CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2} FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id) | sales_range_salesperson_id_fkey | {1} + CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2} (2 rows) SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid = 'sales_apr2022'::regclass::oid;
Add SPLIT PARTITION/MERGE PARTITIONS commands \ the test's unstable results
Provide deterministic order for catalog queries in partition_split.sql
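A minimal sketch of what "Provide deterministic order for catalog queries in partition_split.sql" implies for the query shown in the diff above; the ordering column is an assumption:
-- sketch only: same pg_constraint lookup as in the failing diff, plus an explicit order
SELECT pg_get_constraintdef(oid), conname, conkey
FROM pg_constraint
WHERE conrelid = 'sales_feb2022'::regclass::oid
ORDER BY conname;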
026_overwrite_contrecord.pl and 033_replay_tsp_drops.pl trigger Assert("ItemIdIsNormal(lpp)")
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-06-04%2003%3A27%3A47 - HEAD
29/295 postgresql:recovery / recovery/026_overwrite_contrecord ERROR 39.55s exit status 32 --- 026_overwrite_contrecord_standby.log TRAP: failed Assert("ItemIdIsNormal(lpp)"), File: "../pgsql/src/backend/access/heap/heapam.c", Line: 1002, PID: 3740958 postgres: standby: bf postgres [local] startup(ExceptionalCondition+0x81)[0x56c60bf9] postgres: standby: bf postgres [local] startup(+0xf776e)[0x5667276e] postgres: standby: bf postgres [local] startup(heap_getnextslot+0x40)[0x56672ee1] postgres: standby: bf postgres [local] startup(+0x11c218)[0x56697218] postgres: standby: bf postgres [local] startup(systable_getnext+0xfa)[0x56697c1a] postgres: standby: bf postgres [local] startup(+0x6d29c7)[0x56c4d9c7] postgres: standby: bf postgres [local] startup(+0x6d372c)[0x56c4e72c] postgres: standby: bf postgres [local] startup(+0x6d8288)[0x56c53288] postgres: standby: bf postgres [local] startup(RelationCacheInitializePhase3+0x149)[0x56c52d71]
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-03%2003%3A32%3A18 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-04-04%2015%3A38%3A16 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-05-07%2004%3A00%3A08 - HEAD
Assert in heapgettup_pagemode() fails due to underlying buffer change
Hot standby queries see transient all-zeros pages
040_pg_createsubscriber.pl fails due to error: could not obtain replication slot information
stderr: # Failed test 'run pg_createsubscriber --dry-run on node S' # at /home/bf/bf-build/culicidae/HEAD/pgsql/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl line 264. # Looks like you failed 1 test of 31. --- regress_log_040_pg_createsubscriber [00:38:25.368](1.063s) ok 23 - standby contains unmet conditions on node S ... # Running: pg_createsubscriber --verbose --dry-run ... ... pg_createsubscriber: checking settings on publisher 2024-06-06 00:38:26.895 UTC [4001283][client backend][:0] LOG: disconnection: session time: 0:00:00.028 user=bf database=pg1 host=[local] pg_createsubscriber: error: could not obtain replication slot information: got 0 rows, expected 1 row ... pg_createsubscriber: server was stopped [00:38:27.352](1.984s) not ok 24 - run pg_createsubscriber --dry-run on node S [00:38:27.355](0.004s) # Failed test 'run pg_createsubscriber --dry-run on node S' # at /home/bf/bf-build/culicidae/HEAD/pgsql/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl line 264.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-06-12%2000%3A58%3A32 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-06-17%2008%3A00%3A02 - HEAD
speed up a logical replica setup \ analysis for two failures
pg_createsubscriber: Remove replication slot check on primary
plperl.sql failing in v15- on caiman with a newer Perl version
(caiman is running on Fedora Rawhide with Perl 5.40)
--- /repos/build-farm-17/REL_13_STABLE/pgsql.build/src/pl/plperl/expected/plperl.out 2024-06-23 21:35:07.618704257 -0300 +++ /repos/build-farm-17/REL_13_STABLE/pgsql.build/src/pl/plperl/results/plperl.out 2024-06-23 21:59:11.425256754 -0300 @@ -706,7 +706,8 @@ CONTEXT: PL/Perl anonymous code block -- check that eval is allowed and eval'd restricted ops are caught DO $$ eval q{chdir '.';}; warn "Caught: $@"; $$ LANGUAGE plperl; -WARNING: Caught: 'chdir' trapped by operation mask at line 1. +ERROR: 'eval hints' trapped by operation mask at line 1. +CONTEXT: PL/Perl anonymous code block
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2001%3A34%3A23 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-25%2001%3A21%3A03 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-23%2003%3A51%3A53 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A59%3A30 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-25%2000%3A48%3A40 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-23%2003%3A50%3A53 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A35%3A06 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-23%2003%3A49%3A53 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A15%3A16 - REL_12_STABLE
Buildfarm animal caiman showing a plperl test issue with newer Perl versions
Remove redundant perl version checks
inplace-inval.spec fails on prion and trilobite when checking relhasindex
(prion runs tests with -DCATCACHE_FORCE_RELEASE, trilobite with -DCLOBBER_CACHE_ALWAYS)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A38%3A03 - HEAD
not ok 40 - inplace-inval 141 ms regression.diffs --- --- /home/ec2-user/bf/root/HEAD/pgsql/src/test/isolation/expected/inplace-inval.out 2024-06-28 02:38:07.965133814 +0000 +++ /home/ec2-user/bf/root/HEAD/pgsql.build/src/test/isolation/output_iso/results/inplace-inval.out 2024-06-28 04:50:14.086521986 +0000 @@ -14,7 +14,7 @@ relhasindex ----------- -f +t (1 row)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A33%3A04 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A34%3A03 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A35%3A03 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2024-06-28%2005%3A01%3A49 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A36%3A03 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2024-06-28%2013%3A22%3A03 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2002%3A37%3A03 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2004%3A53%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2006%3A03%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2012%3A53%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2014%3A13%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-06-28%2015%3A33%3A03 - HEAD
Remove configuration-dependent output from new inplace-inval test.
040_pg_createsubscriber.pl fails on Windows due to unterminated quoted string
93/242 postgresql:pg_basebackup / pg_basebackup/040_pg_createsubscriber ERROR 7.95s exit status 2 ... stderr: # Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 2 just after 19. --- regress_log_040_pg_createsubscriber [18:15:28.475](4.206s) ok 18 - created database with ASCII characters from 1 to 45 # Running: createdb regression./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ [18:15:28.801](0.327s) ok 19 - created database with ASCII characters from 46 to 90 connection error: 'psql: error: unterminated quoted string in connection info string' while running 'psql -XAtq -d port=52984 host=C:/tools/nmsys64/tmp/hHg_pngw4z dbname='regression\\\\"\\\\������� �������������������� !"#$%&\\'()*+,-\\\\\\\\"\\\\\\\\\\\\' -f - -v ON_ERROR_STOP=1' at C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm line 2124. # Postmaster PID for node "node_p" is 916 ### Stopping node "node_p" using mode immediate # Running: pg_ctl -D C:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/data/t_040_pg_createsubscriber_node_p_data/pgdata -m immediate stop waiting for server to shut down.... done server stopped # No postmaster PID for node "node_p" # No postmaster PID for node "node_f" [18:15:29.072](0.271s) # Tests were run but no plan was declared and done_testing() was not seen. [18:15:29.073](0.001s) # Looks like your test exited with 2 just after 19.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-06-30%2019%3A03%3A06 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-30%2019%3A43%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-06-30%2022%3A03%3A06 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-30%2022%3A43%3A09 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-01%2000%3A03%3A05 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-01%2000%3A57%3A49 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-01%2002%3A03%3A06 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-01%2002%3A44%3A53 - HEAD
speed up a logical replica setup \ b3f5ccebd blew up on fairywren
Temporarily(?) weaken new pg_createsubscriber test on Windows.
Further weaken new pg_createsubscriber test on Windows.
prepare.sql fails with ERROR: out of memory
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dhole&dt=2024-07-02%2008%3A01%3A13 - HEAD
not ok 193 + prepare 989 ms --- /home/centos/build-farm-17-dhole/buildroot/HEAD/pgsql.build/src/test/regress/expected/prepare.out 2024-07-02 08:01:35.653580689 +0000 +++ /home/centos/build-farm-17-dhole/buildroot/HEAD/pgsql.build/src/test/regress/results/prepare.out 2024-07-02 08:12:46.599290163 +0000 @@ -186,9 +186,8 @@ --- regression.diffs: -- max parameter number and one above PREPARE q9 AS SELECT $268435455, $268435456; -ERROR: there is no parameter $268435456 -LINE 1: PREPARE q9 AS SELECT $268435455, $268435456; - ^ +ERROR: out of memory +DETAIL: Failed on request of size 1073741820 in memory context "PortalContext". -- test DEALLOCATE ALL;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2024-07-02%2008%3A30%3A38 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2024-07-02%2008%3A31%3A34 - HEAD
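An observation not stated in the original report: the failed request size matches a 4-byte-per-slot array covering the maximum parameter number,
268435455 \times 4\ \mathrm{bytes} = 1073741820\ \mathrm{bytes} \approx 1\ \mathrm{GB},
which is just under PostgreSQL's 1 GB palloc limit, so the allocation is attempted and can genuinely run out of memory on smaller machines instead of reaching the expected "there is no parameter $268435456" error.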
040_pg_createsubscriber.pl fails when the flushed position lags behind during pg_sync_replication_slots()
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-06-28%2004%3A42%3A48 - HEAD
163/295 postgresql:pg_basebackup / pg_basebackup/040_pg_createsubscriber ERROR 26.03s exit status 29 ... # Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 29 just after 23. --- regress_log_040_pg_createsubscriber [04:46:20.321](0.703s) ok 23 - standby contains unmet conditions on node S ### Restarting node "node_p" # Running: pg_ctl -w -D /home/bf/bf-build/piculet/HEAD/pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/data/t_040_pg_createsubscriber_node_p_data/pgdata -l /home/bf/bf-build/piculet/HEAD/pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/log/040_pg_createsubscriber_node_p.log restart waiting for server to shut down.... done server stopped waiting for server to start.... done server started # Postmaster PID for node "node_p" is 415642 ### Starting node "node_s" # Running: pg_ctl -w -D /home/bf/bf-build/piculet/HEAD/pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/data/t_040_pg_createsubscriber_node_s_data/pgdata -l /home/bf/bf-build/piculet/HEAD/pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/log/040_pg_createsubscriber_node_s.log -o --cluster-name=node_s start waiting for server to start.... done server started # Postmaster PID for node "node_s" is 416482 error running SQL: 'psql:<stdin>:1: ERROR: skipping slot synchronization as the received slot sync LSN 0/30047F0 for slot "failover_slot" is ahead of the standby position 0/3004708' while running 'psql -XAtq -d port=51506 host=/tmp/pqWohdD5Qj dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'SELECT pg_sync_replication_slots()' at /home/bf/bf-build/piculet/HEAD/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm line 2126.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-07-01%2006%3A55%3A38 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-07-02%2008%3A46%3A00 - HEAD
Also 040_pg_createsubscriber.pl fails due to autovacuum
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-07-02%2008%3A45%3A39 - HEAD
162/295 postgresql:pg_basebackup / pg_basebackup/040_pg_createsubscriber ERROR 81.93s exit status 1 # Failed test 'failover slot is synced' # at /home/bf/bf-build/adder/HEAD/pgsql/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl line 300. # got: '' # expected: 'failover_slot' # Looks like you failed 1 test of 36. --- regress_log_040_pg_createsubscriber [08:53:55.718](3.371s) not ok 26 - failover slot is synced [08:53:55.718](0.001s) # Failed test 'failover slot is synced' # at /home/bf/bf-build/adder/HEAD/pgsql/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl line 300. [08:53:55.719](0.000s) # got: '' # expected: 'failover_slot'
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-07-02%2018%3A06%3A00 - HEAD
New instability of the 040_pg_createsubscriber test
Fix the testcase introduced in commit 81d20fbf7a.
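misc_functions.sql fails due to JIT-related instability of EXPLAIN output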
not ok 138 + misc_functions 188 ms ... # 1 of 223 tests failed. --- regression.diffs --- /home/bf/bf-build/bushmaster/HEAD/pgsql/src/test/regress/expected/misc_functions.out 2024-07-09 00:00:11.692649130 +0000 +++ /home/bf/bf-build/bushmaster/HEAD/pgsql.build/src/test/regress/results/misc_functions.out 2024-07-09 00:02:05.252863892 +0000 @@ -641,7 +641,10 @@ explain_mask_costs ------------------------------------------------------------------------------------------ Function Scan on generate_series g (cost=N..N rows=30 width=N) (actual rows=30 loops=1) -(1 row) + JIT: + Functions: 3 + Options: Inlining false, Optimization false, Expressions true, Deforming true +(4 rows) ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-07-09%2000%3A07%3A59 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bushmaster&dt=2024-07-09%2000%3A17%3A47 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-07-09%2000%3A23%3A00 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-07-09%2000%3A39%3A55 - HEAD
pgsql: Teach planner how to estimate rows for timestamp generate_series \ Fixing bushmaster failures
Avoid JIT-related test instability in EXPLAIN ANALYZE
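A sketch of one way to keep JIT details out of such EXPLAIN output; this is an assumption about the general technique, not necessarily what the committed fix "Avoid JIT-related test instability in EXPLAIN ANALYZE" does, and the query shown is only illustrative:
-- sketch only: disabling JIT locally prevents the extra "JIT:" block seen in the diff above
BEGIN;
SET LOCAL jit = off;
EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF, SUMMARY OFF)
  SELECT g FROM generate_series(timestamp '2024-01-01',
                                timestamp '2024-01-30', interval '1 day') g;
ROLLBACK;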
ssl tests (001_ssltests.pl, 002_scram.pl, 003_sslinfo.pl) fail due to TCP port conflict
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-06-04%2011%3A20%3A07 - HEAD
287/295 postgresql:ssl / ssl/001_ssltests ERROR 6.18s (exit status 255 or signal 127 SIGinvalid) --- 001_ssltests_primary.log 2024-06-04 11:30:40.227 UTC [3373644][postmaster][:0] LOG: starting PostgreSQL 17beta1 on x86_64-linux, compiled by clang-13.0.1-11, 64-bit 2024-06-04 11:30:40.231 UTC [3373644][postmaster][:0] LOG: listening on Unix socket "/tmp/tUmT8ItNQ2/.s.PGSQL.60362" ... 2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] LOG: starting PostgreSQL 17beta1 on x86_64-linux, compiled by clang-13.0.1-11, 64-bit 2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] LOG: could not bind IPv4 address "127.0.0.1": Address already in use 2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] HINT: Is another postmaster already running on port 60362? If not, wait a few seconds and retry. 2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] WARNING: could not create listen socket for "127.0.0.1" 2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] FATAL: could not create any TCP/IP sockets
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-12%2023%3A15%3A50 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-21%2000%3A35%3A23 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-03-27%2011%3A15%3A31 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-04-16%2016%3A10%3A45 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-08%2011%3A19%3A42 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-11%2022%3A23%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-03-17%2023%3A03%3A50 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-20%2009%3A21%3A30 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-20%2016%3A53%3A27 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-04-07%2012%3A25%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-04-08%2019%3A50%3A13 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-04-19%2021%3A24%3A30 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-04-22%2006%3A17%3A13 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-04-29%2023%3A27%3A15 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-30%2000%3A24%3A28 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-06-13%2019%3A10%3A35 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-06-16%2017%3A55%3A34 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-06-17%2019%3A47%3A41 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-06-20%2007%3A09%3A26 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-06-25%2021%3A55%3A23 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-06-26%2021%3A45%3A06 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-06-30%2018%3A18%3A07 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-07-03%2023%3A34%3A02 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-06-30%2022%3A58%3A10 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-07-05%2004%3A57%3A55 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-07-06%2002%3A34%3A40 - REL_17_STABLE
ssl tests fail due to TCP port conflict
Also kerberos/001_auth fails due to a UDP/TCP port conflict
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-02%2009%3A27%3A15 - HEAD
\342\226\266 28/295 command "/usr/sbin/krb5kdc -P /home/bf/bf-build/rorqual/HEAD/pgsql.build/testrun/kerberos/001_auth/data/krb5kdc.pid" exited with value 1 ERROR 28/295 postgresql:kerberos / kerberos/001_auth ERROR 4.43s (exit status 255 or signal 127 SIGinvalid) --- krb5kdc.log Jul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](info): setting up network... Jul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): Address already in use - Cannot bind server socket on 127.0.0.1.55853 Jul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): Failed setting up a UDP socket (for 127.0.0.1.55853) Jul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): Address already in use - Error setting up network
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-05-15%2001%3A25%3A07 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-07-04%2008%3A28%3A19 - HEAD
ssl tests fail due to TCP port conflict \ kerberos/001_auth suffers from the port conflict
Force nodes for SSL tests to start in TCP mode
Choose ports for test servers less likely to result in conflicts
001_ssltests.pl failed in REL_12_STABLE due to host name translation error
Bailout called. Further testing stopped: pg_ctl start failed FAILED--Further testing stopped: pg_ctl start failed make: *** [check] Error 255 ================== pgsql.build/src/test/ssl/tmp_check/log/001_ssltests_master.log =================== 2024-07-08 22:19:47.955 UTC [25951:1] LOG: starting PostgreSQL 12.19 on aarch64-unknown-linux-gnu, compiled by gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17), 64-bit 2024-07-08 22:19:47.957 UTC [25951:2] LOG: could not translate host name "/tmp/xwbhy3r7ai", service "32471" to address: Name or service not known 2024-07-08 22:19:47.957 UTC [25951:3] WARNING: could not create listen socket for "/tmp/xwbhy3r7ai" 2024-07-08 22:19:47.957 UTC [25951:4] FATAL: could not create any TCP/IP sockets 2024-07-08 22:19:47.957 UTC [25951:5] LOG: database system is shut down
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-07-08%2021%3A05%3A04 - REL_12_STABLE
Revert "Force nodes for SSL tests to start in TCP mode"
Revert "Force nodes for SSL tests to start in TCP mode"
xml.sql fails because of test result changes caused by libxml2 2.13 incompatibilities
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2024-07-09%2018%3A17%3A23 - HEAD
--- /Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/regress/expected/xml.out 2024-07-09 14:17:24 +++ /Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/regress/results/xml.out 2024-07-09 14:18:36 @@ -254,17 +254,11 @@ DETAIL: line 1: xmlParseEntityRef: no name <invalidentity>&</invalidentity> ^ -line 1: chunk is not well balanced -<invalidentity>&</invalidentity> - ^ SELECT xmlparse(content '<undefinedentity>&idontexist;</undefinedentity>'); ERROR: invalid XML content DETAIL: line 1: Entity 'idontexist' not defined <undefinedentity>&idontexist;</undefinedentity> ^ -line 1: chunk is not well balanced -<undefinedentity>&idontexist;</undefinedentity> - ^ SELECT xmlparse(content '<invalidns xmlns=''<''/>'); xmlparse --------------------------- @@ -283,9 +277,6 @@ <twoerrors>&idontexist;</unbalanced> ^ line 1: Opening and ending tag mismatch: twoerrors line 1 and unbalanced -<twoerrors>&idontexist;</unbalanced> - ^ -line 1: chunk is not well balanced <twoerrors>&idontexist;</unbalanced> ^ SELECT xmlparse(content '<nosuchprefix:tag/>');
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2024-07-10%2014%3A16%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2024-07-09%2021%3A58%3A12 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2024-07-09%2020%3A01%3A37 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2024-07-10%2023%3A45%3A57 - REL_16_STABLE
XML test error on Arch Linux \ ignoring XML_ERR_NOT_WELL_BALANCED
Suppress "chunk is not well balanced" errors from libxml2.
Make our back branches compatible with libxml2 2.13.x.
xml.sql fails due to libxml2-version-dependent error reports
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-07-09%2019%3A02%3A05 - HEAD
--- /opt/postgres/bf/v11/buildroot/HEAD/pgsql.build/src/test/regress/expected/xml_2.out 2024-07-09 19:02:04.906168111 +0000 +++ /opt/postgres/bf/v11/buildroot/HEAD/pgsql.build/src/test/regress/results/xml.out 2024-07-09 19:08:11.038226015 +0000 @@ -282,16 +282,12 @@ SELECT xmlparse(content '<unclosed>'); ERROR: invalid XML content DETAIL: line 1: Premature end of data in tag unclosed line 1 -<unclosed> - ^ SELECT xmlparse(content '<parent><child></parent></child>'); ERROR: invalid XML content DETAIL: line 1: Opening and ending tag mismatch: child line 1 and parent <parent><child></parent></child> ^ line 1: Opening and ending tag mismatch: parent line 1 and child -<parent><child></parent></child> - ^ SELECT xmlparse(document ' '); ERROR: invalid XML document DETAIL: line 1: Start tag expected, '<' not found @@ -345,16 +341,12 @@ SELECT xmlparse(document '<unclosed>'); ERROR: invalid XML document DETAIL: line 1: Premature end of data in tag unclosed line 1 -<unclosed> - ^ SELECT xmlparse(document '<parent><child></parent></child>'); ERROR: invalid XML document DETAIL: line 1: Opening and ending tag mismatch: child line 1 and parent <parent><child></parent></child> ^ line 1: Opening and ending tag mismatch: parent line 1 and child -<parent><child></parent></child> - ^ SELECT xmlpi(name foo); xmlpi ---------
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=massasauga&dt=2024-07-09%2019%3A15%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=widowbird&dt=2024-07-09%2019%3A30%3A07 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-07-09%2019%3A39%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=massasauga&dt=2024-07-09%2019%3A40%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=plover&dt=2024-07-09%2019%3A54%3A07 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2024-07-09%2019%3A58%3A12 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=batta&dt=2024-07-09%2020%3A05%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rinkhals&dt=2024-07-09%2020%3A05%3A51 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=siskin&dt=2024-07-09%2020%3A11%3A18 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=guaibasaurus&dt=2024-07-09%2020%3A20%3A04 - HEAD
XML test error on Arch Linux \ Several animals are reporting different error text
Remove new XML test cases added by e7192486d.
040_pg_createsubscriber.pl fails because the subscriber's local catalog xmin is ahead of the remote xmin
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-07-08%2013%3A16%3A35 - HEAD
163/295 postgresql:pg_basebackup / pg_basebackup/040_pg_createsubscriber ERROR 84.29s exit status 1 --- regress_log_040_pg_createsubscriber [13:28:05.647](2.460s) not ok 26 - failover slot is synced [13:28:05.648](0.001s) # Failed test 'failover slot is synced' # at /home/bf/bf-build/skink-master/HEAD/pgsql/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl line 307. [13:28:05.648](0.000s) # got: '' # expected: 'failover_slot' #### Begin standard error psql:<stdin>:1: WARNING: subscription was created, but is not connected HINT: To initiate replication, you must manually create the replication slot, enable the subscription, and refresh the subscription. #### End standard error --- 040_pg_createsubscriber_node_s.log 2024-07-08 13:28:05.369 UTC [3985464][client backend][0/2:0] LOG: statement: SELECT pg_sync_replication_slots() 2024-07-08 13:28:05.557 UTC [3985464][client backend][0/2:0] LOG: could not sync slot "failover_slot" as remote slot precedes local slot 2024-07-08 13:28:05.557 UTC [3985464][client backend][0/2:0] DETAIL: Remote slot has LSN 0/30047B8 and catalog xmin 743, but local slot has LSN 0/30047B8 and catalog xmin 744. 2024-07-08 13:28:05.557 UTC [3985464][client backend][0/2:0] STATEMENT: SELECT pg_sync_replication_slots() --- 040_pg_createsubscriber_node_p.log 2024-07-08 13:28:00.702 UTC [3981996][postmaster][:0] LOG: listening on Unix socket "/tmp/WnqJHhLtur/.s.PGSQL.60666" 2024-07-08 13:28:00.872 UTC [3982331][walsender][:0] FATAL: the database system is starting up 2024-07-08 13:28:00.875 UTC [3982328][startup][:0] LOG: database system was shut down at 2024-07-08 13:28:00 UTC 2024-07-08 13:28:01.105 UTC [3981996][postmaster][:0] LOG: database system is ready to accept connections
speed up a logical replica setup \ pg_createsubscriber can affect primary's catalog xmin
Fix unstable test in 040_pg_createsubscriber.
collate.windows.win1252.sql fails on Windows due to trailing whitespace differences
| 41/287 - regression tests pass FAIL 41/287 postgresql:recovery / recovery/027_stream_regress ERROR 756.41s exit status 1 ------------------------------------- 8< ------------------------------------- stderr: # Failed test 'regression tests pass' # at C:/prog/bf/root/REL_17_STABLE/pgsql/src/test/recovery/t/027_stream_regress.pl line 95. # got: '256' # expected: '0' # Looks like you failed 1 test of 9. --- regress_log_027_stream_regress ... not ok 154 + collate.windows.win1252 7120 ms # 1 of 223 tests failed. --- C:/prog/bf/root/REL_17_STABLE/pgsql/src/test/regress/expected/collate.windows.win1252.out 2024-07-02 01:00:52.572435500 +0000 +++ C:/prog/bf/root/REL_17_STABLE/pgsql.build/testrun/recovery/027_stream_regress/data/results/collate.windows.win1252.out 2024-07-10 16:19:06.807922000 +0000 @@ -21,10 +21,10 @@ ); \\d collate_test1 Table "collate_tests.collate_test1" - Column | Type | Collation | Nullable | Default + Column | Type | Collation | Nullable | Default --------+---------+-----------+----------+--------- - a | integer | | | - b | text | en_US | not null | + a | integer | | | + b | text | en_US | not null | CREATE TABLE collate_test_fail ( a int, @@ -52,10 +52,10 @@ ); \\d collate_test_like Table "collate_tests.collate_test_like" - Column | Type | Collation | Nullable | Default + Column | Type | Collation | Nullable | Default --------+---------+-----------+----------+--------- - a | integer | | | - b | text | en_US | not null | + a | integer | | | + b | text | en_US | not null | CREATE TABLE collate_test2 ( ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-11%2009%3A45%3A50 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-10%2017%3A19%3A23 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-12%2010%3A26%3A09 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-13%2013%3A44%3A15 - REL_17_STABLE
CFbot failed on Windows platform
Change pg_regress.c back to using diff -w on Windows
constraints.sql fails due to unstable order of new query results
--- /home/ec2-user/bf/root/REL_15_STABLE/pgsql/src/test/regress/expected/constraints.out 2024-07-12 11:06:13.651296795 +0000 +++ /home/ec2-user/bf/root/REL_15_STABLE/pgsql.build/src/test/regress/results/constraints.out 2024-07-12 11:12:36.482910801 +0000 @@ -642,8 +642,8 @@ FROM pg_constraint WHERE conname IN ('tp_pkey', 'tp_b_a_key'); conname | conparentid | conislocal | coninhcount ------------+-------------+------------+------------- - tp_pkey | 0 | t | 0 tp_b_a_key | 0 | t | 0 + tp_pkey | 0 | t | 0 (2 rows)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2024-07-12%2011%3A13%3A45 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-07-12%2011%3A03%3A05 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2024-07-12%2010%3A57%3A31 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-07-12%2011%3A04%3A03 - REL_13_STABLE
Add ORDER BY to new test query
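The fix above simply adds an ORDER BY so that the two-row result no longer depends on physical row order. A minimal sketch of the idea, based on the query shown in the diff (the exact ORDER BY expression and the updated expected output in the committed fix may differ):
-- Without ORDER BY, the order of the two pg_constraint rows is unspecified
-- and can vary between runs; sorting on conname makes it deterministic.
SELECT conname, conparentid, conislocal, coninhcount
FROM pg_constraint
WHERE conname IN ('tp_pkey', 'tp_b_a_key')
ORDER BY conname;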
Isolation tests fail on hamerkop with "too many clients" errors
(hamerkop is a Windows animal with gssapi enabled on v16- (as of 2024-06-08))
test skip-locked ... ok 530 ms test skip-locked-2 ... ok 401 ms test skip-locked-3 ... FAILED (test process exited with exit code 1) 364 ms test skip-locked-4 ... FAILED (test process exited with exit code 1) 197 ms --- diff -w -U3 c:/build-farm-local/buildroot/REL_13_STABLE/pgsql.build/src/test/isolation/expected/skip-locked-3.out c:/build-farm-local/buildroot/REL_13_STABLE/pgsql.build/src/test/isolation/results/skip-locked-3.out --- c:/build-farm-local/buildroot/REL_13_STABLE/pgsql.build/src/test/isolation/expected/skip-locked-3.out 2024-06-07 23:17:47 +0900 +++ c:/build-farm-local/buildroot/REL_13_STABLE/pgsql.build/src/test/isolation/results/skip-locked-3.out 2024-06-08 00:01:27 +0900 @@ -1,25 +1,3 @@ Parsed test spec with 3 sessions - -starting permutation: s1a s2a s3a s1b s2b s3b ... +Connection 3 failed: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information: Credential cache is empty +FATAL: sorry, too many clients already
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-08%2015%3A28%3A48 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-12%2014%3A25%3A09 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-15%2017%3A15%3A44 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-25%2014%3A33%3A17 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-06-28%2014%3A24%3A51 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-07-11%2013%3A34%3A40 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-07-12%2013%3A57%3A12 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-07-12%2013%3A30%3A54 - REL_13_STABLE
Why is citext/regress failing on hamerkop? \ "sorry, too many clients already" failures
Fix lost Windows socket EOF events.
partition_split.sql and partition_merge.sql fail due to new queries missing schema names
--- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/partition_split.out 2024-07-15 06:23:21.117538000 +0200 +++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/partition_split.out 2024-07-15 06:26:53.086612000 +0200 @@ -1506,18 +1506,24 @@ tablename | tablespace -----------+------------------ t | regress_tblspace + t | tp_0_1 | regress_tblspace tp_1_2 | regress_tblspace -(3 rows) + tp_1_2 | +(5 rows) ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2024-07-15%2004%3A21%3A02 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-15%2004%3A22%3A01 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2024-07-15%2004%3A25%3A17 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2024-07-15%2004%3A30%3A34 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2024-07-15%2004%3A36%3A39 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=plover&dt=2024-07-15%2004%3A50%3A22 - REL_17_STABLE
Fix unstable tests in partition_merge.sql and partition_split.sql.
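The instability comes from listing tables by name in pg_tables without restricting the schema, so identically named tables from another schema can leak into the output, as in the extra rows above. A hedged sketch of the stabilization (the schema name below is illustrative; the committed fix may qualify the queries differently):
-- Constrain the catalog lookup to the test's own schema so that
-- same-named tables created elsewhere cannot change the row count.
SELECT tablename, tablespace
FROM pg_catalog.pg_tables
WHERE schemaname = 'partitions_split_schema'  -- illustrative name
  AND tablename IN ('t', 'tp_0_1', 'tp_1_2')
ORDER BY tablename;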
select_parallel.sql fails due to a newly added query plan change
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-07-17%2017%3A12%3A53 - HEAD
193/296 postgresql:recovery / recovery/027_stream_regress ERROR 192.60s exit status 1 # Failed test 'regression tests pass' # at /home/bf/bf-build/tamandua/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 95. # got: '256' # expected: '0' # Looks like you failed 1 test of 9. --- regress_log_027_stream_regress not ok 155 - select_parallel 3183 ms --- /home/bf/bf-build/tamandua/HEAD/pgsql/src/test/regress/expected/select_parallel.out 2024-07-12 02:22:21.079018314 +0000 +++ /home/bf/bf-build/tamandua/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/select_parallel.out 2024-07-17 17:17:36.018136963 +0000 @@ -671,15 +671,15 @@ -- the joinrel is not parallel-safe due to the OFFSET clause in the subquery explain (costs off) select * from tenk1 t1, (select * from tenk2 t2 offset 0) t2 where t1.two > t2.two; - QUERY PLAN -------------------------------------------- + QUERY PLAN +------------------------------------------------- Nested Loop Join Filter: (t1.two > t2.two) - -> Gather - Workers Planned: 4 - -> Parallel Seq Scan on tenk1 t1 + -> Seq Scan on tenk2 t2 -> Materialize - -> Seq Scan on tenk2 t2 + -> Gather + Workers Planned: 4 + -> Parallel Seq Scan on tenk1 t1 (7 rows) alter table tenk2 reset (parallel_workers);
Fix unstable test in select_parallel.sql
postgres_fdw.sql hangs on lorikeet due to a Cygwin anomaly
(lorikeet is a Cygwin animal)
=================================================== timed out after 10800 secs --- lastcommand # +++ regress install-check in contrib/postgres_fdw +++ # using postmaster on /home/andrew/bf/root/tmp/buildfarm-e2ahpQ, port 5878
Add non-blocking version of PQcancel \ query-cancelling backend is stuck inside poll()
postgres_fdw: Split out the query_cancel test to its own file
043_vacuum_horizon_floor.pl fails with a timeout while waiting for index vacuuming
Test Summary Report ------------------- t/043_vacuum_horizon_floor.pl (Wstat: 7424 Tests: 3 Failed: 0) Non-zero exit status: 29 Parse errors: No plan found in TAP output --- regress_log_043_vacuum_horizon_floor [21:27:38.061](0.011s) ok 3 - Cursor query returned 7 from second fetch. Expected value 7. [21:33:30.415](352.354s) # poll_query_until timed out executing this query: # # SELECT index_vacuum_count > 0 # FROM pg_stat_progress_vacuum # WHERE datname='test_db' AND relid::regclass = 'vac_horizon_floor_table'::regclass; # # expecting this output: # t # last actual query output: # f # with stderr: IPC::Run: timeout on timer #2 at /usr/share/perl5/IPC/Run.pm line 2951.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-07-19%2019%3A34%3A56 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-07-20%2023%3A47%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-07-21%2000%3A45%3A49 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-07-22%2015%3A00%3A11 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-07-22%2014%3A01%3A51 - REL_17_STABLE
Revert "Test that vacuum removes tuples older than OldestXmin"
043_vacuum_horizon_floor.pl fails on slow machines (especially 32-bit ones) due to an IPC::Run timeout
t/043_vacuum_horizon_floor.pl (Wstat: 1024 Tests: 0 Failed: 0) Non-zero exit status: 4 Parse errors: No plan found in TAP output Files=42, Tests=590, 3121 wallclock secs ( 1.75 usr 0.40 sys + 301.09 cusr 296.01 csys = 599.25 CPU) Result: FAIL --- regress_log_043_vacuum_horizon_floor [04:14:23.558](258.036s) # issuing query via background psql: # INSERT INTO vac_horizon_floor_table VALUES (99); # UPDATE vac_horizon_floor_table SET col1 = 100 WHERE col1 = 99; # SELECT 'after_update'; # IPC::Run: timeout on timer #1 at /usr/share/perl5/IPC/Run.pm line 2951.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2024-07-21%2006%3A26%3A51 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-07-20%2006%3A09%3A08 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gull&dt=2024-07-20%2011%3A16%3A05 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-07-21%2006%3A01%3A48 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2024-07-21%2009%3A07%3A36 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-07-22%2001%3A00%3A26 - HEAD
Vacuum ERRORs out considering freezing dead tuples from before OldestXmin \ the test takes too long
Revert "Test that vacuum removes tuples older than OldestXmin"
select_parallel.sql fails on Cygwin due to assertion failures (< v16 only)
parallel group (5 tests, in groups of 3): psql_crosstab psql rules amutils stats_ext rules ... ok 1289 ms psql ... ok 689 ms psql_crosstab ... ok 238 ms amutils ... ok 185 ms stats_ext ... ok 1388 ms test select_parallel ... FAILED (test process exited with exit code 2) 16012 ms --- postmaster.log 2024-07-09 05:01:44.019 EDT [668cfc72.ab3:146] pg_regress/select_parallel LOG: statement: explain (costs off) select * from (select string4, count(unique2) from tenk1 group by string4 order by string4) ss right join (values (1),(2),(3)) v(x) on true; TRAP: FailedAssertion("!(slot->in_use)", File: "/home/andrew/bf/root/REL_12_STABLE/pgsql.build/../pgsql/src/backend/postmaster/bgworker.c", Line: 436) *** starting debugger for pid 2026, tid 6324 2024-07-09 05:01:44.020 EDT [668cfc72.ab3:147] pg_regress/select_parallel LOG: statement: select * from (select string4, count(unique2) from tenk1 group by string4 order by string4) ss right join (values (1),(2),(3)) v(x) on true; 2024-07-09 05:01:50.267 EDT [668cfc72.ab3:148] pg_regress/select_parallel FATAL: postmaster exited during a parallel transaction TRAP: FailedAssertion("!(entry->trans == ((void *)0))", File: "/home/andrew/bf/root/REL_12_STABLE/pgsql.build/../pgsql/src/backend/postmaster/pgstat.c", Line: 872)
intermittent failures in Cygwin from select_parallel tests \ signal blocking broke on Cygwin
Also select_parallel.sql hangs on Cygwin (<v16 only)
=================================================== timed out after 10800 secs test stats_ext ... ok 1370 ms test select_parallel ...
Also dblink.sql, postgres_fdw.sql fail on Cygwin due to connection failures (<v16 only)
test dblink ... FAILED 1067 ms --- pgsql.build/contrib/dblink/regression.diffs --- /home/andrew/bf/root/REL_13_STABLE/pgsql.build/../pgsql/contrib/dblink/expected/dblink.out 2021-04-10 20:04:00.415079000 -0400 +++ /home/andrew/bf/root/REL_13_STABLE/pgsql.build/contrib/dblink/results/dblink.out 2024-07-16 05:38:21.825846900 -0400 @@ -836,36 +836,19 @@ (11 rows) SELECT dblink_connect('dtest1', connection_parameters()); - dblink_connect ----------------- - OK -(1 row) - +ERROR: could not establish connection +DETAIL: could not connect to server: Connection refused --- inst/logfile: 2024-07-16 05:38:21.492 EDT [66963f67.7823:4] LOG: could not accept new connection: Software caused connection abort 2024-07-16 05:38:21.492 EDT [66963f8c.79e5:170] pg_regress/dblink ERROR: could not establish connection 2024-07-16 05:38:21.492 EDT [66963f8c.79e5:171] pg_regress/dblink DETAIL: could not connect to server: Connection refused Is the server running locally and accepting connections on Unix domain socket "/home/andrew/bf/root/tmp/buildfarm-DK1yh4/.s.PGSQL.5838"?
Sporadic connection-setup-related test failures on Cygwin in v15-
This problem went away in v16 with commit 7389aad63666a2cac18cd6d7496378d7f50ef37b; since then the postmaster no longer depends on signal blocking working completely reliably.
postgres_fdw.sql failed on hake due to pgfdw_conn_check unexpectedly returning 0
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hake&dt=2024-07-26%2017%3A16%3A31 - HEAD
--- /export/home/elmer/c15x/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/expected/postgres_fdw.out Fri Jul 26 19:16:29 2024 +++ /export/home/elmer/c15x/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/results/postgres_fdw.out Fri Jul 26 19:31:12 2024 @@ -12326,7 +12326,7 @@ FROM postgres_fdw_get_connections(true); case ------ - 1 + 0 (1 row)
postgres_fdw: Fix bug in connection status check.
The --disable-spinlocks animals fail after 9d9b9d46f due to a spinlock being released twice
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-29%2013%3A27%3A42 - HEAD
regress_log_027_stream_regress not ok 109 + brin 3653 ms # (test process exited with exit code 2) not ok 110 + gin 3669 ms # (test process exited with exit code 2) --- 027_stream_regress_primary.log 2024-07-29 13:31:55.529 UTC [3768125] LOG: background worker "parallel worker" (PID 3810206) was terminated by signal 6: Aborted
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-29%2013%3A19%3A25 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2024-07-29%2013%3A19%3A00 - HEAD
Fix double-release of spinlock
001_emergency_vacuum.pl fails to wait for datfrozenxid advancing
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-05-19%2006%3A33%3A34 - HEAD
(dodo is a slow armv7l machine)
# +++ tap install-check in src/test/modules/xid_wraparound +++ # poll_query_until timed out executing this query: # # SELECT NOT EXISTS ( # SELECT * # FROM pg_database # WHERE age(datfrozenxid) > current_setting('autovacuum_freeze_max_age')::int) # # expecting this output: # t # last actual query output: # f # with stderr: # Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 29 just after 1. t/001_emergency_vacuum.pl .. Dubious, test returned 29 (wstat 7424, 0x1d00) All 1 subtests passed
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-11%2011%3A30%3A18 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-12%2011%3A32%3A08 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2013%3A34%3A34 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-02%2016%3A33%3A17 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-03%2007%3A35%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-03%2017%3A31%3A02 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-03%2021%3A32%3A15 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-10%2015%3A39%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-17%2009%3A34%3A13 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-17%2011%3A39%3A26 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-20%2020%3A35%3A39 - HEAD
Testing autovacuum wraparound (including failsafe) \ autovacuum worker can't be started due to a race condition
xid_wraparound tests intermittent failure.
Stabilize xid_wraparound tests
040_pg_createsubscriber.pl fails because recovery times out during the pg_createsubscriber run
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-07-11%2007%3A25%3A12 - HEAD
239/295 postgresql:pg_basebackup / pg_basebackup/040_pg_createsubscriber ERROR 274.49s exit status 29 --- regress_log_040_pg_createsubscriber recovery_target_lsn = '0/30098D0' pg_createsubscriber: starting the subscriber ... 2024-07-11 07:37:10.001 UTC [2948830][client backend][0/3:0] LOG: statement: SELECT pg_catalog.pg_is_in_recovery() ... pg_createsubscriber: server was started pg_createsubscriber: waiting for the target server to reach the consistent state ... 2024-07-11 07:40:10.816 UTC [2948830][client backend][0/183:0] LOG: statement: SELECT pg_catalog.pg_is_in_recovery() ... 2024-07-11 07:40:10.837 UTC [2948531][postmaster][:0] LOG: received fast shutdown request # (there is no "recovery stopping after WAL location (LSN) XXXX" record) ... pg_createsubscriber: server was stopped pg_createsubscriber: error: recovery timed out
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-07-12%2000%3A38%3A06 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-13%2001%3A53%3A19 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-07-22%2002%3A31%3A32 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-07-25%2002%3A39%3A02 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-07-26%2009%3A20%3A15 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-07-26%2022%3A24%3A58 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-07-26%2016%3A02%3A40 - HEAD
speed up a logical replica setup \ recovery timed out
pg_createsubscriber: Fix an unpredictable recovery wait time.
002_pg_upgrade.pl fails with debug_parallel_query = regress after f5f30c22e
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=perentie&dt=2024-08-01%2000%3A00%3A03 - HEAD
# Failed test 'regression tests pass' # at t/002_pg_upgrade.pl line 260. # got: '256' # expected: '0' # Failed test 'dump before running pg_upgrade' # at t/002_pg_upgrade.pl line 322. --- 002_pg_upgrade_old_node.log 2024-08-01 09:12:17.546 JST [458166:4] FATAL: cannot change "client_encoding" during a parallel operation ... 2024-08-01 09:12:18.391 JST [450200:253] LOG: background worker "parallel worker" (PID 458166) was terminated by signal 6: Aborted 2024-08-01 09:12:18.391 JST [450200:254] DETAIL: Failed process was running: SELECT a, b FROM collate_test1 UNION ALL SELECT a, b FROM collate_test3 ORDER BY 2; ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-07-31%2022%3A57%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-07-31%2023%3A39%3A02 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-08-01%2000%3A00%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=massasauga&dt=2024-07-31%2022%3A55%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=massasauga&dt=2024-07-31%2023%3A40%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=massasauga&dt=2024-07-31%2023%3A55%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mule&dt=2024-07-31%2023%3A30%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-07-31%2023%3A09%3A05 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2024-08-01%2000%3A50%3A33 - REL_16_STABLE
Also collate.icu.utf8.sql triggers an assertion failure when a non-default encoding is used
not ok 145 + collate.icu.utf8 3316 ms # (test process exited with exit code 2) --- inst/logfile 2024-08-01 03:00:53.562 CEST [821316:4] FATAL: cannot change "client_encoding" during a parallel operation TRAP: failed Assert("!IsTransactionOrTransactionBlock()"), File: "pgstat.c", Line: 632, PID: 821316 0x9deaed <ExceptionalCondition+0x6d> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x8a0e86 <pgstat_report_stat+0x2d6> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x8a0f58 <pgstat_shutdown_hook+0x38> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x8592c5 <shmem_exit+0x65> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x8591dc <proc_exit_prepare+0x5c> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x859136 <proc_exit+0x56> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x9df998 <errfinish+0x258> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x686626 <assign_client_encoding+0x56> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x9f8ce3 <AtEOXact_GUC+0x2f3> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x546935 <AbortTransaction+0x285> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x54662b <AbortOutOfAnyTransaction+0x5b> at /home/pgbf/buildroot/HEAD/inst/bin/postgres 0x9f23d9 <ShutdownPostgres+0x9> at /home/pgbf/buildroot/HEAD/inst/bin/postgres
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jay&dt=2024-07-31%2023%3A58%3A16 - HEAD
Revert "Allow parallel workers to cope with a newly-created session user ID."
001_concurrent_transaction.pl fails because the standby is not synchronized (after e2ed7e322)
223/299 postgresql:pg_visibility / pg_visibility/001_concurrent_transaction ERROR 5.55s exit status 29 stderr: # Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 29 just after 1. (test program exited with status code 29) --- regress_log_001_concurrent_transaction [21:27:36.319](0.096s) ok 1 - pg_check_visible() detects no errors error running SQL: 'psql:<stdin>:1: ERROR: function pg_check_visible(unknown) does not exist LINE 1: SELECT * FROM pg_check_visible('vacuum_test'); ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts.' while running 'psql -XAtq -d port=28153 host=/tmp/H_TnqDvPcT dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'SELECT * FROM pg_check_visible('vacuum_test');' at /home/bf/bf-build/calliphoridae/HEAD/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm line 2140.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-08-15%2021%3A20%3A13 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-08-15%2021%3A20%3A00 - HEAD
pgsql: Fix GetStrictOldestNonRemovableTransactionId() on standby \ buildfarm failures
Add missing wait_for_catchup() to pg_visibility tap test
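Conceptually, the missing step is to wait until the standby has replayed everything the primary has written before the standby is queried; otherwise the pg_check_visible() call can run before the WAL that created the extension objects has been applied there. A rough SQL-level sketch of the condition the added wait_for_catchup() ensures (the functions are standard; the literal LSN is only illustrative):
-- On the primary: capture the current write position.
SELECT pg_current_wal_lsn();
-- On the standby: poll until replay has reached at least that position.
SELECT pg_last_wal_replay_lsn() >= '0/3000000'::pg_lsn;  -- illustrative LSN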
006_db_file_copy.pl failed on dikkop due to replication timeout
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-07-27%2023%3A22%3A57 - HEAD
[02:09:22.511](24.583s) ok 1 - full backup ... [02:10:35.758](73.247s) not ok 2 - incremental backup --- 006_db_file_copy_primary.log 2024-07-28 02:09:29.441 UTC [67785:12] 006_db_file_copy.pl LOG: received replication command: START_REPLICATION SLOT "pg_basebackup_67785" 0/4000000 TIMELINE 1 2024-07-28 02:09:29.441 UTC [67785:13] 006_db_file_copy.pl STATEMENT: START_REPLICATION SLOT "pg_basebackup_67785" 0/4000000 TIMELINE 1 2024-07-28 02:09:29.441 UTC [67785:14] 006_db_file_copy.pl LOG: acquired physical replication slot "pg_basebackup_67785" 2024-07-28 02:09:29.441 UTC [67785:15] 006_db_file_copy.pl STATEMENT: START_REPLICATION SLOT "pg_basebackup_67785" 0/4000000 TIMELINE 1 2024-07-28 02:10:29.487 UTC [67785:16] 006_db_file_copy.pl LOG: terminating walsender process due to replication timeout 2024-07-28 02:10:29.487 UTC [67785:17] 006_db_file_copy.pl STATEMENT: START_REPLICATION SLOT "pg_basebackup_67785" 0/4000000 TIMELINE 1
Also 001_stream_rep.pl failed on dikkop due to replication timeout
regress_log_001_stream_rep # Taking pg_basebackup my_backup from node "standby_1" # Running: pg_basebackup -D /mnt/data/buildfarm/buildroot/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/backup/my_backup -h /mnt/data/buildfarm/buildroot/tmp/dU3MkMjYZe -p 20416 --checkpoint fast --no-sync pg_basebackup: error: could not send feedback packet: server closed the connection unexpectedly --- 001_stream_rep_standby_1.log 2024-08-02 08:24:11.371 UTC [33738:5] standby_1 LOG: received replication command: START_REPLICATION 0/3000000 TIMELINE 1 2024-08-02 08:24:11.371 UTC [33738:6] standby_1 STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1 2024-08-02 08:25:11.485 UTC [33738:7] standby_1 LOG: terminating walsender process due to replication timeout 2024-08-02 08:25:11.485 UTC [33738:8] standby_1 STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1
Also 003_timeline.pl failed on dikkop due to replication timeout
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-08-04%2010%3A04%3A51 - HEAD
regress_log_003_timeline # Running: pg_basebackup -D /mnt/data/buildfarm/buildroot/HEAD/pgsql.build/src/bin/pg_combinebackup/tmp_check/t_003_timeline_node1_data/backup/backup2 --no-sync -cfast --incremental /mnt/data/buildfarm/buildroot/HEAD/pgsql.build/src/bin/pg_combinebackup/tmp_check/t_003_timeline_node1_data/backup/backup1/backup_manifest pg_basebackup: error: could not receive data from WAL stream: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ... [12:47:42.477](87.278s) not ok 2 - incremental backup from node1 --- 003_timeline_node1.log 2024-08-04 12:46:34.985 UTC [4951:12] 003_timeline.pl LOG: received replication command: START_REPLICATION SLOT "pg_basebackup_4951" 0/4000000 TIMELINE 1 2024-08-04 12:46:34.985 UTC [4951:13] 003_timeline.pl STATEMENT: START_REPLICATION SLOT "pg_basebackup_4951" 0/4000000 TIMELINE 1 2024-08-04 12:46:34.986 UTC [4951:14] 003_timeline.pl LOG: acquired physical replication slot "pg_basebackup_4951" 2024-08-04 12:46:34.986 UTC [4951:15] 003_timeline.pl STATEMENT: START_REPLICATION SLOT "pg_basebackup_4951" 0/4000000 TIMELINE 1 2024-08-04 12:47:34.987 UTC [4951:16] 003_timeline.pl LOG: terminating walsender process due to replication timeout
dikkop failed the pg_combinebackupCheck/006_db_file_copy.pl test
`make temp-install/check` triggers an assertion failure on a CLOBBER_CACHE_ALWAYS animal after c14d4acb8
performing post-bootstrap initialization ... TRAP: failed Assert("found"), File: "typcache.c", Line: 3077, PID: 22100
type cache cleanup improvements \ trilobite failed to perform `make check`
Revert: Avoid looping over all type cache entries in TypeCacheRelCallback()
The postgres_fdw test fails due to an unexpected warning on canceling a statement
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-06-20%2009%3A52%3A04 - HEAD
70/70 postgresql:postgres_fdw-running / postgres_fdw-running/regress ERROR 278.67s exit status 1 --- postgres_fdw-running/regress/regression.diffs --- /home/bf/bf-build/olingo/HEAD/pgsql/contrib/postgres_fdw/expected/postgres_fdw.out 2024-06-07 10:43:46.591500366 +0000 +++ /home/bf/bf-build/olingo/HEAD/pgsql.build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out 2024-06-20 10:13:52.926459374 +0000 @@ -2775,6 +2775,7 @@ SET LOCAL statement_timeout = '10ms'; select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long ERROR: canceling statement due to statement timeout +WARNING: could not get result of cancel request due to timeout COMMIT; -- ==================================================================== -- Check that userid to use when querying the remote table is correctly
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-04-02%2023%3A58%3A01 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-07-02%2000%3A24%3A10 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2024-07-09%2000%3A17%3A16 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-07-09%2003%3A46%3A50 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2024-07-09%2019%3A02%3A27 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-07-13%2004%3A15%3A25 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-07-16%2022%3A45%3A08 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-07-20%2015%3A02%3A23 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-07-20%2020%3A57%3A01 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-07-26%2013%3A15%3A09 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-08-09%2005%3A25%3A24 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-08-10%2019%3A52%3A56 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-08-19%2011%3A30%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-08-20%2019%3A29%3A20 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-08-29%2010%3A42%3A09 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-08-29%2012%3A52%3A00 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-08-30%2006%3A25%3A46 - HEAD
postgres_fdw-running/regress fails due to an unexpected warning
Make postgres_fdw's query_cancel test less flaky.
kerberos/001_auth.pl fails on chipmunk due to missing Kerberos utilities
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-03%2014%3A12%3A56 - HEAD
# +++ tap check in src/test/kerberos +++ Bailout called. Further testing stopped: command "/usr/sbin/kdb5_util create -s -P secret0" exited with value 2 FAILED--Further testing stopped: command "/usr/sbin/kdb5_util create -s -P secret0" exited with value 2 Makefile:22: recipe for target 'check' failed --- regress_log_001_auth Can't exec "/usr/sbin/kdb5_util": No such file or directory at /home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/kerberos/../../../src/test/perl/PostgreSQL/Test/Utils.pm line 349.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-03%2005%3A35%3A30 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-02%2019%3A42%3A01 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-02%2012%3A25%3A07 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-02%2006%3A06%3A49 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-02%2000%3A32%3A22 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-01%2019%3A19%3A19 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-22%2010%3A25%3A23 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-22%2021%3A59%3A29 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-24%2010%3A25%3A23 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-24%2016%3A47%3A59 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-26%2016%3A00%3A16 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-08-26%2006%3A14%3A22 - REL_17_STABLE
configure failures on chipmunk \ chipmunk fails on the kerberosCheck stage
configure failures on chipmunk \ chipmunk turned green
pg_visibility failed due to a stack-use-after-scope error under ASAN after ed1b1ee59
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-09-03%2017%3A47%3A20 - master
84/300 postgresql:pg_visibility / pg_visibility/regress ERROR 2.84s exit status 1 --- pgsql.build/testrun/pg_visibility/regress/log/postmaster.log ==2755594==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7ffe7e0a1770 at pc 0x56137e4ad33f bp 0x7ffe7e0a1530 sp 0x7ffe7e0a1528 READ of size 4 at 0x7ffe7e0a1770 thread T0 #0 0x56137e4ad33e in block_range_read_stream_cb /home/bf/bf-build/olingo/HEAD/pgsql.build/../pgsql/src/backend/storage/aio/read_stream.c:177:9 #1 0x56137e4af3e0 in read_stream_get_block /home/bf/bf-build/olingo/HEAD/pgsql.build/../pgsql/src/backend/storage/aio/read_stream.c:196:14 ... 2024-09-03 17:54:12.370 UTC postmaster[2755270] LOG: server process (PID 2755594) was terminated by signal 6: Aborted 2024-09-03 17:54:12.370 UTC postmaster[2755270] DETAIL: Failed process was running: select count(*) > 0 from pg_visibility('regular_table');
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-09-03%2017%3A47%3A20 - master
Use read streams in pg_visibility \ invalid scope of variable
Fix stack variable scope from previous commit.
sqljson_queryfuncs.sql and sqljson_jsontable.sql break due to an LLVM compilation issue (after 3a9746097)
not ok 184 + sqljson_queryfuncs 92 ms # (test process exited with exit code 2) not ok 185 + sqljson_jsontable 85 ms # (test process exited with exit code 2) --- /home/bf/bf-build/bushmaster/HEAD/pgsql/src/test/regress/expected/sqljson_queryfuncs.out 2024-08-29 12:56:56.067199857 +0000 +++ /home/bf/bf-build/bushmaster/HEAD/pgsql.build/src/test/regress/results/sqljson_queryfuncs.out 2024-09-06 03:19:10.330722910 +0000 @@ -48,1408 +48,8 @@ (1 row) SELECT JSON_EXISTS(jsonb '1', 'strict $.a' ERROR ON ERROR); -ERROR: jsonpath member accessor can only be applied to an object ... +FATAL: fatal llvm error: Broken module found, compilation aborted! +server closed the connection unexpectedly ... --- /home/bf/bf-build/bushmaster/HEAD/pgsql/src/test/regress/expected/sqljson_jsontable.out 2024-09-06 03:10:37.530863386 +0000 +++ /home/bf/bf-build/bushmaster/HEAD/pgsql.build/src/test/regress/results/sqljson_jsontable.out 2024-09-06 03:19:10.322722911 +0000 @@ -16,1162 +16,8 @@ ^ DETAIL: Only EMPTY [ ARRAY ] or ERROR is allowed in the top-level ON ERROR clause. SELECT * FROM JSON_TABLE('[]', 'strict $.a' COLUMNS (js2 int PATH '$') EMPTY ON ERROR); - js2 ------ -(0 rows) ... +FATAL: fatal llvm error: Broken module found, compilation aborted! +server closed the connection unexpectedly
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-09-06%2003%3A09%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=urutu&dt=2024-09-06%2003%3A10%3A35 - master
Re: pgsql: Add more SQL/JSON constructor functions \ 0004 doesn't play nicely with LLVM
Revert recent SQL/JSON related commits
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-09-06%2003%3A08%3A17 - master
not ok 185 + sqljson_jsontable 1869 ms ... --- pgsql.build/src/test/regress/regression.diffs --- /home/bf/bf-build/taipan/HEAD/pgsql/src/test/regress/expected/sqljson_jsontable.out 2024-09-06 03:08:13.902924907 +0000 +++ /home/bf/bf-build/taipan/HEAD/pgsql.build/src/test/regress/results/sqljson_jsontable.out 2024-09-06 03:16:12.978761395 +0000 @@ -1140,7 +1140,10 @@ Table Function Scan on "json_table" (cost=0.01..1.00 rows=100 width=32) Output: a Table Function Call: JSON_TABLE('"a"'::jsonb, '$' AS json_table_path_0 COLUMNS (a text PATH '$')) -(3 rows) + JIT: + Functions: 2 + Options: Inlining false, Optimization false, Expressions true, Deforming true +(6 rows) ...
Re: pgsql: Add more SQL/JSON constructor functions \ to not use EXPLAIN VERBOSE
Revert recent SQL/JSON related commits
pageinspect/page.sql fails on big-endian animals (after 05036a315)
test page ... FAILED 182 ms --- /export/home/nm/farm/studio64v12_6/REL_14_STABLE/pgsql.build/../pgsql/contrib/pageinspect/expected/page.out Fri Sep 13 00:54:47 2024 +++ /export/home/nm/farm/studio64v12_6/REL_14_STABLE/pgsql.build/contrib/pageinspect/results/page.out Fri Sep 13 01:49:16 2024 @@ -242,6 +242,6 @@ from heap_page_items(get_raw_page('test_sequence', 0)); tuple_data_split ------------------------------------------------------- - {"\\\\x0100000000000000","\\\\x0000000000000000","\\\\x00"} + {"\\\\x0000000000000001","\\\\x0000000000000000","\\\\x00"} (1 row)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-09-12%2021%3A38%3A11 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-09-12%2022%3A14%3A15 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-09-12%2023%3A49%3A33 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-09-13%2000%3A50%3A32 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-09-13%2001%3A00%3A13 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-09-13%2001%3A45%3A37 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-09-13%2002%3A35%3A06 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-09-13%2003%3A25%3A49 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-09-13%2004%3A09%3A05 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-09-13%2004%3A55%3A55 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-09-13%2005%3A42%3A24 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-09-13%2014%3A08%3A57 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-09-13%2014%3A16%3A48 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-09-13%2014%3A25%3A08 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-09-13%2014%3A33%3A38 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-09-13%2014%3A41%3A47 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-09-13%2014%3A50%3A14 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-09-13%2014%3A59%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ruddy&dt=2024-09-13%2010%3A08%3A41 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ruddy&dt=2024-09-13%2010%3A12%3A51 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ruddy&dt=2024-09-13%2010%3A17%3A09 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ruddy&dt=2024-09-13%2010%3A21%3A43 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ruddy&dt=2024-09-13%2010%3A27%3A39 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ruddy&dt=2024-09-13%2010%3A33%3A46 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ruddy&dt=2024-09-13%2010%3A39%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=froghopper&dt=2024-09-13%2008%3A10%3A35 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=froghopper&dt=2024-09-13%2008%3A31%3A55 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pike&dt=2024-09-13%2007%3A10%3A31 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pike&dt=2024-09-13%2007%3A21%3A03 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pike&dt=2024-09-13%2007%3A31%3A30 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pike&dt=2024-09-13%2007%3A40%3A45 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pike&dt=2024-09-13%2007%3A49%3A20 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pike&dt=2024-09-13%2008%3A00%3A00 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pike&dt=2024-09-13%2008%3A12%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lora&dt=2024-09-13%2004%3A10%3A38 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lora&dt=2024-09-13%2004%3A21%3A33 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lora&dt=2024-09-13%2004%3A35%3A57 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lora&dt=2024-09-13%2004%3A51%3A29 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lora&dt=2024-09-13%2005%3A04%3A00 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lora&dt=2024-09-13%2005%3A16%3A35 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lora&dt=2024-09-13%2005%3A29%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=boa&dt=2024-09-13%2015%3A10%3A05 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=boa&dt=2024-09-13%2015%3A17%3A25 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-09-13%2004%3A13%3A56 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-09-13%2009%3A01%3A31 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-09-13%2003%3A30%3A38 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-09-13%2007%3A32%3A58 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-09-12%2022%3A26%3A11 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-09-13%2009%3A00%3A05 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-09-13%2008%3A00%3A05 - REL_17_STABLE
Fix contrib/pageinspect's test for sequences.
pageinspect/page.sql fails due to temporary table access during a parallel operation (after 05036a315)
test page ... FAILED 444 ms --- /u1/tac/build-farm-17/buildroot/REL_14_STABLE/pgsql.build/contrib/pageinspect/expected/page.out 2024-09-12 18:38:15.159791288 -0400 +++ /u1/tac/build-farm-17/buildroot/REL_14_STABLE/pgsql.build/contrib/pageinspect/results/page.out 2024-09-12 19:01:41.660534527 -0400 @@ -240,8 +240,4 @@ create temporary sequence test_sequence; select tuple_data_split('test_sequence'::regclass, t_data, t_infomask, t_infomask2, t_bits) from heap_page_items(get_raw_page('test_sequence', 0)); - tuple_data_split -------------------------------------------------------- - {"\\\\x0100000000000000","\\\\x0000000000000000","\\\\x00"} -(1 row) - +ERROR: cannot access temporary tables during a parallel operation
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2024-09-12%2022%3A00%3A02 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2024-09-12%2022%3A18%3A33 - REL_13_STABLE
Fix contrib/pageinspect's test for sequences.
merge.sql fails in v15 and v16 with "you don't own a lock of type ExclusiveLock" warnings (after 51ff46de2)
--- /home/buildfarm/build-farm-17/buildroot/REL_16_STABLE/pgsql/src/test/regress/expected/merge.out 2024-09-24 18:29:05.348210581 -0400 +++ /home/buildfarm/build-farm-17/buildroot/REL_16_STABLE/pgsql.build/testrun/regress/regress/results/merge.out 2024-09-24 18:29:52.270236567 -0400 @@ -404,6 +404,8 @@ ON t.tid = s.sid WHEN NOT MATCHED THEN INSERT VALUES (4, NULL); +WARNING: you don't own a lock of type ExclusiveLock +WARNING: you don't own a lock of type ExclusiveLock
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-09-25%2000%3A07%3A24 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-09-25%2000%3A07%3A04 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-09-25%2000%3A06%3A46 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2024-09-24%2023%3A49%3A47 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2024-09-24%2023%3A46%3A43 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-09-24%2023%3A40%3A32 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-09-24%2023%3A39%3A22 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-09-24%2023%3A33%3A39 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-09-24%2023%3A32%3A45 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hippopotamus&dt=2024-09-24%2023%3A29%3A30 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2024-09-24%2023%3A26%3A48 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-09-24%2023%3A25%3A01 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=plover&dt=2024-09-24%2023%3A11%3A51 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2024-09-24%2023%3A08%3A44 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=shieldtail&dt=2024-09-24%2023%3A07%3A12 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=cavefish&dt=2024-09-24%2022%3A53%3A05 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=akepa&dt=2024-09-24%2022%3A29%3A02 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=shieldtail&dt=2024-09-24%2022%3A54%3A44 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=plover&dt=2024-09-24%2023%3A08%3A51 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-09-24%2023%3A19%3A44 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2024-09-24%2023%3A24%3A01 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-09-24%2023%3A27%3A57 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-09-24%2023%3A28%3A48 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-09-24%2023%3A29%3A31 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-09-24%2023%3A33%3A33 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-09-24%2023%3A34%3A00 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2024-09-24%2023%3A34%3A49 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2024-09-24%2023%3A46%3A58 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-09-25%2000%3A00%3A54 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-09-25%2000%3A01%3A28 - REL_15_STABLE
race condition in pg_class \ the pushes contained at least one defect
Fix use of uninitialized value in previous commit.
Recently added intra-grant-inplace-db.spec fails on slow/JIT-enabled machines
test intra-grant-inplace-db ... FAILED 4302 ms ======================= 1 of 98 tests failed. ======================= --- regression.diffs --- /home/fedora/17-habu/buildroot/REL_12_STABLE/pgsql.build/src/test/isolation/expected/intra-grant-inplace-db.out 2024-07-18 03:08:32.946251561 +0000 +++ /home/fedora/17-habu/buildroot/REL_12_STABLE/pgsql.build/src/test/isolation/output_iso/results/intra-grant-inplace-db.out 2024-07-18 03:26:41.886968008 +0000 @@ -21,8 +21,7 @@ WHERE datname = current_catalog AND age(datfrozenxid) > (SELECT min(age(x)) FROM frozen_witness); -?column? ----------------------- -datfrozenxid retreated -(1 row) +?column? +-------- +(0 rows)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=urutu&dt=2024-07-22%2018%3A00%3A46 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-07-24%2005%3A21%3A05 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-07-28%2012%3A20%3A37 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=urutu&dt=2024-08-08%2012%3A01%3A17 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=oystercatcher&dt=2024-09-17%2003%3A00%3A46 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2024-09-21%2018%3A44%3A45 - REL_14_STABLE
race condition in pg_class \ intra-grant-inplace-db.spec may fail on a slow machine
Fix data loss at inplace update after heap_update().
password.sql fails because the new length-checking queries are not reflected in password_1.out
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gecko&dt=2024-10-07%2020%3A30%3A34 - master
not ok 125 + password 952 ms --- regression.diffs --- /home/linux1/17-gecko/buildroot/HEAD/pgsql.build/src/test/regress/expected/password_1.out 2024-10-07 16:28:24.961175600 -0400 +++ /home/linux1/17-gecko/buildroot/HEAD/pgsql.build/src/test/regress/results/password.out 2024-10-07 16:39:53.941216373 -0400 @@ -128,6 +128,13 @@ regress_passwd_sha_len2 | t (3 rows) +-- Test that valid hashes that are too long are rejected +CREATE ROLE regress_passwd10 PASSWORD 'SCRAM-...'; +ERROR: encrypted password is too long +DETAIL: Encrypted passwords must be no longer than 512 bytes. +ALTER ROLE regress_passwd9 PASSWORD 'SCRAM-...'; +ERROR: encrypted password is too long +DETAIL: Encrypted passwords must be no longer than 512 bytes.
pgsql: Fix test for password hash length limit.
Fix test for password hash length limit.
Recent addition to test_decoding/stream.sql fails on 32-bit (and some other) animals in v14, v15
test stream ... FAILED 186 ms --- pgsql.build/contrib/test_decoding/regression.diffs --- /home/bf/bf-build/adder/REL_14_STABLE/pgsql.build/../pgsql/contrib/test_decoding/expected/stream.out 2024-10-07 10:23:43.511063647 +0000 +++ /home/bf/bf-build/adder/REL_14_STABLE/pgsql.build/contrib/test_decoding/results/stream.out 2024-10-07 10:31:09.703092511 +0000 @@ -122,7 +122,7 @@ SELECT count(*) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1'); count ------- - 315 + 0 (1 row)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-10-07%2012%3A10%3A09 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2024-10-07%2012%3A40%3A11 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-07%2019%3A37%3A19 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-10-07%2021%3A30%3A08 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-07%2022%3A24%3A36 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2024-10-08%2001%3A49%3A07 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-10-08%2002%3A43%3A23 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-10-08%2003%3A29%3A46 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-07%2010%3A32%3A29 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-10-07%2012%3A32%3A07 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2024-10-07%2012%3A46%3A23 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-10-07%2015%3A16%3A40 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2024-10-07%2017%3A07%3A19 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gokiburi&dt=2024-10-07%2018%3A17%3A09 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-07%2019%3A42%3A51 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-10-07%2021%3A35%3A21 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-10-07%2021%3A52%3A38 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-07%2022%3A30%3A41 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gokiburi&dt=2024-10-08%2000%3A17%3A18 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2024-10-08%2001%3A55%3A26 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-10-08%2003%3A17%3A55 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-10-08%2003%3A35%3A42 - REL_15_STABLE
(gokiburi and hachi run tests with wal_compression = zstd, default_toast_compression = lz4)
Stabilize the test added by commit 022564f60c.
Recently added 001_connection_limits.pl fails on Windows
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-08%2020%3A21%3A41 - master
231/294 postgresql:postmaster / postmaster/001_connection_limits ERROR 28.72s exit status 25 --- pgsql.build/testrun/postmaster/001_connection_limits/log/001_connection_limits_primary.log 2024-10-08 22:11:10.549 UTC [5080:3] [unknown] LOG: no match in usermap "regress" for user "regress_regular" authenticated as "pgrunner@EC2AMAZ-P7KGG90" 2024-10-08 22:11:10.549 UTC [5080:4] [unknown] FATAL: SSPI authentication failed for user "regress_regular" 2024-10-08 22:11:10.549 UTC [5080:5] [unknown] DETAIL: Connection matched file "C:/prog/bf/root/HEAD/pgsql.build/testrun/postmaster/001_connection_limits/data/t_001_connection_limits_primary_data/pgdata/pg_hba.conf" line 2: "host all all 127.0.0.1/32 sspi include_realm=1 map=regress"
pgsql: Allow roles created by new test to log in under SSPI.
Allow roles created by new test to log in under SSPI.
001_emergency_vacuum.pl fails due to slow server shutdown on perentie
Bailout called. Further testing stopped: pg_ctl stop failed [09:32:28] t/001_emergency_vacuum.pl .. Dubious, test returned 255 (wstat 65280, 0xff00) --- 001_emergency_vacuum_main.log 2024-10-05 09:30:28.483 JST [2711201:4] LOG: received fast shutdown request 2024-10-05 09:30:28.483 JST [2711201:5] LOG: aborting any active transactions 2024-10-05 09:30:28.485 JST [2711201:6] LOG: background worker "logical replication launcher" (PID 2711207) exited with exit code 1 2024-10-05 09:30:28.675 JST [2711202:1] LOG: shutting down 2024-10-05 09:30:28.675 JST [2711202:2] LOG: checkpoint starting: shutdown immediate 2024-10-05 09:32:28.722 JST [2711201:7] LOG: received immediate shutdown request 2024-10-05 09:32:28.740 JST [2711201:8] LOG: database system is shut down --- regress_log_001_emergency_vacuum [09:30:28.479](0.000s) ok 6 - failsafe vacuum triggered for small_trunc ### Stopping node "main" using mode fast # Running: pg_ctl -D /home/bf/buildroot/HEAD/pgsql.build/src/test/modules/xid_wraparound/tmp_check/t_001_emergency_vacuum_main_data/pgdata -m fast stop waiting for server to shut down........................................................................................................................... failed pg_ctl: server does not shut down # pg_ctl stop failed: 256
Testing autovacuum wraparound (including failsafe) \ perentie needs larger PGCTLTIMEOUT
Testing autovacuum wraparound (including failsafe) \ other tests won't be overlapped
XversionUpgrade-xxx-HEAD tests fail due to a difference in data checksum settings
--- upgrade.fairywren/HEAD/REL9_2_STABLE-upgrade.log Performing Consistency Checks ----------------------------- Checking cluster versions ok old cluster does not use data checksums but the new one does Failure, exiting
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-16%2007%3A17%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-16%2007%3A36%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-16%2012%3A38%3A20 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-16%2014%3A31%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-16%2016%3A22%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-16%2016%3A32%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-16%2017%3A14%3A21 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-16%2018%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-16%2022%3A09%3A45 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2001%3A18%3A17 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2001%3A32%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2006%3A22%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2006%3A32%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2006%3A42%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2007%3A34%3A50 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2012%3A30%3A01 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-17%2017%3A44%3A46 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2019%3A32%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-17%2022%3A02%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-18%2005%3A27%3A51 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-18%2006%3A30%3A56 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-18%2008%3A27%3A01 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-18%2009%3A32%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-18%2009%3A47%3A01 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-18%2015%3A46%3A11 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-18%2016%3A58%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-18%2017%3A52%3A11 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-18%2018%3A02%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-19%2000%3A03%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-19%2015%3A47%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-19%2016%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-19%2016%3A36%3A45 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-20%2004%3A45%3A53 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-20%2007%3A31%3A28 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-20%2010%3A44%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-20%2013%3A22%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-21%2003%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-21%2003%3A35%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-21%2015%3A12%3A16 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-21%2020%3A02%3A23 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-22%2004%3A08%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-22%2008%3A55%3A24 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-22%2010%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-22%2010%3A42%3A18 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-22%2012%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-22%2022%3A15%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-23%2009%3A49%3A22 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-23%2023%3A21%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-10-24%2004%3A24%3A13 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-24%2011%3A48%3A19 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-10-24%2014%3A01%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-24%2015%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-10-26%2002%3A26%3A39 - master
Enable data checksums by default \ upgrade tests on the buildfarm don't like this
fix for data checksum default change
test_extensions.sql fails on Windows because of different EOL chars
101/290 postgresql:test_extensions / test_extensions/regress ERROR 8.61s exit status 1 --- pgsql.build/testrun/test_extensions/regress/regression.diffs --- c:/build-farm-local/buildroot/HEAD/pgsql/src/test/modules/test_extensions/expected/test_extensions.out 2024-10-23 20:03:34 +0900 +++ c:/build-farm-local/buildroot/HEAD/pgsql.build/testrun/test_extensions/regress/results/test_extensions.out 2024-10-23 20:20:10 +0900 @@ -77,7 +77,7 @@ ERROR: syntax error at or near "FUNCTIN" LINE 1: CREATE FUNCTIN my_erroneous_func(int) RETURNS int LANGUAGE S... ^ -QUERY: CREATE FUNCTIN my_erroneous_func(int) RETURNS int LANGUAGE SQL +QUERY: CREATE FUNCTIN my_erroneous_func(int) RETURNS int LANGUAGE SQL AS $$ SELECT $1 + 1 $$; CONTEXT: extension script file "test_ext7--2.0--2.1bad.sql", near line 10 alter extension test_ext7 update to '2.2bad'; ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-10-24%2011%3A00%3A28 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-10-25%2011%3A00%3A22 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-10-26%2011%3A00%3A16 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-10-27%2011%3A00%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-10-28%2011%3A00%3A16 - master
Better error reporting from extension scripts \ hamerkop doesn't like this patch
Strip Windows newlines from extension script files manually.
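The diff above shows the carriage return surviving into the reported QUERY text. The committed fix strips Windows newlines inside the server code that reads extension scripts; the Perl fragment below is only a sketch of the same normalization, e.g. for preprocessing a script file in a test harness (the file name is a hypothetical example):
    use strict;
    use warnings;

    # Hedged sketch: normalize CRLF line endings in an extension script file.
    my $path = 'test_ext7--2.0--2.1bad.sql';    # hypothetical example file
    open my $fh, '<', $path or die "could not open $path: $!";
    my $script = do { local $/; <$fh> };        # slurp the whole file
    close $fh;
    $script =~ s/\r\n/\n/g;                     # drop the carriage returns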
001_pgbench_with_server.pl fails due to IPC::Run losing stdout/stderr on macOS
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2024-10-02%2002%3A34%3A16 - master
(indri is a macOS animal)
[22:38:14.887](0.014s) ok 362 - pgbench script error: sleep undefined variable status (got 2 vs expected 2) [22:38:14.887](0.000s) ok 363 - pgbench script error: sleep undefined variable stdout /(?^:processed: 0/1)/ [22:38:14.887](0.000s) not ok 364 - pgbench script error: sleep undefined variable stderr /(?^:sleep: undefined variable)/ [22:38:14.887](0.000s) [22:38:14.887](0.000s) # Failed test 'pgbench script error: sleep undefined variable stderr /(?^:sleep: undefined variable)/' # at t/001_pgbench_with_server.pl line 1242. [22:38:14.887](0.000s) # '' # doesn't match '(?^:sleep: undefined variable)'
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2024-10-29%2016%3A43%3A25 - REL_17_STABLE
IPC::Run accepts bug reports \ pgbench test failed on indri
IPC-Run: Retry _read() on EINTR, instead of losing pipe contents.
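The underlying IPC::Run bug was that a read interrupted by a signal (EINTR) was treated as if the pipe were empty, so the captured stdout/stderr was silently dropped. A minimal Perl sketch of the retry pattern the fix applies (an illustration only, not the actual IPC::Run code):
    use strict;
    use warnings;
    use POSIX qw(EINTR);

    # Hedged sketch: retry an interrupted sysread instead of treating
    # EINTR as end of data.
    sub read_with_retry
    {
        my ($fh, $len) = @_;
        my $buf = '';
        while (1)
        {
            my $n = sysread($fh, $buf, $len);
            return $buf if defined $n;    # data read, or '' on EOF
            next if $! == EINTR;          # interrupted by a signal: retry
            die "read failed: $!";
        }
    }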
intra-grant-inplace-db.spec failed due to test session stuck in LockBufferForCleanup()
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sarus&dt=2024-10-26%2014%3A08%3A58 - master
not ok 42 - intra-grant-inplace-db 360069 ms --- pgsql.build/src/test/isolation/output_iso/regression.diffs --- /home/linux1/17-sarus/buildroot/HEAD/pgsql.build/src/test/isolation/expected/intra-grant-inplace-db.out 2024-10-26 14:08:40.978532918 +0000 +++ /home/linux1/17-sarus/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/intra-grant-inplace-db.out 2024-10-26 14:28:51.808555640 +0000 @@ -9,13 +9,14 @@ step grant1: GRANT TEMP ON DATABASE isolation_regression TO regress_temp_grantee; -step vac2: VACUUM (FREEZE); <waiting ...> +isolationtester: canceling step vac2 after 360 seconds +step vac2: VACUUM (FREEZE); +ERROR: canceling statement due to user request step snap3:
heap_inplace_lock vs. autovacuum w/ LOCKTAG_TUPLE
Unpin buffer before inplace update waits for an XID to end.
pg_regress tests fail due to "could not read blocks" error after 2b9b8ebbf
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-10-31%2008%3A23%3A03 - master
ok 75 + create_cast 842 ms not ok 76 + constraints 5685 ms ok 77 + triggers 18404 ms ok 78 + select 3267 ms not ok 79 + inherit 20746 ms ok 80 + typed_table 6472 ms ... SELECT *, tableoid::regclass::text FROM SYS_COL_CHECK_TBL; - city | state | is_capital | altitude | tableoid ----------+------------+------------+----------+------------------- - Seattle | Washington | f | 100 | sys_col_check_tbl -(1 row) - +ERROR: could not read blocks 0..0 in file "global/2672": read only 0 of 8192 bytes
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2024-10-31%2011%3A10%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=broadbill&dt=2024-10-31%2013%3A34%3A08 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamushi&dt=2024-10-31%2016%3A08%3A43 - master
Relcache refactoring \ call to RelationInitPhysicalAddr(relation) missing
Fix refreshing physical relfilenumber on shared index
XversionUpgrade-REL9_2_STABLE-xxx fails on crake after time zone data update
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2017%3A23%3A45 - master
upgrade.crake/HEAD/dumpdiff-REL9_2_STABLE --- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL9_2_STABLE.sql.fixed 2024-10-29 13:40:01.778445456 -0400 +++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL9_2_STABLE-to-HEAD.sql.fixed 2024-10-29 13:40:01.780445460 -0400 @@ -206462,12 +206462,12 @@ 1997-02-14 20:32:01-05 1997-02-15 20:32:01-05 1997-02-16 20:32:01-05 -0097-02-16 20:32:01-05 BC -0097-02-16 20:32:01-05 -0597-02-16 20:32:01-05 -1097-02-16 20:32:01-05 -1697-02-16 20:32:01-05 -1797-02-16 20:32:01-05 +0097-02-16 20:35:59-04:56:02 BC +0097-02-16 20:35:59-04:56:02 +0597-02-16 20:35:59-04:56:02 +1097-02-16 20:35:59-04:56:02 +1697-02-16 20:35:59-04:56:02 +1797-02-16 20:35:59-04:56:02 1897-02-16 20:32:01-05 1997-02-16 20:32:01-05 2097-02-16 20:32:01-05
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2015%3A57%3A06 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2017%3A42%3A02 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2016%3A10%3A18 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2017%3A55%3A23 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2016%3A24%3A41 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2018%3A09%3A27 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2016%3A41%3A09 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2018%3A25%3A37 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2016%3A59%3A22 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-10-29%2017%3A09%3A39 - REL_17_STABLE
sepgsql/ddl.sql fails due to unexpected output after 89e51abcb
# +++ regress install-check in contrib/sepgsql +++ # using postmaster on /tmp/buildfarm-w9a0n0, port 5678 ok 1 - label 1298 ms ok 2 - dml 1007 ms not ok 3 - ddl 1003 ms ... --- contrib/sepgsql/regression.diffs --- /opt/src/pgsql-git/build-farm-root/HEAD/pgsql.build/contrib/sepgsql/expected/ddl.out 2024-05-13 03:52:12.247155159 -0700 +++ /opt/src/pgsql-git/build-farm-root/HEAD/pgsql.build/contrib/sepgsql/results/ddl.out 2024-10-31 14:04:13.083215399 -0700 @@ -154,6 +154,8 @@ CREATE FUNCTION regtest_func(text,int[]) RETURNS bool LANGUAGE plpgsql AS 'BEGIN RAISE NOTICE ''regtest_func => %'', $1; RETURN true; END'; LOG: SELinux: allowed { search } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=system_u:object_r:sepgsql_schema_t:s0 tclass=db_schema name="pg_catalog" permissive=0 +LINE 1: CREATE FUNCTION regtest_func(text,int[]) RETURNS bool LANGUA... + ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2024-11-01%2002%3A52%3A12 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2024-11-01%2001%3A52%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2024-11-01%2008%3A52%3A12 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2024-11-01%2012%3A52%3A12 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2024-11-01%2015%3A52%3A12 - master
Update contrib/sepgsql regression tests for commit 89e51abcb.
001_pg_bsd_indent.pl fails on Solaris/AIX due to illegal diff option
# +++ tap check in src/tools/pg_bsd_indent +++ # Failed test 'pg_bsd_indent output matches for binary' # at t/001_pg_bsd_indent.pl line 50. ... --- pgsql.build/src/tools/pg_bsd_indent/tmp_check/log/regress_log_001_pg_bsd_indent # Running: diff -upd /export/home/nm/farm/studio64v12_6/REL_16_STABLE/pgsql/src/tools/pg_bsd_indent/tests/binary.0.stdout binary.out /usr/bin/diff: illegal option -- p
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-11-03%2007%3A31%3A13 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-11-03%2020%3A21%3A01 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-11-03%2020%3A27%3A56 - REL_16_STABLE
pgsql: Use portable diff options in pg_bsd_indent's regression test.
Use portable diff options in pg_bsd_indent's regression test.
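The failing invocation is `diff -upd`, where -p and -d are GNU extensions that the Solaris and AIX diff implementations reject. A hedged sketch of the portable direction, sticking to the widely supported unified format (the committed fix may differ in the exact options chosen); command_ok is the standard helper from PostgreSQL::Test::Utils, and the file names are for illustration only:
    use strict;
    use warnings;
    use Test::More;
    use PostgreSQL::Test::Utils;

    # Hypothetical file names, for illustration only.
    my $expected = 'tests/binary.0.stdout';
    my $actual   = 'binary.out';

    # Hedged sketch: drop the GNU-only -p/-d switches and rely on plain -u.
    command_ok([ 'diff', '-u', $expected, $actual ],
        'pg_bsd_indent output matches for binary');

    done_testing();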
plperl_env.sql fails with `vcregress plcheck` due to REGRESS_OPTS lost (after 8fe3e697a)
test plperl_env ... FAILED 120 ms --- pgsql.build/src/pl/plperl/regression.diffs --- H:/prog/bf/root/REL_12_STABLE/pgsql.build/src/pl/plperl/expected/plperl_env.out 2024-11-11 10:00:00.848816600 -0500 +++ H:/prog/bf/root/REL_12_STABLE/pgsql.build/src/pl/plperl/results/plperl_env.out 2024-11-11 10:00:05.226265200 -0500 @@ -5,6 +5,7 @@ RETURNS text[] AS '/lib/regress.dll', 'get_environ' LANGUAGE C STRICT; +ERROR: could not access file "/lib/regress.dll": No such file or directory -- fetch the process environment
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-11%2015%3A25%3A02 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2024-11-11%2017%3A30%3A05 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-11%2015%3A41%3A18 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2024-11-11%2015%3A36%3A57 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-11%2016%3A16%3A11 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2024-11-11%2018%3A36%3A29 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2024-11-11%2016%3A27%3A22 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-11%2017%3A01%3A08 - REL_15_STABLE
pgsql: src/tools/msvc: Respect REGRESS_OPTS in plcheck.
src/tools/msvc: Respect REGRESS_OPTS in plcheck.
XversionUpgrade-REL_12_STABLE-XXX fails due to referenced library missing (after b7e3a52a8)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2018%3A50%3A23 - master
upgrade.crake/HEAD/REL_12_STABLE-upgrade.log ... Checking for presence of required libraries fatal Your installation references loadable libraries that are missing from the new installation. You can add these libraries to the new installation, or remove the functions using them from the old installation. A list of problem libraries is in the file: /home/andrew/bf/root/upgrade.crake/HEAD/inst/REL_12_STABLE-upgrade/pg_upgrade_output.d/20241111T140702.275/loadable_libraries.txt Failure, exiting --- upgrade.crake/HEAD/inst/REL_12_STABLE-20241111T140702.275/loadable_libraries.txt could not load library "/home/andrew/bf/root/REL_12_STABLE/pgsql.build/src/pl/plperl/../../../src/test/regress/regress.so": ERROR: could not access file "/home/andrew/bf/root/REL_12_STABLE/pgsql.build/src/pl/plperl/../../../src/test/regress/regress.so": No such file or directory In database: pl_regression_plperl
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2015%3A16%3A19 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-11-11%2016%3A15%3A36 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2017%3A26%3A30 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2015%3A38%3A41 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2017%3A49%3A14 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-11-11%2018%3A54%3A54 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2016%3A03%3A23 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-11-11%2021%3A07%3A18 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2018%3A17%3A03 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-11-11%2023%3A32%3A22 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2018%3A33%3A47 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-11%2020%3A14%3A25 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-11-12%2002%3A25%3A23 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-11-12%2005%3A06%3A36 - master
Fix cross-version upgrade tests.
plperl_env.sql fails with meson due to line numbers mismatch because of comments not stripped
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-11-11%2014%3A26%3A57 - master
42/303 postgresql:plperl / plperl/regress ERROR 2.33s exit status 1 --- pgsql.build/testrun/plperl/regress/regression.diffs --- /home/bf/bf-build/skink-master/HEAD/pgsql/src/pl/plperl/expected/plperl_env.out 2024-11-11 14:27:00.761078836 +0000 +++ /home/bf/bf-build/skink-master/HEAD/pgsql.build/testrun/plperl/regress/results/plperl_env.out 2024-11-11 14:28:37.729351724 +0000 @@ -49,5 +49,5 @@ } $$ LANGUAGE plperl; -WARNING: attempted alteration of $ENV{TEST_PLPERL_ENV_FOO} at line 12. +WARNING: attempted alteration of $ENV{TEST_PLPERL_ENV_FOO} at line 43. NOTICE: environ unaffected
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-11-11%2015%3A01%3A51 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-11-11%2015%3A31%3A34 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-11-11%2015%3A42%3A35 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-11-11%2015%3A47%3A44 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-11-11%2015%3A53%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-11-11%2015%3A54%3A36 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-11-11%2016%3A02%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-11-11%2016%3A06%3A24 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-11-11%2016%3A06%3A59 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-11-11%2016%3A43%3A49 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-11-11%2016%3A46%3A01 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2016%3A48%3A35 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-11-11%2015%3A39%3A18 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2024-11-11%2015%3A41%3A41 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-11-11%2015%3A44%3A10 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-11-11%2015%3A52%3A49 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-11-11%2015%3A59%3A18 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-11-11%2016%3A02%3A10 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-11-11%2016%3A03%3A27 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-11-11%2016%3A13%3A18 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-11-11%2016%3A18%3A16 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-11-11%2016%3A18%3A54 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2016%3A38%3A27 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-11-11%2016%3A55%3A25 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-11-11%2016%3A58%3A38 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-11-11%2014%3A27%3A57 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-11-11%2014%3A28%3A12 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-11-11%2014%3A28%3A41 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-11-11%2014%3A29%3A55 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-11-11%2015%3A03%3A24 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2024-11-11%2015%3A04%3A26 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-11-11%2015%3A19%3A33 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-11-11%2015%3A22%3A38 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-11-11%2015%3A23%3A07 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-11-11%2015%3A25%3A01 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-11-11%2015%3A26%3A21 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-11-11%2015%3A29%3A10 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-11-11%2016%3A28%3A08 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-11-11%2016%3A28%3A48 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2016%3A28%3A49 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-11-11%2016%3A30%3A51 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2024-11-11%2016%3A32%3A21 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-11-11%2017%3A04%3A36 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-11-11%2017%3A06%3A50 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-11-11%2017%3A11%3A02 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-11-11%2017%3A17%3A33 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-11-11%2017%3A18%3A51 - REL_16_STABLE
Avoid bizarre meson behavior with backslashes in command arguments.
privileges.sql and rowsecurity.sql fail due to lack of non-superuser connections for parallel workers after 5a2fed911
not ok 114 + privileges 15831 ms ... not ok 121 + rowsecurity 7502 ms --- pgsql.build/src/test/regress/regression.diffs --- /home/builder/pgbf_builds/HEAD/pgsql.build/src/test/regress/expected/privileges.out Mon Nov 11 22:12:42 2024 +++ /home/builder/pgbf_builds/HEAD/pgsql.build/src/test/regress/results/privileges.out Mon Nov 11 22:17:26 2024 @@ -73,11 +73,7 @@ SET ROLE regress_priv_user3; GRANT regress_priv_user1 TO regress_priv_user4; SELECT grantor::regrole FROM pg_auth_members WHERE roleid = 'regress_priv_user1'::regrole and member = 'regress_priv_user4'::regrole; - grantor --------------------- - regress_priv_user2 -(1 row) - +ERROR: remaining connection slots are reserved for roles with the SUPERUSER attribute RESET ROLE; ... --- /home/builder/pgbf_builds/HEAD/pgsql.build/src/test/regress/expected/rowsecurity.out Mon Nov 11 22:12:42 2024 +++ /home/builder/pgbf_builds/HEAD/pgsql.build/src/test/regress/results/rowsecurity.out Mon Nov 11 22:17:17 2024 @@ -113,30 +113,7 @@ (3 rows) \\d document - Table "regress_rls_schema.document" ... +ERROR: remaining connection slots are reserved for roles with the SUPERUSER attribute
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-11%2018%3A39%3A58 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-11%2016%3A00%3A10 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-11%2019%3A56%3A31 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-11%2015%3A34%3A27 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-11%2016%3A34%3A15 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-11%2015%3A31%3A27 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-11%2019%3A30%3A18 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-11%2016%3A29%3A25 - REL_13_STABLE
Parallel workers use AuthenticatedUserId for connection privilege checks.
test_regex.sql fails on PPC/AIX animals in v14/v15 after 2496c3f6f
test test_regex ... FAILED 686 ms --- pgsql.build/src/test/modules/test_regex/regression.diffs --- /home/nm/farm/xlc64/REL_15_STABLE/pgsql.build/src/test/modules/test_regex/expected/test_regex.out 2024-11-15 23:48:42.000000000 +0000 +++ /home/nm/farm/xlc64/REL_15_STABLE/pgsql.build/src/test/modules/test_regex/results/test_regex.out 2024-11-16 05:04:14.000000000 +0000 @@ -2080,11 +2080,7 @@ (2 rows) select * from test_regex('[^\\d\\D]', '0123456789abc*', 'ILPE'); - test_regex - -------------------------------------------------------- - {0,REG_UBBS,REG_UNONPOSIX,REG_ULOCALE,REG_UIMPOSSIBLE} - (1 row) - + ERROR: invalid regular expression: out of memory -- check char classes' handling of newlines
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2024-11-17%2008%3A22%3A22 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-11-17%2002%3A29%3A18 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2024-11-17%2002%3A24%3A03 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-11-16%2022%3A00%3A12 - REL_14_STABLE
BUG #18708: regex problem \ malloc(0) returning NULL triggers error
Fix recently-exposed portability issue in regex optimization.
hash_index and other tests sporadically fail with memory-related errors on RISC-V animals
not ok 104 + hash_index 14846 ms # (test process exited with exit code 2) --- pgsql.build/src/test/regress/log/postmaster.log 2024-08-20 20:56:47.318 CEST [2179731:95] LOG: server process (PID 2184722) was terminated by signal 11: Segmentation fault 2024-08-20 20:56:47.318 CEST [2179731:96] DETAIL: Failed process was running: COPY hash_f8_heap FROM '/home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/data/hash.data'; --- stack trace: pgsql.build/src/test/regress/tmp_check/data/core Core was generated by `postgres: pgbf regression [local] COPY '. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x0000002ac8e62674 in heap_multi_insert (relation=0x3f9525c890, slots=0x2ae68a5b30, ntuples=<optimized out>, cid=<optimized out>, options=<optimized out>, bistate=0x2ae6891c18) at heapam.c:2296 2296 tuple->t_tableOid = slots[i]->tts_tableOid; ... $1 = {si_signo = 11, ... _sigfault = {si_addr = 0x2ae600cbcc}, ..
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-08-24%2016%3A32%3A23 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-08-26%2016%3A20%3A46 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-09-03%2016%3A38%3A46 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-09-16%2023%3A07%3A46 - REL_17_STABLE?
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-10-30%2022%3A10%3A20 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-11-09%2015%3A01%3A23 - REL_12_STABLE
RISC-V animals sporadically produce weird memory-related failures
RISC-V animals sporadically produce weird memory-related failures \ RISC-V animals upgraded
select_distinct.sql/select_distinct_on.sql fail due to name conflict (after a8ccf4e93)
not ok 91 + select_distinct 1053 ms not ok 92 + select_distinct_on 1010 ms --- pgsql.build/src/test/regress/regression.diffs --- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/select_distinct.out Tue Nov 26 01:51:23 2024 +++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/select_distinct.out Tue Nov 26 01:52:52 2024 @@ -537,31 +537,13 @@ SET max_parallel_workers_per_gather=2; EXPLAIN (COSTS OFF) SELECT DISTINCT y, x FROM distinct_tbl limit 10; - QUERY PLAN ---------------------------------------------------------------------------------------------- - Limit - -> Unique - -> Gather Merge - Workers Planned: 1 - -> Unique - -> Parallel Index Only Scan using distinct_tbl_x_y_idx on distinct_tbl -(6 rows) - +ERROR: relation "distinct_tbl" does not exist +LINE 2: SELECT DISTINCT y, x FROM distinct_tbl limit 10; ... --- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/select_distinct_on.out Tue Nov 26 01:51:23 2024 +++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/select_distinct_on.out Tue Nov 26 01:52:52 2024 @@ -126,8 +126,13 @@ -- the input path's ordering -- CREATE TABLE distinct_tbl (x int, y int, z int); +ERROR: relation "distinct_tbl" already exists
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2024-11-26%2001%3A13%3A45 - master
011_generated.pl fails due to incorrect expected log line added with 8fcd80258
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-11-27%2004%3A17%3A03 - master
# Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 255 just after 11. [05:44:12] t/011_generated.pl ................. Dubious, test returned 255 (wstat 65280, 0xff00) All 11 subtests passed --- pgsql.build/src/test/subscription/tmp_check/log/regress_log_011_generated [05:41:11.702](0.447s) ok 11 - tab3 incremental replication, when publish_generated_columns=true #### Begin standard error psql:<stdin>:1: NOTICE: dropped replication slot "sub1" on publisher #### End standard error #### Begin standard error psql:<stdin>:3: NOTICE: created replication slot "sub1" on publisher #### End standard error timed out waiting for match: (?^:ERROR: ( [A-Z0-9]:)? logical replication target relation "public.t1" has incompatible generated columns: "c2", "c3") at t/011_generated.pl line 363. # Postmaster PID for node "publisher" is 2990522 ### Stopping node "publisher" using mode immediate
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-11-27%2007%3A33%3A03 - master
Fix buildfarm failure from commit 8fcd80258b.
Tests fail on avocet due to timeout after upgrading buildfarm client to REL_18
check (01:21:50) check-pg_upgrade (02:17:52) lastcommand (00:14:44) ... 'script_version' => 'REL_18', ... timed out after 14400 secs
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-11-22%2016%3A11%3A23 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-11-23%2000%3A12%3A05 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-11-23%2004%3A12%3A18 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-11-23%2008%3A12%3A36 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-11-23%2012%3A12%3A50 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-11-23%2016%3A13%3A05 - master
Announcing Release 18 of the PostgreSQL Buildfarm client \ timeout should be increased on avocet
Announcing Release 18 of the PostgreSQL Buildfarm client \ wait_timeout changed from undefined to 0
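build-farm.conf is itself a Perl file, and per the thread above the REL_18 client changed how wait_timeout is applied, so a slow animal needs an explicit setting. A heavily hedged sketch (the key name comes from the thread; the value 28800 and its placement in %conf are assumptions to verify against the client's documentation):
    # Hedged sketch: excerpt of a hypothetical build-farm.conf
    package PGBuild;

    our %conf = (
        # ... existing animal settings ...
        # Assumption: allow up to 8 hours before a step is timed out.
        wait_timeout => 28800,
    );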
regression tests fail with segmentation faults on riscv64 with llvm enabled
2024-11-30 19:34:53.302 CET [13395:4] LOG: server process (PID 13439) was terminated by signal 11: Segmentation fault 2024-11-30 19:34:53.302 CET [13395:5] DETAIL: Failed process was running: SELECT '' AS tf_12, BOOLTBL1.*, BOOLTBL2.* FROM BOOLTBL1, BOOLTBL2 WHERE BOOLTBL2.f1 <> BOOLTBL1.f1;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-11-30%2018%3A35%3A17 - REL_13_STABLE
Miscellaneous regress tests fail due to segfaults/assertion failures after d28dff3f6
# parallel group (20 tests): regproc txid int4 pg_lsn uuid varchar char float4 name oid float8 text enum int2 int8 money boolean bit numeric rangetypes ok 2 + boolean 1623 ms ok 3 + char 331 ms ok 4 + name 464 ms ok 5 + varchar 328 ms ok 6 + text 736 ms ok 7 + int2 1069 ms ok 8 + int4 272 ms ok 9 + int8 1306 ms ok 10 + oid 554 ms ok 11 + float4 362 ms ok 12 + float8 618 ms not ok 13 + bit 1806 ms # (test process exited with exit code 2) not ok 14 + numeric 1878 ms # (test process exited with exit code 2) ok 15 + txid 260 ms ok 16 + uuid 302 ms ok 17 + enum 889 ms ok 18 + money 1331 ms not ok 19 + rangetypes 1878 ms # (test process exited with exit code 2) ok 20 + pg_lsn 293 ms ok 21 + regproc 226 ms ... --- pgsql.build/src/test/regress/log/postmaster.log 2024-12-03 04:21:42.150 UTC [2773168:10] LOG: client backend (PID 2773211) was terminated by signal 11: Segmentation fault 2024-12-03 04:21:42.150 UTC [2773168:11] DETAIL: Failed process was running: SELECT * FROM pg_input_error_info('01010Z01', 'bit(8)'); 2024-12-03 04:21:42.862 UTC [2773168:15] LOG: client backend (PID 2773696) was terminated by signal 11: Segmentation fault 2024-12-03 04:21:42.862 UTC [2773168:16] DETAIL: Failed process was running: SELECT * FROM pg_input_error_info('@ 30 eons ago', 'interval'); 2024-12-03 04:21:44.449 UTC [2773168:26] LOG: client backend (PID 2773956) was terminated by signal 11: Segmentation fault 2024-12-03 04:21:44.449 UTC [2773168:27] DETAIL: Failed process was running: SELECT * FROM pg_input_error_info('a <100000> b', 'tsquery'); 2024-12-03 04:21:56.425 UTC [2773168:34] LOG: client backend (PID 2776542) was terminated by signal 6: Aborted 2024-12-03 04:21:56.425 UTC [2773168:35] DETAIL: Failed process was running: SELECT max(row(a,b)) FROM aggtest; --- stack trace: pgsql.build/src/test/regress/tmp_check/data/core.2773211 Core was generated by `postgres: centos regression [local] SELECT '. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x00007fff7d8d915c in __strcmp_power9 () from /lib64/glibc-hwcaps/power10/libc.so.6 #0 0x00007fff7d8d915c in __strcmp_power9 () from /lib64/glibc-hwcaps/power10/libc.so.6 #1 0x00000000100cb30c in equalRowTypes (tupdesc1=<optimized out>, tupdesc2=<optimized out>) at tupdesc.c:644 #2 0x00000000107c4040 in record_type_typmod_compare (a=<optimized out>, b=<optimized out>, size=<optimized out>) at typcache.c:2039 #3 0x00000000107d9ce8 in hash_search_with_hash_value (hashp=0x44ddd060, keyPtr=0x7fffd40b09a8, hashvalue=85290192, action=HASH_FIND, foundPtr=0x7fffd40b0948) at dynahash.c:1021 #4 0x00000000107c3d54 in assign_record_type_typmod (tupDesc=0x44cd6698) at typcache.c:2082 #5 0x00000000107d5734 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>, resultTypeId=<optimized out>, resultTupleDesc=0x7fffd40b0bb8) at funcapi.c:469 ... --- stack trace: pgsql.build/src/test/regress/tmp_check/data/core.2776542 Core was generated by `postgres: centos regression [local] SELECT '. Program terminated with signal SIGABRT, Aborted. 
#0 0x00007fff7d8a56a8 in __pthread_kill_implementation () from /lib64/glibc-hwcaps/power10/libc.so.6 #0 0x00007fff7d8a56a8 in __pthread_kill_implementation () from /lib64/glibc-hwcaps/power10/libc.so.6 #1 0x00007fff7d847f20 in raise () from /lib64/glibc-hwcaps/power10/libc.so.6 #2 0x00007fff7d82a574 in abort () from /lib64/glibc-hwcaps/power10/libc.so.6 #3 0x00000000107c5d94 in ExceptionalCondition (conditionName=<optimized out>, fileName=<optimized out>, lineNumber=<optimized out>) at assert.c:66 #4 0x00000000100bb928 in TupleDescCompactAttr (tupdesc=0x7fff725a1000, i=0) at ../../../../src/include/access/tupdesc.h:172 #5 heap_deform_tuple (tuple=<optimized out>, tupleDesc=0x7fff725a1000, values=0x44e2b240, isnull=0x44e2b260) at heaptuple.c:1376 #6 0x0000000010721f3c in record_out (fcinfo=0x7fffd40b0ac0) at rowtypes.c:390 #7 0x00000000107d35b0 in FunctionCall1Coll (flinfo=0x44e07038, collation=0, arg1=<optimized out>) at fmgr.c:1139
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=nuthatch&dt=2024-12-03%2004%3A15%3A27 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gokiburi&dt=2024-12-03%2007%3A00%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=oystercatcher&dt=2024-12-03%2004%3A12%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-12-03%2004%3A06%3A21 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jay&dt=2024-12-03%2003%3A58%3A22 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=massasauga&dt=2024-12-03%2003%3A55%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hippopotamus&dt=2024-12-03%2003%3A53%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-03%2003%3A53%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2024-12-03%2003%3A52%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-12-03%2003%3A57%3A01 - master
pgsql: Introduce CompactAttribute array in TupleDesc \ buildfarm shows something is broken
Revert "Introduce CompactAttribute array in TupleDesc"
plperl_setup.sql fails on 32-bit systems because of perl ABI mismatch after 962da900a
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-12-04%2003%3A10%3A08 - master
# +++ regress install-check in src/pl/plperl +++ # using postmaster on /home/pgbf/buildroot/tmp/buildfarm-Sht_Xi, port 5678 not ok 1 - plperl_setup 403 ms # (test process exited with exit code 2) ... --- inst/logfile 2024-12-04 05:33:01.919 CET [2222:18] pg_regress/plperl_setup LOG: statement: CREATE EXTENSION plperl; Util.c: loadable library and perl binaries are mismatched (got handshake key 0x9280080, needed 0x9380080)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2024-12-04%2004%3A15%3A09 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-12-04%2005%3A10%3A09 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2024-12-04%2008%3A15%3A11 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-12-04%2009%3A10%3A09 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2024-12-04%2011%3A41%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-12-04%2011%3A52%3A54 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2024-12-04%2012%3A53%3A22 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2024-12-04%2016%3A15%3A11 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2024-12-04%2016%3A23%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-12-04%2017%3A10%3A09 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2024-12-04%2021%3A10%3A08 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2024-12-05%2000%3A15%3A10 - master
Cannot find a working 64-bit integer type on Illumos \ system headers included before pg_config.h
Fix header inclusion order in c.h.
select_into.sql fails due to missing BUFFERS OFF (after c2a4078eb)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tayra&dt=2024-12-11%2009%3A45%3A08 - master
not ok 90 + select_into 283 ms ... # 1 of 223 tests failed. --- --- pgsql.build/src/test/regress/regression.diffs --- /repos/client-code-REL_18/HEAD/pgsql.build/src/test/regress/expected/select_into.out 2024-12-11 06:45:13.939852597 -0300 +++ /repos/client-code-REL_18/HEAD/pgsql.build/src/test/regress/results/select_into.out 2024-12-11 06:46:20.672076602 -0300 @@ -57,7 +57,9 @@ -------------------------------------- ProjectSet (actual rows=3 loops=1) -> Result (actual rows=1 loops=1) -(2 rows) + Planning: + Buffers: shared hit=3 +(4 rows)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-12-11%2009%3A37%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bushmaster&dt=2024-12-11%2009%3A37%3A33 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-12-11%2009%3A36%3A54 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-12-11%2009%3A36%3A34 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-11%2009%3A43%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-12-11%2009%3A42%3A04 - master
Add missing BUFFERS OFF in select_into regression tests
Add missing BUFFERS OFF in select_into regression tests
matview.sql and misc_functions.sql fail on cache-release-testing animals (after c2a4078eb)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-11%2010%3A03%3A03 - master
not ok 118 + matview 17804 ms ... not ok 139 + misc_functions 4111 ms ... # 2 of 223 tests failed. --- pgsql.build/src/test/regress/regression.diffs --- /home/ec2-user/bf/root/HEAD/pgsql/src/test/regress/expected/matview.out 2024-12-11 10:03:04.454700734 +0000 +++ /home/ec2-user/bf/root/HEAD/pgsql.build/src/test/regress/results/matview.out 2024-12-11 10:06:04.465742830 +0000 @@ -631,7 +631,9 @@ -------------------------------------- ProjectSet (actual rows=10 loops=1) -> Result (actual rows=1 loops=1) -(2 rows) + Planning: + Buffers: shared hit=145 +(4 rows) ... --- /home/ec2-user/bf/root/HEAD/pgsql/src/test/regress/expected/misc_functions.out 2024-12-11 10:03:04.454700734 +0000 +++ /home/ec2-user/bf/root/HEAD/pgsql.build/src/test/regress/results/misc_functions.out 2024-12-11 10:06:29.585888248 +0000 @@ -650,7 +650,10 @@ explain_mask_costs ------------------------------------------------------------------------------------------ Function Scan on generate_series g (cost=N..N rows=30 width=N) (actual rows=30 loops=1) -(1 row) + Buffers: shared hit=58 + Planning: + Buffers: shared hit=174 +(4 rows) ...
Add missing BUFFERS OFF in regression tests, take 2
Add missing BUFFERS OFF in regression tests, take 2
pg_stat_statements/level_tracking.sql fails on cache-release-testing animals (after c2a4078eb)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-11%2010%3A23%3A03 - master
not ok 5 - level_tracking 1608 ms --- pgsql.build/contrib/pg_stat_statements/regression.diffs --- /home/ec2-user/bf/root/HEAD/pgsql/contrib/pg_stat_statements/expected/level_tracking.out 2024-12-11 10:23:02.881643853 +0000 +++ /home/ec2-user/bf/root/HEAD/pgsql.build/contrib/pg_stat_statements/results/level_tracking.out 2024-12-11 10:29:40.303947627 +0000 @@ -907,14 +907,18 @@ QUERY PLAN -------------------------------- Result (actual rows=1 loops=1) -(1 row) + Planning: + Buffers: shared hit=28 +(3 rows) ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-11%2020%3A23%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-11%2018%3A43%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-11%2011%3A53%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-12-11%2015%3A26%3A35 - master
Fix further fallout from EXPLAIN ANALYZE BUFFERS change
009_twophase.pl might fail when replica is lagging behind
# pump_until: process terminated unexpectedly when searching for "(?^:background_psql: QUERY_SEPARATOR)" with stream: "" # Looks like your test exited with 29 just after 13. [00:50:47] t/009_twophase.pl .................... Dubious, test returned 29 (wstat 7424, 0x1d00) Failed 14/27 subtests --- pgsql.build/src/test/recovery/tmp_check/log/regress_log_009_twophase # issuing query via background psql: SELECT count(*) FROM t_009_tbl_standby_mvcc # pump_until: process terminated unexpectedly when searching for "(?^:background_psql: QUERY_SEPARATOR)" with stream: "" query failed: psql:<stdin>:6: ERROR: relation "t_009_tbl_standby_mvcc" does not exist LINE 1: SELECT count(*) FROM t_009_tbl_standby_mvcc
Make 009_twophase.pl test pass with recovery_min_apply_delay set
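The query hit the standby before it had replayed the transaction that created t_009_tbl_standby_mvcc. The fix title points at waiting for the standby to catch up before querying it; a minimal Perl sketch of that pattern using the standard TAP helper (node names and the replication setup are abbreviated):
    use strict;
    use warnings;
    use PostgreSQL::Test::Cluster;

    my $primary = PostgreSQL::Test::Cluster->new('primary');
    my $standby = PostgreSQL::Test::Cluster->new('standby');
    # ... init, start, and configure streaming replication here ...

    # Hedged sketch: block until the standby has replayed the primary's WAL
    # before querying recently created objects on it.
    $primary->wait_for_catchup($standby);
    my $count = $standby->safe_psql('postgres',
        'SELECT count(*) FROM t_009_tbl_standby_mvcc');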
snakefly fails in the ssl_passphrase_callback-check stage (after a70e01d43)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-09-03%2009%3A56%3A46 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2013%3A16%3A34 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2021%3A02%3A34 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2020%3A40%3A10 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2019%3A56%3A54 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2017%3A35%3A17 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2014%3A04%3A25 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2014%3A03%3A23 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2014%3A02%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-16%2014%3A01%3A21 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-17%2005%3A40%3A19 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-17%2004%3A10%3A18 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-17%2000%3A27%3A51 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snakefly&dt=2024-12-17%2000%3A26%3A50 - REL_17_STABLE
make -j1 checkprep >>'/opt/postgres/bf/v11/buildroot/HEAD/pgsql.build'/tmp_install/log/install.log 2>&1 make: *** [temp-install] Error 2
Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~? \ old buildfarm client (v11)
Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~? \ buildfarm client upgraded
stats.sql fails due to unexpected fsyncs (after 9aea73fc6)
215/306 postgresql:recovery / recovery/027_stream_regress ERROR 343.00s exit status 1 stderr: # Failed test 'regression tests pass' # at /home/bf/bf-build/grassquit/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 95. # got: '256' # expected: '0' # Looks like you failed 1 test of 9. --- /home/bf/bf-build/grassquit/HEAD/pgsql/src/test/regress/expected/stats.out 2024-12-19 04:44:08.779311933 +0000 +++ /home/bf/bf-build/grassquit/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/stats.out 2024-12-19 16:37:41.351784840 +0000 @@ -1333,7 +1333,7 @@ AND :my_io_sum_shared_after_fsyncs= 0); ?column? ---------- - t + f (1 row) -- Change the tablespace so that the table is rewritten directly, then SELECT
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-12-19%2010%3A43%3A57 - master
per backend I/O statistics \ grassquit failures
Relax regression test for fsync check of backend-level stats
020_archive_status.pl and other tests fail with "could not open shared memory segment" errors
# poll_query_until timed out executing this query: # SELECT last_archived_wal FROM pg_stat_archiver # expecting this output: # 000000010000000000000002 # last actual query output: # # with stderr: # psql: error: connection to server on socket "/home/bf/proj/bf/build-farm-17/buildroot/tmp/D032mdJ4c4/.s.PGSQL.16723" failed: FATAL: could not open shared memory segment "/PostgreSQL.1516345622": No such file or directory # Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 29 just after 7. [12:38:23] t/020_archive_status.pl ...............
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=leafhopper&dt=2024-12-16%2020%3A40%3A09 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=leafhopper&dt=2024-07-24%2013%3A53%3A41 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=leafhopper&dt=2024-07-24%2012%3A20%3A27 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=leafhopper&dt=2024-07-24%2013%3A58%3A47 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=leafhopper&dt=2024-07-24%2013%3A54%3A32 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2024-12-19%2001%3A30%3A57 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=batta&dt=2024-12-16%2008%3A05%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2024-12-21%2009%3A56%3A28 - master
Several buildfarm animals fail tests because of shared memory error \ REMOVEIPC was set to yes
postgres_fdw/query_cancel fails due to an unexpected warning on canceling a statement
ok 1 - postgres_fdw 7625 ms not ok 2 - query_cancel 30166 ms 1..2 # 1 of 2 tests failed. --- /home/linux1/17-treehopper/buildroot/REL_17_STABLE/pgsql.build/contrib/postgres_fdw/expected/query_cancel.out 2024-09-30 19:20:58.839809149 +0000 +++ /home/linux1/17-treehopper/buildroot/REL_17_STABLE/pgsql.build/contrib/postgres_fdw/results/query_cancel.out 2024-09-30 19:35:01.471960306 +0000 @@ -29,4 +29,5 @@ -- This would take very long if not canceled: SELECT count(*) FROM ft1 a CROSS JOIN ft1 b CROSS JOIN ft1 c CROSS JOIN ft1 d; ERROR: canceling statement due to statement timeout +WARNING: could not get result of cancel request due to timeout --- inst/logfile 2024-09-30 19:34:31.347 UTC [3201033:8] pg_regress/query_cancel LOG: statement: SET LOCAL statement_timeout = '100ms'; 2024-09-30 19:34:31.347 UTC [3201033:9] pg_regress/query_cancel LOG: statement: SELECT count(*) FROM ft1 a CROSS JOIN ft1 b CROSS JOIN ft1 c CROSS JOIN ft1 d; 2024-09-30 19:34:31.347 UTC [3201034:13] fdw_retry_check LOG: execute <unnamed>: DECLARE c2 CURSOR FOR SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 1" r4 ON (TRUE)) INNER JOIN "S 1"."T 1" r6 ON (TRUE)) 2024-09-30 19:34:31.464 UTC [3201033:10] pg_regress/query_cancel ERROR: canceling statement due to statement timeout 2024-09-30 19:34:31.464 UTC [3201033:11] pg_regress/query_cancel STATEMENT: SELECT count(*) FROM ft1 a CROSS JOIN ft1 b CROSS JOIN ft1 c CROSS JOIN ft1 d; 2024-09-30 19:34:31.466 UTC [3201035:1] [unknown] LOG: connection received: host=[local] 2024-09-30 19:34:31.474 UTC [3201034:14] fdw_retry_check LOG: statement: FETCH 100 FROM c2 2024-09-30 19:35:01.485 UTC [3201033:12] pg_regress/query_cancel WARNING: could not get result of cancel request due to timeout
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=froghopper&dt=2024-10-25%2008%3A31%3A55 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pipit&dt=2024-11-13%2001%3A12%3A28 - master
Add non-blocking version of PQcancel \ A newfound way to break the test
postgres_fdw: re-issue cancel requests a few times if necessary.
Regression tests fail on OpenBSD due to low semmns value
--- /home/builder/pgbf_builds/HEAD/pgsql.build/src/test/regress/expected/prepared_xacts.out Mon Jul 22 04:20:08 2024 +++ /home/builder/pgbf_builds/HEAD/pgsql.build/src/test/regress/results/prepared_xacts.out Mon Jul 22 04:21:45 2024 @@ -216,55 +216,4 @@ rollback; -- Disconnect, we will continue testing in a different backend \\c - --- There should still be two prepared transactions ... +\\connect: connection to server on socket "/home/builder/pgbf_builds/tmp/pg_regress-nFb732/.s.PGSQL.5678" failed: FATAL: sorry, too many clients already
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sawshark&dt=2024-11-25%2006%3A20%3A22 - REL_17_STABLE
Also transactions.sql times out on OpenBSD
ok 89 - sanity_check 322 ms # parallel group (20 tests): delete prepared_xacts select_having select_distinct_on select_implicit random select_into namespace portals subselect case union select_distinct update hash_index join arrays aggregates btree_index =================================================== timed out after 14400 secs
Regression tests fail on OpenBSD due to low semmns value
Try to avoid semaphore-related test failures on NetBSD/OpenBSD.
Unsorted/Unhelpful Test Failures
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-07-22%2023%3A48%3A35 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-09-15%2000%3A40%3A56 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-10-02%2023%3A12%3A02 - REL_14_STABLE
2024-07-23 02:05:01.488 CEST [11841:3] FATAL: shmat(id=721420316, addr=0, flags=0x4000) failed: Not enough space
running bootstrap script ... 2024-07-24 08:07:26.339 EDT [32845] FATAL: could not create semaphores: No space left on device 2024-07-24 08:07:26.339 EDT [32845] DETAIL: Failed system call was semget(153403862309413338, 17, 03600).
running bootstrap script ... 2024-07-24 08:18:18.859 EDT [39978] FATAL: could not create semaphores: No space left on device 2024-07-24 08:18:18.859 EDT [39978] DETAIL: Failed system call was semget(126100789568507477, 17, 03600).
running bootstrap script ... 2024-07-24 08:29:04.947 EDT [47271] FATAL: could not create semaphores: No space left on device 2024-07-24 08:29:04.947 EDT [47271] DETAIL: Failed system call was semget(43065671438868489, 17, 03600).
running bootstrap script ... 2024-07-25 05:03:13.089 EDT [25385] FATAL: could not create semaphores: No space left on device 2024-07-25 05:03:13.089 EDT [25385] DETAIL: Failed system call was semget(5828004, 17, 03600).
running bootstrap script ... 2024-07-25 05:12:52.813 EDT [32431] FATAL: could not create semaphores: No space left on device 2024-07-25 05:12:52.813 EDT [32431] DETAIL: Failed system call was semget(344525371495453222, 17, 03600).
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-07-24%2003%3A03%3A53 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-08-26%2004%3A38%3A23 - REL_14_STABLE
valgrind: Fatal error at startup: a function redirection ... valgrind: Cannot continue -- exiting now. Sorry.
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-07-22%2015%3A43%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-09-27%2013%3A33%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-11-28%2011%3A13%3A03 - REL_17_STABLE
-- create an extra wide table to test for issues related to that -- (temporarily hide query, to avoid the long CREATE TABLE stmt) \\set ECHO none +ERROR: could not extend file "base/16387/1249" with FileFallocate(): No space left on device
- -> Index Scan using ab_a2_b2_a_idx on ab_a2_b2 ab_5 (never executed) - Index Cond: (a = a.a) + -> Seq Scan on ab_a2_b2 ab_5 (never executed) + Filter: (a.a = a)
test sto_using_hash_index ... FAILED (test process exited with exit code 1) 5 ms (no test log saved)
test sto_using_select ... FAILED (test process exited with exit code 1) 6 ms test sto_using_hash_index ... FAILED (test process exited with exit code 1) 5 ms (no test log saved)
Temporary (occurred on 2024-07-24 only) environmental issues on leafhopper
timed out after 10800 secs checking for suffix of executables...
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-08-09%2009%3A14%3A15 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-08-08%2019%3A19%3A21 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-08-08%2005%3A02%3A34 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-08-07%2015%3A11%3A43 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-08-06%2006%3A58%3A57 - REL_14_STABLE
+ERROR: could not import a module for Decimal constructor +DETAIL: ImportError: /usr/lib/python3.12/lib-dynload/_contextvars.cpython-312-x86_64-linux-gnu.so: undefined symbol: PyContextVar_Type
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-05%2004%3A24%3A54 - HEAD
291/291 postgresql:recovery / recovery/043_wal_replay_wait TIMEOUT 3000.17s exit status 1 [04:47:30.407](1.844s) ok 3 - get timeout on waiting for unreachable LSN 01
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gull&dt=2024-08-06%2014%3A56%3A39 - HEAD
# +++ tap install-check in src/test/modules/test_custom_rmgrs +++ Bailout called. Further testing stopped: pg_ctl start failed
pg_rewind: error: could not fetch file list: ERROR: could not load library "/u1/tac/build-farm-17/buildroot/REL_15_STABLE/pgsql.build/tmp_install/u1/tac/build-farm/buildroot/REL_15_STABLE/inst/lib/postgresql/llvmjit.so": libLLVM-16.so: cannot open shared object file: No such file or directory
(kingsnake is a ppc64le (POWER9) animal)
not ok 1 - basic_archive 122486 ms --- pgsql.build/contrib/basic_archive/regression.diffs --- /home/fedora/17-kingsnake/buildroot/REL_17_STABLE/pgsql.build/contrib/basic_archive/expected/basic_archive.out 2024-08-19 19:18:02.127953655 +0000 +++ /home/fedora/17-kingsnake/buildroot/REL_17_STABLE/pgsql.build/contrib/basic_archive/results/basic_archive.out 2024-08-19 20:08:27.248588589 +0000 @@ -23,7 +23,7 @@ WHERE a ~ '^[0-9A-F]{24}$'; ?column? ---------- - t + f (1 row) --- pgsql.build/contrib/basic_archive/log/postmaster.log 2024-08-19 20:06:25.585 UTC [381940:6] pg_regress/basic_archive LOG: statement: DO $$ DECLARE archived bool; loops int := 0; BEGIN LOOP archived := count(*) > 0 FROM pg_ls_dir('.', false, false) a WHERE a ~ '^[0-9A-F]{24}$'; IF archived OR loops > 120 * 10 THEN EXIT; END IF; PERFORM pg_sleep(0.1); loops := loops + 1; END LOOP; END $$; 2024-08-19 20:08:27.252 UTC [381940:7] pg_regress/basic_archive LOG: statement: SELECT count(*) > 0 FROM pg_ls_dir('.', false, false) a WHERE a ~ '^[0-9A-F]{24}$';
The expected archive file (000000010000000000000001?) apparently did not appear in the data directory within 120 seconds.
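For reference, the 120-second figure comes from the DO block quoted above: it gives up once loops exceeds 120 * 10, sleeping pg_sleep(0.1) per iteration, i.e. about 1200 * 0.1 s = 120 seconds of polling before the final SELECT sees no archived file.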
+++ isolation install-check in src/test/modules/delay_execution +++ ============== running regression test queries ============== test partition-addition ... FAILED 312375 ms --- inst/logfile ... 2024-08-29 14:31:13.852 UTC [4029501:5] isolation/partition-addition/control connection LOG: statement: CREATE TABLE foo (a int, b text) PARTITION BY LIST(a); CREATE TABLE foo1 PARTITION OF foo FOR VALUES IN (1); CREATE TABLE foo3 PARTITION OF foo FOR VALUES IN (3); CREATE TABLE foo4 PARTITION OF foo FOR VALUES IN (4); INSERT INTO foo VALUES (1, 'ABC'); INSERT INTO foo VALUES (3, 'DEF'); INSERT INTO foo VALUES (4, 'GHI'); 2024-08-29 14:31:13.859 UTC [4029503:5] isolation/partition-addition/s2 LOG: statement: SELECT pg_advisory_lock(12345); 2024-08-29 14:31:13.859 UTC [4029502:5] isolation/partition-addition/s1 LOG: statement: LOAD 'delay_execution'; SET delay_execution.post_planning_lock_id = 12345; SELECT * FROM foo WHERE a <> 1 AND a <> (SELECT 3); 2024-08-29 14:31:13.870 UTC [4029501:6] isolation/partition-addition/control connection LOG: execute isolationtester_waiting: SELECT pg_catalog.pg_isolation_test_session_is_blocked($1, '{4029502,4029503}') 2024-08-29 14:31:13.870 UTC [4029501:7] isolation/partition-addition/control connection DETAIL: parameters: $1 = '4029502' ... 2024-08-29 14:36:26.052 UTC [4029501:60550] isolation/partition-addition/control connection LOG: execute isolationtester_waiting: SELECT pg_catalog.pg_isolation_test_session_is_blocked($1, '{4029502,4029503}') 2024-08-29 14:36:26.052 UTC [4029501:60551] isolation/partition-addition/control connection DETAIL: parameters: $1 = '4029502' 2024-08-29 14:36:26.055 UTC [4029502:6] isolation/partition-addition/s1 ERROR: canceling statement due to user request 2024-08-29 14:36:26.055 UTC [4029502:7] isolation/partition-addition/s1 STATEMENT: LOAD 'delay_execution'; SET delay_execution.post_planning_lock_id = 12345; SELECT * FROM foo WHERE a <> 1 AND a <> (SELECT 3);
(iguana is a ppc64le (POWER9) animal)
Session "s1" was blocked but pg_isolation_test_session_is_blocked() could not determine that, either because pg_blocking_pids() somehow omitted PID 4029503 (can be emulated with "PG_RETURN_BOOL(false);" inserted at the start of pg_isolation_test_session_is_blocked()), or because "s1" was blocked somehow before reaching planner_hook (= delay_execution_planner) (can be emulated with "SELECT pg_sleep(330);" added before "SET delay_execution.post_planning_lock_id = 12345;" in the session "s1" declaration).
Not reproduced. Moreover, this is the only failure of this kind among all TestModulesCheck-C failures recorded (50+).
2024-08-25 06:09:30.249 ACST [303452] LOG: invalid value for parameter "lc_time": "en_AU.UTF-8" 2024-08-25 06:09:30.249 ACST [303452:5] FATAL: configuration file "/home/postgres/proj/build-farm-17/buildroot/REL_14_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/data/postgresql.conf" contains errors stopped waiting pg_ctl: could not start server
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rinkhals&dt=2024-08-26%2019%3A27%3A51 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rinkhals&dt=2024-09-02%2019%3A18%3A59 - master
+ERROR: could not open shared memory segment "/PostgreSQL.2570984136": No such file or directory
2024-09-05 06:24:05.855 GMT [3913662] FATAL: could not reattach to shared memory (key=2229908, addr=0xffffebe13000): Invalid argument 2024-09-05 06:24:05.872 GMT [3913664] FATAL: could not reattach to shared memory (key=2229908, addr=0xffffebe13000): Invalid argument ...
make (01:25:46) ... scripts-check (01:38:17) ... [04:33:31] t/020_createdb.pl ......... ok 1326343 ms ( 0.03 usr 0.00 sys + 7.41 cusr 9.33 csys = 16.77 CPU) [04:40:17] t/040_createuser.pl ....... ok 406303 ms ( 0.02 usr 0.00 sys + 3.07 cusr 2.79 csys = 5.88 CPU) # Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 29 just after 13. [04:47:09] t/050_dropdb.pl ........... Dubious, test returned 29 (wstat 7424, 0x1d00) All 13 subtests passed ... --- pgsql.build/src/bin/scripts/tmp_check/log/regress_log_050_dropdb [04:47:00.052](0.069s) ok 13 - fails with nonexistent database error running SQL: 'psql:<stdin>:2: ERROR: source database "template1" is being accessed by other users DETAIL: There is 1 other session using the database.' while running 'psql -XAtq -d port=13455 host=/home/nm/farm/tmp/gsuqPCSa4L dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql ' CREATE DATABASE regression_invalid; UPDATE pg_database SET datconnlimit = -2 WHERE datname = 'regression_invalid'; --- pgsql.build/src/bin/scripts/tmp_check/log/050_dropdb_main.log 2024-09-03 04:47:00.116 CEST [4558:3] 050_dropdb.pl LOG: statement: CREATE DATABASE regression_invalid; 2024-09-03 04:47:05.118 CEST [4558:4] 050_dropdb.pl ERROR: source database "template1" is being accessed by other users 2024-09-03 04:47:05.118 CEST [4558:5] 050_dropdb.pl DETAIL: There is 1 other session using the database. 2024-09-03 04:47:05.118 CEST [4558:6] 050_dropdb.pl STATEMENT: CREATE DATABASE regression_invalid;
Compare the duration with the next (successful) run:
make (00:02:09) ... scripts-check (00:01:31) ... [20:01:56] t/020_createdb.pl ......... ok 12996 ms ( 0.02 usr 0.00 sys + 5.46 cusr 6.11 csys = 11.59 CPU) [20:02:01] t/040_createuser.pl ....... ok 4888 ms ( 0.01 usr 0.00 sys + 2.47 cusr 2.16 csys = 4.64 CPU) [20:02:06] t/050_dropdb.pl ........... ok 5093 ms ( 0.00 usr 0.00 sys + 2.62 cusr 2.19 csys = 4.81 CPU)
(Perhaps wrasse was extremely slow at the time of the failed test run, and an autovacuum worker that had started in template1 could not finish within 5 seconds.)
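A minimal SQL sketch of that scenario (session roles hypothetical; on wrasse the first session would be the autovacuum worker in template1):
-- one session is connected to the template database, e.g.:  \c template1
-- another session then runs (template1 is the default template):
CREATE DATABASE regression_invalid;
-- ERROR:  source database "template1" is being accessed by other users
-- DETAIL:  There is 1 other session using the database.
-- The server rechecks for competing sessions for about 5 seconds before giving up,
-- which matches the 5-second gap between the statement and the error in 050_dropdb_main.log above.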
# Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 29 just after 24. [09:46:01] t/018_wal_optimize.pl ................. Dubious, test returned 29 (wstat 7424, 0x1d00) All 24 subtests passed --- pgsql.build/src/test/recovery/tmp_check/log/regress_log_018_wal_optimize [09:45:58.805](1.411s) ok 24 - wal_level = replica, TRUNCATE INSERT PREPARE connection error: 'psql:<stdin>:5: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. psql:<stdin>:5: error: connection to server was lost' while running 'psql -XAtq -d port=28557 host=/home/pgbfarm/buildroot/tmp/3D1U2ctc_K dbname='postgres' -f - -v ON_ERROR_STOP=1' at /home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 2138. # Stale postmaster.pid file for node "node_minimal": PID 30563 no longer exists # Stale postmaster.pid file for node "node_replica": PID 30644 no longer exists --- pgsql.build/src/test/recovery/tmp_check/log/018_wal_optimize_node_minimal.log 2024-09-02 09:45:44.777 EEST [30563:3] LOG: database system is ready to accept connections ... 2024-09-02 09:45:45.403 EEST [30573:4] 018_wal_optimize.pl LOG: statement: SELECT pg_relation_filepath(oid) FROM pg_class WHERE reltablespace = 0 AND relpersistence <> 't' AND pg_relation_filepath(oid) IS NOT NULL; 2024-09-02 09:45:45.594 EEST [30573:5] 018_wal_optimize.pl LOG: disconnection: session time: 0:00:00.225 user=pgbfarm database=postgres host=[local] === EOF === --- pgsql.build/src/test/recovery/tmp_check/log/018_wal_optimize_node_replica.log 2024-09-02 09:45:58.347 EEST [30644:3] LOG: database system is ready to accept connections ... 2024-09-02 09:45:58.992 EEST [30654:5] 018_wal_optimize.pl LOG: statement: BEGIN; 2024-09-02 09:45:58.994 EEST [30654:6] 018_wal_optimize.pl LOG: statement: CREATE TABLE noskip (id serial PRIMARY KEY); 2024-09-02 09:45:59.091 EEST [30654:7] 018_wal_optimize.pl LOG: statement: INSERT INTO noskip (SELECT FROM generate_series(1, 20000) a) ; === EOF ===
(Two postmasters (30563 and 30644) disappeared silently; perhaps they were killed by the OOM killer?)
SUCCESS: The process with PID 6304 (child process of PID 6420) has been terminated. SUCCESS: The process with PID 6344 (child process of PID 6420) has been terminated. SUCCESS: The process with PID 6420 (child process of PID 5384) has been terminated. SUCCESS: The process with PID 5384 (child process of PID 4076) has been terminated. postgresql:amcheck / amcheck/003_cic_2pc time out (After 3000.0 seconds) 246/246 postgresql:amcheck / amcheck/003_cic_2pc TIMEOUT 3000.15s exit status 1 --- pgsql.build/testrun/amcheck/003_cic_2pc/log/003_cic_2pc_CIC_2PC_test.log 2024-09-03 17:18:59.877 UTC [6804:3] LOG: database system is ready to accept connections 2024-09-03 17:23:59.868 UTC [4460:1] LOG: checkpoint starting: time 2024-09-03 17:24:04.992 UTC [4460:2] LOG: checkpoint complete: wrote 48 buffers (0.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=5.119 s, sync=0.001 s, total=5.124 s; sync files=0, longest=0.000 s, average=0.000 s; distance=280 kB, estimate=280 kB; lsn=0/1577AD0, redo lsn=0/1577AB0 === EOF === --- pgsql.build/testrun/amcheck/003_cic_2pc/log/regress_log_003_cic_2pc [17:18:59.429](0.155s) ok 1 - bt_index_check after overlapping 2PC ### Restarting node "CIC_2PC_test" # Running: pg_ctl -w -D C:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/amcheck/003_cic_2pc/data/t_003_cic_2pc_CIC_2PC_test_data/pgdata -l C:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/amcheck/003_cic_2pc/log/003_cic_2pc_CIC_2PC_test.log restart waiting for server to shut down.... done server stopped waiting for server to start.... done server started # Postmaster PID for node "CIC_2PC_test" is 6804 === EOF ===
The cause of the failure is not clear, but the test stalled while using BackgroundPsql.
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-11%2007%3A24%3A53 - master
SUCCESS: The process with PID 5976 (child process of PID 5928) has been terminated. SUCCESS: The process with PID 4768 (child process of PID 5928) has been terminated. SUCCESS: The process with PID 1616 (child process of PID 5928) has been terminated. SUCCESS: The process with PID 5928 (child process of PID 5664) has been terminated. SUCCESS: The process with PID 5664 (child process of PID 6668) has been terminated. postgresql:subscription / subscription/015_stream time out (After 3000.0 seconds) 293/293 postgresql:subscription / subscription/015_stream TIMEOUT 3000.15s exit status 1 --- pgsql.build/testrun/subscription/015_stream/log/015_stream_publisher.log 2024-09-11 09:13:36.886 UTC [512:4] 015_stream.pl LOG: statement: TRUNCATE TABLE test_tab_2 ... 2024-09-11 09:13:38.811 UTC [4320:4] 015_stream.pl LOG: statement: SELECT '0/20E3BC8' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name IN ('tap_sub', 'walreceiver') 2024-09-11 09:13:38.894 UTC [4320:5] 015_stream.pl LOG: disconnection: session time: 0:00:00.252 user=pgrunner database=postgres host=127.0.0.1 port=58965 2024-09-11 09:17:17.844 UTC [5380:1] LOG: checkpoint starting: time 2024-09-11 09:17:20.657 UTC [5380:2] LOG: checkpoint complete: wrote 18 buffers (14.1%); 0 WAL file(s) added, 0 removed, 1 recycled; write=2.715 s, sync=0.001 s, total=2.813 s; sync files=0, longest=0.000 s, average=0.000 s; distance=12451 kB, estimate=12451 kB; lsn=0/21232F8, redo lsn=0/21232A0 === EOF === --- pgsql.build/testrun/subscription/015_stream/log/regress_log_015_stream [09:13:36.560](4.953s) ok 8 - data replicated to subscriber after dropping index Waiting for replication conn tap_sub's replay_lsn to pass 0/20E3BC8 on publisher done [09:13:39.752](3.191s) # issuing query via background psql: # BEGIN; # INSERT INTO test_tab_2 SELECT i FROM generate_series(1, 5000) s(i);
There is no connection attempt logged on the server side; the test stalled while using BackgroundPsql.
See also: A single and irreproducible 043_wal_replay_wait failure
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gecko&dt=2024-09-11%2020%3A10%3A36 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gecko&dt=2024-09-11%2000%3A47%3A53 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gecko&dt=2024-09-11%2000%3A38%3A49 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gecko&dt=2024-09-11%2000%3A29%3A42 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gecko&dt=2024-09-11%2000%3A14%3A33 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gecko&dt=2024-09-11%2000%3A02%3A06 - REL_12_STABLE
+ERROR: could not compute MD5 hash: unsupported
gecko is running in FIPS 140 mode
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=goshawk&dt=2024-09-12%2016%3A30%3A41 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=goshawk&dt=2024-09-12%2016%3A34%3A16 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=shoebill&dt=2024-09-12%2016%3A21%3A48 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=shoebill&dt=2024-09-12%2016%3A23%3A44 - master
-ERROR: could not compute MD5 hash: unsupported +ERROR: could not compute MD5 hash: disabled for FIPS
goshawk and shoebill are running on SUSE Linux Enterprise Server 15 SP2 (FIPS)
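A minimal illustration of what these FIPS-enabled animals hit (the exact error wording varies with the OpenSSL version, as the diff above shows):
SELECT md5('fips test');
-- ERROR:  could not compute MD5 hash: disabled for FIPS
-- (other OpenSSL versions report "could not compute MD5 hash: unsupported" instead)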
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2024-09-21%2019%3A25%3A32 - master
LLVM ERROR: ThinLTO cannot create input file: Unknown attribute kind (86) (Producer: 'APPLE_1_1600.0.26.3_0' Reader: 'LLVM 15.0.7') PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace.
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-23%2021%3A04%3A29 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-24%2003%3A03%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-24%2007%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-24%2000%3A45%3A10 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-23%2023%3A25%3A02 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-24%2006%3A22%3A45 - REL_16_STABLE
+ERROR: could not load library "C:/tools/xmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/tmp_install/tools/xmsys64/home/pgrunner/bf/root/HEAD/inst/lib/postgresql/plperl.dll": The specified module could not be found.
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-24%2016%3A03%3A07 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-25%2003%3A34%3A42 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-25%2007%3A18%3A30 - REL_17_STABLE
sh: line 1: /home/pgrunner/bf/root/upgrade.fairywren/REL9_2_STABLE/inst/bin/pg_ctl: No such file or directory
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2010%3A33%3A05 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-19%2010%3A33%3A38 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2008%3A52%3A56 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2019%3A49%3A25 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2023%3A36%3A55 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2009%3A06%3A37 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2010%3A47%3A29 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-19%2010%3A48%3A03 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2009%3A07%3A29 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2020%3A02%3A58 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2015%3A08%3A14 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2000%3A26%3A47 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2009%3A55%3A51 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2009%3A55%3A05 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-19%2009%3A55%3A12 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2008%3A03%3A09 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2019%3A09%3A52 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2014%3A26%3A57 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2023%3A51%3A35 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2009%3A19%3A19 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2010%3A14%3A57 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-19%2010%3A15%3A05 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2008%3A26%3A14 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2019%3A27%3A48 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2014%3A45%3A29 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2000%3A08%3A42 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2009%3A36%3A43 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2009%3A34%3A47 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-19%2009%3A35%3A42 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-20%2004%3A05%3A45 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2009%3A41%3A42 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2020%3A28%3A58 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2015%3A40%3A09 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2000%3A54%3A03 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2010%3A25%3A01 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2007%3A07%3A17 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2011%3A04%3A47 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-18%2004%3A52%3A29 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-20%2003%3A41%3A01 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2009%3A26%3A31 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2020%3A18%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2015%3A26%3A02 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2000%3A42%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2010%3A13%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2006%3A48%3A24 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2007%3A19%3A46 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2011%3A14%3A25 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2019%3A30%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-17%2022%3A00%3A45 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-18%2005%3A08%3A13 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-18%2008%3A23%3A10 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-18%2009%3A26%3A51 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-18%2009%3A38%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-18%2015%3A26%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-18%2017%3A47%3A00 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-18%2017%3A58%3A38 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-19%2015%3A45%3A19 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-20%2004%3A26%3A56 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2002%3A12%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2002%3A27%3A50 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2009%3A55%3A32 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2011%3A34%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-21%2020%3A40%3A28 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2004%3A08%3A18 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2006%3A34%3A59 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2010%3A27%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2015%3A54%3A27 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2019%3A49%3A26 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-22%2022%3A25%3A21 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2001%3A08%3A16 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2010%3A38%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-10-23%2014%3A18%3A27 - master
/home/bf/bf-build/adder/REL_12_STABLE/pgsql.build/../pgsql/src/pl/plperl/Util.c: loadable library and perl binaries are mismatched (got first handshake key 0x9580080, needed 0x91c0080)
Bailout called. Further testing stopped: pg_ctl start failed t/001_constraint_validation.pl .. Dubious, test returned 255 (wstat 65280, 0xff00)
No test log saved. Waiting for the "Tweak log file collection" change to be deployed on buildfarm animals, including sidewinder, to get more information on the failure.
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-11-07%2003%3A43%3A22 - master
make[2]: Warning: File '../../preproc/ecpg.o' has modification time 70822 s in the future /repos/client-code-REL_18/HEAD/pgsql.build/src/interfaces/ecpg/preproc/ecpg.c:72:(.text+0x16): undefined reference to `mm_alloc'
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-11-15%2012%3A35%3A06 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-12-03%2000%3A01%3A19 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-12-07%2013%3A57%3A11 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-12-07%2019%3A40%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-12-08%2000%3A12%3A34 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-12-07%2019%3A23%3A14 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-12-09%2006%3A51%3A21 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-12-09%2002%3A14%3A09 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-12-09%2006%3A34%3A34 - REL_17_STABLE
2024-12-02 20:49:45.644 CST [54997:1] FATAL: could not create shared memory segment: No space left on device
--- /home/bf/proj/bf/build-farm-17/REL_15_STABLE/pgsql.build/src/test/regress/expected/select_parallel.out 2024-12-16 23:43:03.742123619 +0000 +++ /home/bf/proj/bf/build-farm-17/REL_15_STABLE/pgsql.build/src/test/recovery/tmp_check/results/select_parallel.out 2024-12-16 23:48:25.102618931 +0000 @@ -551,7 +551,7 @@ -> Nested Loop (actual rows=98000 loops=1) -> Seq Scan on tenk2 (actual rows=10 loops=1) Filter: (thousand = 0) - Rows Removed by Filter: 9990 + Rows Removed by Filter: 447009543
--- /home/bf/proj/bf/build-farm-17/REL_16_STABLE/pgsql.build/src/test/regress/expected/select_parallel.out 2024-12-21 22:18:03.844773742 +0000 +++ /home/bf/proj/bf/build-farm-17/REL_16_STABLE/pgsql.build/src/test/recovery/tmp_check/results/select_parallel.out 2024-12-21 22:23:28.264849796 +0000 @@ -551,7 +551,7 @@ -> Nested Loop (actual rows=98000 loops=1) -> Seq Scan on tenk2 (actual rows=10 loops=1) Filter: (thousand = 0) - Rows Removed by Filter: 9990 + Rows Removed by Filter: 9395
@@ -179,7 +179,7 @@ Hits: 980 Misses: 20 Evictions: Zero Overflows: 0 Memory Usage: NkB -> Seq Scan on tenk1 t2 (actual rows=1 loops=N) Filter: ((t1.twenty = unique1) AND (t1.two = two)) - Rows Removed by Filter: 9999 + Rows Removed by Filter: 9775
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-17%2008%3A59%3A44 - master
--- C:/prog/bf/root/HEAD/pgsql/src/test/regress/expected/stats.out 2024-09-18 19:31:14.665516500 +0000 +++ C:/prog/bf/root/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/stats.out 2024-12-17 09:57:08.944588500 +0000 @@ -1291,7 +1291,7 @@ SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes; ?column? ---------- - t + f (1 row)
Maybe bgwriter stole buffers from the checkpointer?
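If so, comparing shared-buffer writes per backend type around the checkpoint would show it; a hedged sketch (not the exact query stats.sql uses):
SELECT backend_type, sum(writes) AS writes
  FROM pg_stat_io
 WHERE object = 'relation' AND context = 'normal'
 GROUP BY backend_type;
-- If the background writer flushed the dirty buffers first, the checkpointer's
-- writes count may not increase between the before/after snapshots, matching the t -> f flip above.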
# poll_query_until timed out executing this query: # # SELECT vacuum_count > 0 # FROM pg_stat_all_tables WHERE relname = 'vac_horizon_floor_table'; 2024-12-14 10:43:37.277 UTC [11534840:9] 043_vacuum_horizon_floor.pl LOG: statement: VACUUM (VERBOSE, FREEZE) vac_horizon_floor_table; ... 2024-12-14 10:43:47.361 UTC [11534840:10] 043_vacuum_horizon_floor.pl LOG: using stale statistics instead of current ones because stats collector is not responding 2024-12-14 10:43:47.361 UTC [11534840:11] 043_vacuum_horizon_floor.pl STATEMENT: VACUUM (VERBOSE, FREEZE) vac_horizon_floor_table; 2024-12-14 10:43:47.362 UTC [11534840:12] 043_vacuum_horizon_floor.pl INFO: aggressively vacuuming "public.vac_horizon_floor_table" ... 2024-12-14 10:43:49.296 UTC [11534840:25] 043_vacuum_horizon_floor.pl INFO: index "vac_horizon_floor_table_col1_idx" now contains 3 row versions in 551 pages 2024-12-14 10:43:49.296 UTC [11534840:26] 043_vacuum_horizon_floor.pl DETAIL: 200000 index row versions were removed. 544 index pages were newly deleted. 544 index pages are currently deleted, of which 0 are currently reusable. CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s. 2024-12-14 10:43:49.296 UTC [11534840:27] 043_vacuum_horizon_floor.pl CONTEXT: while cleaning up index "vac_horizon_floor_table_col1_idx" of relation "public.vac_horizon_floor_table" 2024-12-14 10:43:49.296 UTC [11534840:28] 043_vacuum_horizon_floor.pl INFO: table "vac_horizon_floor_table": found 199559 removable, 3 nonremovable row versions in 885 out of 885 pages 2024-12-14 10:43:49.296 UTC [11534840:29] 043_vacuum_horizon_floor.pl DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 741 Skipped 0 pages due to buffer pins, 0 frozen pages. CPU: user: 0.09 s, system: 0.03 s, elapsed: 1.93 s. 2024-12-14 10:43:49.296 UTC [11534840:30] 043_vacuum_horizon_floor.pl CONTEXT: while scanning relation "public.vac_horizon_floor_table" ...