Known Buildfarm Test Failures
Investigated test failures
027_stream_regress.pl fails to wait for standby because of incorrect CRC in WAL
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-05-02%2006%3A40%3A36 - HEAD
(dodo is an armv7l machine using SLICING_BY_8_CRC32C, with wal_consistency_checking enabled)
# poll_query_until timed out executing this query:
# SELECT '2/8E09BD70' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby_1', 'walreceiver')
# expecting this output:
# t
# last actual query output:
#
# with stderr:
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 29 just after 2.
[17:19:00] t/027_stream_regress.pl ...............
Dubious, test returned 29 (wstat 7424, 0x1d00)
All 2 subtests passed
---
027_stream_regress_standby_1.log:
2024-05-02 17:08:18.579 ACST [3404:205] LOG: restartpoint starting: wal
2024-05-02 17:08:18.401 ACST [3406:7] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:08:18.579 ACST [3407:2] FATAL: terminating walreceiver process due to administrator command
2024-05-02 17:08:18.579 ACST [3406:8] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
...
2024-05-02 17:19:00.093 ACST [3406:2604] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:19:00.093 ACST [3406:2605] LOG: waiting for WAL to become available at 0/F1002000
2024-05-02 17:19:00.594 ACST [3406:2606] LOG: incorrect resource manager data checksum in record at 0/F14D7A60
2024-05-02 17:19:00.594 ACST [3406:2607] LOG: waiting for WAL to become available at 0/F1002000
2024-05-02 17:19:00.758 ACST [3403:4] LOG: received immediate shutdown request
2024-05-02 17:19:00.785 ACST [3403:5] LOG: database system is shut down
WAL record CRC calculated incorrectly because of underlying buffer modification
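For background: each WAL record carries a CRC-32C computed when the record is assembled, and replay recomputes it over the bytes read back, which is what produces the "incorrect resource manager data checksum" errors above; any modification of the source buffer between those two computations yields a mismatch. Below is a minimal standalone sketch of that effect, using a plain bitwise CRC-32C as a stand-in for PostgreSQL's SLICING_BY_8 implementation; the record contents are made up.
#include <stdint.h>
#include <stdio.h>
#include <string.h>
/* Plain bitwise CRC-32C (Castagnoli), an illustrative stand-in for pg_crc32c. */
static uint32_t
crc32c(const unsigned char *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++)
    {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & (0U - (crc & 1)));
    }
    return crc ^ 0xFFFFFFFF;
}
int
main(void)
{
    unsigned char record[64];
    uint32_t    stored_crc;
    memset(record, 0xAB, sizeof(record));       /* pretend WAL record payload */
    /* CRC taken while the record is assembled ("stored" in the record) */
    stored_crc = crc32c(record, sizeof(record));
    /* a modification of the source buffer after the CRC was computed */
    record[10] ^= 0x01;
    /* verification over the bytes actually written/read back now fails */
    if (crc32c(record, sizeof(record)) != stored_crc)
        puts("incorrect resource manager data checksum (simulated)");
    return 0;
}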
020_archive_status.pl failed to wait for updated statistics because send() returned EAGAIN
(morepork is running on OpenBSD 6.9)
# poll_query_until timed out executing this query:
# SELECT archived_count FROM pg_stat_archiver
# expecting this output:
# 1
# last actual query output:
# 0
# with stderr:
# Looks like your test exited with 29 just after 4.
[23:01:41] t/020_archive_status.pl ..............
Dubious, test returned 29 (wstat 7424, 0x1d00)
Failed 12/16 subtests
---
020_archive_status_master.log:
2024-04-30 22:57:27.931 CEST [83115:1] LOG: archive command failed with exit code 1
2024-04-30 22:57:27.931 CEST [83115:2] DETAIL: The failed archive command was: cp "pg_wal/000000010000000000000001_does_not_exist" "000000010000000000000001_does_not_exist"
...
2024-04-30 22:57:28.070 CEST [47962:2] [unknown] LOG: connection authorized: user=pgbf database=postgres application_name=020_archive_status.pl
2024-04-30 22:57:28.072 CEST [47962:3] 020_archive_status.pl LOG: statement: SELECT archived_count FROM pg_stat_archiver
2024-04-30 22:57:28.073 CEST [83115:3] LOG: could not send to statistics collector: Resource temporarily unavailable
Non-systematic handling of EINTR/EAGAIN/EWOULDBLOCK
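The "Resource temporarily unavailable" in the log above is send() on the pre-v15 statistics-collector socket returning EAGAIN; the message is then dropped (the old collector protocol is lossy), so the archived_count the test polls for may never show up. As a rough illustration of the systematic retry handling the thread title refers to, here is a generic POSIX sketch; send_all() is a hypothetical helper, not pgstat code.
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>
/*
 * Hypothetical helper: keep retrying send() instead of dropping the message
 * when the call is interrupted by a signal (EINTR) or the socket buffer is
 * momentarily full (EAGAIN/EWOULDBLOCK).
 */
static int
send_all(int sock, const char *buf, size_t len)
{
    while (len > 0)
    {
        ssize_t n = send(sock, buf, len, 0);
        if (n >= 0)
        {
            buf += n;
            len -= (size_t) n;
            continue;
        }
        if (errno == EINTR)
            continue;                   /* interrupted: just retry */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
        {
            struct pollfd pfd = {.fd = sock, .events = POLLOUT};
            (void) poll(&pfd, 1, -1);   /* wait until writable, then retry */
            continue;
        }
        return -1;                      /* a genuine error */
    }
    return 0;
}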
031_recovery_conflict.pl fails to detect an expected lock acquisition
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-03-18%2023%3A43%3A00 - HEAD
[23:48:52.521](9.831s) ok 13 - startup deadlock: cursor holding conflicting pin, also waiting for lock, established
[23:55:13.749](381.228s) # poll_query_until timed out executing this query:
#
# SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;
#
# expecting this output:
# waiting
# last actual query output:
#
# with stderr:
[23:55:13.763](0.013s) not ok 14 - startup deadlock: lock acquisition is waiting
[23:55:13.763](0.001s) # Failed test 'startup deadlock: lock acquisition is waiting'
# at /home/bf/bf-build/adder/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 261.
Waiting for replication conn standby's replay_lsn to pass 0/3450000 on primary
done
---
031_recovery_conflict_standby.log
2024-03-18 23:48:52.526 UTC [3138907][client backend][1/2:0] LOG: statement: SELECT * FROM test_recovery_conflict_table2;
2024-03-18 23:48:52.690 UTC [3139905][not initialized][:0] LOG: connection received: host=[local]
2024-03-18 23:48:52.692 UTC [3139905][client backend][2/1:0] LOG: connection authenticated: user="bf" method=trust (/home/bf/bf-build/adder/HEAD/pgsql.build/testrun/recovery/031_recovery_conflict/data/t_031_recovery_conflict_standby_data/pgdata/pg_hba.conf:117)
2024-03-18 23:48:52.692 UTC [3139905][client backend][2/1:0] LOG: connection authorized: user=bf database=postgres application_name=031_recovery_conflict.pl
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] LOG: recovery still waiting after 10.099 ms: recovery conflict on buffer pin
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] CONTEXT: WAL redo at 0/342CCC0 for Heap2/PRUNE: ...
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] ERROR: canceling statement due to conflict with recovery at character 15
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] DETAIL: User transaction caused buffer deadlock with recovery.
2024-03-18 23:48:53.301 UTC [3138907][client backend][1/2:0] STATEMENT: SELECT * FROM test_recovery_conflict_table2;
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] LOG: recovery finished waiting after 10.633 ms: recovery conflict on buffer pin
2024-03-18 23:48:53.301 UTC [3136308][startup][34/0:0] CONTEXT: WAL redo at 0/342CCC0 for Heap2/PRUNE: ...
2024-03-18 23:48:53.769 UTC [3139905][client backend][2/2:0] LOG: statement: SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-14%2016%3A39%3A49 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-26%2005%3A49%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-02-01%2007%3A11%3A46 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-06%2007%3A10%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-16%2007%3A14%3A48 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-27%2019%3A48%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-04-30%2014%3A32%3A56 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-07-23%2011%3A25%3A45 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-08-18%2012%3A40%3A10 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-10-07%2012%3A31%3A57 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-10-26%2023%3A39%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-10-31%2018%3A42%3A01 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-11-07%2021%3A25%3A18 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-12-08%2017%3A36%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-05%2020%3A54%3A24 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-19%2012%3A45%3A55 - REL_18_STABLE
Test 031_recovery_conflict.pl is not immune to autovacuum
031_recovery_conflict.pl fails when a conflict is counted twice
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-05-15%2023%3A03%3A30 - HEAD
(olingo builds postgres with -O1 and address sanitizer)
[23:12:02.127](0.166s) not ok 6 - snapshot conflict: stats show conflict on standby
[23:12:02.130](0.003s) # Failed test 'snapshot conflict: stats show conflict on standby'
# at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 332.
[23:12:02.130](0.000s) # got: '2'
# expected: '1'
...
[23:12:06.848](1.291s) not ok 17 - 5 recovery conflicts shown in pg_stat_database
[23:12:06.887](0.040s) # Failed test '5 recovery conflicts shown in pg_stat_database'
# at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 286.
[23:12:06.887](0.000s) # got: '6'
# expected: '5'
Waiting for replication conn standby's replay_lsn to pass 0/3459160 on primary
done
---
031_recovery_conflict_standby.log:
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] DETAIL: User query might have needed to see row versions that must be removed.
2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] LOG: could not send data to client: Broken pipe
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] DETAIL: User query might have needed to see row versions that must be removed.
2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2025-05-18%2018%3A30%3A24 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2026-01-05%2009%3A56%3A37 - REL_18_STABLE
Test 031_recovery_conflict fails when a conflict counted twice
001_rep_changes.pl fails due to publisher stuck on shutdown
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-05-16%2014%3A22%3A38 - HEAD
[14:33:02.374](0.333s) ok 23 - update works with dropped subscriber column
### Stopping node "publisher" using mode fast
# Running: pg_ctl -D /home/bf/bf-build/adder/HEAD/pgsql.build/testrun/subscription/001_rep_changes/data/t_001_rep_changes_publisher_data/pgdata -m fast stop
waiting for server to shut down.. ... ... ... .. failed
pg_ctl: server does not shut down
# pg_ctl stop failed: 256
# Postmaster PID for node "publisher" is 2222549
[14:39:04.375](362.001s) Bail out! pg_ctl stop failed
---
001_rep_changes_publisher.log
2024-05-16 14:33:02.907 UTC [2238704][client backend][4/22:0] LOG: statement: DELETE FROM tab_rep
2024-05-16 14:33:02.925 UTC [2238704][client backend][:0] LOG: disconnection: session time: 0:00:00.078 user=bf database=postgres host=[local]
2024-05-16 14:33:02.939 UTC [2222549][postmaster][:0] LOG: received fast shutdown request
2024-05-16 14:33:03.000 UTC [2222549][postmaster][:0] LOG: aborting any active transactions
2024-05-16 14:33:03.049 UTC [2222549][postmaster][:0] LOG: background worker "logical replication launcher" (PID 2223110) exited with exit code 1
2024-05-16 14:33:03.062 UTC [2222901][checkpointer][:0] LOG: shutting down
2024-05-16 14:39:04.377 UTC [2222549][postmaster][:0] LOG: received immediate shutdown request
2024-05-16 14:39:04.382 UTC [2222549][postmaster][:0] LOG: database system is shut down
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-04-24%2014%3A38%3A35 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-07-16%2022%3A45%3A10 - REL_17_STABLE
Also 035_standby_logical_decoding.pl fails on restart of standby (which is a publisher in the test):
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-04-17%2014%3A21%3A00 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-04-06%2016%3A28%3A38 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-11%2009%3A54%3A09 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-09%2003%3A46%3A44 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-10-09%2009%3A54%3A31 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-11-21%2006%3A25%3A02 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-11-27%2016%3A54%3A24 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-12-18%2003%3A32%3A12 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2024-12-18%2003%3A34%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2025-01-31%2018%3A53%3A38 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2025-02-18%2011%3A43%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2025-02-25%2001%3A09%3A53 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2025-03-05%2021%3A54%3A58 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2025-03-08%2018%3A04%3A18 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-03-21%2015%3A37%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2025-03-28%2008%3A57%3A23 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2025-03-30%2020%3A30%3A16 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2025-04-16%2001%3A58%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-04-15%2016%3A16%3A06 - REL_17_STABLE
001_rep_changes.pl fails due to publisher stuck on shutdown
027_stream_regress.pl fails on crake with timeout when waiting for catchup
150/263 postgresql:recovery / recovery/027_stream_regress ERROR 1246.17s exit status 29
---
regress_log_027_stream_regress
[11:24:44.119](225.205s) # poll_query_until timed out executing this query:
# SELECT '2/791D9828' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby_1', 'walreceiver')
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
timed out waiting for catchup at /home/andrew/bf/root/REL_16_STABLE/pgsql/src/test/recovery/t/027_stream_regress.pl line 100.
---
027_stream_regress_standby_1.log
2024-07-17 11:24:13.363 EDT [2024-07-17 11:04:06 EDT 1365647:393] LOG: restartpoint starting: wal
2024-07-17 11:24:22.384 EDT [2024-07-17 11:04:06 EDT 1365647:394] LOG: restartpoint complete: wrote 92 buffers (71.9%); 0 WAL file(s) added, 1 removed, 3 recycled; write=9.021 s, sync=0.001 s, total=9.022 s; sync files=0, longest=0.000 s, average=0.000 s; distance=63581 kB, estimate=67348 kB; lsn=1/B4A99C78, redo lsn=1/B1053BC8
2024-07-17 11:24:22.384 EDT [2024-07-17 11:04:06 EDT 1365647:395] LOG: recovery restart point at 1/B1053BC8
2024-07-17 11:24:22.384 EDT [2024-07-17 11:04:06 EDT 1365647:396] DETAIL: Last completed transaction was at log time 2024-07-17 11:11:58.69292-04.
2024-07-17 11:24:44.260 EDT [2024-07-17 11:04:06 EDT 1365651:2] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-09%2021%3A37%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-15%2005%3A18%3A04 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-19%2004%3A30%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-19%2004%3A29%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-19%2017%3A44%3A10 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-23%2000%3A36%3A59 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-23%2008%3A07%3A08 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-24%2004%3A29%3A23 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-24%2011%3A42%3A19 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-24%2023%3A39%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-24%2012%3A04%3A29 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2014%3A07%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2020%3A18%3A05 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-26%2011%3A15%3A58 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-26%2016%3A12%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-29%2016%3A23%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-31%2014%3A16%3A48 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-31%2013%3A57%3A32 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-31%2013%3A35%3A04 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-02%2019%3A57%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-07%2019%3A00%3A33 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-09%2002%3A05%3A44 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-09%2021%3A12%3A02 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-10%2008%3A22%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-10%2018%3A47%3A03 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-10%2022%3A23%3A46 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-11%2019%3A47%3A02 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-08-19%2022%3A57%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-04%2021%3A42%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-11%2007%3A47%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-13%2022%3A45%3A47 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-16%2018%3A25%3A07 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-18%2003%3A11%3A48 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-25%2000%3A58%3A56 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-27%2021%3A56%3A30 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-09-30%2017%3A02%3A17 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-02%2018%3A33%3A52 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-04%2014%3A42%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-08%2011%3A41%3A20 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-11%2003%3A52%3A51 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-21%2018%3A02%3A43 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-25%2012%3A32%3A35 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-11-25%2006%3A33%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-12-07%2020%3A47%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-12-18%2000%3A06%3A26 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-06%2008%3A32%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-09%2005%3A44%3A46 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-11%2018%3A29%3A41 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-14%2008%3A46%3A33 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-15%2010%3A47%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-14%2018%3A30%3A48 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-17%2004%3A22%3A09 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-21%2015%3A29%3A10 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-22%2019%3A34%3A51 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-24%2006%3A07%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-01-25%2003%3A33%3A16 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-12%2021%3A12%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-14%2013%3A30%3A13 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-14%2019%3A02%3A03 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-18%2011%3A47%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-19%2003%3A38%3A00 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-19%2020%3A48%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-20%2016%3A22%3A01 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-21%2008%3A17%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-24%2017%3A42%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-02-27%2016%3A42%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-03-03%2000%3A22%3A02 - master
Recent 027_streaming_regress.pl hangs \ crake is failing due to other reasons
027_stream_regress.pl fails because some IOS plans of queries in create_index.sql have changed
# Failed test 'regression tests pass'
# at t/027_stream_regress.pl line 92.
# got: '256'
# expected: '0'
# Looks like you failed 1 test of 6.
[07:07:42] t/027_stream_regress.pl ..............
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/6 subtests
---
regress_log_027_stream_regress:
...
not ok 66 + create_index 27509 ms
...
----
diff -U3 /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/regress/expected/create_index.out /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/recovery/tmp_check/results/create_index.out
--- /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/regress/expected/create_index.out 2023-07-08 15:26:29.000000000 +0000
+++ /home/nm/farm/gcc64/REL_16_STABLE/pgsql.build/src/test/recovery/tmp_check/results/create_index.out 2024-03-17 06:59:01.000000000 +0000
@@ -1916,11 +1916,15 @@
SELECT unique1 FROM tenk1
WHERE unique1 IN (1,42,7)
ORDER BY unique1;
- QUERY PLAN
- -------------------------------------------------------
- Index Only Scan using tenk1_unique1 on tenk1
- Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
- (2 rows)
+ QUERY PLAN
+ -------------------------------------------------------------------
+ Sort
+ Sort Key: unique1
+ -> Bitmap Heap Scan on tenk1
+ Recheck Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
+ -> Bitmap Index Scan on tenk1_unique1
+ Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))
+ (6 rows)
SELECT unique1 FROM tenk1
WHERE unique1 IN (1,42,7)
@@ -1936,12 +1940,13 @@
SELECT thousand, tenthous FROM tenk1
WHERE thousand < 2 AND tenthous IN (1001,3000)
ORDER BY thousand;
- QUERY PLAN
- -------------------------------------------------------
- Index Only Scan using tenk1_thous_tenthous on tenk1
- Index Cond: (thousand < 2)
- Filter: (tenthous = ANY ('{1001,3000}'::integer[]))
- (3 rows)
+ QUERY PLAN
+ --------------------------------------------------------------------------------------
+ Sort
+ Sort Key: thousand
+ -> Index Only Scan using tenk1_thous_tenthous on tenk1
+ Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))
+ (4 rows)
SELECT thousand, tenthous FROM tenk1
WHERE thousand < 2 AND tenthous IN (1001,3000)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2025-01-17%2009%3A21%3A57 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2025-06-05%2009%3A19%3A04 - REL_16_STABLE
Also 002_pg_upgrade.pl fails because some IOS plans of queries in create_index.sql have changed:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-01-02%2007%3A09%3A09 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2023-11-15%2006%3A16%3A15 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2025-03-03%2007%3A43%3A17 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2025-07-08%2004%3A47%3A58 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2025-10-02%2004%3A18%3A48 - REL_16_STABLE
To what extent should tests rely on VACUUM ANALYZE? \ create_index failures
xversion-upgrade-XXX fails due to pg_ctl timeout
REL9_5_STABLE-ctl4.log
waiting for server to shut down........................................................................................................................... failed
pg_ctl: server does not shut down
Also test runs fail on the stopdb-C-x stage
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-06-08%2001%3A41%3A41 - HEAD
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-03-06%2023%3A42%3A23 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-04-02%2019%3A05%3A04 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-04-27%2015%3A08%3A10 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-06-13%2017%3A58%3A28 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=habu&dt=2024-08-05%2003%3A11%3A29 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-08-13%2002%3A04%3A07 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-08-12%2015%3A42%3A48 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-08-20%2003%3A02%3A27 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-08-20%2002%3A04%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-08-23%2015%3A09%3A02 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-10-30%2008%3A50%3A01 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-10-30%2006%3A39%3A15 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-10-30%2005%3A06%3A23 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2025-02-25%2000%3A43%3A03 - master
The xversion-upgrade test fails to stop server
A partial fix: The xversion-upgrade test fails to stop server \ PGCTLTIMEOUT increased on crake
subscriber tests fail due to an assertion failure in SnapBuildInitialSnapshot()
Bailout called. Further testing stopped: pg_ctl stop failed
t/031_column_list.pl ............... ok
---
031_column_list_publisher.log
2024-05-16 00:23:24.522 UTC [1882382][walsender][5/22:0] LOG: received replication command: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
2024-05-16 00:23:24.522 UTC [1882382][walsender][5/22:0] STATEMENT: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] LOG: logical decoding found consistent point at 0/164A088
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] DETAIL: There are no running transactions.
2024-05-16 00:23:24.639 UTC [1882382][walsender][5/22:0] STATEMENT: CREATE_REPLICATION_SLOT "pg_16588_sync_16582_7369385153852978065" LOGICAL pgoutput (SNAPSHOT 'use')
TRAP: FailedAssertion("TransactionIdPrecedesOrEquals(safeXid, snap->xmin)", File: "/home/bf/bf-build/skink/REL_15_STABLE/pgsql.build/../pgsql/src/backend/replication/logical/snapbuild.c", Line: 614, PID: 756819)
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] ERROR: cannot use different column lists for table "public.test_mix_1" in different publications
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] CONTEXT: slot "sub1", output plugin "pgoutput", in the change callback, associated LSN 0/163B860
2024-05-09 07:11:55.444 UTC [756803][walsender][4/0:0] STATEMENT: START_REPLICATION SLOT "sub1" LOGICAL 0/0 (proto_version '3', publication_names '"pub_mix_1","pub_mix_2"')
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(ExceptionalCondition+0x92)[0x6bc2db]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(SnapBuildInitialSnapshot+0x1fd)[0x521e82]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x430bb1)[0x538bb1]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(exec_replication_command+0x3c9)[0x53ac9a]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(PostgresMain+0x748)[0x58f8f1]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3efabb)[0x4f7abb]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3f1bba)[0x4f9bba]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(+0x3f1dc8)[0x4f9dc8]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(PostmasterMain+0x1133)[0x4fb36b]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(main+0x210)[0x448be9]
/lib/x86_64-linux-gnu/libc.so.6(+0x27b8a)[0x4cd0b8a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x4cd0c45]
postgres: publisher: walsender bf [local] CREATE_REPLICATION_SLOT(_start+0x21)[0x1d2b71]
2024-05-09 07:11:55.588 UTC [747458][postmaster][:0] LOG: server process (PID 756819) was terminated by signal 6: Aborted
2024-05-09 07:11:55.588 UTC [747458][postmaster][:0] DETAIL: Failed process was running: CREATE_REPLICATION_SLOT "pg_16586_sync_16580_7366892877332646335" LOGICAL pgoutput (SNAPSHOT 'use')
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2024-02-09%2012%3A46%3A37 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-09%2003%3A48%3A10 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-09-14%2013%3A22%3A59 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2025-10-03%2008%3A14%3A48 - REL_15_STABLE
Assertion failure in SnapBuildInitialSnapshot()
Upgrade tests fail on Windows because pg_upgrade_output.d/ is not removed
2/242 postgresql:pg_upgrade / pg_upgrade/004_subscription ERROR 98.04s exit status 1
---
regress_log_004_subscription
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/PGSQL~1.BUI/TMP_IN~1/tools/nmsys64/home/pgrunner/bf/root/HEAD/inst/bin/vacuumdb --all --analyze-in-stages
Running this script will delete the old cluster's data files:
delete_old_cluster.bat
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log": Directory not empty
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667": Directory not empty
pg_upgrade: warning: could not stat file "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log/pg_upgrade_internal.log": No such file or directory
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667/log": Directory not empty
pg_upgrade: warning: could not remove directory "C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240613T060900.667": Directory not empty
[06:09:33.510](34.360s) ok 8 - run of pg_upgrade for old instance when the subscription tables are in init/ready state
[06:09:33.510](0.000s) not ok 9 - pg_upgrade_output.d/ removed after successful pg_upgrade
[06:09:33.511](0.001s) # Failed test 'pg_upgrade_output.d/ removed after successful pg_upgrade'
# at C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/004_subscription.pl line 265.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-06-13%2011%3A03%3A07 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-30%2008%3A41%3A20 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-11%2006%3A16%3A10 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-17%2002%3A19%3A56 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-04%2013%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-04%2020%3A03%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-20%2012%3A29%3A06 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-27%2008%3A00%3A08 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-12-19%2010%3A06%3A05 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-01-11%2006%3A34%3A24 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-02-06%2004%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-02-17%2000%3A02%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-03-29%2006%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-05-20%2004%3A48%3A19 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-06-27%2022%3A11%3A44 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-14%2001%3A12%3A33 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-15%2002%3A08%3A54 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-15%2022%3A00%3A58 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-10-31%2002%3A15%3A25 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-11-29%2017%3A08%3A12 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-01%2011%3A03%3A10 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-02%2001%3A03%3A10 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-16%2012%3A34%3A51 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-12-23%2022%3A54%3A17 - REL_17_STABLE
Also buildfarm's check-pg_upgrade fails on removing data.old
c:\\build-farm-local\\buildroot\\REL_12_STABLE\\pgsql.build\\src\\bin\\pg_upgrade>RMDIR /s/q "c:\\build-farm-local\\buildroot\\REL_12_STABLE\\pgsql.build\\src\\bin\\pg_upgrade\\tmp_check\\data.old"
\203f\203B\203\214\203N\203g\203\212\202\252\213\363\202\305\202\315\202\240\202\350\202\334\202\271\202\361\201B
---
The last line is the Japanese message ディレクトリが空ではありません。 ("Directory not empty") encoded in SJIS.
pg_upgrade test failure \ the output directory remains after successful upgrade
Miscellaneous test failures in v14- on Windows due to "Permission denied" errors
============== shutting down postmaster ==============
pg_ctl: could not open PID file "C:/tools/nmsys64/home/pgrunner/bf/root/REL_14_STABLE/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid": Permission denied
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-07-10%2002%3A27%3A04 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-07-10%2002%3A09%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-08-08%2001%3A11%3A00 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-08-08%2001%3A31%3A42 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-06%2011%3A03%3A06 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-10-11%2008%3A54%3A27 - REL_17_STABLE (upgrade from REL_11_STABLE)
stat() vs ERROR_DELETE_PENDING, round N + 1 \ pushing fix e2f0f8ed2 to v15+
Miscellaneous tests fail on Windows because the connection is closed before the final error message is received
# Failed test 'certificate authorization fails with revoked client cert with server-side CRL directory: matches'
# at t/001_ssltests.pl line 742.
# 'psql: error: connection to server at "127.0.0.1", port 57497 failed: server closed the connection unexpectedly
# This probably means the server terminated abnormally
# before or while processing the request.
# server closed the connection unexpectedly
# This probably means the server terminated abnormally
# before or while processing the request.'
# doesn't match '(?^:SSL error: ssl[a-z0-9/]* alert certificate revoked)'
# Looks like you failed 1 test of 180.
[16:08:45] t/001_ssltests.pl ..
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/180 subtests
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-08-31%2007%3A54%3A58 - HEAD
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-09-28%2019%3A42%3A52 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-11%2001%3A24%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-10-29%2001%3A23%3A21 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-07%2006%3A09%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-14%2012%3A25%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-15%2020%3A38%3A19 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-11-17%2011%3A03%3A16 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-12-11%2005%3A48%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-12-30%2010%3A30%3A16 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-01-08%2021%3A07%3A23 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-01-12%2008%3A17%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-01-13%2022%3A29%3A58 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-01-23%2003%3A02%3A59 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-02-07%2015%3A04%3A47 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-02-15%2019%3A47%3A04 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-15%2020%3A06%3A23 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-15%2013%3A20%3A56 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-26%2004%3A33%3A53 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-04-09%2019%3A10%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-04-11%2009%3A48%3A22 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-04-24%2014%3A57%3A44 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-05-01%2018%3A20%3A48 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-05-23%2002%3A39%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-05-28%2000%3A56%3A35 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-06-27%2005%3A58%3A20 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-07-05%2010%3A09%3A58 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-07-11%2009%3A49%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-07-13%2008%3A24%3A33 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-07-20%2013%3A20%3A01 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-07-23%2005%3A03%3A06 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-08-04%2002%3A50%3A42 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-08-06%2002%3A00%3A33 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-08-13%2011%3A37%3A16 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-08-18%2007%3A19%3A20 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-08-21%2020%3A58%3A30 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-08-26%2012%3A23%3A41 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-09-04%2009%3A22%3A19 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-06%2016%3A18%3A47 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-09-09%2020%3A43%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-09-13%2007%3A40%3A16 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-09-13%2009%3A50%3A05 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-15%2022%3A00%3A58 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-19%2020%3A24%3A50 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-27%2006%3A38%3A55 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-27%2017%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-09-28%2020%3A39%3A05 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-09-29%2008%3A17%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-10-03%2009%3A39%3A25 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-10-04%2006%3A22%3A59 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-10-07%2020%3A38%3A36 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-10-09%2004%3A24%3A25 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-10-09%2023%3A26%3A51 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-10-18%2011%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-10-28%2023%3A38%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-11-05%2018%3A50%3A06 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-11-07%2011%3A44%3A01 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-11-12%2020%3A29%3A57 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-11-14%2010%3A21%3A56 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-11-18%2007%3A34%3A19 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-11-18%2009%3A55%3A53 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-11-20%2003%3A24%3A29 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-11-20%2021%3A26%3A50 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-11-30%2002%3A38%3A51 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-03%2003%3A21%3A32 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-05%2018%3A53%3A38 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-08%2023%3A16%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-12-09%2004%3A09%3A35 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-18%2000%3A33%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-02%2008%3A25%3A13 -
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-02%2022%3A49%3A58 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-10%2000%3A52%3A53 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-17%2009%3A06%3A12 - REL_17_STABLE
Why is src/test/modules/committs/t/002_standby.pl flaky? \ A new attempt to fix this mess
031_recovery_conflict.pl test might fail due to late pgstat entries flushing
23/296 postgresql:recovery / recovery/031_recovery_conflict ERROR 11.55s exit status 1
---
regress_log_031_recovery_conflict
[07:58:53.979](0.255s) ok 11 - tablespace conflict: logfile contains terminated connection due to recovery conflict
[07:58:54.058](0.080s) not ok 12 - tablespace conflict: stats show conflict on standby
[07:58:54.059](0.000s) # Failed test 'tablespace conflict: stats show conflict on standby'
# at /home/bf/bf-build/rorqual/REL_17_STABLE/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 332.
[07:58:54.059](0.000s) # got: '0'
# expected: '1'
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2025-03-05%2014%3A31%3A46 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2025-07-14%2020%3A14%3A03 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2025-08-26%2017%3A09%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2025-10-21%2016%3A35%3A23 - master
The 031_recovery_conflict.pl test might fail due to late pgstat entries flushing
culicidae failed to restart server due to incorrect checksum in control file
(culicidae tests EXEC_BACKEND)
001_auth_node.log
2024-07-24 04:19:28.403 UTC [1018014][postmaster][:0] LOG: starting PostgreSQL 16.3 on x86_64-linux, compiled by gcc-13.3.0, 64-bit
2024-07-24 04:19:28.427 UTC [1018014][postmaster][:0] LOG: listening on Unix socket "/tmp/U3Osq_FaO8/.s.PGSQL.12427"
2024-07-24 04:19:29.036 UTC [1018564][startup][:0] LOG: database system was shut down at 2024-07-24 04:19:27 UTC
2024-07-24 04:19:29.038 UTC [1018562][not initialized][:0] FATAL: incorrect checksum in control file
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-11-07%2006%3A21%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-12-15%2020%3A29%3A49 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2025-03-21%2017%3A17%3A31 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2025-03-27%2000%3A16%3A30 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2025-03-29%2015%3A00%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2025-10-06%2008%3A00%3A34 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2025-11-22%2012%3A31%3A23 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2025-12-14%2018%3A24%3A48 - master? (most likely the cause is the same, but there is no log to confirm)
Also culicidae failed a regression test due to an incorrect checksum
▶ 1/1 + partition_prune 3736 ms FAIL
---
inst/logfile
2024-08-17 01:25:31.254 UTC [2841385][client backend][43/184:0] LOG: connection authorized: user=buildfarm database=regression application_name=pg_regress/partition_prune
...
2024-08-17 01:25:33.676 UTC [2842326][not initialized][:0] FATAL: incorrect checksum in control file
...
2024-08-17 01:25:33.683 UTC [2841385][client backend][43/553:0] ERROR: parallel worker failed to initialize
2024-08-17 01:25:33.683 UTC [2841385][client backend][43/553:0] HINT: More details may be available in the server log.
2024-08-17 01:25:33.683 UTC [2841385][client backend][43/553:0] CONTEXT: PL/pgSQL function explain_parallel_append(text) line 5 at FOR over EXECUTE statement
2024-08-17 01:25:33.683 UTC [2841385][client backend][43/553:0] STATEMENT: select explain_parallel_append('select avg(ab.a) from ab inner join lprt_a a on ab.a = a.a where a.a in(1, 0, 0)');
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2025-01-24%2018%3A22%3A01 - master
Also drongo failed to restart server due to incorrect checksum in control file
(drongo is a Windows animal)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-27%2022%3A20%3A42 - master
43/317 postgresql:recovery / recovery/039_end_of_wal ERROR 89.56s exit status 2
---
pgsql.build/testrun/recovery/039_end_of_wal/log/regress_log_039_end_of_wal
[23:38:05.663](26.329s) ok 3 - xl_tot_len short at end-of-page
connection error: 'psql: error: connection to server at "127.0.0.1", port 25199 failed: FATAL: the database system is in recovery mode'
---
pgsql.build/testrun/recovery/039_end_of_wal/log/039_end_of_wal_node.log
2025-03-27 23:38:05.042 UTC [3144:1] LOG: starting PostgreSQL 18devel on x86_64-windows, compiled by msvc-19.23.28105.4, 64-bit
2025-03-27 23:38:05.043 UTC [3144:2] LOG: listening on IPv4 address "127.0.0.1", port 25199
...
2025-03-27 23:38:05.496 UTC [6464:1] FATAL: incorrect checksum in control file
...
2025-03-27 23:38:06.231 UTC [6400:2] [unknown] FATAL: the database system is in recovery mode
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-10-27%2005%3A41%3A26 - master
race condition when writing pg_control \ the issue in question apparently happened in the wild
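The common pattern in these reports is that a freshly launched process (an EXEC_BACKEND backend or a parallel worker) reads the control file while it is concurrently being rewritten, so a torn read fails the stored CRC check. Purely as an illustration of reading a checksummed state file defensively, here is a sketch that re-reads on a mismatch; the file layout, the crc32c() helper, and the retry policy are assumptions of this sketch, not PostgreSQL's actual pg_control handling.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
/* Same illustrative bitwise CRC-32C as in the sketch near the top of this page. */
static uint32_t
crc32c(const unsigned char *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++)
    {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & (0U - (crc & 1)));
    }
    return crc ^ 0xFFFFFFFF;
}
/*
 * Read a small state file whose assumed layout is <payload><4-byte CRC-32C>,
 * retrying a few times in case a concurrent writer left us with a torn read.
 */
static bool
read_state_file(const char *path, unsigned char *payload, size_t len)
{
    for (int attempt = 0; attempt < 5; attempt++)
    {
        FILE       *f = fopen(path, "rb");
        uint32_t    stored;
        if (f == NULL)
            return false;
        if (fread(payload, 1, len, f) == len &&
            fread(&stored, sizeof(stored), 1, f) == 1 &&
            crc32c(payload, len) == stored)
        {
            fclose(f);
            return true;        /* got a consistent snapshot */
        }
        fclose(f);
        usleep(10 * 1000);      /* writer may be mid-update; retry shortly */
    }
    return false;               /* persistent mismatch: report it as an error */
}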
stats.sql is failing sporadically in v14- on POWER/aarch64 animals
test stats ... FAILED 469155 ms
...
--- /home/nm/farm/gcc64/REL_14_STABLE/pgsql.build/src/test/regress/expected/stats.out 2022-03-30 01:18:17.000000000 +0000
+++ /home/nm/farm/gcc64/REL_14_STABLE/pgsql.build/src/test/regress/results/stats.out 2024-07-30 09:49:39.000000000 +0000
@@ -165,11 +165,11 @@
WHERE relname like 'trunc_stats_test%' order by relname;
relname | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup
-------------------+-----------+-----------+-----------+------------+------------
- trunc_stats_test | 3 | 0 | 0 | 0 | 0
- trunc_stats_test1 | 4 | 2 | 1 | 1 | 0
- trunc_stats_test2 | 1 | 0 | 0 | 1 | 0
- trunc_stats_test3 | 4 | 0 | 0 | 2 | 2
- trunc_stats_test4 | 2 | 0 | 0 | 0 | 2
+ trunc_stats_test | 0 | 0 | 0 | 0 | 0
+ trunc_stats_test1 | 0 | 0 | 0 | 0 | 0
+ trunc_stats_test2 | 0 | 0 | 0 | 0 | 0
+ trunc_stats_test3 | 0 | 0 | 0 | 0 | 0
+ trunc_stats_test4 | 0 | 0 | 0 | 0 | 0
...
---
inst/logfile
2024-07-30 09:25:11.225 UTC [63307946:1] LOG: using stale statistics instead of current ones because stats collector is not responding
2024-07-30 09:25:11.345 UTC [11206724:559] pg_regress/create_index LOG: using stale statistics instead of current ones because stats collector is not responding
...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-03-29%2005%3A27%3A09 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-03-19%2002%3A09%3A07 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-08-02%2002%3A04%3A10 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chimaera&dt=2023-09-28%2011%3A08%3A08 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chimaera&dt=2024-08-13%2011%3A29%3A27 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-09-19%2003%3A34%3A21 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2024-09-27%2008%3A51%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=blackneck&dt=2024-10-30%2009%3A08%3A05 - REL_12_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-12-20%2005%3A33%3A31 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fritillary&dt=2024-12-22%2003%3A21%3A59 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2024-12-24%2007%3A17%3A19 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-12-31%2009%3A54%3A40 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2025-01-10%2007%3A13%3A55 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2025-02-12%2012%3A33%3A49 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ziege&dt=2025-02-20%2000%3A08%3A10 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turbot&dt=2025-02-20%2010%3A08%3A26 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2025-03-22%2001%3A45%3A17 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2025-05-18%2005%3A27%3A18 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2025-07-24%2005%3A08%3A17 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2025-08-09%2005%3A48%3A14 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2025-10-05%2002%3A03%3A50 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2025-10-28%2005%3A47%3A46 - REL_13_STABLE
Also stats.sql failed on alligator
(alligator is an x86_64 animal)
test stats ... FAILED 31917 ms
========================
1 of 213 tests failed.
========================
diff -U3 /home/postgres/proj/bfgit/buildroot/REL_14_STABLE/pgsql.build/src/test/regress/expected/stats.out /home/postgres/proj/bfgit/buildroot/REL_14_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/stats.out
--- /home/postgres/proj/bfgit/buildroot/REL_14_STABLE/pgsql.build/src/test/regress/expected/stats.out 2025-05-20 01:26:08.101850287 +0930
+++ /home/postgres/proj/bfgit/buildroot/REL_14_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/stats.out 2025-05-20 01:41:21.796748394 +0930
@@ -165,11 +165,11 @@
WHERE relname like 'trunc_stats_test%' order by relname;
relname | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup
-------------------+-----------+-----------+-----------+------------+------------
- trunc_stats_test | 3 | 0 | 0 | 0 | 0
- trunc_stats_test1 | 4 | 2 | 1 | 1 | 0
- trunc_stats_test2 | 1 | 0 | 0 | 1 | 0
- trunc_stats_test3 | 4 | 0 | 0 | 2 | 2
- trunc_stats_test4 | 2 | 0 | 0 | 0 | 2
+ trunc_stats_test | 0 | 0 | 0 | 0 | 0
+ trunc_stats_test1 | 0 | 0 | 0 | 0 | 0
+ trunc_stats_test2 | 0 | 0 | 0 | 0 | 0
+ trunc_stats_test3 | 0 | 0 | 0 | 0 | 0
+ trunc_stats_test4 | 0 | 0 | 0 | 0 | 0
(5 rows)
pgsql.build/src/bin/pg_upgrade/log/postmaster1.log doesn't contain "using stale statistics" messages
The stats.sql test is failing sporadically in v14- on POWER7/AIX 7.1 buildfarm animals
pg_ctl stop/start fails on Windows due to inconsistent check for postmaster.pid existence
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-19%2017%3A32%3A54 - HEAD
...
pg_createsubscriber: stopping the subscriber
2024-08-19 18:02:47.608 UTC [6988:4] LOG: received fast shutdown request
2024-08-19 18:02:47.608 UTC [6988:5] LOG: aborting any active transactions
2024-08-19 18:02:47.612 UTC [5884:2] FATAL: terminating walreceiver process due to administrator command
2024-08-19 18:02:47.705 UTC [7036:1] LOG: shutting down
pg_createsubscriber: server was stopped
...
[18:02:47.900](2.828s) ok 29 - run pg_createsubscriber without --databases
...
pg_createsubscriber: starting the standby with command-line options
pg_createsubscriber: pg_ctl command is: ...
2024-08-19 18:02:48.163 UTC [5284:1] FATAL: could not create lock file "postmaster.pid": File exists
pg_createsubscriber: server was started
pg_createsubscriber: checking settings on subscriber
2024-08-19 18:02:48.484 UTC [6988:6] LOG: database system is shut down
DELETE PENDING strikes back, via pg_ctl stop/start
pg_ctl stop fails on Cygwin due to DELETE PENDING state of postmaster.pid
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-08-22%2009%3A52%3A46 - HEAD
waiting for server to shut down........pg_ctl: could not open PID file "data-C/postmaster.pid": Permission denied
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-11-11%2011%3A26%3A06 - master
DELETE PENDING strikes back, via pg_ctl stop/start \ a lorikeet failure
dblink.sql (and postgres_fdw.sql) fail on Windows because the cancel packet is not sent
40/67 postgresql:dblink-running / dblink-running/regress ERROR 32.97s exit status 1
---
pgsql.build/testrun/dblink-running/regress/regression.diffs
SELECT dblink_cancel_query('dtest1');
- dblink_cancel_query
----------------------
- OK
+ dblink_cancel_query
+--------------------------
+ cancel request timed out
(1 row)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-11%2022%3A42%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-11-27%2018%3A34%3A52 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-02%2007%3A59%3A27 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-12-19%2011%3A00%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-01-06%2004%3A43%3A54 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-02-01%2013%3A35%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-02-03%2011%3A00%3A57 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-02-18%2008%3A01%3A35 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-07%2004%3A31%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-20%2016%3A58%3A57 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-31%2018%3A38%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-05-20%2023%3A08%3A22 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-06-30%2019%3A02%3A51 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-07-13%2013%3A03%3A32 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-07-19%2011%3A01%3A21 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-11-18%2011%3A42%3A27 - REL_17_STABLE
Add non-blocking version of PQcancel \ the dblink test failed on drongo
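For context, the failing statement comes from dblink's asynchronous-query tests; a minimal sketch of the pattern is below (the connection string is illustrative, the connection name 'dtest1' matches the diff):
-- open a named connection and start a long-running query asynchronously
SELECT dblink_connect('dtest1', 'dbname=contrib_regression');
SELECT dblink_send_query('dtest1', 'SELECT pg_sleep(60)');
-- ask the remote side to cancel; on the affected Windows animals the cancel
-- packet never arrives, so this returns "cancel request timed out" instead of "OK"
SELECT dblink_cancel_query('dtest1');
SELECT dblink_disconnect('dtest1');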
timeouts.spec failed because a statement was cancelled for an unexpected reason
257/260 postgresql:isolation / isolation/isolation ERROR 79.90s exit status 1
---
pgsql.build/testrun/isolation/isolation/regression.diffs
--- /home/bf/bf-build/mylodon/REL_16_STABLE/pgsql/src/test/isolation/expected/timeouts.out 2023-06-30 00:57:49.207140401 +0000
+++ /home/bf/bf-build/mylodon/REL_16_STABLE/pgsql.build/testrun/isolation/isolation/results/timeouts.out 2024-08-30 23:06:07.610042527 +0000
@@ -78,4 +78,4 @@
step slto: SET lock_timeout = '10s'; SET statement_timeout = '10ms';
step update: DELETE FROM accounts WHERE accountid = 'checking'; <waiting ...>
step update: <... completed>
-ERROR: canceling statement due to statement timeout
+ERROR: canceling statement due to user request
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2025-02-11%2021%3A54%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-11-10%2021%3A25%3A57 - REL_18_STABLE
Add non-blocking version of PQcancel \ mylodon failed due to reason discussed upthread
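The failing isolation step boils down to the following sequence (a sketch of the permutation shown in the diff above; the expected outcome is a statement-timeout cancel, not a user-requested one):
SET lock_timeout = '10s';
SET statement_timeout = '10ms';
-- this DELETE blocks behind another session's lock and is expected to fail with
-- "canceling statement due to statement timeout"; in the failure above it is
-- instead cancelled with "canceling statement due to user request"
DELETE FROM accounts WHERE accountid = 'checking';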
002_archiving.pl fails because the promote request is not received in time on Windows
(drongo is a Windows animal)
6/289 postgresql:recovery / recovery/002_archiving ERROR 626.63s (exit status 255 or signal 127 SIGinvalid)
---
regress_log_002_archiving
[17:11:11.519](0.001s) ok 3 - recovery_end_command not executed yet
### Promoting node "standby"
# Running: pg_ctl -D C:\\prog\\bf\\root\\REL_17_STABLE\\pgsql.build/testrun/recovery/002_archiving\\data/t_002_archiving_standby_data/pgdata -l C:\\prog\\bf\\root\\REL_17_STABLE\\pgsql.build/testrun/recovery/002_archiving\\log/002_archiving_standby.log promote
waiting for server to promote....................................................................................................................................................................................... stopped waiting
pg_ctl: server did not promote in time
[17:20:06.095](534.576s) Bail out! command "pg_ctl -D C:\\prog\\bf\\root\\REL_17_STABLE\\pgsql.build/testrun/recovery/002_archiving\\data/t_002_archiving_standby_data/pgdata -l C:\\prog\\bf\\root\\REL_17_STABLE\\pgsql.build/testrun/recovery/002_archiving\\log/002_archiving_standby.log promote" exited with value 1
---
002_archiving_standby.log
2024-09-29 17:11:10.319 UTC [6408:3] LOG: recovery restart point at 0/3028BF8
2024-09-29 17:11:10.319 UTC [6408:4] DETAIL: Last completed transaction was at log time 2024-09-29 17:10:57.783965+00.
The system cannot find the file specified.
2024-09-29 17:11:10.719 UTC [7440:5] 002_archiving.pl LOG: disconnection: session time: 0:00:00.488 user=pgrunner database=postgres host=127.0.0.1 port=62549
The system cannot find the file specified.
The system cannot find the file specified.
...
The system cannot find the file specified.
2024-09-29 17:20:08.237 UTC [6684:4] LOG: received immediate shutdown request
The system cannot find the file specified.
...
(there is no "received promote request" message)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-28%2001%3A06%3A00 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-09%2007%3A49%3A28 - REL_16_STABLE
promote request not received timely on slow Windows machines
019_replslot_limit.pl fails due to a walsender stuck sending FATAL to a frozen walreceiver
297/297 postgresql:recovery / recovery/019_replslot_limit ERROR 306.28s exit status 29
---
regress_log_019_replslot_limit
[12:56:34.033](0.228s) ok 19 - walsender termination logged
[13:00:57.133](263.100s) # poll_query_until timed out executing this query:
# SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'
# expecting this output:
# lost
# last actual query output:
# unreserved
# with stderr:
timed out waiting for slot to be lost at /home/bf/bf-build/francolin/REL_17_STABLE/pgsql/src/test/recovery/t/019_replslot_limit.pl line 388.
---
019_replslot_limit_primary3.log
2024-10-03 12:56:34.041 UTC [673987] standby_3 FATAL: terminating connection due to administrator command
2024-10-03 12:56:34.041 UTC [673987] standby_3 STATEMENT: START_REPLICATION SLOT "rep3" 0/800000 TIMELINE 1
2024-10-03 12:56:34.066 UTC [674545] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'
2024-10-03 12:56:34.238 UTC [674628] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'
...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2023-04-05%2017%3A47%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-02-04%2001%3A53%3A44 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2025-02-19%2021%3A45%3A45 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-03-13%2020%3A33%3A25 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-05-18%2009%3A50%3A44 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2025-06-07%2008%3A11%3A55 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2025-07-04%2020%3A43%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2025-08-20%2000%3A52%3A30 - master
027_stream_regress.pl failed on drongo due to walreceiver not reconnecting after primary restart
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-10-14%2010%3A08%3A17 - master
(drongo is a Windows animal)
166/294 postgresql:recovery / recovery/027_stream_regress ERROR 871.81s exit status 25
---
regress_log_027_stream_regress
Waiting for replication conn standby_1's replay_lsn to pass 0/158C8B98 on primary
[10:41:32.115](661.161s) # poll_query_until timed out executing this query:
# SELECT '0/158C8B98' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby_1', 'walreceiver')
# expecting this output:
# t
# last actual query output:
#
---
027_stream_regress_standby_1.log
2024-10-14 10:30:28.483 UTC [4320:12] 027_stream_regress.pl LOG: disconnection: session time: 0:00:03.793 user=pgrunner
database=postgres host=127.0.0.1 port=61748
2024-10-14 10:30:31.442 UTC [8468:2] LOG: replication terminated by primary server
2024-10-14 10:30:31.442 UTC [8468:3] DETAIL: End of WAL reached on timeline 1 at 0/158C8B98.
2024-10-14 10:30:31.442 UTC [8468:4] FATAL: could not send end-of-streaming message to primary: server closed the
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
no COPY in progress
2024-10-14 10:30:31.443 UTC [5452:7] LOG: invalid resource manager ID 101 at 0/158C8B98
2024-10-14 10:35:06.986 UTC [8648:21] LOG: restartpoint starting: time
2024-10-14 10:35:06.991 UTC [8648:22] LOG: restartpoint complete: wrote 0 buffers (0.0%), wrote 1 SLRU buffers; 0 WAL
file(s) added, 0 removed, 1 recycled; write=0.001 s, sync=0.001 s, total=0.005 s; sync files=0, longest=0.000 s,
average=0.000 s; distance=15336 kB, estimate=69375 kB; lsn=0/158C8B20, redo lsn=0/158C8B20
2024-10-14 10:35:06.991 UTC [8648:23] LOG: recovery restart point at 0/158C8B20
2024-10-14 10:35:06.991 UTC [8648:24] DETAIL: Last completed transaction was at log time 2024-10-14 10:30:24.820804+00.
2024-10-14 10:41:32.510 UTC [4220:4] LOG: received immediate shutdown request
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-09-30%2020%3A41%3A16 - REL_17_STABLE
Also 001_rep_changes.pl failed on fairywren due to walreceiver not reconnecting after primary restart
+++ tap check in src/test/subscription +++
# poll_query_until timed out executing this query:
# SELECT '0/1534000' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('tap_sub', 'walreceiver')
# expecting this output:
# t
# last actual query output:
#
# with stderr:
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 25 just after 23.
[16:07:39] t/001_rep_changes.pl ...............
Dubious, test returned 25 (wstat 6400, 0x1900)
---
pgsql.build/src/test/subscription/tmp_check/log/001_rep_changes_publisher.log
2024-11-15 16:00:58.066 UTC [8716:3] 001_rep_changes.pl LOG: statement: DELETE FROM tab_rep
2024-11-15 16:00:58.068 UTC [8716:4] 001_rep_changes.pl LOG: disconnection: session time: 0:00:00.010 user=pgrunner database=postgres host=[local]
2024-11-15 16:00:58.109 UTC [3628:4] LOG: received fast shutdown request
2024-11-15 16:00:58.109 UTC [3628:5] LOG: aborting any active transactions
2024-11-15 16:00:58.121 UTC [3628:6] LOG: background worker "logical replication launcher" (PID 8756) exited with exit code 1
2024-11-15 16:00:58.121 UTC [6480:1] LOG: shutting down
2024-11-15 16:00:58.392 UTC [7740:14] tap_sub LOG: disconnection: session time: 0:00:00.682 user=pgrunner database=postgres host=[local]
2024-11-15 16:00:58.421 UTC [6480:2] LOG: checkpoint starting: shutdown immediate
2024-11-15 16:00:58.477 UTC [6480:3] LOG: checkpoint complete: wrote 9 buffers (7.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.057 s; sync files=0, longest=0.000 s, average=0.000 s; distance=617 kB, estimate=617 kB
2024-11-15 16:00:58.486 UTC [3628:7] LOG: database system is shut down
2024-11-15 16:00:58.741 UTC [8864:1] LOG: starting PostgreSQL 15.9 on x86_64-w64-mingw32, compiled by gcc.exe (Rev3, Built by MSYS2 project) 14.1.0, 64-bit
---
pgsql.build/src/test/subscription/tmp_check/log/001_rep_changes_subscriber.log
2024-11-15 16:00:57.692 UTC [5512:1] LOG: logical replication apply worker for subscription "tap_sub" has started
2024-11-15 16:00:58.394 UTC [5512:2] LOG: data stream from publisher has ended
2024-11-15 16:00:58.394 UTC [5512:3] ERROR: could not send end-of-streaming message to primary: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
no COPY in progress
2024-11-15 16:00:58.405 UTC [4848:9] LOG: background worker "logical replication worker" (PID 5512) exited with exit code 1
2024-11-15 16:00:58.483 UTC [2204:1] LOG: logical replication apply worker for subscription "tap_sub" has started
2024-11-15 16:05:33.567 UTC [5260:1] LOG: checkpoint starting: time
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-04-02%2010%3A00%3A53 - REL_16_STABLE
Also 021_twophase.pl failed on fairywren due to walreceiver not reconnecting after primary restart
[14:23:23.860](1.196s) ok 9 - Rows inserted via 2PC are visible on the subscriber
### Stopping node "publisher" using mode immediate
# Running: pg_ctl -D C:\\tools\\xmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/subscription/021_twophase/data/t_021_twophase_publisher_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "publisher"
### Starting node "publisher"
# Running: pg_ctl -w -D C:\\tools\\xmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/subscription/021_twophase/data/t_021_twophase_publisher_data/pgdata -l C:\\tools\\xmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/subscription/021_twophase/log/021_twophase_publisher.log -o --cluster-name=publisher start
waiting for server to start.... done
server started
# Postmaster PID for node "publisher" is 8896
Waiting for replication conn tap_sub's replay_lsn to pass 0/178D688 on publisher
[14:31:05.104](461.244s) # poll_query_until timed out executing this query:
# SELECT '0/178D688' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('tap_sub', 'walreceiver')
# expecting this output:
# t
# last actual query output:
#
# with stderr:
[14:31:05.241](0.137s) # Last pg_stat_replication contents:
timed out waiting for catchup at C:/tools/xmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/test/subscription/t/021_twophase.pl line 242.
---
pgsql.build/testrun/subscription/021_twophase/log/021_twophase_subscriber.log
2024-12-25 14:23:24.064 UTC [4168:2] ERROR: could not receive data from WAL stream: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
2024-12-25 14:23:24.115 UTC [6164:1] LOG: logical replication apply worker for subscription "tap_sub" has started
2024-12-25 14:23:24.120 UTC [5256:4] LOG: background worker "logical replication apply worker" (PID 4168) exited with exit code 1
2024-12-25 14:28:23.097 UTC [276:4] LOG: checkpoint starting: time
2024-12-25 14:28:23.430 UTC [276:5] LOG: 1 two-phase state file was written for a long-running prepared transaction
2024-12-25 14:28:23.431 UTC [276:6] LOG: checkpoint complete: wrote 3 buffers (0.0%), wrote 1 SLRU buffers; 0 WAL file(s) added, 0 removed, 0 recycled; write=0.326 s, sync=0.001 s, total=0.334 s; sync files=0, longest=0.000 s, average=0.000 s; distance=8 kB, estimate=8 kB; lsn=0/178C418, redo lsn=0/178C3F8
2024-12-25 14:31:05.415 UTC [5256:5] LOG: received immediate shutdown request
Also 038_save_logical_slots_shutdown.pl failed due to subscriber not reconnecting after publisher restart
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-04-12%2003%3A59%3A38 - master
51/321 postgresql:recovery / recovery/038_save_logical_slots_shutdown ERROR 441.14s (exit status 255 or signal 127 SIGinvalid)
---
regress_log_038_save_logical_slots_shutdown
### Restarting node "pub"
# Running: pg_ctl --wait --pgdata C:\\prog\\bf\\root\\HEAD\\pgsql.build/...
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "pub" is 980
timed out waiting for match: (?^:Streaming transactions committing after ([A-F0-9]+/[A-F0-9]+), ...
---
038_save_logical_slots_shutdown_sub.log
2025-04-12 05:08:44.630 UTC [2820:1] LOG: logical replication apply worker for subscription "sub" has started
2025-04-12 05:08:44.642 UTC [5652:6] LOG: background worker "logical replication apply worker" (PID 6344) exited with exit code 1
2025-04-12 05:13:27.352 UTC [3988:1] LOG: checkpoint starting: time
2025-04-12 05:13:36.825 UTC [3988:2] LOG: checkpoint complete: wrote 62 buffers ...
2025-04-12 05:15:01.265 UTC [5652:7] LOG: received immediate shutdown request
2025-04-12 05:15:01.353 UTC [5652:8] LOG: database system is shut down
---
038_save_logical_slots_shutdown_pub.log
2025-04-12 05:08:44.634 UTC [1112:7] LOG: database system is shut down
2025-04-12 05:08:45.685 UTC [980:1] LOG: starting PostgreSQL 18devel on...
2025-04-12 05:08:45.687 UTC [980:2] LOG: listening on IPv4 address "127.0.0.1", port 18057
2025-04-12 05:08:46.225 UTC [4392:1] LOG: database system was shut down at 2025-04-12 05:08:43 UTC
2025-04-12 05:08:46.319 UTC [980:3] LOG: database system is ready to accept connections
2025-04-12 05:15:00.408 UTC [980:4] LOG: received immediate shutdown request
2025-04-12 05:15:00.942 UTC [980:5] LOG: database system is shut down
Also 002_standby.pl and 003_standby_2.pl failed due to standby not reconnecting after restart of primary
273/273 postgresql:commit_ts / commit_ts/003_standby_2 ERROR 713.24s exit status 25
---
pgsql.build/testrun/commit_ts/003_standby_2/log/regress_log_003_standby_2
[19:15:18.976](708.417s) # poll_query_until timed out executing this query:
# SELECT '0/30394D8'::pg_lsn <= pg_last_wal_replay_lsn()
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
standby never caught up at C:/tools/xmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/test/modules/commit_ts/t/003_standby_2.pl line 37.
---
pgsql.build/testrun/commit_ts/003_standby_2/log/003_standby_2_standby.log
2025-04-08 19:03:55.021 UTC [5396:1] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
2025-04-08 19:04:02.070 UTC [5396:2] LOG: replication terminated by primary server
2025-04-08 19:04:02.070 UTC [5396:3] DETAIL: End of WAL reached on timeline 1 at 0/3030B18.
2025-04-08 19:04:02.070 UTC [5396:4] FATAL: could not send end-of-streaming message to primary: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
invalid socket
no COPY in progress
2025-04-08 19:04:02.071 UTC [4144:7] LOG: invalid record length at 0/3030B18: expected at least 24, got 0
2025-04-08 19:04:06.327 UTC [4552:1] [unknown] LOG: connection received: host=[local]
2025-04-08 19:04:06.380 UTC [4552:2] [unknown] LOG: connection authenticated: user="pgrunner" method=trust (C:/tools/xmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/commit_ts/003_standby_2/data/t_003_standby_2_standby_data/pgdata/pg_hba.conf:117)
2025-04-08 19:04:06.380 UTC [4552:3] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=003_standby_2.pl
2025-04-08 19:04:06.468 UTC [4552:4] 003_standby_2.pl LOG: statement: SELECT '0/30394D8'::pg_lsn <= pg_last_wal_replay_lsn()
2025-04-08 19:04:06.540 UTC [4552:5] 003_standby_2.pl LOG: disconnection: session time: 0:00:00.224 user=pgrunner database=postgres host=[local]
...
(there is no other "started streaming WAL from primary" message)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-12-13%2021%3A40%3A38 - master
WaitEventSetWaitBlock() can still hang on Windows due to connection reset
pageinspect/page.sql fails in v14- because the requested freeze does not happen
============== creating database "contrib_regression" ==============
...
test page ... FAILED 401 ms
...
---
pgsql.build/contrib/pageinspect/regression.diffs
--- C:/prog/bf/root/REL_14_STABLE/pgsql.build/contrib/pageinspect/expected/page.out 2024-09-14 14:59:50.899122300 +0000
+++ C:/prog/bf/root/REL_14_STABLE/pgsql.build/contrib/pageinspect/results/page.out 2024-11-09 05:16:52.027703100 +0000
@@ -93,8 +93,8 @@
FROM heap_page_items(get_raw_page('test1', 0)),
LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2);
t_infomask | t_infomask2 | raw_flags | combined_flags
-------------+-------------+-----------------------------------------------------------+--------------------
- 2816 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID} | {HEAP_XMIN_FROZEN}
+------------+-------------+-----------------------------------------+----------------
+ 2304 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID} | {}
(1 row)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2025-02-01%2004%3A57%3A21 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2025-10-14%2003%3A57%3A55 - REL_13_STABLE
Revert "Prevent instability in contrib/pageinspect's regression test."
ssl tests can still fail due to a TCP port conflict
296/305 postgresql:subscription / subscription/100_bugs OK 26.74s 14 subtests passed
▶ 297/305 pg_ctl restart failed ERROR
297/305 postgresql:ssl / ssl/002_scram ERROR 5.96s (exit status 255 or signal 127 SIGinvalid)
...
---
regress_log_002_scram
### Restarting node "primary"
# Running: pg_ctl -w -D /home/bf/bf-build/culicidae/HEAD/pgsql.build/testrun/ssl/002_scram/data/t_002_scram_primary_data/pgdata -l /home/bf/bf-build/culicidae/HEAD/pgsql.build/testrun/ssl/002_scram/log/002_scram_primary.log restart
waiting for server to shut down..... done
server stopped
waiting for server to start.... stopped waiting
pg_ctl: could not start server
Examine the log output.
# pg_ctl restart failed; see logfile for details: /home/bf/bf-build/culicidae/HEAD/pgsql.build/testrun/ssl/002_scram/log/002_scram_primary.log
# No postmaster PID for node "primary"
[20:57:30.208](5.688s) Bail out! pg_ctl restart failed
---
002_scram_primary.log
2024-11-21 20:57:28.783 UTC [4067616][postmaster][:0] LOG: received fast shutdown request
2024-11-21 20:57:28.803 UTC [4067616][postmaster][:0] LOG: aborting any active transactions
2024-11-21 20:57:28.818 UTC [4067616][postmaster][:0] LOG: background worker "logical replication launcher" (PID 4067783) exited with exit code 1
2024-11-21 20:57:28.825 UTC [4067730][checkpointer][:0] LOG: shutting down
2024-11-21 20:57:28.835 UTC [4067730][checkpointer][:0] LOG: checkpoint starting: shutdown immediate
2024-11-21 20:57:30.050 UTC [4067730][checkpointer][:0] LOG: checkpoint complete: wrote 5713 buffers (34.9%), wrote 3 SLRU buffers; 0 WAL file(s) added, 0 removed, 3 recycled; write=0.964 s, sync=0.103 s, total=1.220 s; sync files=1797, longest=0.030 s, average=0.001 s; distance=46011 kB, estimate=46011 kB; lsn=0/4474998, redo lsn=0/4474998
2024-11-21 20:57:30.094 UTC [4067616][postmaster][:0] LOG: database system is shut down
2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] LOG: starting PostgreSQL 18devel on x86_64-linux, compiled by gcc-14.2.0, 64-bit
2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] LOG: could not bind IPv4 address "127.0.0.1": Address already in use
2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] HINT: Is another postmaster already running on port 32301? If not, wait a few seconds and retry.
2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] WARNING: could not create listen socket for "127.0.0.1"
2024-11-21 20:57:30.175 UTC [4070346][postmaster][:0] FATAL: could not create any TCP/IP sockets
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2025-02-19%2004%3A57%3A22 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2025-03-23%2018%3A46%3A11 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-27%2022%3A20%3A42 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2025-06-26%2020%3A21%3A00 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2025-10-28%2017%3A51%3A58 - master
ssl tests fail due to TCP port conflict \ substantially reduce buildfarm failures
Parallel tests publication and subscription might fail due to concurrent tuple update
# parallel group (2 tests): subscription publication
not ok 157 + publication 2251 ms
ok 158 + subscription 415 ms
--- /home/fedora/17-desman/buildroot/REL_16_STABLE/pgsql.build/src/test/regress/expected/publication.out 2024-12-09 18:34:02.939762233 +0000
+++ /home/fedora/17-desman/buildroot/REL_16_STABLE/pgsql.build/src/test/regress/results/publication.out 2024-12-09 18:44:48.582958859 +0000
@@ -1193,23 +1193,29 @@
ERROR: permission denied for database regression
SET ROLE regress_publication_user;
GRANT CREATE ON DATABASE regression TO regress_publication_user2;
+ERROR: tuple concurrently updated
SET ROLE regress_publication_user2;
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub2; -- ok
+ERROR: permission denied for database regression
---
pgsql.build/src/test/regress/log/postmaster.log
2024-12-09 18:44:46.753 UTC [1345157:903] pg_regress/publication STATEMENT: CREATE PUBLICATION testpub2;
2024-12-09 18:44:46.753 UTC [1345158:287] pg_regress/subscription LOG: statement: REVOKE CREATE ON DATABASE REGRESSION FROM regress_subscription_user3;
2024-12-09 18:44:46.754 UTC [1345157:904] pg_regress/publication LOG: statement: SET ROLE regress_publication_user;
2024-12-09 18:44:46.754 UTC [1345157:905] pg_regress/publication LOG: statement: GRANT CREATE ON DATABASE regression TO regress_publication_user2;
2024-12-09 18:44:46.754 UTC [1345157:906] pg_regress/publication ERROR: tuple concurrently updated
2024-12-09 18:44:46.754 UTC [1345157:907] pg_regress/publication STATEMENT: GRANT CREATE ON DATABASE regression TO regress_publication_user2;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-24%2019%3A56%3A37 - master
Parallel tests publication and subscription might fail due to concurrent tuple update
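The collision can be reproduced conceptually with two concurrent sessions updating the same pg_database row (a sketch; role and database names follow the regression tests quoted above):
-- session 1 (publication test)
GRANT CREATE ON DATABASE regression TO regress_publication_user2;
-- session 2 (subscription test), running in the same parallel group
REVOKE CREATE ON DATABASE regression FROM regress_subscription_user3;
-- when both statements modify the same pg_database tuple at the same time,
-- one of them can fail with: ERROR: tuple concurrently updated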
019_replslot_limit.pl might fail due to checkpoint skipped
[12:27:41.437](0.024s) ok 18 - have walreceiver pid 637143
[12:30:42.564](181.127s) not ok 19 - walsender termination logged
[12:30:42.564](0.000s)
[12:30:42.564](0.000s) # Failed test 'walsender termination logged'
# at t/019_replslot_limit.pl line 382.
---
019_replslot_limit_primary3.log:
2024-12-13 12:27:40.912 ACDT [637093:7] LOG: checkpoint starting: wal
...
2024-12-13 12:27:41.461 ACDT [637182:4] 019_replslot_limit.pl LOG: statement: SELECT pg_logical_emit_message(false, '', 'foo');
2024-12-13 12:27:41.462 ACDT [637182:5] 019_replslot_limit.pl LOG: statement: SELECT pg_switch_wal();
2024-12-13 12:27:41.463 ACDT [637182:6] 019_replslot_limit.pl LOG: disconnection: session time: 0:00:00.003 user=postgres database=postgres host=[local]
2024-12-13 12:27:41.668 ACDT [637093:8] LOG: checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 1 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.756 s; sync files=0, longest=0.000 s, average=0.000 s; distance=1024 kB, estimate=1024 kB; lsn=0/900060, redo lsn=0/700028
2024-12-13 12:27:41.668 ACDT [637093:9] LOG: checkpoints are occurring too frequently (1 second apart)
2024-12-13 12:27:41.668 ACDT [637093:10] HINT: Consider increasing the configuration parameter "max_wal_size".
2024-12-13 12:30:42.565 ACDT [637144:10] standby_3 LOG: terminating walsender process due to replication timeout
2024-12-13 12:30:42.565 ACDT [637144:11] standby_3 STATEMENT: START_REPLICATION SLOT "rep3" 0/700000 TIMELINE 1
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-01-30%2021%3A03%3A27 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-04-09%2014%3A26%3A35 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-05-27%2008%3A32%3A34 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-08-04%2019%3A12%3A29 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-08-08%2009%3A42%3A32 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-08-11%2009%3A20%3A49 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-08-15%2009%3A07%3A11 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-09-04%2015%3A41%3A50 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-09-16%2023%3A52%3A38 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-09-16%2014%3A10%3A39 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-09-25%2017%3A31%3A25 - REL_16_STABLE
019_replslot_limit.pl might fail due to checkpoint skipped
tablespace.sql is unstable due to the lack of ORDER BY (in v15-)
# Failed test 'regression tests pass'
# at t/027_stream_regress.pl line 81.
# got: '256'
# expected: '0'
# Looks like you failed 1 test of 8.
[17:36:30] t/027_stream_regress.pl ..............
---
pgsql.build/src/test/recovery/tmp_check/log/regress_log_027_stream_regress
test tablespace ... FAILED 47555 ms
diff -U3 /home/nm/farm/xlc32/REL_15_STABLE/pgsql.build/src/test/regress/expected/tablespace.out /home/nm/farm/xlc32/REL_15_STABLE/pgsql.build/src/test/recovery/tmp_check/results/tablespace.out
--- /home/nm/farm/xlc32/REL_15_STABLE/pgsql.build/src/test/regress/expected/tablespace.out 2024-11-26 05:26:30.000000000 +0000
+++ /home/nm/farm/xlc32/REL_15_STABLE/pgsql.build/src/test/recovery/tmp_check/results/tablespace.out 2024-12-25 17:13:47.000000000 +0000
@@ -334,9 +334,9 @@
where c.reltablespace = t.oid AND c.relname LIKE 'part%_idx';
relname | spcname
-------------+------------------
+ part_a_idx | regress_tblspace
part1_a_idx | regress_tblspace
part2_a_idx | regress_tblspace
- part_a_idx | regress_tblspace
(3 rows)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2025-06-11%2007%3A49%3A10 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2025-06-16%2003%3A49%3A05 - REL_15_STABLE
Unstable regression test "tablespace" / Add ORDER BY to stabilize tablespace test for partitioned index
stats.sql might fail due to shared buffers also used by parallel tests
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-12-17%2008%3A59%3A44 - master
--- C:/prog/bf/root/HEAD/pgsql/src/test/regress/expected/stats.out 2024-09-18 19:31:14.665516500 +0000
+++ C:/prog/bf/root/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/stats.out 2024-12-17 09:57:08.944588500 +0000
@@ -1291,7 +1291,7 @@
SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;
?column?
----------
- t
+ f
(1 row)
---
pgsql.build/testrun/recovery/027_stream_regress/log/027_stream_regress_primary.log
2024-12-17 09:57:06.782 UTC [8568:115] pg_regress/stats LOG: statement: CHECKPOINT;
2024-12-17 09:57:06.794 UTC [3664:40] LOG: checkpoint starting: immediate force wait
2024-12-17 09:57:06.856 UTC [3664:41] LOG: checkpoint complete: wrote 0 buffers (0.0%), wrote 1 SLRU buffers; 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.062 s; sync files=0, longest=0.000 s, average=0.000 s; distance=1875 kB, estimate=52682 kB; lsn=0/14A2F410, redo lsn=0/14A2F3B8
2024-12-17 09:57:06.857 UTC [8568:116] pg_regress/stats LOG: statement: CHECKPOINT;
2024-12-17 09:57:06.857 UTC [3664:42] LOG: checkpoint starting: immediate force wait
2024-12-17 09:57:06.859 UTC [3664:43] LOG: checkpoint complete: wrote 0 buffers (0.0%), wrote 0 SLRU buffers; 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.002 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=47414 kB; lsn=0/14A2F4E0, redo lsn=0/14A2F488
2024-12-17 09:57:06.859 UTC [8568:117] pg_regress/stats LOG: statement: SELECT sum(writes) AS writes, sum(fsyncs) AS fsyncs FROM pg_stat_io WHERE object = 'relation'
2024-12-17 09:57:06.860 UTC [8568:118] pg_regress/stats LOG: statement: SELECT 77693 > 77693;
stats.sql might fail due to shared buffers also used by parallel tests
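The unstable assertion compares pg_stat_io write counters captured before and after an explicit CHECKPOINT; a sketch of the pattern (psql variables as in the excerpt above) is below. One plausible reading of the log is that concurrently running tests, which share the same shared buffers, already caused the dirty buffers to be flushed, so the checkpoint wrote nothing and the counter did not advance (hence "SELECT 77693 > 77693").
-- snapshot before
SELECT sum(writes) AS writes, sum(fsyncs) AS fsyncs
  FROM pg_stat_io WHERE object = 'relation' \gset io_sum_shared_before_
CHECKPOINT;
-- snapshot after; the test expects the write counter to have increased
SELECT sum(writes) AS writes, sum(fsyncs) AS fsyncs
  FROM pg_stat_io WHERE object = 'relation' \gset io_sum_shared_after_
SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;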
test_decoding/slot_creation_error.spec fails due to a timing issue
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-02-06%2001%3A19%3A14 - master
--- C:/prog/bf/root/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out 2023-01-23 04:39:00.502404900 +0000
+++ C:/prog/bf/root/HEAD/pgsql.build/testrun/test_decoding/isolation/results/slot_creation_error.out 2025-02-06 02:43:51.979727000 +0000
@@ -92,23 +92,7 @@
FROM pg_stat_activity
WHERE application_name = 'isolation/slot_creation_error/s2';
<waiting ...>
-step s2_init: <... completed>
-FATAL: terminating connection due to administrator command
-server closed the connection unexpectedly
+PQconsumeInput failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-step s1_terminate_s2: <... completed>
-pg_terminate_backend
---------------------
-t
-(1 row)
-
-step s1_c: COMMIT;
-step s1_view_slot:
- SELECT slot_name, slot_type, active FROM pg_replication_slots WHERE slot_name = 'slot_creation_error'
-
-slot_name|slot_type|active
----------+---------+------
-(0 rows)
-
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-02-13%2015%3A24%3A51 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-12-27%2011%3A25%3A30 - REL_17_STABLE
Improving tracking/processing of buildfarm test failures \ failure of slot_creation_error
postgres_fdw.sql might fail due to autovacuum
--- /home/bf/bf-build/culicidae/HEAD/pgsql/contrib/postgres_fdw/expected/postgres_fdw.out 2025-03-11 15:21:27.681846597 +0000
+++ /home/bf/bf-build/culicidae/HEAD/pgsql.build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out 2025-03-14 04:02:32.573999799 +0000
@@ -6392,6 +6392,7 @@
UPDATE ft2 SET c3 = 'bar' WHERE postgres_fdw_abs(c1) > 2000 RETURNING *;
c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8
------+----+-----+----+----+----+------------+----
+ 2010 | 0 | bar | | | | ft2 |
2001 | 1 | bar | | | | ft2 |
2002 | 2 | bar | | | | ft2 |
2003 | 3 | bar | | | | ft2 |
@@ -6401,7 +6402,6 @@
2007 | 7 | bar | | | | ft2 |
2008 | 8 | bar | | | | ft2 |
2009 | 9 | bar | | | | ft2 |
- 2010 | 0 | bar | | | | ft2 |
(10 rows)
EXPLAIN (verbose, costs off)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2025-03-28%2019%3A50%3A57 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2025-03-26%2014%3A38%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2025-04-13%2018%3A49%3A27 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2025-05-03%2019%3A34%3A58 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2025-07-02%2017%3A02%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2025-07-11%2005%3A56%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2025-10-22%2021%3A49%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-11-04%2021%3A30%3A48 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-11-12%2006%3A20%3A10 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-11-22%2012%3A25%3A01 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-11-25%2003%3A11%3A52 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2025-11-28%2005%3A33%3A59 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-12-04%2017%3A50%3A38 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-12-06%2001%3A12%3A03 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=scorpion&dt=2025-12-09%2015%3A23%3A53 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-12-10%2018%3A41%3A43 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-24%2011%3A54%3A48 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-12-27%2015%3A53%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-12-29%2004%3A58%3A06 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2026-01-14%2008%3A19%3A58 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2026-01-15%2001%3A06%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2026-01-20%2004%3A32%3A29 - master
Regression test postgres_fdw might fail due to autovacuum
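The diff above is purely an ordering change: UPDATE ... RETURNING emits rows in whatever order the executor visits them, and background (auto)vacuum activity that changes physical row placement can reorder the output. A hedged illustration of one order-stable way to write such a check (not necessarily the fix adopted upstream; ft2 and postgres_fdw_abs come from the test itself):
-- collect the RETURNING rows in a CTE and present them deterministically
WITH upd AS (
  UPDATE ft2 SET c3 = 'bar'
   WHERE postgres_fdw_abs(c1) > 2000
  RETURNING *
)
SELECT * FROM upd ORDER BY c1;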
026_overwrite_contrecord.pl might fail on extremely slow animals
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-03-14%2013%3A52%3A09 - master
31/324 postgresql:recovery / recovery/026_overwrite_contrecord ERROR 325.72s (exit status 255 or signal 127 SIGinvalid)
---
# Initializing node "standby" from backup "backup" of node "primary"
### Enabling streaming replication for node "standby"
### Starting node "standby"
# Running: pg_ctl -w -D /home/bf/bf-build/skink-master/HEAD/pgsql.build/testrun/recovery/026_overwrite_contrecord/data/t_026_overwrite_contrecord_standby_data/pgdata -l /home/bf/bf-build/skink-master/HEAD/pgsql.build/testrun/recovery/026_overwrite_contrecord/log/026_overwrite_contrecord_standby.log -o --cluster-name=standby start
waiting for server to start........ stopped waiting
pg_ctl: could not start server
Examine the log output.
# pg_ctl start failed; see logfile for details: /home/bf/bf-build/skink-master/HEAD/pgsql.build/testrun/recovery/026_overwrite_contrecord/log/026_overwrite_contrecord_standby.log
# No postmaster PID for node "standby"
[14:44:39.051](7.936s) Bail out! pg_ctl start failed
---
026_overwrite_contrecord_standby.log:
2025-03-14 14:44:38.533 UTC [1558222][startup][:0] LOG: database system was interrupted; last known up at 2025-03-14 14:44:30 UTC
2025-03-14 14:44:38.806 UTC [1558222][startup][:0] LOG: invalid checkpoint record
2025-03-14 14:44:38.808 UTC [1558222][startup][:0] PANIC: could not locate a valid checkpoint record at 0/2094248
2025-03-14 14:44:38.937 UTC [1555866][postmaster][:0] LOG: startup process (PID 1558222) was terminated by signal 6: Aborted
2025-03-14 14:44:38.937 UTC [1555866][postmaster][:0] LOG: aborting startup due to startup process failure
---
026_overwrite_contrecord_primary.log:
2025-03-14 14:44:30.536 UTC [1553014][client backend][4/2:0] LOG: statement: SELECT pg_walfile_name(pg_current_wal_insert_lsn())
2025-03-14 14:44:30.621 UTC [1519069][checkpointer][:0] LOG: checkpoint complete: wrote 109 buffers (85.2%), wrote 3 SLRU buffers; 0 WAL file(s) added, 0 removed, 0 recycled; write=12.404 s, sync=0.013 s, total=12.574 s; sync files=28, longest=0.004 s, average=0.001 s; distance=8299 kB, estimate=8299 kB; lsn=0/2094248, redo lsn=0/1FA5A48
2025-03-14 14:44:31.119 UTC [1519026][postmaster][:0] LOG: received immediate shutdown request
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-03-09%2020%3A23%3A09 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-04-02%2018%3A11%3A30 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-08-01%2016%3A14%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-08-29%2011%3A32%3A38 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-09-08%2019%3A22%3A26 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-09-09%2003%3A41%3A32 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-09-09%2009%3A57%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-10-03%2017%3A56%3A04 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-10-07%2013%3A32%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-10-08%2008%3A25%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-10-14%2022%3A22%3A32 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-10-22%2022%3A50%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-10-24%2005%3A42%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-10-29%2000%3A17%3A08 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-11-11%2005%3A17%3A13 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-11-22%2017%3A07%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-11-26%2023%3A53%3A45 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-12-22%2022%3A34%3A50 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-12-24%2017%3A01%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-12-31%2000%3A58%3A00 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2026-01-13%2009%3A14%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2026-01-18%2016%3A22%3A47 - REL_17_STABLE
The 026_overwrite_contrecord test might fail on extremely slow animals
002_pg_upgrade.pl failed because a primary key was not restored
140/334 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade ERROR 340.81s exit status 2
---
pgsql.build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade
...
pg_restore: error: could not execute query: ERROR: there is no unique constraint matching given keys for referenced table "pk"
Command was: ALTER TABLE fkpart5.fk
ADD CONSTRAINT fk_a_fkey FOREIGN KEY (a) REFERENCES fkpart5.pk(a);
...
--- /home/bf/bf-build/serinus/HEAD/pgsql.build/testrun/pg_upgrade/002_pg_upgrade/data/tmp_test_KTss/src_dump.sql_adjusted 2025-04-07 17:58:51.304009096 +0000
+++ /home/bf/bf-build/serinus/HEAD/pgsql.build/testrun/pg_upgrade/002_pg_upgrade/data/tmp_test_KTss/dest_dump.sql_adjusted 2025-04-07 17:58:56.544020733 +0000
@@ -455613,11 +455613,6 @@
ALTER TABLE ONLY fkpart4.dropfk
ADD CONSTRAINT dropfk_a_fkey FOREIGN KEY (a) REFERENCES fkpart4.droppk(a);
--
--- Name: fk fk_a_fkey; Type: FK CONSTRAINT; Schema: fkpart5; Owner: bf
---
-ALTER TABLE fkpart5.fk
- ADD CONSTRAINT fk_a_fkey FOREIGN KEY (a) REFERENCES fkpart5.pk(a);
...
pg_dump/restore failure (dependency?) on BF serinus
Instability of pg_walsummary/002_blocks.pl due to timing (after f4694e0f3)
# Failed test 'WAL summarizer generates statistics for WAL reads'
# at /home/bf/bf-build/culicidae/REL_18_STABLE/pgsql/src/bin/pg_walsummary/t/002_blocks.pl line 54.
# got: 'f'
# expected: 't'
# Looks like you failed 1 test of 8.
pgsql.build/testrun/pg_walsummary/002_blocks/log/regress_log_002_blocks
[12:29:12.131](0.351s) ok 1 - WAL summarization caught up after insert
[12:29:12.196](0.065s) not ok 2 - WAL summarizer generates statistics for WAL reads
[12:29:12.198](0.002s) # Failed test 'WAL summarizer generates statistics for WAL reads'
# at /home/bf/bf-build/culicidae/REL_18_STABLE/pgsql/src/bin/pg_walsummary/t/002_blocks.pl line 54.
[12:29:12.198](0.000s) # got: 'f'
# expected: 't'
[12:29:12.267](0.069s) # after insert, summarized through 0/1821510
[12:29:12.507](0.240s) ok 3 - got new WAL summary after update
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2025-04-09%2007%3A36%3A05 - master
Instability of pg_walsummary/002_blocks.pl due to timing
test_shm_mq times out on Hurd animal fruitcrow due to an OS issue
timed out after 3600 secs
---
# +++ regress install-check in src/test/modules/test_shm_mq +++
# using postmaster on /home/demo/build-farm-19.1/buildroot/tmp/buildfarm-GBEDDQ, port 5678
---- hang ----
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-04%2019%3A56%3A32 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-08%2007%3A40%3A39 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-14%2008%3A32%3A45 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-16%2020%3A53%3A55 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-03%2008%3A50%3A53 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-12%2008%3A35%3A31 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-03%2020%3A58%3A54 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-04%2007%3A29%3A13 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-05%2007%3A55%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-08%2019%3A10%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-09%2007%3A10%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-10%2020%3A26%3A33 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-19%2007%3A29%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-25%2020%3A08%3A11 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-26%2020%3A31%3A54 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-26%2021%3A35%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-29%2007%3A10%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-29%2021%3A31%3A37 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-01%2019%3A35%3A36 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-01%2020%3A38%3A59 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-02%2008%3A30%3A51 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-02%2009%3A34%3A34 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-03%2009%3A00%3A48 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-05%2019%3A10%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-09%2020%3A35%3A07 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-10%2019%3A35%3A23 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-14%2009%3A20%3A17 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-14%2010%3A23%3A40 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-16%2021%3A29%3A17 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-17%2021%3A01%3A46 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-19%2021%3A25%3A20 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-19%2008%3A03%3A07 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-23%2019%3A10%3A04 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-23%2021%3A36%3A30 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-22%2009%3A21%3A57 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-22%2021%3A22%3A55 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-23%2022%3A40%3A11 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-26%2010%3A31%3A47 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-26%2020%3A45%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-27%2009%3A30%3A08 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-28%2020%3A45%3A03 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-29%2009%3A04%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-30%2008%3A10%3A06 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-30%2020%3A41%3A03 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-30%2022%3A10%3A57 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-05%2008%3A10%3A05 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-05%2011%3A24%3A41 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-06%2023%3A13%3A16 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-07%2000%3A36%3A32 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-10%2008%3A10%3A02 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-13%2020%3A10%3A49 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-14%2022%3A01%3A58 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-17%2009%3A03%3A58 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-22%2020%3A10%3A04 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-30%2008%3A10%3A03 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-03%2020%3A35%3A16 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-05%2021%3A32%3A26 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-03%2008%3A10%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-09%2008%3A36%3A51 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-10%2020%3A55%3A41 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-11%2009%3A37%3A28 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-11%2011%3A02%3A08 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-16%2008%3A10%3A04 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-17%2008%3A10%3A06 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-16%2009%3A44%3A41 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-16%2010%3A48%3A24 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-17%2010%3A37%3A25 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-15%2022%3A17%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-17%2022%3A19%3A45 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-19%2008%3A36%3A44 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-19%2009%3A41%3A03 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-18%2022%3A07%3A13 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-23%2009%3A31%3A16 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-23%2021%3A57%3A31 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-22%2008%3A10%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-24%2009%3A20%3A43 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-24%2020%3A36%3A19 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-25%2008%3A10%3A04 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-30%2008%3A10%3A04 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-30%2020%3A10%3A06 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-30%2022%3A52%3A38 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-31%2020%3A10%3A03 - REL_18_STABLE
Also pg_walinspect.sql times out on Hurd animal fruitcrow
# +++ regress check in contrib/pg_walinspect +++
# initializing database system by copying initdb template
# using temp instance on port 5678 with PID 1965
---- hang ----
Also check-pg_upgrade timed out on Hurd animal fruitcrow
prepared_xacts ... ok 146 ms
parallel group (20 tests, in groups of 3): gin gist
---- hang ----
Also install-check-C timed out on Hurd animal fruitcrow
ok 201 + xml 1070 ms
# parallel group (12 tests, in groups of 1): partition_join partition_prune reloptions hash_part indexing partition_aggregate partition_info tuplesort explain compression memoize
---- hang ----
Also test_decoding timed out on Hurd animal fruitcrow
# +++ regress check in contrib/test_decoding +++
# initializing database system by copying initdb template
# using temp instance on port 5678 with PID 10438
---- hang ----
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-10%2009%3A04%3A15 - REL_18_STABLE
Also pg_freespacemap timed out on Hurd animal fruitcrow
# +++ regress check in contrib/pg_freespacemap +++
# initializing database system by copying initdb template
# using temp instance on port 5788 with PID 6502
---- hang ----
Also multiple-row-versions.spec timed out on Hurd animal fruitcrow
test two-ids ... ok 1788 ms
test multiple-row-versions ...
---- hang ----
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-29%2020%3A35%3A05 - REL_15_STABLE
GNU/Hurd portability patches \ processes get stuck after poll()
pg_stat_statements/entry_timestamp.sql failed due to a zero time difference on Hurd animal fruitcrow
# +++ regress check in contrib/pg_stat_statements +++
...
not ok 9 - entry_timestamp 21 ms
...
---
pgsql.build/contrib/pg_stat_statements/regression.diffs
diff -U3 /home/demo/client-code-REL_19_1/buildroot/HEAD/pgsql.build/contrib/pg_stat_statements/expected/entry_timestamp.out /home/demo/client-code-REL_19_1/buildroot/HEAD/pgsql.build/contrib/pg_stat_statements/results/entry_timestamp.out
--- /home/demo/client-code-REL_19_1/buildroot/HEAD/pgsql.build/contrib/pg_stat_statements/expected/entry_timestamp.out 2025-10-25 08:45:03.000000000 +0100
+++ /home/demo/client-code-REL_19_1/buildroot/HEAD/pgsql.build/contrib/pg_stat_statements/results/entry_timestamp.out 2025-10-25 08:57:31.000000000 +0100
@@ -147,7 +147,7 @@
WHERE query LIKE '%STMTTS%';
total | minmax_exec_zero | minmax_ts_after_ref | stats_since_after_ref
-------+------------------+---------------------+-----------------------
- 2 | 1 | 2 | 0
+ 2 | 2 | 2 | 0
(1 row)
-- Cleanup
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-30%2011%3A04%3A28 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-05%2022%3A36%3A14 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-12-05%2022%3A49%3A02 - master
GNU/Hurd portability patches \ failure due to zero time difference
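The flipped column, minmax_exec_zero, counts entries whose min/max execution times read as zero; with the coarse timer resolution apparently available on this animal, a very fast statement's measured execution time can itself round to zero, turning the expected 1 into 2. The check is roughly of this shape (a sketch, not the exact test query; column names are those of pg_stat_statements in v17+):
SELECT count(*) AS total,
       count(*) FILTER (WHERE min_exec_time = 0 AND max_exec_time = 0) AS minmax_exec_zero
  FROM pg_stat_statements
 WHERE query LIKE '%STMTTS%';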
Under investigation
004_subscription.pl failed on Windows because pg_upgrade failed to restore the schema
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-10%2019%3A26%3A35 - master
3/310 postgresql:pg_upgrade / pg_upgrade/004_subscription ERROR 78.30s exit status 25
---
pgsql.build/testrun/pg_upgrade/004_subscription/log/regress_log_004_subscription
...
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster *failure*
Consult the last few lines of "C:/prog/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20250310T194018.517/log/pg_upgrade_dump_1.log" for the probable cause of the failure.
Failure, exiting
[19:40:50.723](33.928s) not ok 9 - run of pg_upgrade for old instance when the subscription tables are in init/ready state
[19:40:50.723](0.001s) # Failed test 'run of pg_upgrade for old instance when the subscription tables are in init/ready state'
# at C:/prog/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/004_subscription.pl line 272.
[19:40:50.724](0.001s) not ok 10 - pg_upgrade_output.d/ removed after successful pg_upgrade
[19:40:50.725](0.000s) # Failed test 'pg_upgrade_output.d/ removed after successful pg_upgrade'
# at C:/prog/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/004_subscription.pl line 287.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-12%2000%3A09%3A37 - master
Random pg_upgrade 004_subscription test failure on drongo
006_transfer_modes.pl failed on Windows due to pg_upgrade failing to restore schema
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-04-08%2004%3A18%3A15 - master
| 5/320 - pg_upgrade with transfer mode --copy: stdout matches FAIL 5/320 postgresql:pg_upgrade / pg_upgrade/006_transfer_modes ERROR 213.34s exit status 1 ... # Performing Upgrade ... # Restoring database schemas in the new cluster # *failure* # # Consult the last few lines of "C:/prog/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/006_transfer_modes/data/t_006_transfer_modes_new_data/pgdata/pg_upgrade_output.d/20250408T043310.337/log/pg_upgrade_dump_1.log" for # the probable cause of the failure. # Failure, exiting # ' # doesn't match '(?^:.* not supported on this platform|could not .* between old and new data directories: .*)' # Looks like you failed 1 test of 12. (test program exited with status code 1) ------------------------------------------------------------------------------
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-04-21%2008%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-07-21%2012%3A35%3A58 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-08-22%2000%3A04%3A05 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-12-28%2003%3A43%3A24 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-14%2010%3A52%3A35 - REL_18_STABLE
optimize file transfer in pg_upgrade \ 006_transfer_modes failed during the past month
subscription.sql sporadically fails on hamerkop due to zero time difference
(hamerkop is a Windows animal)
diff --strip-trailing-cr -U3 c:/build-farm-local/buildroot/HEAD/pgsql/src/test/regress/expected/subscription.out c:/build-farm-local/buildroot/HEAD/pgsql.build/testrun/regress/regress/results/subscription.out --- c:/build-farm-local/buildroot/HEAD/pgsql/src/test/regress/expected/subscription.out 2025-06-28 20:13:02 +0900 +++ c:/build-farm-local/buildroot/HEAD/pgsql.build/testrun/regress/regress/results/subscription.out 2025-06-28 20:35:21 +0900 @@ -70,7 +70,7 @@ SELECT :'prev_stats_reset' < stats_reset FROM pg_stat_subscription_stats WHERE subname = 'regress_testsub'; ?column? ---------- - t + f (1 row) -- fail - name already exists
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-07-09%2011%3A02%3A23 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-07-27%2011%3A02%3A25 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-08-09%2011%3A05%3A00 - master
stats.sql might fail ... \ similar buildfarm failure
check-pg_upgrade fails on LLVM-enabled animals due to double free or corruption
...
foreign_data ... FAILED (test process exited with exit code 2) 36124 ms
...
---
pgsql.build/src/bin/pg_upgrade/log/postmaster1.log
2025-09-18 04:31:00.509 CEST [978394][client backend][3/4978:0] LOG: statement: RESET search_path;
2025-09-18 04:31:00.515 CEST [978394][client backend][:0] LOG: disconnection: session time: 0:00:35.755 user=bf database=regression host=[local]
double free or corruption (!prev)
...
2025-09-18 04:31:00.870 CEST [719408][postmaster][:0] LOG: server process (PID 978394) was terminated by signal 6: Aborted
---
stack trace: pgsql.build/src/bin/pg_upgrade/tmp_check/data.old/core
#6 0x00007f6b19eaa56c in _int_free_merge_chunk (av=av@entry=0x7f6b19ff1ac0 <main_arena>, p=p@entry=0xfba29e0, size=272) at ./malloc/malloc.c:4721
#7 0x00007f6b19eaa6c6 in _int_free_chunk (av=av@entry=0x7f6b19ff1ac0 <main_arena>, p=p@entry=0xfba29e0, size=<optimized out>, have_lock=<optimized out>, have_lock@entry=0) at ./malloc/malloc.c:4667
#8 0x00007f6b19ead3c0 in _int_free (av=0x7f6b19ff1ac0 <main_arena>, p=0xfba29e0, have_lock=0) at ./malloc/malloc.c:4699
#9 __GI___libc_free (mem=<optimized out>) at ./malloc/malloc.c:3476
#10 0x00007f6b1a29053c in ?? () from /lib/x86_64-linux-gnu/libgcc_s.so.1
#11 0x00007f6b1a290574 in ?? () from /lib/x86_64-linux-gnu/libgcc_s.so.1
#12 0x00007f6b1b2b7fc2 in _dl_call_fini (closure_map=closure_map@entry=0x7f6b1ae49660) at ./elf/dl-call_fini.c:43
#13 0x00007f6b1b2bae72 in _dl_fini () at ./elf/dl-fini.c:120
#14 0x00007f6b19e4c291 in __run_exit_handlers (status=0, listp=0x7f6b19ff1680 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at ./stdlib/exit.c:118
#15 0x00007f6b19e4c35a in __GI_exit (status=<optimized out>) at ./stdlib/exit.c:148
#16 0x000000000078d80c in proc_exit (code=0) at /home/bf/bf-build/petalura/REL_13_STABLE/pgsql.build/../pgsql/src/backend/storage/ipc/ipc.c:156
#17 0x00000000007b44e1 in PostgresMain (argc=1, argv=<optimized out>, dbname=<optimized out>, username=<optimized out>) at /home/bf/bf-build/petalura/REL_13_STABLE/pgsql.build/../pgsql/src/backend/tcop/postgres.c:4604
#18 0x000000000073498b in BackendRun (port=0xf8562a0) at /home/bf/bf-build/petalura/REL_13_STABLE/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:4561
#19 0x0000000000734337 in BackendStartup (port=<optimized out>) at /home/bf/bf-build/petalura/REL_13_STABLE/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:4245
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2025-09-16%2003%3A29%3A05 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2025-09-27%2002%3A19%3A03 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-09-16%2011%3A09%3A07 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-09-21%2005%3A29%3A42 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-09-16%2016%3A44%3A37 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-09-27%2008%3A11%3A09 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-09-29%2023%3A36%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2025-09-30%2014%3A58%3A40 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-03%2005%3A11%3A12 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-03%2009%3A59%3A38 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-08%2021%3A23%3A08 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-11%2018%3A57%3A42 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-13%2019%3A14%3A40 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-15%2009%3A12%3A36 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-16%2019%3A07%3A52 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-17%2020%3A37%3A34 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2025-10-17%2020%3A01%3A15 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-19%2021%3A57%3A01 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2025-10-21%2008%3A04%3A49 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-21%2010%3A49%3A45 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-10-22%2001%3A03%3A27 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2025-10-23%2020%3A32%3A34 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2025-11-04%2007%3A09%3A46 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-11-04%2010%3A11%3A28 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2025-11-05%2012%3A28%3A58 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-11-05%2012%3A28%3A01 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2025-11-06%2004%3A03%3A37 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2025-11-07%2023%3A49%3A50 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-11-08%2002%3A03%3A23 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-11-11%2002%3A14%3A25 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-11-29%2016%3A59%3A10 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-12-11%2002%3A12%3A37 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2025-12-17%2001%3A34%3A36 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-12-17%2019%3A47%3A52 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-12-23%2012%3A26%3A06 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-12-26%2013%3A09%3A00 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2025-12-31%2002%3A11%3A59 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2026-01-08%2007%3A51%3A54 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2026-01-14%2022%3A29%3A18 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2026-01-15%2016%3A21%3A08 - REL_14_STABLE
Also stage Check failed on LLVM-enabled animals due to memory error on free
2025-10-22 15:21:06.258 CEST [2770320][client backend][:0] LOG: disconnection: session time: 0:00:23.196 user=bf database=regression host=[local] corrupted size vs. prev_size while consolidating ... #4 0x00007fae8ce33291 in __libc_message_impl (fmt=fmt@entry=0x7fae8cfb532d "%s\\n") at ../sysdeps/posix/libc_fatal.c:134 #5 0x00007fae8cea8465 in malloc_printerr (str=str@entry=0x7fae8cfb8748 "corrupted size vs. prev_size while consolidating") at ./malloc/malloc.c:5829 #6 0x00007fae8ceaa594 in _int_free_merge_chunk (av=av@entry=0x7fae8cff1ac0 <main_arena>, p=0x21f8f730, p@entry=0x21f8fe70, size=2128) at ./malloc/malloc.c:4737 #7 0x00007fae8ceaa6c6 in _int_free_chunk (av=av@entry=0x7fae8cff1ac0 <main_arena>, p=p@entry=0x21f8fe70, size=<optimized out>, have_lock=<optimized out>, have_lock@entry=0) at ./malloc/malloc.c:4667 #8 0x00007fae8cead3c0 in _int_free (av=0x7fae8cff1ac0 <main_arena>, p=0x21f8fe70, have_lock=0) at ./malloc/malloc.c:4699 #9 __GI___libc_free (mem=<optimized out>) at ./malloc/malloc.c:3476 #10 0x00007fae8cd9753c in ?? () from /lib/x86_64-linux-gnu/libgcc_s.so.1 #11 0x00007fae8cd97574 in ?? () from /lib/x86_64-linux-gnu/libgcc_s.so.1 #12 0x00007fae8e3a0fc2 in _dl_call_fini (closure_map=closure_map@entry=0x7fae8e042660) at ./elf/dl-call_fini.c:43 #13 0x00007fae8e3a3e72 in _dl_fini () at ./elf/dl-fini.c:120 #14 0x00007fae8ce4c291 in __run_exit_handlers (status=0, listp=0x7fae8cff1680 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at ./stdlib/exit.c:118 #15 0x00007fae8ce4c35a in __GI_exit (status=<optimized out>) at ./stdlib/exit.c:148 #16 0x000000000078d92c in proc_exit (code=0) at /home/bf/bf-build/petalura/REL_13_STABLE/pgsql.build/../pgsql/src/backend/storage/ipc/ipc.c:156 #17 0x00000000007b4601 in PostgresMain (argc=1, argv=<optimized out>, dbname=<optimized out>, username=<optimized out>) at /home/bf/bf-build/petalura/REL_13_STABLE/pgsql.build/../pgsql/src/backend/tcop/postgres.c:4604
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2025-10-21%2015%3A14%3A12 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2025-11-07%2023%3A49%3A50 - REL_13_STABLE
Instability of phycodorus in pg_upgrade tests with JIT
multiple-row-versions.spec fails on Hurd animal due to OS issue
test two-ids ... ok 1046 ms test multiple-row-versions ... FAILED (test process exited with exit code 1) 46823 ms 2025-09-03 08:55:50.071 BST [27009:4] LOG: server process (PID 27147) was terminated by signal 11: Segmentation fault 2025-09-03 08:55:50.071 BST [27009:5] DETAIL: Failed process was running: CREATE TABLE t (id int NOT NULL, txt text) WITH (fillfactor=50); INSERT INTO t (id) SELECT x FROM (SELECT * FROM generate_series(1, 1000000)) a(x); ALTER TABLE t ADD PRIMARY KEY (id);
(the test can also fail with a timeout)
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-15%2019%3A37%3A03 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-18%2007%3A57%3A40 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-29%2020%3A01%3A38 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-26%2008%3A33%3A01 - REL_15_STABLE
GNU/Hurd portability patches \ failures in the isolation tests
Miscellaneous tests fail on Hurd animal fruitcrow due to invalid signal received
TRAP: failed Assert("postgres_signal_arg < PG_NSIG"), File: "pqsignal.c", Line: 91, PID: 27858
postgres(ExceptionalCondition+0x5a) [0x1006667ba]
postgres(+0x6c3242) [0x1006c3242]
/lib/x86_64-gnu/libc.so.0.3(+0x39fee) [0x102b67fee]
/lib/x86_64-gnu/libc.so.0.3(+0x39fdd) [0x102b67fdd]
2025-09-04 22:36:15.309 BST [27428:4] LOG: server process (PID 27858) was terminated by signal 6: Aborted
2025-09-04 22:36:15.309 BST [27428:5] DETAIL: Failed process was running: TRUNCATE hash_cleanup_heap;
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-30%2007%3A28%3A50 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-04%2021%3A58%3A07 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-05%2008%3A10%3A05 - REL_13_STABLE
GNU/Hurd portability patches \ invalid postgres_signal_arg
001_pgbench_with_server.pl fails on greenfly due to lwlock corruption
# Failed test 'concurrent OID generation status (got 2 vs expected 0)'
# at t/001_pgbench_with_server.pl line 31.
...
TRAP: FailedAssertion("!(oldstate & LW_VAL_EXCLUSIVE)", File: "lwlock.c", Line: 1843, PID: 1101973)
postgres: main: gburd postgres [local] CREATE TYPE(ExceptionalCondition+0x72)[0x2ac1fbffb4]
postgres: main: gburd postgres [local] CREATE TYPE(LWLockRelease+0x51e)[0x2ac22dc088]
postgres: main: gburd postgres [local] CREATE TYPE(_bt_first+0x7fa)[0x2ac20350f8]
postgres: main: gburd postgres [local] CREATE TYPE(btgettuple+0xca)[0x2ac20324da]
...
postgres: main: gburd postgres [local] CREATE TYPE(+0x2e4296)[0x2ac21d1296]
/lib/riscv64-linux-gnu/libc.so.6(+0x277cc)[0x3fa8aa77cc]
/lib/riscv64-linux-gnu/libc.so.6(__libc_start_main+0x78)[0x3fa8aa7878]
postgres: main: gburd postgres [local] CREATE TYPE(_start+0x20)[0x2ac1fc0150]
2025-12-05 02:46:12.401 UTC [1101958:4] LOG: server process (PID 1101973) was terminated by signal 6: Aborted
2025-12-05 02:46:12.401 UTC [1101958:5] DETAIL: Failed process was running: CREATE TYPE pg_temp.e AS ENUM ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-03%2018%3A29%3A56 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-09%2002%3A06%3A10 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-03%2018%3A06%3A08 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-09%2011%3A06%3A06 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-12%2001%3A06%3A07 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-16%2018%3A06%3A07 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-17%2005%3A06%3A07 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-17%2018%3A06%3A10 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-19%2004%3A06%3A08 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-23%2007%3A03%3A01 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2025-12-30%2020%3A05%3A40 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2026-01-04%2008%3A06%3A08 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2026-01-04%2014%3A06%3A07 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2026-01-06%2009%3A06%3A07 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=greenfly&dt=2026-01-15%2023%3A06%3A07 - REL_14_STABLE
greenfly lwlock corruption in REL_14_STABLE and REL_15_STABLE
010_index_concurrently_upsert.pl fails due to unexpected error
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2026-01-08%2011%3A55%3A58 - master
364/364 postgresql:test_misc / test_misc/010_index_concurrently_upsert ERROR 22.47s exit status 29
---
regress_log_010_index_concurrently_upsert
[12:59:26.329](0.008s) ok 44 - hit injection point check-exclusion-or-unique-constraint-no-conflict
[12:59:46.906](20.577s) # wait_for_injection_point timeout waiting for: exec-insert-before-insert-speculative
# Current queries in pg_stat_activity:
# pid=1264616, state=, wait_event_type=Activity, wait_event=IoWorkerMain, backend_xmin=, backend_xid=, query=
# pid=1264617, state=, wait_event_type=Activity, wait_event=IoWorkerMain, backend_xmin=, backend_xid=, query=
# pid=1264618, state=, wait_event_type=Activity, wait_event=IoWorkerMain, backend_xmin=, backend_xid=, query=
# pid=1264620, state=, wait_event_type=Activity, wait_event=CheckpointerMain, backend_xmin=, backend_xid=, query=
# pid=1264622, state=, wait_event_type=Activity, wait_event=BgwriterMain, backend_xmin=, backend_xid=, query=
# pid=1264632, state=, wait_event_type=Activity, wait_event=WalWriterMain, backend_xmin=, backend_xid=, query=
# pid=1264634, state=, wait_event_type=Activity, wait_event=AutovacuumMain, backend_xmin=, backend_xid=, query=
# pid=1264636, state=, wait_event_type=Activity, wait_event=LogicalLauncherMain, backend_xmin=, backend_xid=, query=
# pid=1265126, state=active, wait_event_type=InjectionPoint, wait_event=check-exclusion-or-unique-constraint-no-conflict, backend_xmin=835, backend_xid=, query=INSERT INTO test.tblparted VALUES (13, now()) ON CONFLICT (i) DO UPDATE SET updated_at = now();
# pid=1265129, state=idle, wait_event_type=Client, wait_event=ClientRead, backend_xmin=, backend_xid=, query=INSERT INTO test.tblparted VALUES (13, now()) ON CONFLICT (i) DO UPDATE SET updated_at = now();
# pid=1265133, state=active, wait_event_type=Lock, wait_event=virtualxid, backend_xmin=, backend_xid=, query=REINDEX INDEX CONCURRENTLY test.tbl_partition_pkey;
# pid=1271041, state=active, wait_event_type=, wait_event=, backend_xmin=836, backend_xid=, query=SELECT format('pid=%s, state=%s, wait_event_type=%s, wait_event=%s, backend_xmin=%s, backend_xid=%s,
[12:59:46.906](0.000s) not ok 45 - hit injection point exec-insert-before-insert-speculative
[12:59:46.906](0.000s) # Failed test 'hit injection point exec-insert-before-insert-speculative'
# at /home/bf/bf-build/adder/HEAD/pgsql/src/test/modules/test_misc/t/010_index_concurrently_upsert.pl line 832.
error running SQL: 'psql:<stdin>:3: ERROR: could not find injection point exec-insert-before-insert-speculative to wake up'
while running 'psql --no-psqlrc --no-align --tuples-only --quiet --dbname port=11909 host=/tmp/lZBOBqAly7 dbname='postgres' --file - --variable ON_ERROR_STOP=1' with sql '
SELECT injection_points_detach('exec-insert-before-insert-speculative');
SELECT injection_points_wakeup('exec-insert-before-insert-speculative');
' at /home/bf/bf-build/adder/HEAD/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm line 2300.
---
010_index_concurrently_upsert_node.log
2026-01-08 12:59:26.334 CET [1265129][client backend][74/5:0] LOG: statement: INSERT INTO test.tblparted VALUES (13, now()) ON CONFLICT (i) DO UPDATE SET updated_at = now();
2026-01-08 12:59:26.335 CET [1265148][client backend][:0] LOG: disconnection: session time: 0:00:00.002 user=bf database=postgres host=[local]
2026-01-08 12:59:26.336 CET [1265129][client backend][74/5:0] ERROR: invalid arbiter index list
2026-01-08 12:59:26.336 CET [1265129][client backend][74/5:0] STATEMENT: INSERT INTO test.tblparted VALUES (13, now()) ON CONFLICT (i) DO UPDATE SET updated_at = now();
2026-01-08 12:59:26.338 CET [1265150][unrecognized][:0] LOG: connection received: host=[local]
036_sequences.pl fails due to assertion triggered on concurrent sequence drop (after 7a485bd64)
[03:55:23.650](3.449s) ok 9 - REFRESH PUBLICATION will not sync newly published sequence with copy_data as false
timed out waiting for file C:\\tools\\xmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/subscription/036_sequences/log/036_sequences_subscriber.log contents to match: (?^:WARNING: ( [A-Z0-9]+:)? missing sequence on publisher \\("public.regress_s4"\\)) at C:/tools/xmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm line 3539.
# Postmaster PID for node "publisher" is 8584
### Stopping node "publisher" using mode immediate
# Running: pg_ctl --pgdata C:\\tools\\xmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build/testrun/subscription/036_sequences/data/t_036_sequences_publisher_data/pgdata --mode immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "publisher"
# No postmaster PID for node "subscriber"
[04:01:05.090](341.440s) # Tests were run but no plan was declared and done_testing() was not seen.
---
pgsql.build/testrun/subscription/036_sequences/log/036_sequences_subscriber.log
2026-01-19 03:55:26.294 UTC [7256:3] ERROR: logical replication sequence synchronization failed for subscription "regress_seq_sub"
2026-01-19 03:55:26.605 UTC [8412:4] LOG: background worker "logical replication sequencesync worker" (PID 7256) exited with exit code 1
2026-01-19 03:55:26.778 UTC [8868:1] LOG: logical replication sequence synchronization worker for subscription "regress_seq_sub" has started
TRAP: failed Assert("!isnull"), File: "../pgsql/src/backend/replication/logical/sequencesync.c", Line: 255, PID: 8868
2026-01-19 03:55:35.642 UTC [8412:5] LOG: background worker "logical replication sequencesync worker" (PID 8868) was terminated by exception 0xC0000409
2026-01-19 03:55:35.642 UTC [8412:6] HINT: See C include file "ntstatus.h" for a description of the hexadecimal value.
Logical Replication of sequences \ buildfarm failure in fairywren
Fixed Test Failures
001_extension_control_path.pl fails on Windows due to SSPI authentication error (after f3c9e341c)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-02%2008%3A25%3A13 - master
319/347 postgresql:test_extensions / test_extensions/001_extension_control_path ERROR 25.60s exit status 2 --- pgsql.build/testrun/test_extensions/001_extension_control_path/log/regress_log_001_extension_control_path connection error: 'psql: error: connection to server at "127.0.0.1", port 28209 failed: FATAL: SSPI authentication failed for user "user01"'
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-04%2000%3A53%3A20 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-04%2021%3A10%3A45 - master
Allow role created by new test to log in on Windows
Allow role created by new test to log in on Windows
002_worker_terminate.pl failed on prion due to increased log_error_verbosity (after f1e251be8)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-06%2008%3A26%3A47 - master
t/002_worker_terminate.pl (Wstat: 65280 Tests: 0 Failed: 0) Non-zero exit status: 255 Parse errors: No plan found in TAP output Files=2, Tests=8, 183 wallclock secs ( 0.00 usr 0.02 sys + 0.62 cusr 0.43 csys = 1.07 CPU) Result: FAIL --- regress_log_002_worker_terminate [09:26:08.908](0.019s) # initializing database system by copying initdb template ... ### Starting node "mynode" ... server started # Postmaster PID for node "mynode" is 1727021 timed out waiting for match: (?^:LOG: worker_spi dynamic worker 0 initialized with .*\\..*) at t/002_worker_terminate.pl line 32. # Postmaster PID for node "mynode" is 1727021 ### Stopping node "mynode" using mode immediate
Improve portability of new worker_spi test
Improve portability of new worker_spi test
010_index_concurrently_upsert.pl fails against cache-clobbering builds (after e1c971945d)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-07%2022%3A53%3A07 - master
# Looks like your test exited with 29 just after 55.
[23:50:20] t/010_index_concurrently_upsert.pl ..
Dubious, test returned 29 (wstat 7424, 0x1d00)
---
[23:50:20.299](218.243s) ok 55 - s1 hit injection point during attach (CLOBBER_CACHE_ALWAYS)
[23:50:20.313](0.014s) # issuing query 1 via background psql:
# SELECT injection_points_set_local();
# SELECT injection_points_attach('exec-insert-before-insert-speculative', 'wait');
IPC::Run: timeout on timer #29 at /usr/share/perl5/vendor_perl/IPC/Run.pm line 2951.
# Postmaster PID for node "node" is 3925758
### Stopping node "node" using mode immediate
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-08%2000%3A33%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-08%2001%3A33%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-08%2012%3A13%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-08%2016%3A53%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-08%2018%3A03%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-08%2019%3A13%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-08%2022%3A03%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-09%2008%3A13%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-09%2009%3A33%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-09%2011%3A33%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-09%2017%3A58%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-09%2019%3A03%3A30 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-10%2023%3A33%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-11%2021%3A36%3A45 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-12%2007%3A43%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-12%2013%3A33%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-12%2015%3A43%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-12%2016%3A53%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-12%2018%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-12%2019%3A13%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-13%2003%3A13%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2026-01-13%2005%3A43%3A06 - master
Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY \ the new test timed out in prion
Fix test_misc/010_index_concurrently_upsert for cache-clobbering builds
031_recovery_conflict.pl fails due to recovery conflicts during WAIT FOR (after f30848cb0)
32/363 postgresql:recovery / recovery/031_recovery_conflict ERROR 10.13s exit status 29 --- pgsql.build/testrun/recovery/031_recovery_conflict/log/regress_log_031_recovery_conflict [09:55:31.946](0.000s) ok 10 - tablespace conflict: cursor with conflicting temp file established Waiting for replication conn standby's replay_lsn to pass 0/03475618 on primary error running SQL: 'psql:<stdin>:1: ERROR: canceling statement due to conflict with recovery DETAIL: User was or might have been using tablespace that must be dropped.' while running 'psql --no-psqlrc --no-align --tuples-only --quiet --dbname port=16788 host=/tmp/xFqzrUdBJ3 dbname='postgres' --file - --variable ON_ERROR_STOP=1' with sql 'WAIT FOR LSN '0/03475618' WITH (MODE 'standby_replay', timeout '180s', no_throw);' at /home/bf/bf-build/kestrel/HEAD/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm line 2300.
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2026-01-06%2011%3A27%3A36 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2026-01-06%2022%3A12%3A22 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2026-01-07%2002%3A41%3A57 - master
waiting for wal lsn replay: reloaded flapping failures in 031_recovery_conflict
Revert "Use WAIT FOR LSN in PostgreSQL::Test::Cluster::wait_for_catchup()"
oid8.sql fails on animals that use -DSTRESS_SORT_INT_MIN (after b139bd3b6)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2026-01-07%2002%3A56%3A35 - master
not ok 58 + oid8 113 ms --- /Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/regress/expected/oid8.out 2026-01-06 21:56:36 +++ /Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/regress/results/oid8.out 2026-01-06 21:57:06 @@ -345,9 +345,9 @@ -- 3-way compare for btrees SELECT btoid8cmp(1::oid8, 2::oid8); - btoid8cmp ------------ - -1 + btoid8cmp +------------- + -2147483648 (1 row) ...
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2026-01-07%2003%3A38%3A04 - master
pgsql: Add data type oid8, 64-bit unsigned identifier \ sifaka doesn't like this
Improve portability of test with oid8 comparison function
040_pg_createsubscriber.pl fails on Msys animal fairywren due to timeout at the end (after 639352d90)
295/295 postgresql:pg_basebackup / pg_basebackup/040_pg_createsubscriber TIMEOUT 3000.19s exit status 1 --- pgsql.build/testrun/pg_basebackup/040_pg_createsubscriber/log/regress_log_040_pg_createsubscriber # Running: pg_createsubscriber --help [01:18:42.255](0.904s) ok 1 - pg_createsubscriber --help exit code 0 ... # No postmaster PID for node "node_s" [01:20:18.370](0.412s) 1..54
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-10%2019%3A03%3A13 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-11%2003%3A03%3A13 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-11%2021%3A43%3A27 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-12%2011%3A03%3A14 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-12%2017%3A03%3A13 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-12%2022%3A03%3A11 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-13%2007%3A03%3A12 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-13%2012%3A03%3A12 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-14%2002%3A15%3A33 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-14%2019%3A03%3A13 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-15%2001%3A03%3A12 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-15%2011%3A03%3A12 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2026-01-15%2014%3A03%3A13 - master
Fix stability issue with new TAP test of pg_createsubscriber
nbtree_half_dead_pages fails due to injection point nbtree-leave-page-half-dead not reached
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2025-12-31%2003%3A34%3A51 - master
135/362 postgresql:nbtree / nbtree/regress ERROR 763.07s exit status 1
@@ -41,8 +41,6 @@
(1 row)
vacuum nbtree_half_dead_pages;
-ERROR: error triggered for injection point nbtree-leave-page-half-dead
-CONTEXT: while vacuuming index "nbtree_half_dead_pages_id_idx" of relation "public.nbtree_half_dead_pages"
SELECT injection_points_detach('nbtree-leave-page-half-dead');
injection_points_detach
-------------------------
@@ -67,7 +65,6 @@
-- Finish the deletion and re-check
vacuum nbtree_half_dead_pages;
-NOTICE: notice triggered for injection point nbtree-finish-half-dead-page-vacuum
select * from nbtree_half_dead_pages where id > 99998 and id < 120002;
id
--------
Other occurrences:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=scorpion&dt=2026-01-02%2004%3A54%3A38 - master
Bug in amcheck? \ scorpion and skink have failed nbtree_half_dead_pages
Fix rare test failure in nbtree_half_dead_pages
Unsorted/Unhelpful Test Failures
(kingsnake is a ppc64le (POWER9) animal)
not ok 1 - basic_archive 122486 ms
---
pgsql.build/contrib/basic_archive/regression.diffs
--- /home/fedora/17-kingsnake/buildroot/REL_17_STABLE/pgsql.build/contrib/basic_archive/expected/basic_archive.out 2024-08-19 19:18:02.127953655 +0000
+++ /home/fedora/17-kingsnake/buildroot/REL_17_STABLE/pgsql.build/contrib/basic_archive/results/basic_archive.out 2024-08-19 20:08:27.248588589 +0000
@@ -23,7 +23,7 @@
WHERE a ~ '^[0-9A-F]{24}$';
?column?
----------
- t
+ f
(1 row)
---
pgsql.build/contrib/basic_archive/log/postmaster.log
2024-08-19 20:06:25.585 UTC [381940:6] pg_regress/basic_archive LOG: statement: DO $$
DECLARE
archived bool;
loops int := 0;
BEGIN
LOOP
archived := count(*) > 0 FROM pg_ls_dir('.', false, false) a
WHERE a ~ '^[0-9A-F]{24}$';
IF archived OR loops > 120 * 10 THEN EXIT; END IF;
PERFORM pg_sleep(0.1);
loops := loops + 1;
END LOOP;
END
$$;
2024-08-19 20:08:27.252 UTC [381940:7] pg_regress/basic_archive LOG: statement: SELECT count(*) > 0 FROM pg_ls_dir('.', false, false) a
WHERE a ~ '^[0-9A-F]{24}$';
Apparently, the expected archive file (000000010000000000000001?) did not appear in the data directory within the 120-second wait.
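A quicker first check when triaging this kind of failure (an ad-hoc diagnostic, not part of the test) is the archiver statistics view, which shows whether the archiver ran at all and whether it reported failures:
-- pg_stat_archiver is a standard system view; an archived_count of 0 together
-- with a non-zero failed_count would point at the archive module rather than
-- at slow WAL generation.
SELECT archived_count, last_archived_wal, last_archived_time,
       failed_count, last_failed_wal
FROM pg_stat_archiver;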
+++ isolation install-check in src/test/modules/delay_execution +++
============== running regression test queries ==============
test partition-addition ... FAILED 312375 ms
---
inst/logfile
...
2024-08-29 14:31:13.852 UTC [4029501:5] isolation/partition-addition/control connection LOG: statement:
CREATE TABLE foo (a int, b text) PARTITION BY LIST(a);
CREATE TABLE foo1 PARTITION OF foo FOR VALUES IN (1);
CREATE TABLE foo3 PARTITION OF foo FOR VALUES IN (3);
CREATE TABLE foo4 PARTITION OF foo FOR VALUES IN (4);
INSERT INTO foo VALUES (1, 'ABC');
INSERT INTO foo VALUES (3, 'DEF');
INSERT INTO foo VALUES (4, 'GHI');
2024-08-29 14:31:13.859 UTC [4029503:5] isolation/partition-addition/s2 LOG: statement: SELECT pg_advisory_lock(12345);
2024-08-29 14:31:13.859 UTC [4029502:5] isolation/partition-addition/s1 LOG: statement: LOAD 'delay_execution';
SET delay_execution.post_planning_lock_id = 12345;
SELECT * FROM foo WHERE a <> 1 AND a <> (SELECT 3);
2024-08-29 14:31:13.870 UTC [4029501:6] isolation/partition-addition/control connection LOG: execute isolationtester_waiting: SELECT pg_catalog.pg_isolation_test_session_is_blocked($1, '{4029502,4029503}')
2024-08-29 14:31:13.870 UTC [4029501:7] isolation/partition-addition/control connection DETAIL: parameters: $1 = '4029502'
...
2024-08-29 14:36:26.052 UTC [4029501:60550] isolation/partition-addition/control connection LOG: execute isolationtester_waiting: SELECT pg_catalog.pg_isolation_test_session_is_blocked($1, '{4029502,4029503}')
2024-08-29 14:36:26.052 UTC [4029501:60551] isolation/partition-addition/control connection DETAIL: parameters: $1 = '4029502'
2024-08-29 14:36:26.055 UTC [4029502:6] isolation/partition-addition/s1 ERROR: canceling statement due to user request
2024-08-29 14:36:26.055 UTC [4029502:7] isolation/partition-addition/s1 STATEMENT: LOAD 'delay_execution';
SET delay_execution.post_planning_lock_id = 12345;
SELECT * FROM foo WHERE a <> 1 AND a <> (SELECT 3);
(iguana is a ppc64le (POWER9) animal)
Session "s1" was blocked but pg_isolation_test_session_is_blocked() could not determine that, either because pg_blocking_pids() somehow omitted PID 4029503 (can be emulated with "PG_RETURN_BOOL(false);" inserted at the start of pg_isolation_test_session_is_blocked()), or because "s1" was blocked somehow before reaching planner_hook (= delay_execution_planner) (can be emulated with "SELECT pg_sleep(330);" added before "SET delay_execution.post_planning_lock_id = 12345;" in the session "s1" declaration).
Not reproduced. Moreover, this is the only failure of this kind among all TestModulesCheck-C failures recorded (50+).
(wrasse is a sparc64 animal running Solaris 11.3)
make (01:25:46) ... scripts-check (01:38:17) ... [04:33:31] t/020_createdb.pl ......... ok 1326343 ms ( 0.03 usr 0.00 sys + 7.41 cusr 9.33 csys = 16.77 CPU) [04:40:17] t/040_createuser.pl ....... ok 406303 ms ( 0.02 usr 0.00 sys + 3.07 cusr 2.79 csys = 5.88 CPU) # Tests were run but no plan was declared and done_testing() was not seen. # Looks like your test exited with 29 just after 13. [04:47:09] t/050_dropdb.pl ........... Dubious, test returned 29 (wstat 7424, 0x1d00) All 13 subtests passed ... --- pgsql.build/src/bin/scripts/tmp_check/log/regress_log_050_dropdb [04:47:00.052](0.069s) ok 13 - fails with nonexistent database error running SQL: 'psql:<stdin>:2: ERROR: source database "template1" is being accessed by other users DETAIL: There is 1 other session using the database.' while running 'psql -XAtq -d port=13455 host=/home/nm/farm/tmp/gsuqPCSa4L dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql ' CREATE DATABASE regression_invalid; UPDATE pg_database SET datconnlimit = -2 WHERE datname = 'regression_invalid'; --- pgsql.build/src/bin/scripts/tmp_check/log/050_dropdb_main.log 2024-09-03 04:47:00.116 CEST [4558:3] 050_dropdb.pl LOG: statement: CREATE DATABASE regression_invalid; 2024-09-03 04:47:05.118 CEST [4558:4] 050_dropdb.pl ERROR: source database "template1" is being accessed by other users 2024-09-03 04:47:05.118 CEST [4558:5] 050_dropdb.pl DETAIL: There is 1 other session using the database. 2024-09-03 04:47:05.118 CEST [4558:6] 050_dropdb.pl STATEMENT: CREATE DATABASE regression_invalid;
Compare the durations with the next (successful) run:
make (00:02:09) ... scripts-check (00:01:31) ... [20:01:56] t/020_createdb.pl ......... ok 12996 ms ( 0.02 usr 0.00 sys + 5.46 cusr 6.11 csys = 11.59 CPU) [20:02:01] t/040_createuser.pl ....... ok 4888 ms ( 0.01 usr 0.00 sys + 2.47 cusr 2.16 csys = 4.64 CPU) [20:02:06] t/050_dropdb.pl ........... ok 5093 ms ( 0.00 usr 0.00 sys + 2.62 cusr 2.19 csys = 4.81 CPU)
(Perhaps wrasse was extremely slow at the time of the failed test run, so an autovacuum worker that had started in template1 could not exit within 5 seconds.)
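Should this reappear, one way to test that hypothesis would be to capture, at failure time, the sessions still connected to template1 (an ad-hoc diagnostic, not part of the test suite):
-- CREATE DATABASE gives conflicting backends (such as an autovacuum worker)
-- only about 5 seconds to exit before raising the "being accessed by other
-- users" error seen above.
SELECT pid, backend_type, state, backend_start
FROM pg_stat_activity
WHERE datname = 'template1';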
(widowbird is an aarch64 animal)
# +++ isolation install-check in src/test/modules/brin +++ # using postmaster on /mnt/data/buildfarm/buildroot/tmp/buildfarm-JvF6xG, port 5678 ERROR: source database "template0" is being accessed by other users DETAIL: There is 1 other session using the database. ERROR: database "isolation_regression_summarization-and-inprogress-insertion" does not exist --- inst/logfile 2025-10-25 12:46:26.839 UTC [441951:4] pg_regress LOG: statement: CREATE DATABASE "isolation_regression_summarization-and-inprogress-insertion" TEMPLATE=template0 2025-10-25 12:46:27.589 UTC [441922:1] FATAL: terminating autovacuum process due to administrator command 2025-10-25 12:46:32.543 UTC [441951:5] pg_regress ERROR: source database "template0" is being accessed by other users 2025-10-25 12:46:32.543 UTC [441951:6] pg_regress DETAIL: There is 1 other session using the database. 2025-10-25 12:46:32.543 UTC [441951:7] pg_regress STATEMENT: CREATE DATABASE "isolation_regression_summarization-and-inprogress-insertion" TEMPLATE=template0
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-12-14%2005%3A54%3A52 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2025-01-24%2004%3A08%3A24 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2025-04-03%2005%3A00%3A49 - REL_14_STABLE
# poll_query_until timed out executing this query: # # SELECT vacuum_count > 0 # FROM pg_stat_all_tables WHERE relname = 'vac_horizon_floor_table'; 2024-12-14 10:43:37.277 UTC [11534840:9] 043_vacuum_horizon_floor.pl LOG: statement: VACUUM (VERBOSE, FREEZE) vac_horizon_floor_table; ... 2024-12-14 10:43:47.361 UTC [11534840:10] 043_vacuum_horizon_floor.pl LOG: using stale statistics instead of current ones because stats collector is not responding 2024-12-14 10:43:47.361 UTC [11534840:11] 043_vacuum_horizon_floor.pl STATEMENT: VACUUM (VERBOSE, FREEZE) vac_horizon_floor_table; 2024-12-14 10:43:47.362 UTC [11534840:12] 043_vacuum_horizon_floor.pl INFO: aggressively vacuuming "public.vac_horizon_floor_table" ... 2024-12-14 10:43:49.296 UTC [11534840:25] 043_vacuum_horizon_floor.pl INFO: index "vac_horizon_floor_table_col1_idx" now contains 3 row versions in 551 pages 2024-12-14 10:43:49.296 UTC [11534840:26] 043_vacuum_horizon_floor.pl DETAIL: 200000 index row versions were removed. 544 index pages were newly deleted. 544 index pages are currently deleted, of which 0 are currently reusable. CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s. 2024-12-14 10:43:49.296 UTC [11534840:27] 043_vacuum_horizon_floor.pl CONTEXT: while cleaning up index "vac_horizon_floor_table_col1_idx" of relation "public.vac_horizon_floor_table" 2024-12-14 10:43:49.296 UTC [11534840:28] 043_vacuum_horizon_floor.pl INFO: table "vac_horizon_floor_table": found 199559 removable, 3 nonremovable row versions in 885 out of 885 pages 2024-12-14 10:43:49.296 UTC [11534840:29] 043_vacuum_horizon_floor.pl DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 741 Skipped 0 pages due to buffer pins, 0 frozen pages. CPU: user: 0.09 s, system: 0.03 s, elapsed: 1.93 s. 2024-12-14 10:43:49.296 UTC [11534840:30] 043_vacuum_horizon_floor.pl CONTEXT: while scanning relation "public.vac_horizon_floor_table" ...
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2025-01-06%2010%3A02%3A36 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2025-02-24%2022%3A05%3A28 - REL_16_STABLE
--- /home/bf/bf-build/francolin/REL_17_STABLE/pgsql/src/pl/plpgsql/src/expected/plpgsql_cache.out 2024-09-25 13:23:23.310891515 +0000 +++ /home/bf/bf-build/francolin/REL_17_STABLE/pgsql.build/testrun/plpgsql-running/regress/results/plpgsql_cache.out 2025-01-06 10:17:15.939118906 +0000 @@ -23,8 +23,11 @@ -- currently, this fails due to cached plan for "r.f1 + 1" expression -- (but if debug_discard_caches is on, it will succeed) select c_sillyaddone(42); -ERROR: type of parameter 4 (double precision) does not match that when preparing the plan (integer) -CONTEXT: PL/pgSQL function c_sillyaddone(integer) line 1 at RETURN + c_sillyaddone +--------------- + 43 +(1 row) +
--- /home/bf/proj/bf/build-farm-17/REL_16_STABLE/pgsql.build/src/test/regress/expected/partition_prune.out 2025-01-07 22:00:03.731034208 +0000
+++ /home/bf/proj/bf/build-farm-17/REL_16_STABLE/pgsql.build/src/test/regress/results/partition_prune.out 2025-01-07 22:00:50.219782536 +0000
@@ -2438,8 +2438,8 @@
Index Cond: (a = a.a)
-> Index Scan using ab_a2_b3_a_idx on ab_a2_b3 ab_6 (never executed)
Index Cond: (a = a.a)
- -> Index Scan using ab_a3_b1_a_idx on ab_a3_b1 ab_7 (never executed)
- Index Cond: (a = a.a)
+ -> Seq Scan on ab_a3_b1 ab_7 (never executed)
+ Filter: (a = a.a)
...
-> Index Scan using ab_a3_b3_a_idx on ab_a3_b3 ab_9 (never executed)
@@ -2629,11 +2629,8 @@
Filter: (b = $1)
-> Bitmap Index Scan on ab_a2_b3_a_idx (never executed)
Index Cond: (a = $0)
- -> Bitmap Heap Scan on ab_a3_b1 ab_7 (never executed)
- Recheck Cond: (a = $0)
- Filter: (b = $1)
- -> Bitmap Index Scan on ab_a3_b1_a_idx (never executed)
- Index Cond: (a = $0)
+ -> Seq Scan on ab_a3_b1 ab_7 (never executed)
+ Filter: ((a = $0) AND (b = $1))
-> Bitmap Heap Scan on ab_a3_b2 ab_8 (actual rows=0 loops=1)
Recheck Cond: (a = $0)
Filter: (b = $1)
@@ -2644,7 +2641,7 @@
Filter: (b = $1)
-> Bitmap Index Scan on ab_a3_b3_a_idx (never executed)
Index Cond: (a = $0)
-(52 rows)
+(49 rows)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=leafhopper&dt=2025-01-11%2016%3A49%3A04 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=leafhopper&dt=2025-01-14%2013%3A16%3A03 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2025-03-25%2019%3A00%3A06 - REL_16_STABLE
=== dumping /home/bf/proj/bf/build-farm-17/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/regression.diffs ===
diff -U3 /home/bf/proj/bf/build-farm-17/REL_15_STABLE/pgsql.build/src/test/regress/expected/partition_prune.out /home/bf/proj/bf/build-farm-17/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/results/partition_prune.out
--- /home/bf/proj/bf/build-farm-17/REL_15_STABLE/pgsql.build/src/test/regress/expected/partition_prune.out 2025-01-11 16:49:03.976529417 +0000
+++ /home/bf/proj/bf/build-farm-17/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/results/partition_prune.out 2025-01-11 16:51:52.701998985 +0000
@@ -2440,8 +2440,8 @@
Index Cond: (a = a.a)
-> Index Scan using ab_a3_b1_a_idx on ab_a3_b1 ab_7 (never executed)
Index Cond: (a = a.a)
- -> Index Scan using ab_a3_b2_a_idx on ab_a3_b2 ab_8 (never executed)
- Index Cond: (a = a.a)
+ -> Seq Scan on ab_a3_b2 ab_8 (never executed)
+ Filter: (a.a = a)
-> Index Scan using ab_a3_b3_a_idx on ab_a3_b3 ab_9 (never executed)
Index Cond: (a = a.a)
(27 rows)
...
tuplesort ... FAILED 592 ms
diff -U3 /home/bf/proj/bf/build-farm-17/REL_13_STABLE/pgsql.build/src/test/regress/expected/tuplesort.out /home/bf/proj/bf/build-farm-17/REL_13_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/tuplesort.out
--- /home/bf/proj/bf/build-farm-17/REL_13_STABLE/pgsql.build/src/test/regress/expected/tuplesort.out 2025-05-28 00:01:05.094607646 +0000
+++ /home/bf/proj/bf/build-farm-17/REL_13_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/tuplesort.out 2025-05-28 00:02:17.222477543 +0000
@@ -587,7 +587,7 @@
SELECT NULL, NULL, NULL, NULL, NULL) s;
array_agg | array_agg | array_agg | percentile_disc | percentile_disc | percentile_disc | percentile_disc | rank
--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+-----------------+-----------------+--------------------------------------+-----------------+------
- {NULL,20010,20009,20008,20007} | {00000000-0000-0000-0000-000000020000,00000000-0000-0000-0000-000000020000,00000000-0000-0000-0000-000000019999,00000000-0000-0000-0000-000000019998,00000000-0000-0000-0000-000000019997} | {9999,9998,9997,9996,9995} | 19810 | 200 | 00000000-0000-0000-0000-000000016003 | 136 | 2
+ {NULL,20010,20009,20008,20007} | {00000000-0000-0000-0000-000000020000,00000000-0000-0000-0000-000000020000,00000000-0000-0000-0000-000000019999,00000000-0000-0000-0000-000000019998,00000000-0000-0000-0000-000000019997} | {9999,9998,9997,9996,9995} | 19810 | 200 | 00000000-0000-0000-0000-000000019804 | 136 | 2
(1 row)
ROLLBACK;
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-02-08%2015%3A09%3A04 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-02-15%2003%3A15%3A23 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-15%2015%3A12%3A33 - REL_14_STABLE
Stage install-check-English_United-States.1252 test tablespace ... FAILED 3740 ms --- C:/prog/bf/root/REL_14_STABLE/pgsql.build/src/test/regress/expected/tablespace.out 2025-02-08 15:57:43.889887000 +0000 +++ C:/prog/bf/root/REL_14_STABLE/pgsql.build/src/test/regress/results/tablespace.out 2025-02-08 15:57:49.333051100 +0000 @@ -929,6 +929,7 @@ NOTICE: no matching relations in tablespace "regress_tblspace_renamed" found -- Should succeed DROP TABLESPACE regress_tblspace_renamed; +ERROR: tablespace "regress_tblspace_renamed" is not empty
(Not reproduced locally under seemingly the same conditions.)
--- /mnt/data/buildfarm/buildroot/REL_16_STABLE/pgsql.build/src/test/regress/expected/numeric.out 2025-02-16 02:18:52.820119000 +0000 +++ /mnt/data/buildfarm/buildroot/REL_16_STABLE/pgsql.build/src/test/regress/results/numeric.out 2025-02-16 02:19:57.772058000 +0000 @@ -3583,9 +3583,9 @@ SET LOCAL parallel_setup_cost = 0; SET LOCAL max_parallel_workers_per_gather = 4; SELECT variance(a) FROM num_variance; - variance --------------------- - 2.5000000000000000 + variance +---------- + 0 (1 row)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2025-02-23%2006%3A54%3A54 - master
--- /repos/client-code-REL_18/HEAD/pgsql.build/src/test/isolation/expected/stats_1.out 2025-02-23 03:55:35.961552179 -0300 +++ /repos/client-code-REL_18/HEAD/pgsql.build/src/test/isolation/output_iso/results/stats.out 2025-02-23 04:21:35.039876561 -0300 @@ -1688,7 +1688,7 @@ name |pg_stat_get_function_calls|total_above_zero|self_above_zero ---------------+--------------------------+----------------+--------------- -test_stat_func2| 1|t |t +test_stat_func2| 1|f |f (1 row)
(Not reproduced locally on Fedora 43 VM.)
# Failed test 'check replication statistics are updated' # at t/001_repl_stats.pl line 81. # got: 'regression_slot1|f|f # regression_slot2|f|f # regression_slot3|f|f' # expected: 'regression_slot1|t|t # regression_slot2|t|t # regression_slot3|t|t' # Looks like you failed 1 test of 2. [10:41:59] t/001_repl_stats.pl .. Dubious, test returned 1 (wstat 256, 0x100)
38/267 postgresql:recovery / recovery/043_vacuum_horizon_floor ERROR 335.83s exit status 29 --- pgsql.build/testrun/recovery/043_vacuum_horizon_floor/log/regress_log_043_vacuum_horizon_floor [16:59:38.183](0.003s) ok 3 - Cursor query returned 1 from second fetch. Expected value 1. IPC::Run: timeout on timer #2 at /usr/share/perl5/IPC/Run.pm line 3025.
The test's performance was improved by 571e0ee40 (this timeout can be reproduced more easily on a slow machine at 571e0ee40~1), but evidently the test may still fail.
2025-03-12 16:10:18.791 EDT [3272767:2] [unknown] FATAL: could not open shared memory segment "/PostgreSQL.2964382536": No such file or directory 2025-03-12 16:10:18.802 EDT [3272623:1141] pg_regress/without_overlaps WARNING: could not remove shared memory segment "/PostgreSQL.2193469640": No such file or directory
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-03-12%2012%3A48%3A39 - master
*failure* Consult the last few lines of "C:/prog/bf/root/upgrade.drongo/HEAD/inst/REL_17_STABLE-upgrade/pg_upgrade_output.d/20250312T153830.460/log/pg_upgrade_server_start.log" or "pg_upgrade_server.log" for the probable cause of the failure. connection to server at "localhost" (::1), port 5978 failed: FATAL: role "buildfarm" does not exist
227/305 postgresql:recovery / recovery/027_stream_regress ERROR 3364.50s exit status 1 [01:19:39.781](0.108s) not ok 9 - check contents of pg_stat_statements on regression database [01:19:39.781](0.000s) # Failed test 'check contents of pg_stat_statements on regression database' # at /home/bf/bf-build/skink/REL_17_STABLE/pgsql/src/test/recovery/t/027_stream_regress.pl line 173. [01:19:39.781](0.000s) # got: 'CREATE|f # SELECT|t' # expected: 'CREATE|t # DELETE|t # INSERT|t # SELECT|t # UPDATE|t'
t/035_standby_logical_decoding.pl (Wstat: 139 Tests: 78 Failed: 0)
Non-zero wait status: 139
Parse errors: No plan found in TAP output
Files=43, Tests=584, 746 wallclock secs ( 0.23 usr 0.29 sys + 31.09 cusr 97.03 csys = 128.64 CPU)
Result: FAIL
---
pgsql.build/src/test/recovery/tmp_check/log/regress_log_035_standby_logical_decoding
[14:47:39.973](0.022s) ok 78 - otherslot on standby not dropped
### Reloading node "standby"
# Running: pg_ctl --pgdata /Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/recovery/tmp_check/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
psql:<stdin>:1: WARNING: databases created by regression test cases should have names including "regression"
=== EOF ===
---
pgsql.build/src/test/recovery/tmp_check/log/035_standby_logical_decoding_standby.log
2025-03-28 14:47:39.987 EDT [86054:1] [unknown] LOG: connection received: host=[local]
2025-03-28 14:47:39.988 EDT [86054:2] [unknown] LOG: connection authenticated: user="buildfarm" method=trust (/Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/recovery/tmp_check/t_035_standby_logical_decoding_standby_data/pgdata/pg_hba.conf:117)
2025-03-28 14:47:39.988 EDT [86054:3] [unknown] LOG: connection authorized: user=buildfarm database=postgres application_name=035_standby_logical_decoding.pl
2025-03-28 14:47:39.989 EDT [86054:4] 035_standby_logical_decoding.pl LOG: statement: SELECT pg_drop_replication_slot('otherslot')
2025-03-28 14:47:39.991 EDT [86054:5] 035_standby_logical_decoding.pl LOG: disconnection: session time: 0:00:00.003 user=buildfarm database=postgres host=[local]
2025-03-28 14:47:39.998 EDT [85835:8] LOG: received SIGHUP, reloading configuration files
=== EOF ===
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-03-30%2013%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-05-08%2010%3A03%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2025-10-07%2004%3A03%3A07 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-12-14%2021%3A42%3A47 - master
2/270 postgresql:pg_upgrade / pg_upgrade/005_char_signedness ERROR 30.79s exit status 1 --- pgsql.build/testrun/pg_upgrade/005_char_signedness/log/regress_log_005_char_signedness Restoring global objects in the new cluster ok Restoring database schemas in the new cluster *failure* Consult the last few lines of "C:/tools/xmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/005_char_signedness/data/t_005_char_signedness_new_data/pgdata/pg_upgrade_output.d/20250330T130832.527/log/pg_upgrade_dump_1.log" for the probable cause of the failure. Failure, exiting [13:08:52.913](21.073s) not ok 13 - run of pg_upgrade (there is no pg_upgrade_dump_1.log stored in the failure logs)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2025-04-05%2004%3A54%3A28 - master
pgsql.build/src/bin/pg_dump/tmp_check/log/006_pg_dumpall_target_format_tar.log 2025-04-05 02:31:39.503 EDT [2357:1] FATAL: could not create semaphores: No space left on device
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-04-01%2000%3A44%3A56 - master
SUCCESS: The process with PID 3388 (child process of PID 3416) has been terminated. SUCCESS: The process with PID 3416 (child process of PID 8076) has been terminated. SUCCESS: The process with PID 8076 (child process of PID 1636) has been terminated. postgresql:recovery / recovery/009_twophase time out (After 3000.0 seconds) 317/317 postgresql:recovery / recovery/009_twophase TIMEOUT 3000.17s exit status 1 [01:04:20.506](1.224s) ok 17 - Replay prepared transaction with DDL ### Stopping node "paris" using mode immediate # Running: pg_ctl --pgdata C:\\prog\\bf\\root\\HEAD\\pgsql.build/testrun/recovery/009_twophase\\data/t_009_twophase_paris_data/pgdata --mode immediate stop waiting for server to shut down.... done server stopped # No postmaster PID for node "paris" ### Starting node "paris" # Running: pg_ctl --wait --pgdata C:\\prog\\bf\\root\\HEAD\\pgsql.build/testrun/recovery/009_twophase\\data/t_009_twophase_paris_data/pgdata --log C:\\prog\\bf\\root\\HEAD\\pgsql.build/testrun/recovery/009_twophase\\log/009_twophase_paris.log --options --cluster-name=paris start waiting for server to start.... done server started # Postmaster PID for node "paris" is 7360 2025-04-01 01:04:21.852 UTC [1692:3] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=009_twophase.pl 2025-04-01 01:04:21.873 UTC [1692:4] 009_twophase.pl LOG: statement: COMMIT PREPARED 'xact_009_14' 2025-04-01 01:09:21.513 UTC [6348:4] LOG: checkpoint starting: time 2025-04-01 01:09:21.775 UTC [6348:5] LOG: checkpoint complete: wrote 2 buffers (1.6%), wrote 1 SLRU buffers; 0 WAL file(s) added, 0 removed, 0 recycled; write=0.258 s, sync=0.001 s, total=0.262 s; sync files=0, longest=0.000 s, average=0.000 s; distance=13 kB, estimate=150 kB; lsn=0/3096A78, redo lsn=0/30969A8
parallel group (18 tests): conversion prepare returning limit plancache copy2 temp polymorphism sequence with rowtypes largeobject truncate rangefuncs domain xml
make: *** wait: No child processes. Stop.
make: *** Waiting for unfinished jobs....
make: *** wait: No child processes. Stop.
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2025-05-29%2004%3A01%3A02 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2025-05-29%2005%3A03%3A27 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2025-05-29%2006%3A15%3A39 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2025-05-29%2007%3A35%3A08 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2025-05-29%2009%3A02%3A54 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2025-05-29%2010%3A30%3A54 - master
/home/pgbf/buildroot/saves.copperhead/REL_12_STABLE/bin/postgres: error while loading shared libraries: libldap-2.5.so.0: cannot open shared object file: No such file or directory
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desman&dt=2025-06-03%2018%3A13%3A30 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-07-28%2013%3A05%3A41 - REL_13_STABLE
Stage stopdb-C-1
waiting for server to shut down........................................................................................................................... failed
pg_ctl: server does not shut down
(cisticola is a loongarch64 animal)
diff -U3 /home/postgres/buildfarm/HEAD/pgsql.build/src/test/regress/expected/jsonb_jsonpath.out /home/postgres/buildfarm/HEAD/pgsql.build/src/test/regress/results/jsonb_jsonpath.out
--- /home/postgres/buildfarm/HEAD/pgsql.build/src/test/regress/expected/jsonb_jsonpath.out 2025-06-30 12:20:06.178757862 +0800
+++ /home/postgres/buildfarm/HEAD/pgsql.build/src/test/regress/results/jsonb_jsonpath.out 2025-06-30 12:22:39.798581232 +0800
@@ -1704,20 +1704,20 @@
(1 row)
select jsonb_path_query('null', '$.datetime()');
-ERROR: jsonpath item method .datetime() can only be applied to a string
+ERROR: jsonpath item method .dateti}e() can only be applied to a string
select jsonb_path_query('true', '$.datetime()');
-ERROR: jsonpath item method .datetime() can only be applied to a string
+ERROR: jsonpath item method .dateti}e() can only be applied to a string
select jsonb_path_query('1', '$.datetime()');
-ERROR: jsonpath item method .datetime() can only be applied to a string
+ERROR: jsonpath item method .dateti}e() can only be applied to a string
select jsonb_path_query('[]', '$.datetime()');
...
2025-07-01 06:24:45.395 CST [1721532:144] LOG: client backend (PID 1727138) was terminated by signal 11: Segmentation fault
2025-07-01 06:24:45.395 CST [1721532:145] DETAIL: Failed process was running: create view tt27v as select a from tt27v_tbl;
============== running sepgsql regression tests ==============
# using postmaster on /tmp/buildfarm-fGhjjT, default port
ok 1 - label 1246 ms
ok 2 - dml 2664 ms
not ok 3 - ddl 881 ms
ok 4 - alter 962 ms
ok 5 - misc 189 ms
ok 6 - truncate 105 ms
1..6
# 1 of 6 tests failed.
diff -U3 /opt/src/pgsql-git/build-farm-root/REL_18_STABLE/pgsql.build/contrib/sepgsql/expected/ddl.out /opt/src/pgsql-git/build-farm-root/REL_18_STABLE/pgsql.build/contrib/sepgsql/results/ddl.out
--- /opt/src/pgsql-git/build-farm-root/REL_18_STABLE/pgsql.build/contrib/sepgsql/expected/ddl.out 2025-06-30 08:52:08.658727528 -0700
+++ /opt/src/pgsql-git/build-farm-root/REL_18_STABLE/pgsql.build/contrib/sepgsql/results/ddl.out 2025-06-30 09:01:30.691625479 -0700
@@ -304,6 +304,8 @@
LOG: SELinux: allowed { search } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_schema_t:s0 tclass=db_schema name="regtest_schema" permissive=0
LOG: SELinux: allowed { search } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=system_u:object_r:sepgsql_schema_t:s0 tclass=db_schema name="public" permissive=0
LOG: SELinux: allowed { search } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=system_u:object_r:sepgsql_schema_t:s0 tclass=db_schema name="pg_catalog" permissive=0
+LINE 1: ALTER TABLE regtest_table_4 ALTER COLUMN y TYPE float;
+ ^
LOG: SELinux: allowed { search } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=system_u:object_r:sepgsql_schema_t:s0 tclass=db_schema name="pg_catalog" permissive=0
LOG: SELinux: allowed { setattr } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name="regtest_schema.regtest_table_4.y" permissive=0
LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.float8(integer)" permissive=0
...
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-07-18%2021%3A47%3A01 - master
/home/andrew/bf/root/saves.crake/REL_18_STABLE/bin/postgres: error while loading shared libraries: libicuuc.so.74: cannot open shared object file: No such file or directory
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-07-18%2023%3A32%3A01 - master
/home/andrew/bf/root/saves.crake/REL_18_STABLE/bin/postgres: symbol lookup error: /home/andrew/bf/root/saves.crake/REL_18_STABLE/bin/postgres: undefined symbol: u_strToLower_74
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2025-08-05%2005%3A52%3A51 - master
2025-08-05 04:34:09.577 EDT [29432:4] 008_fsm_truncation.pl ERROR: syntax error at or near "select" at character 21
2025-08-05 04:34:09.577 EDT [29432:5] 008_fsm_truncation.pl STATEMENT: insert into testtab select generate_series(1,1000), 'foo';
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-08%2011%3A53%3A16 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-08%2011%3A54%3A15 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-08%2011%3A55%3A08 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-08%2011%3A56%3A10 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-08%2011%3A57%3A09 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-08%2011%3A58%3A08 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-08%2011%3A59%3A08 - master
timed out after 21600 secs
[07:48:04.931](0.000s) ok 77 - pg_recvlogical exited non-zero
[07:48:04.931](0.000s) not ok 78 - slot has been invalidated
[07:48:04.931](0.000s) # Failed test 'slot has been invalidated'
# at /home/bf/bf-build/kestrel/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 118.
[07:48:04.931](0.000s) # 'pg_recvlogical: error: could not send replication command "START_REPLICATION SLOT "drop_db_activeslot" LOGICAL 0/00000000 ("include-xids" '0', "skip-empty-xacts" '1')": server closed the connection unexpectedly
# This probably means the server terminated abnormally
# before or while processing the request.
# pg_recvlogical: error: disconnected
# '
# doesn't match '(?^:conflict with recovery)'
[07:48:04.940](0.009s) ok 79 - otherslot on standby not dropped
pgsql.build/testrun/regress/regress/log/postmaster.log
...
2025-08-11 18:56:24.131 UTC client backend[6048] pg_regress/tablespace STATEMENT: REINDEX (TABLESPACE regress_tblspace, CONCURRENTLY) TABLE tablespace_table;
2025-08-11 18:56:24.150 UTC checkpointer[8808] LOG: checkpoint starting: immediate force wait
2025-08-11 18:56:24.155 UTC checkpointer[8808] LOG: checkpoint complete: wrote 46 buffers (0.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.005 s; sync files=0, longest=0.000 s, average=0.000 s; distance=340 kB, estimate=118925 kB; lsn=0/FB2F940, redo lsn=0/FB2F8E8
2025-08-11 18:56:24.388 UTC postmaster[4864] LOG: received fast shutdown request
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-19%2022%3A18%3A55 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-25%2021%3A43%3A04 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-27%2019%3A24%3A39 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-08-27%2007%3A29%3A18 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-09-05%2023%3A03%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-09-08%2009%3A38%3A31 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-09-12%2006%3A56%3A00 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-09-15%2011%3A46%3A10 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-09-18%2001%3A02%3A26 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-09-19%2002%3A53%3A06 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-09-22%2008%3A03%3A05 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-09-25%2014%3A43%3A07 - REL_17_STABLE
+ERROR: could not extend file "base/16384/1249" with FileFallocate(): No space left on device
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2025-08-18%2022%3A00%3A47 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2025-08-19%2004%3A14%3A55 - master
timed out after 10800 secs
2025-08-19 06:09:58.789 UTC [605782:10] PANIC: could not write to file "pg_wal/xlogtemp.605782": No space left on device
pgsql.build/src/test/modules/test_oat_hooks/regression.diffs
--- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out 2025-09-03 10:17:14.343516305 +0200
+++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/modules/test_oat_hooks/results/test_oat_hooks.out 2025-09-03 10:34:59.499087150 +0200
@@ -2,6 +2,7 @@
 -- flushes cause extra calls of the OAT hook in recomputeNamespacePath,
 -- resulting in more NOTICE messages than are in the expected output.
 SET debug_discard_caches = 0;
+ERROR: unrecognized configuration parameter "debug_discard_caches"
ERROR: Build data file '/home/bf/bf-build/skink/REL_17_STABLE/pgsql.build/meson-private/build.dat' references functions or classes that don't exist. This probably means that it was generated with an old version of meson. Consider reconfiguring the directory with "meson setup --reconfigure".
2025-09-16 20:40:38.864 UTC [221287:2] [unknown] FATAL: could not open shared memory segment "/PostgreSQL.715611118": No such file or directory
...
2025-09-16 20:40:39.540 UTC [221321:2] [unknown] FATAL: could not open shared memory segment "/PostgreSQL.2228928852": No such file or directory
...
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2025-09-09%2010%3A20%3A01 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2025-09-10%2018%3A36%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2025-09-17%2022%3A48%3A30 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2025-10-17%2018%3A32%3A07 - REL_15_STABLE
# parallel group (15 tests): predicate reloptions numa hash_part memoize compression_lz4 partition_info explain
timed out after 3600 secs
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-09-26%2018%3A15%3A06 - master
(drongo is performing the test extremely slowly)
ok 139 + merge 19699 ms
not ok 140 + misc_functions 18003 ms
ok 141 + sysviews 7925 ms
--- C:/prog/bf/root/HEAD/pgsql/src/test/regress/expected/misc_functions.out 2025-09-18 05:16:31.504924900 +0000
+++ C:/prog/bf/root/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/misc_functions.out 2025-09-26 19:57:23.414238300 +0000
@@ -325,9 +325,10 @@
 -- permissions are set properly.
 --
 SELECT pg_log_backend_memory_contexts(pg_backend_pid());
+WARNING: could not send signal to process 4400: Invalid argument
 pg_log_backend_memory_contexts
 --------------------------------
- t
+ f
 (1 row)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-09-20%2012%3A00%3A06 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-09-22%2012%3A39%3A12 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-09-24%2012%3A00%3A02 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-09-24%2012%3A43%3A08 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-09-30%2012%3A10%3A54 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-10-02%2012%3A05%3A54 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-10-03%2012%3A05%3A49 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-11-03%2013%3A27%3A24 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2025-11-04%2013%3A36%3A15 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2026-01-17%2012%3A15%3A27 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2026-01-17%2012%3A32%3A34 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2026-01-18%2012%3A57%3A19 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=baza&dt=2026-01-19%2013%3A19%3A47 - master
pg_basebackup: error: could not write to file "/home/animal/build/HEAD/pgsql.build/src/test/modules/brin/tmp_check/t_02_wal_consistency_whiskey_data/backup/brinbkp/base/1/1255": No space left on device
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-10-05%2008%3A07%3A06 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-10-05%2008%3A05%3A05 - REL_13_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2025-10-05%2008%3A06%3A07 - REL_14_STABLE
timed out after 14400 secs
The database cluster will be initialized with locales
  COLLATE: en_US.UTF-8
  CTYPE: en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: en_AU.UTF-8
  NUMERIC: en_AU.UTF-8
  TIME: e\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0...
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2025-10-01%2015%3A32%3A03 - master
[11:42:57.021](0.001s) not ok 3 - fails without mapping: log matches
[11:42:57.022](0.001s)
[11:42:57.022](0.000s) # Failed test 'fails without mapping: log matches'
# at t/001_auth.pl line 149.
[11:42:57.022](0.000s) # '2025-10-01 11:41:56.991 EDT [70040:30] DEBUG: assigned pm child slot 1 for client backend
...
# 2025-10-01 11:42:56.995 EDT [70040:41] DEBUG: client backend (PID 70056) exited with exit code 1
# '
# doesn't match '(?^:connection\\ authenticated\\:\\ identity\\=\\"test1\\@EXAMPLE\\.COM\\"\\ method\\=gss)'
[11:42:57.022](0.000s) not ok 4 - fails without mapping: log matches
[11:42:57.023](0.000s)
[11:42:57.023](0.000s) # Failed test 'fails without mapping: log matches'
# at t/001_auth.pl line 149.
[11:42:57.023](0.000s) # '2025-10-01 11:41:56.991 EDT [70040:30] DEBUG: assigned pm child slot 1 for client backend
...
# 2025-10-01 11:42:56.995 EDT [70040:41] DEBUG: client backend (PID 70056) exited with exit code 1
# '
# doesn't match '(?^:no\\ match\\ in\\ usermap\\ \\"mymap\\"\\ for\\ user\\ \\"test1\\")'
### Restarting node "node"
---
001_auth_node.log
2025-10-01 11:41:56.922 EDT [70051:2] [unknown] FATAL: GSSAPI authentication failed for user "test1"
2025-10-01 11:41:56.922 EDT [70051:3] [unknown] DETAIL: Connection matched file "/Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/kerberos/tmp_check/t_001_auth_node_data/pgdata/pg_hba.conf" line 3: "host all all 127.0.0.1/32 gss map=mymap"
2025-10-01 11:41:56.925 EDT [70040:28] DEBUG: releasing pm child slot 1
2025-10-01 11:41:56.925 EDT [70040:29] DEBUG: client backend (PID 70051) exited with exit code 1
2025-10-01 11:41:56.991 EDT [70040:30] DEBUG: assigned pm child slot 1 for client backend
2025-10-01 11:41:56.991 EDT [70040:31] DEBUG: forked new client backend, pid=70056 socket=9
2025-10-01 11:41:56.991 EDT [70056:1] [unknown] LOG: connection received: host=127.0.0.1 port=58323
...
2025-10-01 11:42:56.995 EDT [70040:41] DEBUG: client backend (PID 70056) exited with exit code 1
2025-10-01 11:42:57.043 EDT [70040:42] DEBUG: postmaster received shutdown request signal
2025-10-01 11:42:57.043 EDT [70040:43] LOG: received fast shutdown request
# +++ tap check in contrib/bloom +++
# Failed test 'delete 2: query result matches'
# at t/001_wal.pl line 74.
# got: '0|4
...
# 3|f
# 4|f
# 8|f
# 9|f
# 3|c
# 3|c
# 3|c
...
# 7|e'
# expected: '0|4
# 0|f
...
# 3|f
# 4|f
# 8|f
# 9|f'
# Looks like you failed 1 test of 31.
[06:39:38] t/001_wal.pl ..
Dubious, test returned 1 (wstat 256, 0x100)
(the "got" output contains about 200 extra rows)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-13%2019%3A10%3A03 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-10-13%2019%3A52%3A56 - REL_18_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-11-26%2010%3A31%3A38 - REL_18_STABLE
2025-10-13 20:31:57.115 BST [14928:6] FATAL: could not extend file "base/66421/66558": No space left on device
--- /home/demo/client-code-REL_19_1/buildroot/HEAD/pgsql.build/src/pl/plpgsql/src/expected/plpgsql_trap.out 2025-10-13 21:09:42.000000000 +0100
+++ /home/demo/client-code-REL_19_1/buildroot/HEAD/pgsql.build/src/pl/plpgsql/src/results/plpgsql_trap.out 2025-10-13 21:27:10.000000000 +0100
@@ -155,9 +155,8 @@
 begin;
 set statement_timeout to 1000;
 select trap_timeout();
-NOTICE: nyeah nyeah, can't stop me
-ERROR: end of function
-CONTEXT: PL/pgSQL function trap_timeout() line 15 at RAISE
+ERROR: canceling statement due to statement timeout
+CONTEXT: PL/pgSQL function trap_timeout() line 9 at RAISE
2025-10-13 20:51:53.558 BST [940:1] PANIC: could not write to file "pg_wal/xlogtemp.940": (os/kern) memory error
2025-10-13 20:51:54.320 BST [936:4] LOG: WAL writer process (PID 940) was terminated by signal 6: Aborted
+FATAL: could not enable SIGALRM timer: (os/kern) aborted
+FATAL: could not enable SIGALRM timer: (os/kern) aborted
+CONTEXT: while updating tuple (0,1) in relation "accounts"
+server closed the connection unexpectedly
2025-11-27 21:57:44.032 GMT [15372:4] LOG: server process (PID 15805) was terminated by signal 11: Segmentation fault
2025-11-27 21:57:44.032 GMT [15372:5] DETAIL: Failed process was running: DROP TABLESPACE regress_create_idx_tblspace;
diff -U3 /mnt/data/buildfarm/buildroot/REL_13_STABLE/pgsql.build/contrib/postgres_fdw/expected/postgres_fdw.out /mnt/data/buildfarm/buildroot/REL_13_STABLE/pgsql.build/contrib/postgres_fdw/results/postgres_fdw.out
--- /mnt/data/buildfarm/buildroot/REL_13_STABLE/pgsql.build/contrib/postgres_fdw/expected/postgres_fdw.out 2025-10-23 15:35:04.145079126 +0000
+++ /mnt/data/buildfarm/buildroot/REL_13_STABLE/pgsql.build/contrib/postgres_fdw/results/postgres_fdw.out 2025-10-23 16:44:07.383233364 +0000
@@ -9424,9 +9424,9 @@
 SET ROLE regress_nosuper;
 -- Should finally work now
 SELECT * FROM ft1_nopw LIMIT 1;
- c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8
-------+----+----+----+----+----+------------+----
- 1111 | 2 | | | | | ft1 |
+ c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8
+----+----+-------+------------------------------+--------------------------+----+------------+-----
+ 6 | 6 | 00006 | Wed Jan 07 00:00:00 1970 PST | Wed Jan 07 00:00:00 1970 | 6 | 6 | foo
 (1 row)
 -- unpriv user also cannot set sslcert / sslkey on the user mapping
@@ -9450,9 +9450,9 @@
 -- The user mapping for public is passwordless and lacks the password_required=false
 -- mapping option, but will work because the current user is a superuser.
 SELECT * FROM ft1_nopw LIMIT 1;
- c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8
-------+----+----+----+----+----+------------+----
- 1111 | 2 | | | | | ft1 |
+ c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8
+----+----+-------+------------------------------+--------------------------+----+------------+-----
+ 6 | 6 | 00006 | Wed Jan 07 00:00:00 1970 PST | Wed Jan 07 00:00:00 1970 | 6 | 6 | foo
 (1 row)
 -- cleanup
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2025-11-04%2012%3A00%3A39 - master
2025-11-04 13:11:47.724 GMT [19349] LOG: invalid value for parameter "lc_monetary": "en_GB.UTF-8"
2025-11-04 13:11:47.725 GMT [19349] LOG: invalid value for parameter "lc_numeric": "en_GB.UTF-8"
2025-11-04 13:11:47.725 GMT [19349] LOG: invalid value for parameter "lc_time": "en_GB.UTF-8"
2025-11-04 13:11:47.725 GMT [19349:4] FATAL: configuration file "/mnt/data/buildfarm/buildroot/HEAD/pgsql.build/src/test/modules/test_misc/tmp_check/t_005_timeouts_master_data/pgdata/postgresql.conf" contains errors
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2025-11-04%2022%3A18%3A32 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2025-11-13%2016%3A15%3A02 - master
2025-11-04 22:32:06.597 GMT [7418:1] [unknown] LOG: connection received: host=[local]
2025-11-04 22:32:06.598 GMT [7418:2] [unknown] FATAL: could not open shared memory segment "/PostgreSQL.1548010858": No such file or directory
# +++ tap install-check in src/test/modules/test_misc +++
t/001_constraint_validation.pl .. ok
t/002_tablespace.pl ............. ok
t/003_check_guc.pl .............. ok
===================================================
timed out after 14400 secs
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tayra&dt=2025-11-17%2012%3A45%3A02 - master
sh: line 1: /repos/client-code-REL_19_1/HEAD/pgsql.build/src/test/regress/results/create_table_like.out.diff: No such file or directory
...
Bail out!pg_ctl: directory "/repos/client-code-REL_19_1/HEAD/pgsql.build/src/test/regress/tmp_check/data" does not exist
# could not stop postmaster: exit code was 256
configure: error: `PG_TEST_EXTRA' was not set in the previous run
configure: error: in `/home/animal/build/REL_18_STABLE/pgsql.build':
configure: error: changes in the environment can compromise the build
configure: error: run `make distclean' and/or `rm /home/animal/build/accache-baza/config-REL_18_STABLE.cache' and start over
make: *** [../../../src/Makefile.global:871: ../../../config.status] Error 1
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2025-11-24%2002%3A34%3A31 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-09%2014%3A19%3A13 - REL_18_STABLE
--- C:/prog/bf/root/HEAD/pgsql/contrib/postgres_fdw/expected/query_cancel.out 2024-12-24 01:05:36.062873300 +0000
+++ C:/prog/bf/root/HEAD/pgsql.build/testrun/postgres_fdw-running/regress/results/query_cancel.out 2025-11-24 05:42:27.496860600 +0000
@@ -31,4 +31,5 @@
 -- This would take very long if not canceled:
 SELECT count(*) FROM ft1 a CROSS JOIN ft1 b CROSS JOIN ft1 c CROSS JOIN ft1 d;
 ERROR: canceling statement due to statement timeout
+WARNING: could not get result of cancel request due to timeout
 COMMIT;
# initdb failed
# Examine "c:/build-farm-local/buildroot/REL_17_STABLE/pgsql.build/testrun/regress/regress/log/initdb.log" for the reason.
==~_~===-=-===~_~== pgsql.build/testrun/regress/regress/log/initdb.log ==~_~===-=-===~_~==
'"initdb"' \202\315\201A\223\340\225\224\203R\203}\203\223\203h\202\334\202\275\202\315\212O\225\224\203R\203}\203\223\203h\201A \221\200\215\354\211\302\224\\\202\310\203v\203\215\203O\203\211\203\200\202\334\202\275\202\315\203o\203b\203` \203t\203@\203C\203\213\202\306\202\265\202\304\224F\216\257\202\263\202\352\202\304\202\242\202\334\202\271\202\361\201B
"initdb" is not recognized as an internal or external command, operable program or batch file. (in Japanese, encoding SJIS)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-12-14%2018%3A27%3A08 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-12-14%2020%3A17%3A02 - master
log files for step xversion-upgrade-REL_18_STABLE-HEAD:
upgrade.crake/HEAD/REL_18_STABLE-dump1.log
pg_dump: error: Dumping the contents of table "city" failed: PQgetResult() failed.
pg_dump: detail: Error message from server: ERROR: could not access file "/home/andrew/bf/root/REL_18_STABLE/pgsql.build/src/test/regress/regress.so": No such file or directory
pg_dump: detail: Command was: COPY public.city (name, location, budget) TO stdout;
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2025-12-14%2019%3A32%3A03 - master
log files for step xversion-upgrade-save:
upgrade.crake/HEAD/ctl.log
pg_ctl: directory "/home/andrew/bf/root/upgrade.crake/HEAD/inst/data-C" does not exist
parallel group (17 tests): xmlmap portals_p2 functional_deps dependency equivclass tsdicts guc select_views indirect_toast advisory_lock cluster combocid window tsearch foreign_data foreign_key
===================================================
timed out after 14400 secs
(bitmapops is missing from the list)
parallel group (17 tests): portals_p2 advisory_lock xmlmap dependency guc functional_deps tsdicts combocid select_views indirect_toast window tsearch foreign_data bitmapops cluster foreign_key
===================================================
timed out after 14400 secs
(equivclass is missing from the list)
parallel group (20 tests): init_privs drop_operator security_label password tablesample lock collate object_address replica_identity spgist groupingsets identity matview gin gist generated rowsecurity join_hash brin
===================================================
timed out after 14400 secs
(privileges is missing from the list)
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2026-01-02%2008%3A25%3A13 - master
186/347 postgresql:recovery / recovery/051_effective_wal_level ERROR 1685.45s exit status 25
[10:20:45.722](0.321s) ok 24 - effective_wal_level got increased to 'logical' again on standby
### Restarting node "primary"
# Running: pg_ctl --wait --pgdata C:\\prog\\bf\\root\\HEAD\\pgsql.build/testrun/recovery/051_effective_wal_level\\data/t_051_effective_wal_level_primary_data/pgdata --log C:\\prog\\bf\\root\\HEAD\\pgsql.build/testrun/recovery/051_effective_wal_level\\log/051_effective_wal_level_primary.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 7660
Waiting for replication conn standby3's replay_lsn to pass 0/05033CE8 on primary
[10:46:10.121](1524.399s) # poll_query_until timed out executing this query:
# SELECT '0/05033CE8' <= replay_lsn AND state = 'streaming'
# FROM pg_catalog.pg_stat_replication
# WHERE application_name IN ('standby3', 'walreceiver')
# expecting this output:
# t
# last actual query output:
#
# with stderr:
[10:46:10.682](0.561s) # Last pg_stat_replication contents:
timed out waiting for catchup at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/051_effective_wal_level.pl line 292.
initdb: error: invalid locale settings; check LANG and LC_* environment variables
[13:06:04.684](0.197s) Bail out! command "initdb -D /home/gburd/build/REL_15_STABLE/pgsql.build/src/test/subscription/tmp_check/t_029_on_error_publisher_data/pgdata -A trust -N" exited with value 1
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=broadbill&dt=2026-01-08%2013%3A08%3A04 - REL_14_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=broadbill&dt=2026-01-08%2013%3A22%3A00 - REL_16_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=broadbill&dt=2026-01-08%2013%3A28%3A53 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=broadbill&dt=2026-01-12%2013%3A12%3A33 - master
+setup failed: ERROR: could not extend file "base/39047/43641": No space left on device
Also manifested as:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=broadbill&dt=2026-01-07%2013%3A15%3A52 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=broadbill&dt=2026-01-08%2013%3A15%3A27 - REL_15_STABLE
not ok 11 - plpgsql_trap 748 ms
diff -U3 /home/almalinux/20-broadbill/buildroot/HEAD/pgsql.build/src/pl/plpgsql/src/expected/plpgsql_trap.out /home/almalinux/20-broadbill/buildroot/HEAD/pgsql.build/src/pl/plpgsql/src/results/plpgsql_trap.out
--- /home/almalinux/20-broadbill/buildroot/HEAD/pgsql.build/src/pl/plpgsql/src/expected/plpgsql_trap.out 2026-01-07 13:15:54.280172185 +0000
+++ /home/almalinux/20-broadbill/buildroot/HEAD/pgsql.build/src/pl/plpgsql/src/results/plpgsql_trap.out 2026-01-07 13:32:08.095986939 +0000
@@ -155,7 +155,7 @@
 begin;
 set statement_timeout to 1000;
 select trap_timeout();
-NOTICE: nyeah nyeah, can't stop me
+NOTICE: caught others?
 ERROR: end of function
 CONTEXT: PL/pgSQL function trap_timeout() line 15 at RAISE
 rollback;
reproduced locally by running "<path-to-source>/configure && make -s -j10 && make -s check -C src/pl/plpgsql" in a build directory placed on a small tmpfs (mounted with "sudo mount -t tmpfs -o size=500M tmpfs /tmp/pg1"), as sketched below
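A minimal sketch of that local reproduction, assuming a PostgreSQL source tree at <path-to-source>; the mkdir/cd steps are assumptions added here for completeness, while the mount options and the build/check commands are exactly the ones quoted above (the 500M tmpfs is simply small enough to fill up while the checks run):
  sudo mkdir -p /tmp/pg1                              # create the mount point (assumed step)
  sudo mount -t tmpfs -o size=500M tmpfs /tmp/pg1     # small filesystem that fills up during the run
  cd /tmp/pg1                                         # build out of tree, entirely on the small tmpfs
  <path-to-source>/configure && make -s -j10
  make -s check -C src/pl/plpgsql                     # reported to fail with the plpgsql_trap diff shown above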
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bulbul&dt=2026-01-09%2001%3A15%3A09 - REL_15_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bulbul&dt=2026-01-14%2001%3A08%3A04 - REL_16_STABLE
+setup failed: ERROR: could not extend file "base/40967/45561": No space left on device
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=unicorn&dt=2026-01-07%2022%3A50%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=unicorn&dt=2026-01-08%2000%3A50%3A05 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=unicorn&dt=2026-01-08%2001%3A50%3A03 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=unicorn&dt=2026-01-08%2008%3A50%3A04 - master
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=unicorn&dt=2026-01-08%2009%3A55%3A36 - master
test: setup - postgresql:initdb_cache
start time: 23:02:00
duration: 0.19s
result: (exit status 3221225781 or 0xc0000135)
(unicorn is a new Windows 11 arm64 animal)
config changed: "'CommandPromptType' => 'Cross'" -> "'CommandPromptType' => 'Native'", "'VSCMD_ARG_HOST_ARCH' => 'x86'" -> "'VSCMD_ARG_HOST_ARCH' => 'arm64'"...
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=icarus&dt=2026-01-20%2001%3A17%3A43 - master
timed out after 14400 secs