On 08/11/2024 20:10, Maxim Orlov wrote:
> Sorry for the late reply. There was a problem in upgrade with offset
> wraparound; here is a fixed version, with a test added. I decided to use
> my old patch to set non-standard multixacts for the old cluster, fill it
> with data, and do pg_upgrade.

The wraparound logic is still not correct. To test, I created a cluster where multixids have wrapped around, so that:

$ ls -l data-old/pg_multixact/offsets/
total 720
-rw------- 1 heikki heikki 212992 Nov 12 01:11 0000
-rw-r--r-- 1 heikki heikki 262144 Nov 12 00:55 FFFE
-rw------- 1 heikki heikki 262144 Nov 12 00:56 FFFF

After running pg_upgrade:

$ ls -l data-new/pg_multixact/offsets/
total 1184
-rw------- 1 heikki heikki 155648 Nov 12 01:12 0001
-rw------- 1 heikki heikki 262144 Nov 12 01:11 1FFFD
-rw------- 1 heikki heikki 262144 Nov 12 01:11 1FFFE
-rw------- 1 heikki heikki 262144 Nov 12 01:11 1FFFF
-rw------- 1 heikki heikki 262144 Nov 12 01:11 20000
-rw------- 1 heikki heikki 155648 Nov 12 01:11 20001

That's not right. The segments 20000 and 20001 were created by the new pg_upgrade conversion code from old segment '0000'. But multixids are still 32-bit values, so after segment 1FFFF, you should still wrap around to 0000. The new segments should be '0000' and '0001'. The segment '0001' is created when postgres is started after upgrade, but it's created from scratch and doesn't contain the upgraded values.
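
To spell out the arithmetic, here's a back-of-the-envelope sketch in Python (my own illustration, not code from the patch). It assumes 8192-byte pages and 32 SLRU pages per segment, so 65536 multixids per old offsets segment (4-byte offsets) and 32768 per new segment (8-byte offsets), which matches the 262144-byte full segments above, and it assumes the conversion tracks a running position starting from the oldest multixid:

# Assumption: 8192-byte pages, 32 SLRU pages per segment.
OLD_MXIDS_PER_SEG = (8192 // 4) * 32   # 65536 multixids per old segment
NEW_MXIDS_PER_SEG = (8192 // 8) * 32   # 32768 multixids per new segment

def new_segment(pos):
    # 'pos' is a running multixid position; when the conversion loop walks
    # from the oldest multixid across the wraparound, it can exceed 2^32.
    # Multixids are still 32-bit, so wrap before deriving the segment name.
    return '%04X' % ((pos % 2**32) // NEW_MXIDS_PER_SEG)

print(new_segment(0xFFFF0000))   # '1FFFE': start of last old segment FFFF
print(new_segment(2**32 + 1))    # '0000', not '20000': post-wraparound multixid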

When I try to select from a table that contains post-wraparound multixids after the upgrade, I hit an assertion failure:

TRAP: failed Assert("offset != 0"), File: "../src/backend/access/transam/multixact.c", Line: 1353, PID: 63386


On a different note, I'm surprised you're rewriting member segments from scratch, parsing all the individual member groups and writing them out again. There's no change to the members file format, except for the numbering of the files, so you could just copy the files under the new names without paying attention to the contents. It's not wrong to parse them in detail, but I'd assume that it would be simpler not to.
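
Something like this hedged sketch, for illustration (the renaming function is a placeholder, since the exact old-to-new segment numbering depends on the patch):

import os
import shutil

def copy_member_segments(old_dir, new_dir):
    # Copy pg_multixact/members segments verbatim; only the file names
    # change. new_segment_name() stands in for whatever old-to-new
    # numbering the patch defines.
    for name in sorted(os.listdir(old_dir)):
        src = os.path.join(old_dir, name)
        dst = os.path.join(new_dir, new_segment_name(int(name, 16)))
        shutil.copyfile(src, dst)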

> Here is how to test. All the patches are for the 14e87ffa5c543b5f3 master branch.
> 1) Get the 14e87ffa5c543b5f3 master branch and apply the patches
>    0001-Add-initdb-option-to-initialize-cluster-with-non-sta.patch and
>    0002-TEST-lower-SLRU_PAGES_PER_SEGMENT.patch.
> 2) Get the 14e87ffa5c543b5f3 master branch in a separate directory and
>    apply the v6 patch set.
> 3) Build both branches.
> 4) Use the oldinstall environment variable to run the test:
>    PROVE_TESTS=t/005_mxidoff.pl oldinstall=/home/orlov/proj/pgsql-new PG_TEST_NOCLEAN=1 make check -C src/bin/pg_upgrade/
>
> Maybe I'll make a shell script to automate these steps, if required.

Yeah, I think we need something to automate this. I did the testing manually, using the attached Python script to consume multixids faster, but it's still tedious.

I used pg_resetwal to quickly create a cluster that's close to multixid wraparound:

initdb -D data
pg_resetwal -D data -m 4294900001,4294900000
dd if=/dev/zero of=data/pg_multixact/offsets/FFFE bs=8192 count=32
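
The -m arguments set the next and oldest multixid just short of 2^32, and the dd pre-creates the zeroed offsets segment covering them (count=32 writes one full 32-page segment) so the server can start. A quick sanity check of the segment number, under the same 65536-multixids-per-segment assumption as above:

# Which offsets segment holds multixid 4294900001? (65536 per segment)
print('%04X' % (4294900001 // 65536))   # -> FFFE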

--
Heikki Linnakangas
Neon (https://neon.tech)
import sys
import threading

import psycopg2

def test_multixact(tblname: str):
    with psycopg2.connect() as conn:
        cur = conn.cursor()
        cur.execute(
            f"""
            DROP TABLE IF EXISTS {tblname};
            CREATE TABLE {tblname}(i int primary key, n_updated int) WITH (autovacuum_enabled=false);
            INSERT INTO {tblname} select g, 0 from generate_series(1, 50) g;
            """
        )

    # Lock entries using parallel connections in a round-robin fashion.
    nclients = 50
    update_every = 97
    connections = []
    for _ in range(nclients):
        # Do not turn on autocommit. We want to hold the key-share locks.
        conn = psycopg2.connect()
        connections.append(conn)

    # On each iteration, we commit the previous transaction on a connection,
    # and issue another select. Each SELECT generates a new multixact that
    # includes the new XID, and the XIDs of all the other parallel transactions.
    # This generates enough traffic on both multixact offsets and members SLRUs
    # to cross page boundaries.
    for i in range(20000):
        conn = connections[i % nclients]
        conn.commit()

        # Perform some non-key UPDATEs too, to exercise different multixact
        # member statuses.
        if i % update_every == 0:
            # Keys are 1..50, so map i onto that range.
            conn.cursor().execute(f"update {tblname} set n_updated = n_updated + 1 where i = {i % 50 + 1}")
        else:
            conn.cursor().execute(f"select * from {tblname} for key share")

#nthreads=10
#
#threads = []
#for threadno in range(nthreads):
#    tblname = f"tbl{threadno}"
#    t = threading.Thread(target=test_multixact, args=(tblname,))
#    t.start()
#    threads.append(t)
#
#for threadno in range(nthreads):
#    threads[threadno].join()

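# Run a single worker against the table named on the command line, e.g.:
#   python3 <this script> tbl0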
test_multixact(sys.argv[1])
