On Fri, Jul 18, 2025 at 11:05:04AM -0500, Nathan Bossart wrote:
> I'm cautiously optimistic that we can find some better gains for upgrades
> from v16 and newer.  That would involve dumping lo_create() commands for
> all LOs with comments/seclabels, dumping the relevant pg_shdepend rows, and
> then copying/linking the pg_largeobject_metadata files like we did prior to
> v12.

Here is a patch.  For background, the reason this is limited to upgrades
from v16 and newer is because the aclitem data type (needed by
pg_largeobject_metadata.lomacl) changed its storage format in v16 (see
commit 7b378237aa).  Note that the patch is essentially a revert of commit
12a53c732c, but there are enough differences that it should be considered a
fresh effort.

Something I hadn't anticipated is that we need to take special care to
transfer the relfilenode of pg_largeobject_metadata and its index, as was
done for pg_largeobject in commits d498e052b4 and bbe08b8869.  In fact, the
majority of the patch is dedicated to that.

My testing showed decent, but not earth-shattering, performance
improvements from this patch.  For upgrades with many large objects with
NULL lomacl/lomowner columns, pg_upgrade was 50% faster.  With non-NULL
lomacl/lomowner, that dropped to 25%.  When each large object had a
comment, there was no change.  I'm assuming that it's rare to have lots of
large objects with comments or security labels, so I don't see any need to
expend energy trying to optimize that use-case.

I am a bit concerned that we'll forget to add checks for new types of
dependencies similar to comments and security labels.  If we do, pg_upgrade
should just fail to restore the schema, and fixing the code should be easy
enough.  Also, we'll need to remember to revisit this code if there's
another storage format change for one of pg_largeobject_metadata's columns,
but that seems unlikely to happen anytime soon.  On the whole, I'm not too
worried about either of these points.

-- 
nathan
From 9845e0e1c6a2bfacc53390d244fdb47f9a276169 Mon Sep 17 00:00:00 2001
From: Nathan Bossart <nat...@postgresql.org>
Date: Thu, 14 Aug 2025 10:14:43 -0500
Subject: [PATCH v1 1/1] pg_upgrade: Transfer pg_largeobject_metadata's files
 when possible.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Commit 161a3e8b68 taught pg_upgrade to use COPY for large object
metadata for upgrades from v12 and newer, which is much faster to
restore than the proper large object SQL commands.  For upgrades
from v16 and newer, we can take this a step further and transfer
the large object metadata files as if they were user tables.  We
can't transfer the files from older versions because the aclitem
data type (needed by pg_largeobject_metadata.lomacl) changed its
storage format in v16 (see commit 7b378237aa).  Note that this
commit is essentially a revert of commit 12a53c732c, but there are
enough differences that it should be considered a fresh effort.

There are a couple of caveats.  First, we still need to COPY the
corresponding pg_shdepend rows for large objects, since those
aren't transferred by anything else.  Second, we need to COPY
anything in pg_largeobject_metadata with a comment or security
label, else restoring those will fail.  This means that an upgrade
in which every large object has a comment or security label won't
gain anything from this commit, but it should at least avoid making
these unusual use-cases any worse.

pg_upgrade must also take care to transfer the relfilenode of
pg_largeobject_metadata and its index, à la commits d498e052b4 and
bbe08b8869.
---
 src/backend/commands/tablecmds.c | 12 +++--
 src/bin/pg_dump/pg_dump.c        | 80 ++++++++++++++++++++++++++------
 src/bin/pg_upgrade/info.c        | 11 +++--
 src/bin/pg_upgrade/pg_upgrade.c  |  6 +--
 4 files changed, 86 insertions(+), 23 deletions(-)

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index c6dd2e020da..4132b570513 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -42,6 +42,7 @@
 #include "catalog/pg_foreign_table.h"
 #include "catalog/pg_inherits.h"
 #include "catalog/pg_largeobject.h"
+#include "catalog/pg_largeobject_metadata.h"
 #include "catalog/pg_namespace.h"
 #include "catalog/pg_opclass.h"
 #include "catalog/pg_policy.h"
@@ -2389,12 +2390,15 @@ truncate_check_rel(Oid relid, Form_pg_class reltuple)
        /*
         * Most system catalogs can't be truncated at all, or at least not unless
         * allow_system_table_mods=on. As an exception, however, we allow
-        * pg_largeobject to be truncated as part of pg_upgrade, because we need
-        * to change its relfilenode to match the old cluster, and allowing a
-        * TRUNCATE command to be executed is the easiest way of doing that.
+        * pg_largeobject and pg_largeobject_metadata to be truncated as part of
+        * pg_upgrade, because we need to change its relfilenode to match the old
+        * cluster, and allowing a TRUNCATE command to be executed is the easiest
+        * way of doing that.
         */
        if (!allowSystemTableMods && IsSystemClass(relid, reltuple)
-               && (!IsBinaryUpgrade || relid != LargeObjectRelationId))
+               && (!IsBinaryUpgrade ||
+                       (relid != LargeObjectRelationId &&
+                        relid != LargeObjectMetadataRelationId)))
                ereport(ERROR,
                                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                                 errmsg("permission denied: \"%s\" is a system catalog",
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index fc7a6639163..48066fab744 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1131,6 +1131,23 @@ main(int argc, char **argv)
                shdepend->dataObj->filtercond = "WHERE classid = 'pg_largeobject'::regclass "
                        "AND dbid = (SELECT oid FROM pg_database "
                        "            WHERE datname = current_database())";
+
+               /*
+                * If upgrading from v16 or newer, only dump large objects with
+                * comments/seclabels.  For these upgrades, pg_upgrade can copy/link
+                * pg_largeobject_metadata's files (which is usually faster) but we
+                * still need to dump LOs with comments/seclabels here so that the
+                * subsequent COMMENT and SECURITY LABEL commands work.  pg_upgrade
+                * can't copy/link the files from older versions because aclitem
+                * (needed by pg_largeobject_metadata.lomacl) changed its storage
+                * format in v16.
+                */
+               if (fout->remoteVersion >= 160000)
+                       lo_metadata->dataObj->filtercond = "WHERE oid IN "
+                               "(SELECT objoid FROM pg_description "
+                               "WHERE classoid = " CppAsString2(LargeObjectRelationId) " "
+                               "UNION SELECT objoid FROM pg_seclabel "
+                               "WHERE classoid = " CppAsString2(LargeObjectRelationId) ")";
        }
 
        /*
@@ -3629,26 +3646,32 @@ dumpDatabase(Archive *fout)
        /*
         * pg_largeobject comes from the old system intact, so set its
         * relfrozenxids, relminmxids and relfilenode.
+        *
+        * pg_largeobject_metadata also comes from the old system intact for
+        * upgrades from v16 and newer, so set its relfrozenxids, relminmxids, and
+        * relfilenode, too.  pg_upgrade can't copy/link the files from older
+        * versions because aclitem (needed by pg_largeobject_metadata.lomacl)
+        * changed its storage format in v16.
         */
        if (dopt->binary_upgrade)
        {
                PGresult   *lo_res;
                PQExpBuffer loFrozenQry = createPQExpBuffer();
                PQExpBuffer loOutQry = createPQExpBuffer();
+               PQExpBuffer lomOutQry = createPQExpBuffer();
                PQExpBuffer loHorizonQry = createPQExpBuffer();
+               PQExpBuffer lomHorizonQry = createPQExpBuffer();
                int                     ii_relfrozenxid,
                                        ii_relfilenode,
                                        ii_oid,
                                        ii_relminmxid;
 
-               /*
-                * pg_largeobject
-                */
                if (fout->remoteVersion >= 90300)
                        appendPQExpBuffer(loFrozenQry, "SELECT relfrozenxid, relminmxid, relfilenode, oid\n"
                                                          "FROM pg_catalog.pg_class\n"
-                                                         "WHERE oid IN (%u, %u);\n",
-                                                         LargeObjectRelationId, LargeObjectLOidPNIndexId);
+                                                         "WHERE oid IN (%u, %u, %u, %u);\n",
+                                                         LargeObjectRelationId, LargeObjectLOidPNIndexId,
+                                                         LargeObjectMetadataRelationId, LargeObjectMetadataOidIndexId);
                else
                        appendPQExpBuffer(loFrozenQry, "SELECT relfrozenxid, 0 AS relminmxid, relfilenode, oid\n"
                                                          "FROM pg_catalog.pg_class\n"
@@ -3663,35 +3686,57 @@ dumpDatabase(Archive *fout)
                ii_oid = PQfnumber(lo_res, "oid");
 
                appendPQExpBufferStr(loHorizonQry, "\n-- For binary upgrade, set pg_largeobject relfrozenxid and relminmxid\n");
+               appendPQExpBufferStr(lomHorizonQry, "\n-- For binary upgrade, set pg_largeobject_metadata relfrozenxid and relminmxid\n");
                appendPQExpBufferStr(loOutQry, "\n-- For binary upgrade, preserve pg_largeobject and index relfilenodes\n");
+               appendPQExpBufferStr(lomOutQry, "\n-- For binary upgrade, preserve pg_largeobject_metadata and index relfilenodes\n");
                for (int i = 0; i < PQntuples(lo_res); ++i)
                {
                        Oid                     oid;
                        RelFileNumber relfilenumber;
+                       PQExpBuffer horizonQry;
+                       PQExpBuffer outQry;
+
+                       oid = atooid(PQgetvalue(lo_res, i, ii_oid));
+                       relfilenumber = atooid(PQgetvalue(lo_res, i, ii_relfilenode));
 
-                       appendPQExpBuffer(loHorizonQry, "UPDATE pg_catalog.pg_class\n"
+                       if (oid == LargeObjectRelationId ||
+                               oid == LargeObjectLOidPNIndexId)
+                       {
+                               horizonQry = loHorizonQry;
+                               outQry = loOutQry;
+                       }
+                       else
+                       {
+                               horizonQry = lomHorizonQry;
+                               outQry = lomOutQry;
+                       }
+
+                       appendPQExpBuffer(horizonQry, "UPDATE pg_catalog.pg_class\n"
                                                          "SET relfrozenxid = '%u', relminmxid = '%u'\n"
                                                          "WHERE oid = %u;\n",
                                                          atooid(PQgetvalue(lo_res, i, ii_relfrozenxid)),
                                                          atooid(PQgetvalue(lo_res, i, ii_relminmxid)),
                                                          atooid(PQgetvalue(lo_res, i, ii_oid)));
 
-                       oid = atooid(PQgetvalue(lo_res, i, ii_oid));
-                       relfilenumber = atooid(PQgetvalue(lo_res, i, ii_relfilenode));
-
-                       if (oid == LargeObjectRelationId)
-                               appendPQExpBuffer(loOutQry,
+                       if (oid == LargeObjectRelationId ||
+                               oid == LargeObjectMetadataRelationId)
+                               appendPQExpBuffer(outQry,
                                                                  "SELECT pg_catalog.binary_upgrade_set_next_heap_relfilenode('%u'::pg_catalog.oid);\n",
                                                                  relfilenumber);
-                       else if (oid == LargeObjectLOidPNIndexId)
-                               appendPQExpBuffer(loOutQry,
+                       else if (oid == LargeObjectLOidPNIndexId ||
+                                        oid == LargeObjectMetadataOidIndexId)
+                               appendPQExpBuffer(outQry,
                                                                  "SELECT pg_catalog.binary_upgrade_set_next_index_relfilenode('%u'::pg_catalog.oid);\n",
                                                                  relfilenumber);
                }
 
                appendPQExpBufferStr(loOutQry,
                                                         "TRUNCATE pg_catalog.pg_largeobject;\n");
+               appendPQExpBufferStr(lomOutQry,
+                                                        "TRUNCATE pg_catalog.pg_largeobject_metadata;\n");
+
                appendPQExpBufferStr(loOutQry, loHorizonQry->data);
+               appendPQExpBufferStr(lomOutQry, lomHorizonQry->data);
 
                ArchiveEntry(fout, nilCatalogId, createDumpId(),
                                         ARCHIVE_OPTS(.tag = "pg_largeobject",
@@ -3699,11 +3744,20 @@ dumpDatabase(Archive *fout)
                                                                  .section = SECTION_PRE_DATA,
                                                                  .createStmt = loOutQry->data));
 
+               if (fout->remoteVersion >= 160000)
+                       ArchiveEntry(fout, nilCatalogId, createDumpId(),
+                                                ARCHIVE_OPTS(.tag = "pg_largeobject_metadata",
+                                                                         .description = "pg_largeobject_metadata",
+                                                                         .section = SECTION_PRE_DATA,
+                                                                         .createStmt = lomOutQry->data));
+
                PQclear(lo_res);
 
                destroyPQExpBuffer(loFrozenQry);
                destroyPQExpBuffer(loHorizonQry);
+               destroyPQExpBuffer(lomHorizonQry);
                destroyPQExpBuffer(loOutQry);
+               destroyPQExpBuffer(lomOutQry);
        }
 
        PQclear(res);
diff --git a/src/bin/pg_upgrade/info.c b/src/bin/pg_upgrade/info.c
index c39eb077c2f..7ce08270168 100644
--- a/src/bin/pg_upgrade/info.c
+++ b/src/bin/pg_upgrade/info.c
@@ -498,7 +498,10 @@ get_rel_infos_query(void)
         *
         * pg_largeobject contains user data that does not appear in pg_dump
         * output, so we have to copy that system table.  It's easiest to do that
-        * by treating it as a user table.
+        * by treating it as a user table.  We can do the same for
+        * pg_largeobject_metadata for upgrades from v16 and newer.  pg_upgrade
+        * can't copy/link the files from older versions because aclitem (needed
+        * by pg_largeobject_metadata.lomacl) changed its storage format in v16.
         */
        appendPQExpBuffer(&query,
                                          "WITH regular_heap (reloid, indtable, toastheap) AS ( "
@@ -514,10 +517,12 @@ get_rel_infos_query(void)
                                          "                        'binary_upgrade', 'pg_toast') AND "
                                          "      c.oid >= %u::pg_catalog.oid) OR "
                                          "     (n.nspname = 'pg_catalog' AND "
-                                         "      relname IN ('pg_largeobject') ))), ",
+                                         "      relname IN ('pg_largeobject'%s) ))), ",
                                          (user_opts.transfer_mode == TRANSFER_MODE_SWAP) ?
                                          ", " CppAsString2(RELKIND_SEQUENCE) : "",
-                                         FirstNormalObjectId);
+                                         FirstNormalObjectId,
+                                         (GET_MAJOR_VERSION(old_cluster.major_version) >= 1600) ?
+                                         ", 'pg_largeobject_metadata'" : "");
 
        /*
         * Add a CTE that collects OIDs of toast tables belonging to the tables
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index d5cd5bf0b3a..490e98fa26f 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -29,9 +29,9 @@
  *     We control all assignments of pg_enum.oid because these oids are stored
  *     in user tables as enum values.
  *
- *     We control all assignments of pg_authid.oid for historical reasons (the
- *     oids used to be stored in pg_largeobject_metadata, which is now copied via
- *     SQL commands), that might change at some point in the future.
+ *     We control all assignments of pg_authid.oid because the oids are stored in
+ *     pg_largeobject_metadata, which is copied via file transfer for upgrades
+ *     from v16 and newer.
  *
  *     We control all assignments of pg_database.oid because we want the directory
  *     names to match between the old and new cluster.
-- 
2.39.5 (Apple Git-154)
