Hi,

On 2025-03-06 10:07:43 -0800, Jeff Davis wrote:
> On Thu, 2025-03-06 at 12:16 -0500, Andres Freund wrote:
> > I don't follow. We already have the tablenames, schemanames and oids of
> > the to-be-dumped tables/indexes collected in pg_dump, all that's needed
> > is to send a list of those to the server to filter there?
> 
> Would it be appropriate to create a temp table? I wouldn't normally
> expect pg_dump to create temp tables, but I can't think of a major
> reason not to.

It doesn't work on a standby - temp tables can't be created while the server
is in recovery.


> If not, did you have in mind a CTE with a large VALUES expression, or
> just a giant IN() list?

An array, with a server-side unnest(), like we do in a bunch of other
places. E.g.


        /* need left join to pg_type to not fail on dropped columns ... */
        appendPQExpBuffer(q,
                          "FROM unnest('%s'::pg_catalog.oid[]) AS src(tbloid)\n"
                          "JOIN pg_catalog.pg_attribute a ON (src.tbloid = a.attrelid) "
                          "LEFT JOIN pg_catalog.pg_type t "
                          "ON (a.atttypid = t.oid)\n",
                          tbloids->data);
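
To make the pattern concrete, here's a minimal sketch of the kind of
statement the server ends up seeing - the OID values and the select list
below are made up for illustration, only the unnest/join shape matches the
snippet above:

        SELECT a.attrelid, a.attname, t.typname
        FROM unnest('{16385,16389,16402}'::pg_catalog.oid[]) AS src(tbloid)
        JOIN pg_catalog.pg_attribute a ON (src.tbloid = a.attrelid)
        LEFT JOIN pg_catalog.pg_type t ON (a.atttypid = t.oid);

That keeps the filtering on the server side, works on a standby, and avoids
building a giant IN() list client-side.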

Greetings,

Andres Freund

