Re: Cannot pg_dump_all anymore...

2025-03-19 Thread Greg Sabino Mullane
On Wed, Mar 19, 2025 at 10:02 AM E-BLOKOS  wrote:

> is it possible a crash happened with a VACUUM and a machine reboot in same
> time?
>

More likely to be a problem with pg_repack.  Please tell us the exact
versions of pg_repack and Postgres in use here.

Cheers,
Greg

--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support


Re: size of attributes table is too big

2025-03-19 Thread Álvaro Herrera
Hello

On 2025-Mar-19, Siraj G wrote:

> I have a PG (v16) instance which is occupying around 1TB of storage. Out of
> this, around 350GB is occupied by the table pg_catalog.pg_attribute.
> Why is the catalog table's size so big?

Heavy use of temp tables is a known cause of this.
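For instance, every temp table adds rows there that turn into dead rows once
the table goes away. A minimal sketch (run in any scratch database; the table
name is just an example):

CREATE TEMP TABLE scratch (a int, b text);

SELECT count(*)
FROM pg_attribute
WHERE attrelid = 'scratch'::regclass;  -- 8 on recent versions: 2 user + 6 system columns

DROP TABLE scratch;  -- those rows are now dead until (auto)vacuumed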

> I think this table must have tons of dead tuples. Please suggest to me if
> we can purge any data/shrink the size of this table.

Yeah, I'd also bet that there are tons of dead tuples, or just unused
free space.  To purge it you would use VACUUM FULL, though that would
need to lock all accesses to the table.

Does your instance run with autovacuum disabled perchance?
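Both points can be checked quickly, and the heavy remedy looks like this (a
rough sketch using only the standard statistics views; VACUUM FULL takes an
exclusive lock, so schedule it accordingly):

SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_sys_tables
WHERE relname = 'pg_attribute';

SHOW autovacuum;

-- Rewrites the catalog and its indexes, but blocks all access to
-- pg_attribute while it runs:
VACUUM FULL pg_catalog.pg_attribute;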

-- 
Álvaro Herrera PostgreSQL Developer  —  https://www.EnterpriseDB.com/
Bob [Floyd] used to say that he was planning to get a Ph.D. by the "green
stamp method," namely by saving envelopes addressed to him as 'Dr. Floyd'.
After collecting 500 such letters, he mused, a university somewhere in
Arizona would probably grant him a degree.  (Don Knuth)




Re: size of attributes table is too big

2025-03-19 Thread Adrian Klaver

On 3/19/25 10:06, Siraj G wrote:

> Hello!
>
> I have a PG (v16) instance which is occupying around 1TB of storage. Out


Exact version of Postgres 16, include the x in 16.x.


> of this, around 350GB is occupied by the table pg_catalog.pg_attribute.
> Why is the catalog table's size so big?
>
> Here are the sizes:


How did you measure the sizes?

pg_attribute maintains information about table columns, how many table 
columns do you have?


Are you creating/dropping tables on a regular basis?

Is autovacuum running properly?

Have you run VACUUM manually on pg_attribute?
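A couple of queries would answer the column-count and size questions (a
sketch; the first one scans all of pg_attribute, so it can be slow on a
bloated catalog):

SELECT attrelid::regclass AS rel, count(*) AS columns
FROM pg_attribute
GROUP BY attrelid
ORDER BY count(*) DESC
LIMIT 10;

SELECT reltuples::bigint AS estimated_rows,
       pg_size_pretty(pg_relation_size(oid)) AS heap_size
FROM pg_class
WHERE oid = 'pg_catalog.pg_attribute'::regclass;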



> pg_attribute                       338 GB
> pg_attribute_relid_attnam_index    117 GB
> pg_attribute_relid_attnum_index     69 GB
>
> I think this table must have tons of dead tuples. Please suggest to me
> if we can purge any data/shrink the size of this table.


> Regards
> Siraj


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: size of attributes table is too big

2025-03-19 Thread Pavel Stehule
On Wed, Mar 19, 2025 at 18:14, Álvaro Herrera wrote:

> Hello
>
> On 2025-Mar-19, Siraj G wrote:
>
> > I have a PG (v16) instance which is occupying around 1TB of storage. Out
> of
> > this, around 350GB is occupied by the table pg_catalog.pg_attribute.
> > Why is the catalog table's size so big?
>
> Heavy use of temp tables is a known cause of this.
>
> > I think this table must have tons of dead tuples. Please suggest to me if
> > we can purge any data/shrink the size of this table.
>
> Yeah, I'd also bet that there are tons of dead tuples, or just unused
> free space.  To purge it you would use VACUUM FULL, though that would
> need to lock all accesses to the table.
>
> Does your instance run with autovacuum disabled perchance?
>

or long unclosed transactions, maybe forgotten replication slots, ...

It is very strange that it is still usable at this size. Today's hardware is
unbelievably strong.
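Something along these lines would show whether any of those are in play (a
sketch using the standard activity, prepared-transaction and replication-slot
views):

SELECT pid, state, xact_start, backend_xmin, left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;

SELECT gid, prepared FROM pg_prepared_xacts ORDER BY prepared;

SELECT slot_name, slot_type, active, restart_lsn, xmin, catalog_xmin
FROM pg_replication_slots;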

Regards

Pavel


>
> --
> Álvaro Herrera PostgreSQL Developer  —
> https://www.EnterpriseDB.com/
> Bob [Floyd] used to say that he was planning to get a Ph.D. by the "green
> stamp method," namely by saving envelopes addressed to him as 'Dr. Floyd'.
> After collecting 500 such letters, he mused, a university somewhere in
> Arizona would probably grant him a degree.  (Don Knuth)
>
>
>


Re: How to recover correctly master and replica using backup made by pg_basebackup?

2025-03-19 Thread Evgeniy Ratkov
On 07/11/2024 22:19, Evgeniy Ratkov wrote:

Hello.

I am trying to use pg_basebackup to make a backup and recover the master and
the replica from it. I recover the master with action "promote". Next, I
recover the replica from the same backup with action "shutdown". After that,
I start the replica with a configured connection and replication slot to the
master. I get an error, because the replica reads and applies all WAL of an
irrelevant timeline, which contains records after the restore point.

I wrote a Perl script for pg_basebackup's TAP tests. It is attached as file
"050_check_replication.pl". It works on PostgreSQL 17, commit
7d85d87f4d5c35fd5b2d38adaef63dfbfa542ccc, after applying a small patch which
adds the possibility to start a node with the shutdown option. The patch is
attached as file "add_node_start_with_shutdown.diff".

The script generates a situation where the timeline changes several times. A
small schema may explain it:

Timeline 1:
   Create the first backup
   Create restore point 'rp_repl_bug'
Timeline 1 -> 2:
   Recovery with promote using the first backup and 'rp_repl_bug'
   Create the second backup
   Create restore point 'rp_repl_bug2'
Timeline 2 -> 3:
   Recovery with promote using the second backup and 'rp_repl_bug2'
Timeline 1 -> 4:
   Recovery with promote using the first backup and 'rp_repl_bug'

After that I get a "0004.history" file, which looks strange:

1    0/390    at restore point "rp_repl_bug"
2    0/590    at restore point "rp_repl_bug2"
3    0/390    at restore point "rp_repl_bug"

Next, I try to restore the replica from the first backup. The replica did not
find WAL for timeline 4 and downloaded and applied WAL for timeline 2:

cp: cannot stat '${ARCHIVE_PATH}/00040003': No such file or directory
LOG:  redo starts at 0/228
cp: cannot stat '${ARCHIVE_PATH}/00040003': No such file or directory
'${ARCHIVE_PATH}/00020003' -> '${REPLICA_PGDATA}/pg_wal/RECOVERYXLOG'
LOG:  restored log file "00020003" from archive
LOG:  completed backup recovery with redo LSN 0/228 and end LSN 0/2000120
LOG:  consistent recovery state reached at 0/2000120
LOG:  recovery stopping at restore point "rp_repl_bug", time 2024-11-05 22:00:16.037126+03

Then, at the next start of the replica (with a configured connection to the
master and a replication slot), the replica cannot start receiving WAL from
the master, because it applied all WAL from timeline 2, including everything
after the LSN at which it should have switched timelines:

LOG:  entering standby mode
LOG:  redo starts at 0/228
LOG:  consistent recovery state reached at 0/390
LOG:  database system is ready to accept read-only connections
LOG:  started streaming WAL from primary at 0/400 on timeline 4
FATAL:  could not receive data from WAL stream: ERROR:  requested starting point 0/400 is ahead of the WAL flush position of this server 0/3002000

If I restart the replica, it will not start:

LOG:  database system was shut down in recovery at 2024-11-05 22:00:18 MSK
LOG:  entering standby mode
FATAL:  requested timeline 4 is not a child of this server's history
DETAIL:  Latest checkpoint is at 0/390 on timeline 2, but in the history of the requested timeline, the server forked off from that timeline at 0/590.
LOG:  startup process (PID 2384880) exited with exit code 1

Questions:
1) Why does the replica not switch to the next timeline when it reaches an LSN
   in the WAL that is listed in the history file? And why does it apply all of
   the WAL?
2) Why does 0004.history contain information about switching from timelines 2
   and 3, even though timeline 4 was switched from the first timeline?

Thank you in advance.

There is Heikki Linnakangas's patch
0003-Don-t-read-past-current-TLI-during-archive-recovery.patch from
https://www.postgresql.org/message-id/fe7af87d-b9bf-481a-902c-089c49aa911d%40iki.fi.
It fixes my problem with recovering the backup on the standby.

I refactored the test, which may show the problem. The test passes with
Heikki Linnakangas's patch. Without the patch it fails with the error:
"FATAL:  could not receive data from WAL stream: ERROR:  requested starting
point 0/400 is ahead of the WAL flush position of this server 0/3002000".

The test is attached as "050_check_recovery_backup.pl". I added the test to
the "src/test/recovery/t" directory.

Thank you.

--
Evgeniy Ratkov

050_check_recovery_backup.pl
Description: Binary data


Re: size of attributes table is too big

2025-03-19 Thread Siraj G
Hello Pavel

The SQL instance is a target of Google DMS and it does have a physical
replica.

A couple of weeks back we did have a performance issue and vacuum was run
at that time to fix the problem. Very soon we may run into the same problem
I presume.

Regards
Siraj

On Wed, Mar 19, 2025 at 10:47 PM Pavel Stehule 
wrote:

>
>
> On Wed, Mar 19, 2025 at 18:14, Álvaro Herrera wrote:
>
>> Hello
>>
>> On 2025-Mar-19, Siraj G wrote:
>>
>> > I have a PG (v16) instance which is occupying around 1TB of storage.
>> Out of
>> > this, around 350GB is occupied by the table pg_catalog.pg_attribute.
>> > Why is the catalog table's size so big?
>>
>> Heavy use of temp tables is a known cause of this.
>>
>> > I think this table must have tons of dead tuples. Please suggest to me
>> if
>> > we can purge any data/shrink the size of this table.
>>
>> Yeah, I'd also bet that there are tons of dead tuples, or just unused
>> free space.  To purge it you would use VACUUM FULL, though that would
>> need to lock all accesses to the table.
>>
>> Does your instance run with autovacuum disabled perchance?
>>
>
> or long unclosed transactions, maybe forgotten replication slots, ...
>
> It is very strange so with this size it is still usable. Today hardware is
> unbelievable strong
>
> Regards
>
> Pavel
>
>
>>
>> --
>> Álvaro Herrera PostgreSQL Developer  —
>> https://www.EnterpriseDB.com/
>> Bob [Floyd] used to say that he was planning to get a Ph.D. by the "green
>> stamp method," namely by saving envelopes addressed to him as 'Dr. Floyd'.
>> After collecting 500 such letters, he mused, a university somewhere in
>> Arizona would probably grant him a degree.  (Don Knuth)
>>
>>
>>


Re: Restoring only a subset of schemas

2025-03-19 Thread Sylvain Cuaz

On 17/03/2025 at 16:29, Tom Lane wrote:

> Sylvain Cuaz  writes:
>
>>      Now if I want to restore from a full dump of this DB, but with only
>> one "cXXX" and the "Common" schema :
>> - if I pass --create --schema=Common, then the CREATE SCHEMA is missing,
>> i.e. it only emits data inside "Common" and the restore fails.
>> - if I could pass --create --exclude-schema='c*' (fictional notation as
>> patterns are only recognized by pg_dump), then all schemas would be
>> created, with no data inside except for "Common". Creating all schemas is
>> a waste of time, but more importantly would make restoring other schemas
>> more difficult (e.g. rows should be inserted before creating foreign keys).
>
> In general, the solution for edge-case restore selection needs is to
> make a list of the dump's contents with "pg_restore -l", edit out what
> you don't want using any method you like, then use the edited list with
> "pg_restore -L".


Hi,

    I am aware of that feature, but it forces me to know every type of entry that pertains to a
schema or database (e.g. DEFAULT ACL, ACL, COMMENT, DATABASE PROPERTIES, etc.), and what about new
ones that will be added in the future?


Further, I don't see how it's an edge case: at the core I just want to restore some, but not all, of
the schemas. This is possible with pg_dump, see my response to Adrian Klaver.



> While I'd be in favor of improving pg_restore to accept wild-card
> patterns,

That would definitely be appreciated.

> I'm very hesitant to start inventing new kinds of selection
> switches for it.  The interactions between such switches would be a
> mess.


Which interactions? It seems to me that the name of the schema should be used as the namespace to 
check in _tocEntryRequired() in pg_backup_archiver.c, and then the dependent entries (e.g. ACL, 
COMMENT) would be handled around line 3050. I've attached a patch with some pseudo-code. In fact, 
were it not for compatibility, I'd argue that my proposed options should be the default, at least 
with --create, so as to neither output invalid SQL (for -n) nor extra unwanted ones (for -N) and to 
behave like pg_dump.


Cheers,

Sylvain

--- pg_backup_archiver.c	2025-03-18 19:43:14.297545537 +0100
+++ pg_backup_archiver_create-schema.c	2025-03-18 21:11:25.681586139 +0100
@@ -3069,17 +3069,20 @@
 		/* Apply selective-restore rules for standalone TOC entries. */
 		if (ropt->schemaNames.head != NULL)
 		{
+			const char *ns = (ropt->include_create_schema && strcmp(te->desc, "SCHEMA") == 0) ? te->tag : te->namespace;
 			/* If no namespace is specified, it means all. */
-			if (!te->namespace)
+			if (!ns)
 return 0;
-			if (!simple_string_list_member(&ropt->schemaNames, te->namespace))
+			if (!simple_string_list_member(&ropt->schemaNames, ns))
 return 0;
 		}
 
-		if (ropt->schemaExcludeNames.head != NULL &&
-			te->namespace &&
-			simple_string_list_member(&ropt->schemaExcludeNames, te->namespace))
-			return 0;
+		if (ropt->schemaExcludeNames.head != NULL)
+		{
+			const char *ns = (ropt->exclude_create_schema && strcmp(te->desc, "SCHEMA") == 0) ? te->tag : te->namespace;
+			if (ns && simple_string_list_member(&ropt->schemaExcludeNames, ns))
+				return 0;
+		}
 
 		if (ropt->selTypes)
 		{


Re: pgvector as standard PostgreSQL feature?

2025-03-19 Thread Sebastien Flaesch

Got it, makes total sense.

So pgvector etc will probably remain an extension for a while.

Thanks for the note about BIT type.
I have missed that it's a standard built-in type.

Seb

From: Christophe Pettus 
Sent: Wednesday, March 19, 2025 9:19 AM
To: Sebastien Flaesch 
Cc: pgsql-gene...@postgresql.org 
Subject: Re: pgvector as standard PostgreSQL feature?


> On Mar 19, 2025, at 07:47, Sebastien Flaesch  
> wrote:
>
> Is there a plan to get pgvector's types (vector, halfvec, sparsevec, bit) 
> implemented as native built-in data types like json/jsonb ?

(I'm speaking just for myself here.)  I would not base any plans on this 
functionality being available in the PostgreSQL core in the near future (and by 
"near future," I mean the next five years).

1. You list three different extensions with overlapping functionality, and 
that's a good sign that there isn't consensus on what the features that would 
be offered in core should be.

2. Adding a type to the core distribution (or even to contrib/) creates a 
maintenance burden on the core developers, and that's not something assumed 
lightly.  Once a type is in core, it (almost) never can be removed, and the 
more specialized the type and detailed the implementation, the greater the risk 
that the developers who know and care about it won't be available in the 
future.  Search the archives for a discussion of the "money" type for what 
happens when a type added to core starts becoming ill-supported... and "money" 
isn't anywhere near as complex as vector functionality.

3. PostgreSQL is designed to have a rich ecosystem of extensions.  The ability 
to add this kind of functionality in an extension is exactly what distinguishes 
PostgreSQL from many other RDBMS systems.  There's no burning need to add 
functionality like this to core.

It is true that hosted environments take time to adopt new extensions (although 
AWS RDS has supported pgvector for nearly two years now), but that's not in 
itself a reason to move things into core.

> Side note: I have some doubts about these type names, especially "bit" ... 
> why not "bitvec"?

BIT and BIT VARYING are the SQL standard names for these types.
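A small illustration of the built-in types, nothing pgvector-specific (just a
sketch):

SELECT B'10110'::bit(5)          AS fixed_width,
       B'1011'::bit varying(16)  AS variable_width,
       length(B'10110')          AS bits;   -- 5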





Re: Restoring only a subset of schemas

2025-03-19 Thread Sylvain Cuaz

On 17/03/2025 at 16:21, Adrian Klaver wrote:

> On 3/17/25 07:57, Sylvain Cuaz wrote:
>> Hi all,
>>
>>  I have a DB with one schema named "Common" holding data referenced by
>> other schemas. All other schemas have the same structure (tables and
>> fields) and are named "cXXX" where XXX is just an int. Thus the only
>> cross-schema foreign keys are in "cXXX" pointing to "Common", and each
>> "cXXX" is completely independent of other "cXXX" schemas.
>>  Now if I want to restore from a full dump of this DB, but with only one
>> "cXXX" and the "Common" schema :
>> - if I pass --create --schema=Common, then the CREATE SCHEMA is missing,
>> i.e. it only emits data inside "Common" and the restore fails.
>
> I am not seeing that.
>
> For:
>
> pg_dump -d test -U postgres -s --create --schema=other_sch --schema=public
>
> What is the complete command you are using for the pg_dump?


Hi,

    As I said I'm restoring, not dumping. I make daily full backups and sometimes need to restore a 
specific day, but the full database is quite big and I would like to only restore one or two schemas.


As you said, if one passes --create --schema to pg_dump, then valid SQL is produced with CREATE 
DATABASE, CREATE SCHEMA, CREATE TABLE and all objects inside the schema.


But if one passes --create --schema to pg_restore, then invalid SQL is produced, because it 
contains CREATE DATABASE and CREATE TABLE but not the CREATE SCHEMA needed for the tables. Is there 
any reason for that discrepancy between dump & restore, and for outputting invalid SQL?


My proposed --include-create-schema would just add the CREATE SCHEMA so that pg_restore behaves the 
same as pg_dump and produces valid SQL. But ideally this option shouldn't even be needed, because 
pg_restore would simply emit CREATE SCHEMA like pg_dump does.




> What Postgres version(s) are you using?


A lot :-) But for this problem I'm using 13 & 15.


Cheers,

Sylvain.





Re: size of attributes table is too big

2025-03-19 Thread Siraj G
Hi Adrian

Used this query to find the sizes:

select  relname AS object_name,relkind AS
object_type,pg_size_pretty(pg_relation_size(oid)) AS object_size
FROM pg_class WHERE relnamespace = (SELECT oid FROM pg_namespace WHERE
nspname = 'pg_catalog')
ORDER BY pg_relation_size(oid) DESC;

We have close to 2000 tables, not sure about the columns. There should be
around 120 - 150 tables that are active, getting the data from DMS.
We do not create/drop tables in our instance unless required.

This is a cloud-managed SQL instance and we do not have any custom settings
for vacuum.
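The settings in effect can be read directly from the instance (a quick
sketch; on Cloud SQL any changes go through database flags rather than
postgresql.conf):

SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'autovacuum%'
ORDER BY name;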

On Wed, Mar 19, 2025 at 10:47 PM Adrian Klaver 
wrote:

> On 3/19/25 10:06, Siraj G wrote:
> > Hello!
> >
> > I have a PG (v16) instance which is occupying around 1TB of storage. Out
>
> Exact version of Postgres 16, include the x in 16.x.
>
> > of this, around 350GB is occupied by the table pg_catalog.pg_attribute.
> > Why is the catalog table's size so big?
> >
> > Here are the sizes:
>
> How did you measure the sizes?
>
> pg_attribute maintains information about table columns, how many table
> columns do you have?
>
> Are you creating/dropping tables on a regular basis?
>
> Is autovacuum running properly?
>
> Have you run VACUUM manually on pg_attribute?
>
> >
> > pg_attribute
> >
> > 338 GB
> > pg_attribute_relid_attnam_index
> >
> > 117 GB
> > pg_attribute_relid_attnum_index
> >
> > 69 GB
> >
> >
> > I think this table must have tons of dead tuples. Please suggest to me
> > if we can purge any data/shrink the size of this table.
> >
> > REgards
> > Siraj
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: After upgrading libpq, the same function(PQftype) call returns a different OID

2025-03-19 Thread Tom Lane
Adrian Klaver  writes:
> On 3/18/25 23:41, Sebastien Flaesch wrote:
>> I was not expecting this file to be in a "server" folder, when it's to 
>> be used for client apps.

> Not surprising. As I understand it this is the code used to build the 
> type entries in the system catalog pg_type.

More the other way around: pg_type_d.h is built from the authoritative
source files pg_type.h and pg_type.dat, according to the process
described here:

https://www.postgresql.org/docs/devel/bki.html

>> And still, I do not trust the content.

Why not?  If it's the "Backwards compatibility" comment that's
bothering you, a look at pg_type.h will show you that that's
only intended to apply to the CASHOID and LSNOID symbols.
Everything below that in pg_type_d.h is machine-generated.

regards, tom lane




Re: size of attributes table is too big

2025-03-19 Thread Adrian Klaver




On 3/19/25 10:36 AM, Siraj G wrote:

> Hi Adrian
>
> Used this query to find the sizes:
>
> select relname AS object_name, relkind AS
> object_type, pg_size_pretty(pg_relation_size(oid)) AS object_size
> FROM pg_class WHERE relnamespace = (SELECT oid FROM pg_namespace WHERE
> nspname = 'pg_catalog')
> ORDER BY pg_relation_size(oid) DESC;
>
> We have close to 2000 tables, not sure about the columns. There should
> be around 120 - 150 tables that are active, getting the data from DMS.


This would have been a good thing to have included in the original post.


> We do not create/drop tables in our instance unless required.
>
> This is a cloud managed SQL and we do not have any custom setting on the
> vacuum part.


To be clear, is the issue in the Google Cloud SQL instance?

Or are you seeing the problem on the Postgres instance the data is 
coming from?





On Wed, Mar 19, 2025 at 10:47 PM Adrian Klaver
 wrote:


On 3/19/25 10:06, Siraj G wrote:
 > Hello!
 >
 > I have a PG (v16) instance which is occupying around 1TB of
storage. Out

Exact version of Postgres 16, include the x in 16.x.

 > of this, around 350GB is occupied by the table
pg_catalog.pg_attribute.
 > Why is the catalog table's size so big?
 >
 > Here are the sizes:

How did you measure the sizes?

pg_attribute maintains information about table columns, how many table
columns do you have?

Are you creating/dropping tables on a regular basis?

Is autovacuum running properly?

Have you run VACUUM manually on pg_attribute?

 >
 > pg_attribute
 >
 > 338 GB
 > pg_attribute_relid_attnam_index
 >
 > 117 GB
 > pg_attribute_relid_attnum_index
 >
 > 69 GB
 >
 >
 > I think this table must have tons of dead tuples. Please suggest
to me
 > if we can purge any data/shrink the size of this table.
 >
 > REgards
 > Siraj

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Cannot pg_dump_all anymore...

2025-03-19 Thread E-BLOKOS


On 3/18/2025 5:49 AM, Greg Sabino Mullane wrote:
First figure out which database is having that issue, by using pg_dump 
--schema-only on each database in turn. Then run this SQL on the 
database giving the error to see if the type exists, or what is nearby:


select oid, typname, typtype, typnamespace::regnamespace from pg_type 
where oid <= 794978 order by 1 desc limit 3;


Also let us know the version of pg_dump and the version of Postgres 
being dumped.



Cheers,
Greg

--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support


ok I fixed it with:

SELECT * FROM pg_depend WHERE objid IN (794964, 794968);
DELETE FROM pg_depend WHERE objid IN (794964, 794968);

systemctl restart postgresql

Is it possible a crash happened with a VACUUM and a machine reboot at the 
same time?


--
E-BLOKOS


Re: size of attributes table is too big

2025-03-19 Thread Ron Johnson
On Wed, Mar 19, 2025 at 1:06 PM Siraj G  wrote:

> Hello!
>
> I have a PG (v16) instance which is occupying around 1TB of storage. Out
> of this, around 350GB is occupied by the table pg_catalog.pg_attribute.
> Why is the catalog table's size so big?
>
> Here are the sizes:
>
> pg_attribute                       338 GB
> pg_attribute_relid_attnam_index    117 GB
> pg_attribute_relid_attnum_index     69 GB
>
> I think this table must have tons of dead tuples. Please suggest to me if
> we can purge any data/shrink the size of this table.
>
>
Run pgstattuple and pgstatindex on them.  They'll tell you how much bloat
you have.
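For example (a sketch; pgstattuple is a contrib extension that has to be
enabled first, and it reads the whole relation, so expect it to take a while
on a table this size):

CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT * FROM pgstattuple('pg_catalog.pg_attribute');
SELECT * FROM pgstatindex('pg_catalog.pg_attribute_relid_attnum_index');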

And tune your autovacuum parameters to be more aggressive.  These, for
example, are my settings:
autovacuum_analyze_scale_factor = 0.015
autovacuum_vacuum_scale_factor = 0.015
autovacuum_vacuum_insert_scale_factor = 0.015
autovacuum_vacuum_insert_threshold = 250
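On a self-managed instance those could be applied with ALTER SYSTEM plus a
reload (a sketch; on a managed service the equivalents go through its flag
mechanism):

ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.015;
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.015;
ALTER SYSTEM SET autovacuum_vacuum_insert_scale_factor = 0.015;
ALTER SYSTEM SET autovacuum_vacuum_insert_threshold = 250;
SELECT pg_reload_conf();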

-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!


Re: Cannot pg_dump_all anymore...

2025-03-19 Thread E-BLOKOS

Hi,

On 3/19/2025 7:08 AM, Greg Sabino Mullane wrote:

On Wed, Mar 19, 2025 at 10:02 AM E-BLOKOS  wrote:

is it possible a crash happened with a VACUUM and a machine reboot
in same time?


More likely to be a problem with pg_repack.  Please tell us the exact 
versions of pg_repack and Postgres in use here.




PG 17.4

pg_repack last git

thanks

David

--
E-BLOKOS


Re: Restoring only a subset of schemas

2025-03-19 Thread Tom Lane
Sylvain Cuaz  writes:
> Further, I don't see how it's an edge-case, at the core I just want to 
> restore some but not all the 
> schemas. This is possible for pg_dump, see my response to Adrian Klaver.

You have a very good point that it's annoying that pg_restore's
--schema switch doesn't act like pg_dump's --schema switch.
I could support changing that, but only if we also did something
about the fact that pg_restore's switch doesn't handle patterns.

The pattern aspect is kind of a mess, because pg_dump implements
that through a server-side regex, which is a facility pg_restore
doesn't have access to.  Maybe it'd be good enough to implement
the "*" and "?" wildcards and stop there, but I'm not sure.

regards, tom lane