I am experiencing an odd issue. I first noticed it on 9.3, but I can
reproduce it on 9.6.

Basically, I have a database with a lot of schemas, but not that much data.
Each schema is maybe 2-4 GB in size, and often much less than that.

The database has ~300-500 schemas, each with ~100-300 tables, i.e. a few
hundred thousand tables in total. The entire cluster has 2 or 3 such databases.

As the number of tables grows, the time it takes to vacuum an _empty_ table
grows as well. The table is in the public schema, and it is the only table
there.

I wrote a simple test script to confirm that these things are related.
I set up a blank database, create a one-column table in public, and
restore one schema. Then I vacuum that table three times, measure the
execution times, and repeat the process, adding another schema to the
database each iteration.
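For anyone who wants to reproduce this, the loop above can be sketched roughly as follows. This is a minimal sketch, assuming a local cluster reachable via `psql` and a test table `public.t`; the database name and the restore step are illustrative, not my exact script:

```python
#!/usr/bin/env python3
"""Rough sketch of the benchmark loop: time `VACUUM VERBOSE` on an empty
table as the number of tables in the database grows. Assumes `psql` is on
PATH and can connect without a password; dbname/table names are examples."""
import statistics
import subprocess
import time


def timed_psql(dbname, sql):
    """Run one statement via psql and return wall-clock time in milliseconds,
    i.e. the full time for psql to issue the command and exit."""
    start = time.monotonic()
    subprocess.run(["psql", "-qX", "-d", dbname, "-c", sql], check=True)
    return (time.monotonic() - start) * 1000.0


def summarize(samples_ms):
    """Reduce repeated timings to median and mean, for plotting."""
    return {
        "median_ms": statistics.median(samples_ms),
        "mean_ms": statistics.mean(samples_ms),
    }


if __name__ == "__main__":
    dbname = "vacuumtest"  # illustrative scratch database
    # Per iteration: restore one more schema (step not shown), then vacuum
    # the lone table in public three times and record the timings.
    samples = [timed_psql(dbname, "VACUUM VERBOSE public.t") for _ in range(3)]
    print(summarize(samples))
```

The key point is that the measurement includes psql startup and exit, which matches how I measured the ~100 ms / ~1.5 s numbers below.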

At ~200 tables it takes ~100 ms for psql to issue a VACUUM VERBOSE and exit.
At 83K tables the time is already ~1.5 seconds. The slowdown appears to be
directly proportional to the number of tables, growing linearly and
eventually crossing 3 seconds, for a blank table with no data.

I suspect this severely impacts a full vacuumdb run, but I have not
verified that yet.

This is independent of the amount of data restored; I see the same
behavior with schema-only restores as well as with schema+data restores.

If anyone is interested, I can upload the schema data plus my benchmarking
script with the collected Whisper data from my test run (I've been plotting
it in Grafana via Carbon).

Is this a known issue? Can I do anything to improve performance here?
