Jeremy Schneider wrote on 6/27/23 11:47 AM:
Thanks Ben. It's not a concern, but I'm trying to better understand how
common this might be. And I think sharing general statistics about how
people use PostgreSQL is a great help to the developers who build and
maintain it.
One really nice thing about PostgreSQL is that with two quick snapshots
of pg_stat_all_tables you can easily see this sort of info.
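For example, something along these lines (a rough sketch; "snap1"/"snap2"
are just throwaway temp table names, and the 10-second sleep matches the
window below):

-- snapshot the per-table write counters
CREATE TEMP TABLE snap1 AS
  SELECT relid, n_tup_ins, n_tup_upd, n_tup_del FROM pg_stat_all_tables;

SELECT pg_sleep(10);

CREATE TEMP TABLE snap2 AS
  SELECT relid, n_tup_ins, n_tup_upd, n_tup_del FROM pg_stat_all_tables;

-- count tables whose write counters moved during the interval
SELECT count(*) AS tables_written
FROM snap1 s1
JOIN snap2 s2 USING (relid)
WHERE (s2.n_tup_ins, s2.n_tup_upd, s2.n_tup_del)
      IS DISTINCT FROM (s1.n_tup_ins, s1.n_tup_upd, s1.n_tup_del);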
If you have a database where more than 100 tables are updated within a
10-second period - this seems really uncommon to me - I'm very curious
about the workload.
Well, in our case we have a SaaS model where a moderately complicated
schema is replicated hundreds of times per db. It doesn't take much load
to end up scattering writes across many tables (not to mention their
indices). We do have table partitioning too, but it's a relatively small
part of our schema, and the partitioning is done by date, so we really
only have one hot partition at a time (roughly the shape sketched below).
FWIW, most of our dbs have 32 cores.
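Concretely, the date partitioning is roughly this shape (hypothetical
table/column names, just to illustrate why only one partition is hot at
a time):

CREATE TABLE events (
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- one partition per day; current writes all land in "today's" partition
CREATE TABLE events_2023_06_27 PARTITION OF events
    FOR VALUES FROM ('2023-06-27') TO ('2023-06-28');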
All that aside, as others have said, there are many reasonable ways to
reach the threshold you have set.