On 15/01/2010 00:30, Tzvi R wrote:
> [...]
> A quick overview of our database server:
> * Four databases.
> * Each database has about 20 schemas.
> 
> 
> The largest database contains:
> * select count(*) from pg_class where relkind = 'v'
>    101
> * select count(*) from pg_class where relkind = 'r'
>    11911 (about 500 tables in each schema, I know, it's a lot - but I'd bet 
> it's not uncommon)
> * About 10 sequences.
> * About 150 functions.
> 
> 
> * select count(*) from pg_class
>    36444
> 
> 
> All these tables are large, and some have TOASTed rows (you can see 
> this in pg_type).
> 
> 
> Those queries are rather fast; the problem is that operating over a 
> (relatively) slow network exposes us to the latency of shipping that 
> much traffic.
> I was wondering whether access to that table could be deferred, so 
> queries would join against it on demand instead of prefetching it all. 
> Or perhaps cache it locally on disk and fetch only rows with higher OID 
> values (I'm guessing here, possibly incorrectly, that rows are only 
> added, never updated); that would allow one full fetch followed by 
> incremental updates afterwards.
> 
> 

That would require quite a lot of work. I know there are many things we
could do to behave better with a database containing a large number of
objects, but I'm not sure we'll have time to address this for the next
release.
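For what it's worth, a minimal sketch of the incremental-fetch idea from the quoted message might look like the query below. The `:last_cached_oid` parameter is hypothetical (whatever the client stored from its previous fetch), and the underlying assumption is shaky: pg_class rows are also updated in place (VACUUM/ANALYZE rewrite relpages and reltuples, for instance) and dropped objects simply disappear, so an OID high-water mark only catches newly created objects and a periodic full refresh would still be needed.

```sql
-- Hypothetical incremental refresh of a locally cached copy of pg_class.
-- :last_cached_oid is the highest OID already present in the local cache.
-- Caveat: this only picks up newly created objects; in-place updates and
-- drops are missed, so it cannot fully replace a complete fetch.
SELECT oid, relname, relnamespace, relkind
FROM pg_class
WHERE oid > :last_cached_oid
ORDER BY oid;
```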


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com

-- 
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support
