>-----Original Message-----
>From: se...@rielau.com [mailto:se...@rielau.com]
>Sent: Wednesday, June 27, 2018 2:04 AM
>To: Ideriha, Takeshi/出利葉 健 <ideriha.take...@jp.fujitsu.com>; pgsql-hackers
><pgsql-hack...@postgresql.org>
>Subject: RE: Global shared meta cache
>
>Takeshi-san,
>
>
>>My customer created hundreds of thousands of partition tables and tried
>>to select data from hundreds of applications, which resulted in
>>enormous consumption of memory, because the total is # of backends multiplied by
>>the local cache size per backend (ex. 100 backends X 1GB = 100GB).
>>Relation caches are loaded on each backend local memory.
>My team and I have been working to make caches shared for the past two years,
>but we have chosen not to share the system and rel caches.
>Reason being that these caches play a big role in transactional DDL processing.
>When you do DDL, your backend can see all the changes since you update your own
>cache, but nobody else sees them until you commit.
>You will find that dealing with that will be the true complexity.

Hi Serge,

Thank you for sharing your experience.
I hadn't thought much about the DDL visibility problem.
Introducing version control like MVCC to the catcache might solve it, but that
seems like overkill to me.
It may be better to keep a local catcache for the in-progress transaction rather
than sharing everything.
(Other hackers have also pointed this out.)
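To make the idea a bit more concrete, here is a minimal sketch of the lookup
order I have in mind. All type and function names below are hypothetical
placeholders, not existing PostgreSQL APIs:

#include <stdint.h>

/* Hypothetical sketch: none of these names exist in PostgreSQL today. */
typedef struct CacheEntry CacheEntry;

extern CacheEntry *local_overlay_lookup(uint32_t hash);  /* this backend's uncommitted DDL */
extern CacheEntry *shared_cache_lookup(uint32_t hash);   /* committed state, visible to all */

static CacheEntry *
catcache_lookup(uint32_t hash)
{
    /*
     * Changes made by the current transaction's DDL live only in the
     * backend-local overlay until commit, so consult it first.
     */
    CacheEntry *entry = local_overlay_lookup(hash);

    if (entry != NULL)
        return entry;

    /*
     * Fall back to the shared cache, which reflects only committed
     * catalog state and is therefore safe for every backend to read.
     */
    return shared_cache_lookup(hash);
}

At commit the overlay entries would be published to the shared cache; at
rollback they would simply be discarded.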

>Have you tried to simply cap the size of these caches?
>That's a rather straight forward piece of work and will get you quite far.
>We run with a 20MB syscache and a 10MB relcache with 100k+ objects and hundreds
>of backends. A dumb LRU is plenty good for the purpose.
>

I haven't tried it yet, but I have read some relevant discussion:
 
https://www.postgresql.org/message-id/flat/20161219.201505.11562604.horiguchi.kyot...@lab.ntt.co.jp

I think the cap solution alleviates memory bloat in some cases, but cases with a
very large number of backends remain problematic.
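For reference, a size cap with dumb LRU eviction of the kind Serge describes
could look roughly like the sketch below. Everything here (the 20MB limit, the
entry layout, the function names) is just an assumption for illustration, not an
existing GUC or patch:

#include <stddef.h>

/* Sketch of a size-capped cache with simple LRU eviction.
 * All names here are hypothetical, not existing PostgreSQL code. */
typedef struct CacheEntry
{
    struct CacheEntry *lru_prev;
    struct CacheEntry *lru_next;
    size_t             size;          /* memory consumed by this entry */
} CacheEntry;

static CacheEntry *lru_head;          /* most recently used */
static CacheEntry *lru_tail;          /* least recently used */
static size_t      total_size;        /* current cache footprint */
static size_t      size_limit = 20 * 1024 * 1024;   /* e.g. a 20MB cap */

/*
 * Called after inserting or touching an entry: evict entries from the cold
 * end of the LRU list until the cache is back under the cap.
 */
static void
enforce_cache_limit(void)
{
    while (total_size > size_limit && lru_tail != NULL)
    {
        CacheEntry *victim = lru_tail;

        lru_tail = victim->lru_prev;
        if (lru_tail != NULL)
            lru_tail->lru_next = NULL;
        else
            lru_head = NULL;

        total_size -= victim->size;
        /* free_cache_entry(victim) would release the entry's memory here. */
    }
}

Even with such a cap per backend, though, the total across hundreds of backends
can still be large, which is why I am interested in sharing the caches.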

>That being said I would love to see these caches shared. :-)
Thank you!

Regards,
Takeshi
