I think we have hit a corner case in our storage. The relation has grown to
32 TB, so 'mdnblocks' returns an unexpected value; we will check it again.
Thanks a lot.
Hello
We know that PostgreSQL doesn't support a single relation larger than 32 TB,
limited by MaxBlockNumber. But if we just 'insert into' one relation until it
grows past 32 TB, we get the error message 'unexpected data beyond EOF in
block 0 of relation' in ReadBuffer_common. The block number 0 comes from mdnblocks
+1. This would be a nice improvement; even though lseek is usually fast, it is
a system call after all.
Buzhen--
From: Andy Fan
Date: 2021-05-31 13:46:22
To: PostgreSQL Hackers
Subject: Re: Regarding the necessity of RelationGetNumberOfBlocks f
Hi Thomas
I'd like to share a patch with you: it changes the replacement algorithm from
FIFO to a simple LRU.
Buzhen
0001-update-fifo-to-lru-to-sweep-a-valid-cache.patch
-- Original Message --
From: Thomas Munro
Sent: Fri Jan 8 00:56:17 2021
To: 陈佳昕(步真)
Cc: Amit Kapila, Konstantin Knizhnik, PostgreSQL Hackers
Subject: Re: Cache relation sizes?
On Wed, Dec 30, 2020 at 4:13 AM 陈佳昕(步真) wrote:
> I found some other problems which I want to share my change with you
't
remove the sr_pool. But smgrnblocks_fast just gets the sr from the sr_pool, so
I added some code, as above, to keep corner cases from getting an unexpected
result from smgrnblocks_fast. Is it necessary? I would also like your advice
on this.
Thanks a lot.
Buzhen
-- Original Message --
Hi Thomas:
I have been studying your patch these days and found a possible problem.
When we execute 'drop database', the smgr shared pool is not removed, because
'smgr_drop_sr' is never called. The function 'dropdb' in dbcommands.c removes
the buffers from the buffer pool and unlinks the real files via 'rmtree'. It d