Hi Thomas: I have been studying your patch these days and think there may be a problem. When 'drop database' is executed, the smgr shared pool entries are not removed, because 'smgr_drop_sr' is never called. The function 'dropdb' in dbcommands.c drops the buffers from the buffer pool and unlinks the underlying files with 'rmtree'; it does not call smgrdounlinkall, so the smgr shared cache entries are not dropped even though the table files have been removed. This can cause errors in smgr_alloc_sr -> smgropen, smgrimmedsync: the table file has already been removed, so smgropen and smgrimmedsync get unexpected results.

Buzhen
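To make the missing cleanup concrete, here is a minimal standalone sketch of the kind of eviction 'drop database' would need to perform on the shared size cache. It is plain C that compiles on its own; SMgrSharedRel, sr_pool and smgr_drop_sr_for_db are hypothetical stand-ins for illustration, not names from the patch:

```c
/*
 * Minimal standalone model of the cleanup that 'drop database' would need
 * for the shared relation-size cache.  All names here (SMgrSharedRel,
 * sr_pool, smgr_drop_sr_for_db) are hypothetical stand-ins, not the
 * patch's actual code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SR_POOL_SIZE 128

typedef struct SMgrSharedRel
{
    uint32_t    dbOid;          /* database the relation belongs to */
    uint32_t    relNumber;      /* relfilenode number */
    uint64_t    nblocks;        /* cached size in blocks */
    bool        valid;          /* is this slot in use? */
} SMgrSharedRel;

static SMgrSharedRel sr_pool[SR_POOL_SIZE];     /* stand-in for the shared pool */

/*
 * Invalidate every cached size entry belonging to the given database, so
 * that a later smgropen()/size lookup cannot find a stale entry whose
 * underlying file has already been unlinked by rmtree().
 */
static void
smgr_drop_sr_for_db(uint32_t dbOid)
{
    for (int i = 0; i < SR_POOL_SIZE; i++)
    {
        if (sr_pool[i].valid && sr_pool[i].dbOid == dbOid)
            sr_pool[i].valid = false;
    }
}

int
main(void)
{
    sr_pool[0] = (SMgrSharedRel) {.dbOid = 16384, .relNumber = 16385,
                                  .nblocks = 42, .valid = true};

    /* dropdb() would do this alongside its existing buffer-pool cleanup */
    smgr_drop_sr_for_db(16384);

    printf("entry valid after drop: %d\n", sr_pool[0].valid);
    return 0;
}
```

In the real code path this eviction would have to happen somewhere in dropdb(), alongside the existing buffer-pool cleanup, so that no backend can later look up a cached size for a file that rmtree() has already unlinked.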
------------------ Original Message ------------------
From: Thomas Munro <thomas.mu...@gmail.com>
Sent: Tue Dec 22 19:57:35 2020
To: Amit Kapila <amit.kapil...@gmail.com>
Cc: Konstantin Knizhnik <k.knizh...@postgrespro.ru>, PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>
Subject: Re: Cache relation sizes?

On Tue, Nov 17, 2020 at 10:48 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> Yeah, it is good to verify VACUUM stuff, but I have another question
> here. What about queries having functions that access the same
> relation (SELECT c1 FROM t1 WHERE c1 <= func(); assuming here the
> function accesses the relation t1)? Now, here I think because the
> relation 't1' is already opened, it might use the same value of blocks
> from the shared cache even though the snapshot for relation 't1' when
> accessed from func() might be different. Am I missing something, or is
> it dealt with in some way?

I think it should be covered by the theory about implicit memory barriers in snapshots, but to simplify things I have now removed the lock-free stuff from the main patch (0001), because it was a case of premature optimisation and it distracted from the main concept.

The main patch has 128-way partitioned LWLocks for the mapping table, and then per-relfilenode spinlocks for modifying the size. There are still concurrency considerations, which I think are probably handled by the dirty-update-wins algorithm you see in the patch. In short: due to extension and exclusive locks, size changes AKA dirty updates are serialised, but clean updates can run concurrently, so we just have to make sure that clean updates never clobber dirty updates -- do you think that is on the right path?
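As an illustration of the "dirty update wins" rule described above, here is a rough standalone model (plain C; a pthread mutex stands in for the patch's per-relfilenode spinlock, and SizeEntry/dirty_gen are invented for the example, not taken from the patch). Dirty updates, serialised by the extension/exclusive locks, always install their value; a clean update re-checks under the lock that no dirty update happened since it probed the size, and otherwise discards its stale value:

```c
/*
 * Standalone model of the "dirty update wins" rule.  A pthread mutex
 * stands in for the per-relfilenode spinlock; all names are illustrative,
 * not the patch's actual code.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef struct SizeEntry
{
    pthread_mutex_t lock;       /* stand-in for the per-relfilenode spinlock */
    uint64_t    nblocks;        /* cached relation size in blocks */
    uint64_t    dirty_gen;      /* bumped by every dirty update */
} SizeEntry;

/* A dirty update: caller holds the extension/exclusive relation lock. */
static void
dirty_update(SizeEntry *e, uint64_t new_nblocks)
{
    pthread_mutex_lock(&e->lock);
    e->nblocks = new_nblocks;
    e->dirty_gen++;             /* record that a real size change happened */
    pthread_mutex_unlock(&e->lock);
}

/*
 * A clean update: the caller probed the file size outside the lock (a slow
 * lseek), remembering the dirty generation it saw beforehand.  Install the
 * probed value only if no dirty update intervened.
 */
static void
clean_update(SizeEntry *e, uint64_t probed_nblocks, uint64_t gen_seen)
{
    pthread_mutex_lock(&e->lock);
    if (e->dirty_gen == gen_seen)
        e->nblocks = probed_nblocks;    /* safe: no dirty update to clobber */
    pthread_mutex_unlock(&e->lock);
}

int
main(void)
{
    SizeEntry   e = {PTHREAD_MUTEX_INITIALIZER, 100, 0};
    uint64_t    gen_seen = e.dirty_gen;     /* snapshot before the slow probe */

    dirty_update(&e, 101);                  /* concurrent extension grows the relation */
    clean_update(&e, 100, gen_seen);        /* stale probe is discarded */

    printf("nblocks = %llu\n", (unsigned long long) e.nblocks);    /* prints 101 */
    return 0;
}
```

A generation counter is only one way to detect an intervening dirty update, and the patch may track this differently; the invariant being modelled is just the one stated above: a concurrently probed, possibly stale size must never overwrite a size recorded by an actual extension or truncation.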