"Heikki Linnakangas" <[EMAIL PROTECTED]> writes: > Let's use a normal hash table instead, and use a lock to protect it. If we > only > update it every 10 pages or so, the overhead should be negligible. To further > reduce contention, we could modify ReadBuffer to let the caller know if the > read resulted in a physical read or not, and only update the entry when a page > is physically read in. That way all the synchronized scanners wouldn't be > updating the same value, just the one performing the I/O. And while we're at > it, let's use the full relfilenode instead of just the table oid in the hash.
It's probably fine to just do that. But if it turns out to be a performance bottleneck, we could still manage to avoid the lock except when actually inserting a new hash element: if the hash stores just an index into an array kept in shared memory, then updates to that array element need no lock at all (sketched below). It starts to be a fair amount of code once you think about how to reuse elements of the array, though, which is why I suggest looking at this only if we find down the road that the simple scheme is a bottleneck.

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com
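To make the array variant concrete, here is a rough sketch. All names are hypothetical, and slot allocation and reuse, the part that makes this "a fair amount of code", is deliberately stubbed out.

/*
 * Sketch of the lock-free-update variant: the lock protects only the
 * mapping from relfilenode to a slot number; the slot itself is
 * written with a plain store.
 */
typedef struct ss_slot
{
    BlockNumber location;           /* updated without holding any lock */
} ss_slot;

typedef struct ss_hash_entry
{
    RelFileNode relfilenode;        /* hash key */
    int         slot;               /* index into ScanSlots[] */
} ss_hash_entry;

static ss_slot *ScanSlots;          /* assumed array in shared memory */
static HTAB *ScanSlotIndex;         /* assumed shared hash table */

static int ss_allocate_slot(void);  /* hypothetical; reuse policy omitted */

static void
ss_report_location(RelFileNode rnode, BlockNumber blkno)
{
    ss_hash_entry *entry;
    bool        found;

    LWLockAcquire(SyncScanLock, LW_SHARED);
    entry = (ss_hash_entry *)
        hash_search(ScanSlotIndex, &rnode, HASH_FIND, &found);
    LWLockRelease(SyncScanLock);

    if (!found)
    {
        /* Take the lock exclusively only when inserting a new element. */
        LWLockAcquire(SyncScanLock, LW_EXCLUSIVE);
        entry = (ss_hash_entry *)
            hash_search(ScanSlotIndex, &rnode, HASH_ENTER, &found);
        if (!found)
            entry->slot = ss_allocate_slot();
        LWLockRelease(SyncScanLock);
    }

    /*
     * Unlocked store: a reader seeing a slightly stale location only
     * starts its scan a few pages off; correctness never depends on it.
     */
    ScanSlots[entry->slot].location = blkno;
}

The design point this illustrates: scan positions are pure hints, so a racy store to the slot is harmless, and the lock is needed only to keep the hash table's structure consistent while inserting.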