Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> The pending-fsync stuff in md.c is also expecting to be able to add
>> entries during a scan.
> No, mdsync starts the scan from scratch after calling AbsorbFsyncRequests.

That was last month ;-).  It doesn't restart any more.

> We could have two kinds of seq scans, with and without support for
> concurrent inserts.

Yeah, I considered that too, but it just seems too error-prone.  We
could maybe make it trustworthy by having hash_seq_search complain if it
noticed there had been any concurrent insertions --- but then you're
putting new overhead into hash_seq_search, which kind of defeats the
argument for it (and hash_seq_search is a bit of a bottleneck, so extra
cycles there matter).

> Hmm. Unlike lwlocks, hash tables can live in different memory contexts,
> so we can't just have a list of open scans similar to the held_lwlocks
> array.

I had first thought about adding a scan counter to the hashtable control
struct, but the prospect of hash tables being deallocated while the
central list still has references to them seems pretty scary --- we
could find ourselves clobbering some other data structure entirely when
we go to decrement the count.

What seems better now is to have an array or list of HTAB pointers, one
for each active scan (so the same hashtable might appear in the list
multiple times).  When we are considering whether to split, we have to
look through the list to see whether our table is listed.  The list is
unlikely to be long, so this shouldn't affect performance.

If a hash table is deallocated while we still think it has an active
scan, nothing very bad happens.  The absolute worst possible consequence
is that some new hash table gets allocated at exactly the same spot;
we'd inhibit splits on it, which still doesn't break correctness, though
it might kill performance.  In any case we can have checking code that
complains about leaked scan pointers at transaction end, so any such bug
shouldn't survive long.
For shared hash tables, this design only works for scans being done by
the same backend doing the insertion; but locking considerations would
probably require that no other backend insert while we scan anyway
(you'd need something much more complicated than shared/exclusive locks
to manage it otherwise).

			regards, tom lane