On Tue, Nov 19, 2013 at 12:33 AM, Kohei KaiGai <kai...@kaigai.gr.jp> wrote:
> * on-dsm-detach-v2.patch
> This reminded me of the hook registration/invocation mechanism in
> apache/httpd. It defines five levels for invocation order (REALLY_FIRST,
> FIRST, MIDDLE, LAST, REALLY_LAST), but these are just aliases for integer
> values; in other words, the hooks are simply invoked in the order of the
> priority value associated with each function pointer.
> That flexibility may make sense. We may want finer-grained control over
> the order in which resources are released in the future. For example,
> isn't it a problem to have only two levels if an item sitting in a queue
> needs to be released earlier than the queue itself?
> I'm not 100% certain about this suggestion, because on_shmem_exit is a
> hook that does not host very many callbacks, so an extension could of
> course implement its own ordering within the SHMEM_EXIT_EARLY or
> SHMEM_EXIT_LATE stage.

I don't really see much point in adding more flexibility here until we need
it, but I can imagine that we might someday need it, for reasons that are
not now obvious.
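Just so we're picturing the same thing, the priority-ordered registration
being suggested would look roughly like this (a sketch with made-up names,
not anything from the patch):

#include <stdlib.h>

typedef void (*cleanup_callback) (void *arg);

typedef struct cleanup_hook
{
    cleanup_callback func;
    void       *arg;
    int         priority;       /* lower values are invoked first */
    struct cleanup_hook *next;
} cleanup_hook;

static cleanup_hook *cleanup_hooks = NULL;

/*
 * Register a callback, keeping the list sorted by ascending priority.
 * (Error handling omitted for brevity.)
 */
static void
register_cleanup_hook(cleanup_callback func, void *arg, int priority)
{
    cleanup_hook *hook = malloc(sizeof(cleanup_hook));
    cleanup_hook **insert_at = &cleanup_hooks;

    hook->func = func;
    hook->arg = arg;
    hook->priority = priority;

    while (*insert_at != NULL && (*insert_at)->priority <= priority)
        insert_at = &(*insert_at)->next;

    hook->next = *insert_at;
    *insert_at = hook;
}

/* Invoke every callback in priority order, e.g. at detach time. */
static void
run_cleanup_hooks(void)
{
    cleanup_hook *hook;

    for (hook = cleanup_hooks; hook != NULL; hook = hook->next)
        hook->func(hook->arg);
}

A fixed pair of stages is just the degenerate case of this scheme with two
priority values, so we can always generalize later if a real need shows up.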
> * shm-toc-v1.patch
>
> From my experience, it makes sense to put a magic number at the tail of
> the toc segment to detect shared-memory overruns. It helps to find bugs
> that are otherwise hard to track down because of concurrent jobs.
> shm_toc_freespace() is probably the right place to put such an assertion.
>
> How many entries is shm_toc_lookup() expected to scan?
> It performs a linear search from the head of the shm_toc segment, and that
> is a prerequisite for lock-less access, isn't it?
> Is it a problem if a shm_toc hosts many entries, like 100 or 1000?
> Or is that not an expected usage?

It is not an expected usage. In typical usage, I expect that the number of
TOC entries will be about N+K, where K is a small constant (< 10) and N is
the number of cooperating parallel workers. It's possible that we'll someday
be in a position to leverage 100 or 1000 parallel workers on the same task,
but I don't expect it to be soon. And, actually, I doubt that revising the
data structure would pay off at N=100. At N=1000, maybe. At N=10000,
probably. But we are *definitely* not going to need that kind of scale any
time soon, and I don't think it makes sense to design a complex data
structure to handle that case when there are so many more basic problems
that need to be solved before we can go there.
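To spell out what I have in mind, the structure we're discussing has roughly
this shape (a simplified sketch with made-up names, not the code from the
patch; the trailing magic word suggested above for overrun checks is not
shown):

#include <stddef.h>
#include <stdint.h>

typedef struct toc_entry
{
    uint64_t    key;            /* identifies the chunk */
    size_t      offset;         /* location of the chunk within the segment */
} toc_entry;

typedef struct toc
{
    uint64_t    magic;          /* distinguishes unrelated segments */
    size_t      total_bytes;    /* size of the whole segment */
    size_t      allocated_bytes;    /* space handed out from the segment */
    int         nentry;         /* number of entries published so far */
    toc_entry   entry[];        /* entries grow from here */
} toc;

/*
 * Linear scan over the published entries.  With about N+K entries this is
 * cheap, and reading already-published entries requires no lock.
 */
static void *
toc_lookup(toc *t, uint64_t key)
{
    int         i;

    for (i = 0; i < t->nentry; i++)
    {
        if (t->entry[i].key == key)
            return (char *) t + t->entry[i].offset;
    }
    return NULL;
}

At that scale, making the lookup anything smarter than a linear scan just
isn't worth the complexity.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company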