Hi,

I've been planning to look at this for a while... Not a detailed review, just some thoughts. I'll let what I read sink in and possibly comment more later.
On 2013-10-31 12:21:31 -0400, Robert Haas wrote:
> The attached patches attempt to rectify some of these problems.

Well, I wouldn't call them problems. Just natural, incremental development ;)

> Patch #1, on-dsm-detach-v1.patch, adds the concept of on_dsm_detach
> hooks
> [snip]
> The part of this patch which I suppose will elicit some controversy is
> that I've had to rearrange on_shmem_exit a bit. It turns out that
> during shmem_exit, we do "user-level" cleanup, like aborting the
> transaction, first. We expect that will probably release all of our
> shared-memory resources.

Hm. The API change to on_shmem_exit() is going to cause a fair bit of pain. There are a fair number of extensions out there using it, and all of them would need to be littered with version-dependent #ifdefs. What about providing a version of on_shmem_exit() that allows the caller to specify the exit phase, and making on_shmem_exit() itself a backward-compatible wrapper?

FWIW, I find on_dsm_detach_cancel() to be a confusing name. I thought it might be a variant that is called in different circumstances than on_dsm_detach(). Maybe go with cancel_on_shmem_detach(), like with cancel_shmem_exit()?

> Patch #2, shm-toc-v1.patch, provides a facility for sizing a dynamic
> shared memory segment before creation, and for dividing it up into
> chunks after it's been created. It therefore serves a function quite
> similar to RequestAddinShmemSpace, except of course that there is only
> one main shared memory segment created at postmaster startup time,
> whereas new dynamic shared memory segments can come into existence on
> the fly; and it serves even more conspicuously the function of
> ShmemIndex, which enables backends to locate particular data
> structures within the shared memory segment.
> It is however quite a bit simpler than the ShmemIndex mechanism: we
> don't need the same level of extensibility here that we do for the
> main shared memory segment, because a new extension need not piggyback
> on an existing dynamic shared memory segment, but can create a whole
> segment of its own.

Hm. A couple of questions/comments:
* How do you imagine keys to be used/built?
* Won't the sequential search over toc entries become problematic?
* Some high-level overview of how it's supposed to be used wouldn't hurt.
* The estimator stuff doesn't seem to belong in the public header?

> Patch #3, shm-mq-v1.patch, is the heart of this series. It creates an
> infrastructure for sending and receiving messages of arbitrary length
> using ring buffers stored in shared memory (presumably dynamic shared
> memory, but hypothetically the main shared memory segment could be
> used).

The API seems to assume it's in dsm though?

> Queues are single-reader and single-writer; they use process latches
> to implement waiting for the queue to fill (in the case of the
> reader) or drain (in the case of the writer).

Hm. Too bad, I'd hoped for single-reader, multiple-writer :P

> A non-blocking mode is also available for situations where other
> options might lead to deadlock.

That's cool. I wonder if there's a way to discover, on the writing end, when new data can be sent. For reading there's a comment about waiting for the latch...

A couple of questions:
* Some introductory comments about the concurrency approach would be good.
* shm_mq_attach() talks about BackgroundWorkerHandles - is there a limitation that a bgworker has to be involved?

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers