On Thu, Oct 31, 2013 at 7:48 PM, Heikki Linnakangas <hlinnakan...@vmware.com> wrote:
> On 31.10.2013 16:43, Robert Haas wrote:
>> There should be no cases where the main shared memory
>> segment gets cleaned up and the dynamic shared memory segments do not.
>
> 1. initdb -D data1
> 2. initdb -D data2
> 3. postgres -D data1
> 4. killall -9 postgres
> 5. postgres -D data2
>
> The system V shmem segment orphaned at step 4 will be cleaned up at step 5.
> The DSM segment will not.
OK, true. However, that only "works" because you've got two postmasters configured to run on the same port, which in practice is a rather unlikely configuration. And even if you do have that configuration, I'm not sure it's a feature that they can interfere with each other like that. Do you think it is?

If we want that behavior, we could mimic what the main shared memory code does here: instead of choosing a random value for the control segment identifier and saving it in a state file, start with something like port * 100 + 1000000 (the main shared memory segment uses port * 100, so we'd want something at least slightly different) and search forward one value at a time from there until we find an unused ID.

> BTW, 9.3 actually made the situation a lot better for the main memory
> segment. You only leak the small interlock shmem segment, the large mmap'd
> block does get automatically free'd when the last process using it exits.

Yeah.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company