On Tue, Jul 3, 2012 at 1:46 PM, Josh Kupershmidt wrote:
> On Tue, Jul 3, 2012 at 6:57 AM, Robert Haas wrote:
>> Here's a patch that attempts to begin the work of adjusting the
>> documentation for this brave new world. I am guessing that there may
>> be other places in the documentation that also require updating, and
>> this page probably needs more work,
On Tue, Jul 3, 2012 at 6:57 AM, Robert Haas wrote:
> Here's a patch that attempts to begin the work of adjusting the
> documentation for this brave new world. I am guessing that there may
> be other places in the documentation that also require updating, and
> this page probably needs more work,
Andres Freund writes:
> On Tuesday, July 03, 2012 05:41:09 PM Tom Lane wrote:
>> I'd really rather not. If we're going to go in this direction, we
>> should just go there.
> I don't really care, just wanted to bring up that at least one experienced
> user would be disappointed ;). As the old im
On Tuesday, July 03, 2012 05:41:09 PM Tom Lane wrote:
> Andres Freund writes:
> > Btw, RhodiumToad/Andrew Gierth on irc talked about a reason why sysv
> > shared memory might be advantageous on some platforms. E.g. on freebsd
> > there is the kern.ipc.shm_use_phys setting which prevents paging out
On Tue, Jul 3, 2012 at 5:36 PM, Andres Freund wrote:
> On Wednesday, June 27, 2012 05:28:14 AM Robert Haas wrote:
>> On Tue, Jun 26, 2012 at 6:25 PM, Tom Lane wrote:
>> > Josh Berkus writes:
>> >> So let's fix the 80% case with something we feel confident in, and then
>> >> revisit the no-sysv interlock as a separate patch.
On Tue, Jul 3, 2012 at 11:36 AM, Andres Freund wrote:
> Btw, RhodiumToad/Andrew Gierth on irc talked about a reason why sysv shared
> memory might be advantageous on some platforms. E.g. on freebsd there is the
> kern.ipc.shm_use_phys setting which prevents paging out shared memory and also
> seem
Andres Freund writes:
> Btw, RhodiumToad/Andrew Gierth on irc talked about a reason why sysv shared
> memory might be advantageous on some platforms. E.g. on freebsd there is the
> kern.ipc.shm_use_phys setting which prevents paging out shared memory and
> also
> seems to make tlb translation
On Wednesday, June 27, 2012 05:28:14 AM Robert Haas wrote:
> On Tue, Jun 26, 2012 at 6:25 PM, Tom Lane wrote:
> > Josh Berkus writes:
> >> So let's fix the 80% case with something we feel confident in, and then
> >> revisit the no-sysv interlock as a separate patch. That way if we can't
> >> fix the interlock issues, we still have a reduced-shmem version of Postgres.
On Thu, Jun 28, 2012 at 11:26 AM, Robert Haas wrote:
> Assuming things go well, there are a number of follow-on things that
> we need to do finish this up:
>
> 1. Update the documentation. I skipped this for now, because I think
> that what we write there is going to be heavily dependent on how
>
On Fri, Jun 29, 2012 at 2:31 PM, Josh Berkus wrote:
>> My idea of "not dedicated" is "I can launch a dozen postmasters on this
>> machine, and other services too, and it'll be okay as long as they're
>> not doing too much".
>
> Oh, 128MB then?
Proposed patch attached.
--
Robert Haas
EnterpriseDB
On Fri, Jun 29, 2012 at 04:03:40PM -0700, Daniel Farina wrote:
> On Fri, Jun 29, 2012 at 1:00 PM, Merlin Moncure wrote:
> > On Fri, Jun 29, 2012 at 2:52 PM, Andres Freund
> > wrote:
> >> Hi All,
> >>
> >> In a *very* quick patch I tested using huge pages/MAP_HUGETLB for the
> >> mmap'ed
> >> memory.
On Fri, Jun 29, 2012 at 1:00 PM, Merlin Moncure wrote:
> On Fri, Jun 29, 2012 at 2:52 PM, Andres Freund wrote:
>> Hi All,
>>
>> In a *very* quick patch I tested using huge pages/MAP_HUGETLB for the mmap'ed
>> memory.
>> That gives around 9.5% performance benefit in a read-only pgbench run (-n -S -
>> j 64 -c 64 -T 10 -M prepared, scale 200, 6GB s_b, 8 cores, 24GB mem).
On Fri, Jun 29, 2012 at 2:52 PM, Andres Freund wrote:
> Hi All,
>
> In a *very* quick patch I tested using huge pages/MAP_HUGETLB for the mmap'ed
> memory.
> That gives around 9.5% performance benefit in a read-only pgbench run (-n -S -
> j 64 -c 64 -T 10 -M prepared, scale 200, 6GB s_b, 8 cores, 24GB mem).
Hi All,
In a *very* quick patch I tested using huge pages/MAP_HUGETLB for the mmap'ed
memory.
That gives around 9.5% performance benefit in a read-only pgbench run (-n -S -
j 64 -c 64 -T 10 -M prepared, scale 200, 6GB s_b, 8 cores, 24GB mem).
It also saves a bunch of memory per process due to th
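A minimal sketch of the kind of mapping being described here: an anonymous shared segment that asks for huge pages and falls back to ordinary pages. The helper name and fallback policy are illustrative, not taken from Andres's patch, and MAP_HUGETLB only succeeds if the kernel has huge pages reserved (vm.nr_hugepages):

    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *
    map_shared_segment(size_t size)
    {
        int   flags = MAP_SHARED | MAP_ANONYMOUS;
        void *ptr;

    #ifdef MAP_HUGETLB
        /* Try huge pages first; the kernel aligns the length for us. */
        ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   flags | MAP_HUGETLB, -1, 0);
        if (ptr != MAP_FAILED)
            return ptr;
        perror("mmap with MAP_HUGETLB failed, falling back to normal pages");
    #endif
        ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
        if (ptr == MAP_FAILED)
        {
            perror("mmap");
            exit(1);
        }
        return ptr;
    }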
> My idea of "not dedicated" is "I can launch a dozen postmasters on this
> machine, and other services too, and it'll be okay as long as they're
> not doing too much".
Oh, 128MB then?
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Josh Berkus writes:
>>> 10% isn't assuming dedicated.
>> Really?
> Yes. As I said, the allocation for dedicated PostgreSQL servers is
> usually 20% to 25%, up to 8GB.
Any percentage is assuming dedicated, IMO. 25% might be the more common
number, but you're still assuming that you can have yo
>> 10% isn't assuming dedicated.
>
> Really?
Yes. As I said, the allocation for dedicated PostgreSQL servers is
usually 20% to 25%, up to 8GB.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Josh Berkus writes:
>> If we could do that on *all* platforms, I might be for it, but we only
>> know how to get that number on some platforms.
> I don't see what's wrong with using it where we can get it, and not
> using it where we can't.
Because then we still need to define, and document, a
Tom,
> If we could do that on *all* platforms, I might be for it, but we only
> know how to get that number on some platforms.
I don't see what's wrong with using it where we can get it, and not
using it where we can't.
> There's also the issue
> of whether we really want to assume that the ma
Josh Berkus writes:
> The other thing which will avoid the problem for most Mac users is if we
> simply allocate 10% of RAM at initdb as a default. If we do that, then
> 90% of users will never touch Shmem themselves, and not have the
> opportunity to mess up.
If we could do that on *all* platforms, I might be for it, but we only
know how to get that number on some platforms.
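To make the portability point concrete: the usual way to discover physical RAM is only available where the platform defines _SC_PHYS_PAGES, which Linux and Solaris do but other systems may not. A sketch, not initdb code:

    #include <unistd.h>

    static long long
    physical_memory_bytes(void)
    {
    #if defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)
        long pages = sysconf(_SC_PHYS_PAGES);
        long pagesz = sysconf(_SC_PAGESIZE);

        if (pages > 0 && pagesz > 0)
            return (long long) pages * pagesz;
    #endif
        return -1;      /* no portable answer on this platform */
    }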
> According to the Google, there is absolutely no way of gettIng MacOS X
> not to overcommit like crazy.
Well, this is one of a long list of broken things about OSX. If you
want to see *real* breakage, do some IO performance testing of HFS+
FWIW, I have this issue with Mac desktop application
On Thu, Jun 28, 2012 at 2:51 PM, Tom Lane wrote:
> Robert Haas writes:
>> I tried this. At least on my fairly vanilla MacOS X desktop, an mlock
>> for a larger amount of memory than was conveniently on hand (4GB, on a
>> 4GB box) neither succeeded nor failed in a timely fashion but instead
>> progressively hung the machine
Robert Haas writes:
> I tried this. At least on my fairly vanilla MacOS X desktop, an mlock
> for a larger amount of memory than was conveniently on hand (4GB, on a
> 4GB box) neither succeeded nor failed in a timely fashion but instead
> progressively hung the machine, apparently trying to progr
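For reference, the experiment amounts to something like the following; the helper is hypothetical, and on most systems the call fails unless RLIMIT_MEMLOCK has been raised or the process is privileged, which is why it cannot simply be turned on by default:

    #include <sys/mman.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static void
    try_lock_segment(void *ptr, size_t size)
    {
        /* Pin the shared segment so it can never be paged out. */
        if (mlock(ptr, size) != 0)
            fprintf(stderr, "mlock failed: %s (continuing unlocked)\n",
                    strerror(errno));
    }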
On Thu, Jun 28, 2012 at 1:43 PM, Tom Lane wrote:
> Magnus Hagander writes:
>> On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund
>> wrote:
>>> On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
What happens if you mlock() it into memory - does that fail quickly?
Is that not something we might want to do *anyway*?
Andres Freund writes:
> On Thursday, June 28, 2012 08:00:06 PM Tom Lane wrote:
>> Well, the permissions angle is actually a good thing here. There is
>> pretty much no risk of the mlock succeeding on a box that hasn't been
>> specially configured --- and, in most cases, I think you'd need root
>>
On Thursday, June 28, 2012 08:00:06 PM Tom Lane wrote:
> Andres Freund writes:
> > On Thursday, June 28, 2012 07:43:16 PM Tom Lane wrote:
> >> I think it *would* be a good idea to mlock if we could. Setting shmem
> >> large enough that it swaps has always been horrible for performance,
> >> and in sysv-land there's no way to prevent that. But we can't error
Andres Freund writes:
> On Thursday, June 28, 2012 07:43:16 PM Tom Lane wrote:
>> I think it *would* be a good idea to mlock if we could. Setting shmem
>> large enough that it swaps has always been horrible for performance,
>> and in sysv-land there's no way to prevent that. But we can't error
>
On Thursday, June 28, 2012 07:43:16 PM Tom Lane wrote:
> Magnus Hagander writes:
> > On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund
wrote:
> >> On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
> >>> What happens if you mlock() it into memory - does that fail quickly?
> >>> Is that not something we might want to do *anyway*?
Magnus Hagander writes:
> On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund wrote:
>> On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
>>> What happens if you mlock() it into memory - does that fail quickly?
>>> Is that not something we might want to do *anyway*?
>> You normally can on
On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund wrote:
> On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
>> On Thu, Jun 28, 2012 at 7:15 PM, Robert Haas wrote:
>> > On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown wrote:
>> >> On 64-bit Linux, if I allocate more shared buffers than the
On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
> On Thu, Jun 28, 2012 at 7:15 PM, Robert Haas wrote:
> > On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown wrote:
> >> On 64-bit Linux, if I allocate more shared buffers than the system is
> >> capable of reserving, it doesn't start. This
On Thu, Jun 28, 2012 at 7:15 PM, Robert Haas wrote:
> On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown wrote:
>> On 64-bit Linux, if I allocate more shared buffers than the system is
>> capable of reserving, it doesn't start. This is expected, but there's
>> no error logged anywhere (actually, nothing logged at all)
On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown wrote:
> On 64-bit Linux, if I allocate more shared buffers than the system is
> capable of reserving, it doesn't start. This is expected, but there's
> no error logged anywhere (actually, nothing logged at all), and the
> postmaster.pid file is left behind.
On Thu, Jun 28, 2012 at 8:26 AM, Robert Haas wrote:
> 3. Consider adjusting the logic inside initdb. If this works
> everywhere, the code for determining how to set shared_buffers should
> become pretty much irrelevant. Even if it only works some places, we
> could add 64MB or 128MB or whatever
On 28 June 2012 16:26, Robert Haas wrote:
> On Thu, Jun 28, 2012 at 10:11 AM, Tom Lane wrote:
>> ... btw, I rather imagine that Robert has already noticed this, but OS X
>> (and presumably other BSDen) spells the flag "MAP_ANON" not
>> "MAP_ANONYMOUS". I also find this rather interesting flag th
On Thu, Jun 28, 2012 at 10:11 AM, Tom Lane wrote:
> ... btw, I rather imagine that Robert has already noticed this, but OS X
> (and presumably other BSDen) spells the flag "MAP_ANON" not
> "MAP_ANONYMOUS". I also find this rather interesting flag there:
>
> MAP_HASSEMAPHORE Notify the kernel
... btw, I rather imagine that Robert has already noticed this, but OS X
(and presumably other BSDen) spells the flag "MAP_ANON" not
"MAP_ANONYMOUS". I also find this rather interesting flag there:
MAP_HASSEMAPHORE Notify the kernel that the region may contain semaphores
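A conventional way to paper over that spelling difference, shown here only as the sort of shim one might use, not necessarily what the patch does:

    #include <sys/mman.h>

    /* BSD-derived systems, including OS X, spell the flag MAP_ANON. */
    #ifndef MAP_ANONYMOUS
    #ifdef MAP_ANON
    #define MAP_ANONYMOUS MAP_ANON
    #endif
    #endif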
On Thu, Jun 28, 2012 at 8:57 AM, Robert Haas wrote:
> On Thu, Jun 28, 2012 at 9:47 AM, Jon Nelson wrote:
>> Why not just mmap /dev/zero (MAP_SHARED but not MAP_ANONYMOUS)? I
>> seem to think that's what I did when I needed this functionality oh so
>> many moons ago.
>
> From the reading I've done on this topic, that seems to be a trick invented on Solaris
Magnus Hagander writes:
> On Thu, Jun 28, 2012 at 7:00 AM, Robert Haas wrote:
>> A related question is - if we do this - should we enable it only on
>> ports where we've verified that it works, or should we just turn it on
>> everywhere and fix breakage if/when it's reported? I lean toward the
>
On Thu, Jun 28, 2012 at 9:47 AM, Jon Nelson wrote:
> Why not just mmap /dev/zero (MAP_SHARED but not MAP_ANONYMOUS)? I
> seem to think that's what I did when I needed this functionality oh so
> many moons ago.
From the reading I've done on this topic, that seems to be a trick
invented on Solaris
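The /dev/zero trick Jon mentions looks roughly like this; a MAP_SHARED mapping of /dev/zero behaves like anonymous shared memory on platforms that support it (illustrative helper only):

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    static void *
    map_dev_zero(size_t size)
    {
        int   fd = open("/dev/zero", O_RDWR);
        void *ptr;

        if (fd < 0)
            return NULL;
        ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                      /* the mapping survives the close */
        return (ptr == MAP_FAILED) ? NULL : ptr;
    }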
On Thu, Jun 28, 2012 at 6:05 AM, Magnus Hagander wrote:
> On Thu, Jun 28, 2012 at 7:00 AM, Robert Haas wrote:
>> On Wed, Jun 27, 2012 at 9:44 AM, Tom Lane wrote:
>>> Robert Haas writes:
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane wrote:
> Would Posix shmem help with that at all? Why did you choose not to use the Posix API, anyway?
On Thu, Jun 28, 2012 at 7:05 AM, Magnus Hagander wrote:
> Do we really need a runtime check for that? Isn't a configure check
> enough? If they *do* deploy postgresql 9.3 on something that old,
> they're building from source anyway...
[...]
>
> Could we actually turn *that* into a configure test,
On Thu, Jun 28, 2012 at 7:00 AM, Robert Haas wrote:
> On Wed, Jun 27, 2012 at 9:44 AM, Tom Lane wrote:
>> Robert Haas writes:
>>> On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane wrote:
Would Posix shmem help with that at all? Why did you choose not to
use the Posix API, anyway?
>>
>>> It
On Wed, Jun 27, 2012 at 9:44 AM, Tom Lane wrote:
> Robert Haas writes:
>> On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane wrote:
>>> Would Posix shmem help with that at all? Why did you choose not to
>>> use the Posix API, anyway?
>
>> It seemed more complicated. If we use the POSIX API, we've got
On Jun 27, 2012, at 7:34 AM, Robert Haas wrote:
> On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane wrote:
>> Robert Haas writes:
>>> So, here's a patch. Instead of using POSIX shmem, I just took the
>>> expedient of using mmap() to map a block of MAP_SHARED|MAP_ANONYMOUS
>>> memory. The sysv shm is
On Wed, Jun 27, 2012 at 9:52 AM, Stephen Frost wrote:
> What this all boils down to is- can you have a shm segment that goes
> away when no one is still attached to it, but actually give it a name
> and then detect if it already exists atomically on startup on
> Linux/Unixes? If so, perhaps we co
On Wed, Jun 27, 2012 at 9:44 AM, Tom Lane wrote:
> Robert Haas writes:
>> On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane wrote:
>>> Would Posix shmem help with that at all? Why did you choose not to
>>> use the Posix API, anyway?
>
>> It seemed more complicated. If we use the POSIX API, we've got
Magnus Hagander writes:
> On Wed, Jun 27, 2012 at 3:40 PM, Tom Lane wrote:
>> AFAIR we basically punted on those problems for the Windows port,
>> for lack of an equivalent to nattch.
> No, we spent a lot of time trying to *fix* it, and IIRC we did.
OK, in that case this isn't as interesting as
On Wed, Jun 27, 2012 at 3:40 PM, Tom Lane wrote:
> Magnus Hagander writes:
>> On Wed, Jun 27, 2012 at 3:50 AM, Tom Lane wrote:
>>> I wonder whether this design can be adapted to Windows? IIRC we do
>>> not have a bulletproof data directory lock scheme for Windows.
>>> It seems like this makes few enough demands on the lock mechanism
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Right, but does it provide honest protection against starting two
> postmasters in the same data directory? Or more to the point,
> does it prevent starting a new postmaster when the old postmaster
> crashed but there are still orphaned backends making changes
All,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Robert Haas writes:
> > On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane wrote:
> >> Would Posix shmem help with that at all? Why did you choose not to
> >> use the Posix API, anyway?
>
> > It seemed more complicated. If we use the POSIX API, we've got
Robert Haas writes:
> On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane wrote:
>> Would Posix shmem help with that at all? Why did you choose not to
>> use the Posix API, anyway?
> It seemed more complicated. If we use the POSIX API, we've got to
> have code to find a non-colliding name for the shm,
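The extra step being alluded to would look something like the sketch below; the "/PostgreSQL.%u" naming scheme and retry limit are purely hypothetical:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <stdio.h>

    static int
    create_posix_segment(char *name, size_t namelen)
    {
        unsigned int n;

        for (n = 0; n < 100; n++)
        {
            int fd;

            snprintf(name, namelen, "/PostgreSQL.%u", n);
            fd = shm_open(name, O_CREAT | O_EXCL | O_RDWR, 0600);
            if (fd >= 0)
                return fd;      /* caller would ftruncate() and mmap() it */
        }
        return -1;              /* gave up looking for a free name */
    }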
Magnus Hagander writes:
> On Wed, Jun 27, 2012 at 3:50 AM, Tom Lane wrote:
>> I wonder whether this design can be adapted to Windows? IIRC we do
>> not have a bulletproof data directory lock scheme for Windows.
>> It seems like this makes few enough demands on the lock mechanism
>> that there ou
On Wed, Jun 27, 2012 at 3:50 AM, Tom Lane wrote:
> "A.M." writes:
>> On 06/26/2012 07:30 PM, Tom Lane wrote:
I solved this via fcntl locking.
>
>>> No, you didn't, because fcntl locks aren't inherited by child processes.
>>> Too bad, because they'd be a great solution otherwise.
>
>> You claimed this last time and I replied:
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane wrote:
> Robert Haas writes:
>> So, here's a patch. Instead of using POSIX shmem, I just took the
>> expedient of using mmap() to map a block of MAP_SHARED|MAP_ANONYMOUS
>> memory. The sysv shm is still allocated, but it's just a copy of
>> PGShmemHeader; the "real" shared memory is the anonymous block.
Robert Haas writes:
> So, here's a patch. Instead of using POSIX shmem, I just took the
> expedient of using mmap() to map a block of MAP_SHARED|MAP_ANONYMOUS
> memory. The sysv shm is still allocated, but it's just a copy of
> PGShmemHeader; the "real" shared memory is the anonymous block. Thi
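The reason the small SysV segment is worth keeping is its attach count: unlike an anonymous mapping, the kernel can report whether anyone is still attached. Roughly the kind of check it enables (an illustrative helper, not the patch's actual code):

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    static int
    old_backends_may_exist(int shmid)
    {
        struct shmid_ds buf;

        if (shmctl(shmid, IPC_STAT, &buf) < 0)
            return 0;                   /* segment is gone: nothing attached */
        return (buf.shm_nattch != 0);   /* orphaned backends still attached? */
    }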
On Tue, Jun 26, 2012 at 6:25 PM, Tom Lane wrote:
> Josh Berkus writes:
>> So let's fix the 80% case with something we feel confident in, and then
>> revisit the no-sysv interlock as a separate patch. That way if we can't
>> fix the interlock issues, we still have a reduced-shmem version of Postgres.
I wrote:
> Reflecting on this further, it seems to me that the main remaining
> failure modes are (1) file locking doesn't work, or (2) idiot DBA
> manually removes the lock file.
Oh, wait, I just remembered the really fatal problem here: to quote from
the SUS fcntl spec,
All locks associated with a file for a given process shall be removed when
a file descriptor for that file is closed by that process.
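For context, the style of lock under discussion looks roughly like this (illustrative helper, not from any posted patch); the wrinkle Tom is quoting is that the lock silently vanishes as soon as the process closes *any* descriptor on the file, however it was opened:

    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    static int
    take_data_dir_lock(int fd)
    {
        struct flock lk;

        memset(&lk, 0, sizeof(lk));
        lk.l_type = F_WRLCK;
        lk.l_whence = SEEK_SET;
        lk.l_start = 0;
        lk.l_len = 0;                   /* zero length locks the whole file */
        return fcntl(fd, F_SETLK, &lk); /* -1 if another process holds it */
    }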
"A.M." writes:
> On 06/26/2012 07:30 PM, Tom Lane wrote:
>>> I solved this via fcntl locking.
>> No, you didn't, because fcntl locks aren't inherited by child processes.
>> Too bad, because they'd be a great solution otherwise.
> You claimed this last time and I replied:
> http://archives.postgr
On Tue, Jun 26, 2012 at 6:20 PM, Tom Lane wrote:
> Robert Haas writes:
>> So, what about keeping a FIFO in the data directory?
>
> Hm, does that work if the data directory is on NFS? Or some other weird
> not-really-Unix file system?
I would expect NFS to work in general. We could test that.
On 06/26/2012 07:15 PM, Alvaro Herrera wrote:
Excerpts from Tom Lane's message of mar jun 26 18:58:45 -0400 2012:
Even if you actively try to configure the shmem settings to exactly
fill shmmax (which I concede some installation scripts might do),
it's going to be hard to do because of the 8K granularity of the main
knob, shared_buffers.
On 06/26/2012 07:30 PM, Tom Lane wrote:
"A.M." writes:
On Jun 26, 2012, at 6:12 PM, Daniel Farina wrote:
I'm simply suggesting that for additional benefits it may be worth
thinking about getting around nattach and thus SysV shmem, especially
with regard to safety, in an open-ended way.
I solved this via fcntl locking.
"A.M." writes:
> On Jun 26, 2012, at 6:12 PM, Daniel Farina wrote:
>> I'm simply suggesting that for additional benefits it may be worth
>> thinking about getting around nattach and thus SysV shmem, especially
>> with regard to safety, in an open-ended way.
> I solved this via fcntl locking.
No, you didn't, because fcntl locks aren't inherited by child processes.
Excerpts from Tom Lane's message of mar jun 26 18:58:45 -0400 2012:
> Even if you actively try to configure the shmem settings to exactly
> fill shmmax (which I concede some installation scripts might do),
> it's going to be hard to do because of the 8K granularity of the main
> knob, shared_buffers.
"A.M." writes:
> This can be trivially reproduced if one runs an old (SysV shared
> memory-based) postgresql alongside a potentially newer postgresql with a
> smaller SysV segment. This can occur with applications that bundle postgresql
> as part of the app.
I don't believe that that case is a
Tom Lane wrote:
> In the meantime, insisting that we solve this problem before we do
> anything is a good recipe for ensuring that nothing happens, just
> like it hasn't happened for the last half dozen years. (I see
> Alvaro just made the same point.)
And now so has Josh.
+1 from me, too.
Josh Berkus writes:
> So let's fix the 80% case with something we feel confident in, and then
> revisit the no-sysv interlock as a separate patch. That way if we can't
> fix the interlock issues, we still have a reduced-shmem version of Postgres.
Yes. Insisting that we have the whole change in
On Jun 26, 2012, at 6:12 PM, Daniel Farina wrote:
>
> (Emphasis mine).
>
> I don't think that -hackers at the time gave the zero-shmem rationale
> much weight (I also was not that happy about the safety mechanism of
> that patch), but upon more reflection (and taking into account *other*
> software
Robert Haas writes:
> So, what about keeping a FIFO in the data directory?
Hm, does that work if the data directory is on NFS? Or some other weird
not-really-Unix file system?
> When the
> postmaster starts up, it tries to open the file with O_NONBLOCK |
> O_WRONLY (or O_NDELAY | O_WRONLY, if t
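Robert's FIFO idea, sketched concretely: opening a FIFO with O_WRONLY | O_NONBLOCK fails with ENXIO when no process has it open for reading, so success would mean another postmaster or an orphaned backend still holds the read end (hypothetical helper, error handling elided):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <errno.h>

    static int
    data_dir_in_use(const char *fifo_path)
    {
        int fd;

        (void) mkfifo(fifo_path, 0600);     /* fine if it already exists */
        fd = open(fifo_path, O_WRONLY | O_NONBLOCK);
        if (fd >= 0)
        {
            close(fd);          /* a reader exists: someone is using it */
            return 1;
        }
        return (errno == ENXIO) ? 0 : -1;   /* -1 means "could not tell" */
    }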
> This can be trivially reproduced if one runs an old (SysV shared
> memory-based) postgresql alongside a potentially newer postgresql with a
> smaller SysV segment. This can occur with applications that bundle postgresql
> as part of the app.
I'm not saying it doesn't happen at all. I'm sayi
On Jun 26, 2012, at 5:44 PM, Josh Berkus wrote:
>
>> On that, I used to be of the opinion that this is a good compromise (a
>> small amount of interlock space, plus mostly posix shmem), but I've
>> heard since then (I think via AgentM indirectly, but I'm not sure)
>> that there are cases where even the small SysV segment can cause
On Tue, Jun 26, 2012 at 2:53 PM, Alvaro Herrera
wrote:
>
> Excerpts from Daniel Farina's message of mar jun 26 17:40:16 -0400 2012:
>
>> On that, I used to be of the opinion that this is a good compromise (a
>> small amount of interlock space, plus mostly posix shmem), but I've
>> heard since then
Excerpts from Daniel Farina's message of mar jun 26 17:40:16 -0400 2012:
> On that, I used to be of the opinion that this is a good compromise (a
> small amount of interlock space, plus mostly posix shmem), but I've
> heard since then (I think via AgentM indirectly, but I'm not sure)
> that there
On Tue, Jun 26, 2012 at 5:44 PM, Josh Berkus wrote:
>
>> On that, I used to be of the opinion that this is a good compromise (a
>> small amount of interlock space, plus mostly posix shmem), but I've
>> heard since then (I think via AgentM indirectly, but I'm not sure)
>> that there are cases where
> On that, I used to be of the opinion that this is a good compromise (a
> small amount of interlock space, plus mostly posix shmem), but I've
> heard since then (I think via AgentM indirectly, but I'm not sure)
> that there are cases where even the small SysV segment can cause
> problems -- notably
On Tue, Jun 26, 2012 at 5:18 PM, Josh Berkus wrote:
> On 6/26/12 2:13 PM, Robert Haas wrote:
>> On Tue, Jun 26, 2012 at 4:29 PM, Alvaro Herrera
>> wrote:
>>> Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
Robert, all:
Last I checked, we had a reasonably acceptable patch to use mostly Posix
Shared mem with a very small sysv ram partition.
On Tue, Jun 26, 2012 at 2:18 PM, Josh Berkus wrote:
> On 6/26/12 2:13 PM, Robert Haas wrote:
>> On Tue, Jun 26, 2012 at 4:29 PM, Alvaro Herrera
>> wrote:
>>> Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
Robert, all:
Last I checked, we had a reasonably acceptable patch to use mostly Posix
Shared mem with a very small sysv ram partition.
On 6/26/12 2:13 PM, Robert Haas wrote:
> On Tue, Jun 26, 2012 at 4:29 PM, Alvaro Herrera
> wrote:
>> Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
>>> Robert, all:
>>>
>>> Last I checked, we had a reasonably acceptable patch to use mostly Posix
>>> Shared mem with a very small sysv ram partition. Is there anything
On Tue, Jun 26, 2012 at 4:29 PM, Alvaro Herrera
wrote:
> Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
>> Robert, all:
>>
>> Last I checked, we had a reasonably acceptable patch to use mostly Posix
>> Shared mem with a very small sysv ram partition. Is there anything
>> keeping this from going into 9.3? It would eliminate a major
Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
> Robert, all:
>
> Last I checked, we had a reasonably acceptable patch to use mostly Posix
> Shared mem with a very small sysv ram partition. Is there anything
> keeping this from going into 9.3? It would eliminate a major
> configuration headache for our users.
Robert, all:
Last I checked, we had a reasonably acceptable patch to use mostly Posix
Shared mem with a very small sysv ram partition. Is there anything
keeping this from going into 9.3? It would eliminate a major
configuration headache for our users.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com