On Mon, Nov 22, 2010 at 6:54 AM, Heikki Linnakangas wrote:
> On 21.11.2010 15:18, Robert Haas wrote:
>>
>> On Sat, Nov 20, 2010 at 4:07 PM, Tom Lane wrote:
>>>
>>> Robert Haas writes:
>>>> So what DO we need to guard against here?
>>>
>>> I think the general problem can be stated as "proces
On 21.11.2010 15:18, Robert Haas wrote:
On Sat, Nov 20, 2010 at 4:07 PM, Tom Lane wrote:
Robert Haas writes:
So what DO we need to guard against here?
I think the general problem can be stated as "process A changes two or
more values in shared memory in a fairly short span of time, and proc
On Sat, Nov 20, 2010 at 4:07 PM, Tom Lane wrote:
> Robert Haas writes:
>> So what DO we need to guard against here?
>
> I think the general problem can be stated as "process A changes two or
> more values in shared memory in a fairly short span of time, and process
> B, which is concurrently exam
Robert Haas writes:
> So what DO we need to guard against here?
I think the general problem can be stated as "process A changes two or
more values in shared memory in a fairly short span of time, and process
B, which is concurrently examining the same variables, sees those
changes occur in a diff
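The two-variable scenario described above can be sketched with C11 atomics (a hedged illustration, not PostgreSQL code; the names `val_a`/`val_b`, `writer`, and `reader_consistent` are invented for the example):

```c
#include <stdatomic.h>

/* "Process A" stores two shared values; without ordering guarantees,
 * "process B" might observe the second store before the first.  A
 * release store paired with an acquire load forbids that inversion. */

static atomic_int val_a, val_b;

/* Process A: update val_a, then publish by storing val_b with release
 * semantics so val_a's new value becomes visible no later than val_b's. */
void writer(void)
{
    atomic_store_explicit(&val_a, 1, memory_order_relaxed);
    atomic_store_explicit(&val_b, 1, memory_order_release);
}

/* Process B: if the acquire load sees val_b == 1, the release/acquire
 * pairing guarantees the earlier store to val_a is visible too. */
int reader_consistent(void)
{
    if (atomic_load_explicit(&val_b, memory_order_acquire) == 1)
        return atomic_load_explicit(&val_a, memory_order_relaxed) == 1;
    return 1;   /* val_b not seen yet: nothing is promised about val_a */
}
```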
On Fri, Nov 19, 2010 at 5:59 PM, Tom Lane wrote:
> Robert Haas writes:
>> But what about timings vs. random other stuff? Like in this case
>> there's a problem if the signal arrives before the memory update to
>> latch->is_set becomes visible. I don't know what we need to do to
>> guarantee tha
On Saturday 20 November 2010 00:08:07 Tom Lane wrote:
> Andres Freund writes:
> > On Friday 19 November 2010 18:46:00 Tom Lane wrote:
> >> I poked around in the Intel manuals a bit. They do have mfence (also
> >> lfence and sfence) but so far as I can tell, those are only used to
> >> manage load
Andres Freund writes:
> On Friday 19 November 2010 18:46:00 Tom Lane wrote:
>> I poked around in the Intel manuals a bit. They do have mfence (also
>> lfence and sfence) but so far as I can tell, those are only used to
>> manage loads and stores that are issued by special instructions that
>> exp
Robert Haas writes:
> But what about timings vs. random other stuff? Like in this case
> there's a problem if the signal arrives before the memory update to
> latch->is_set becomes visible. I don't know what we need to do to
> guarantee that.
I don't believe there's an issue there. A context s
On Fri, Nov 19, 2010 at 1:51 PM, Tom Lane wrote:
> However, for lock-free interactions I think this model isn't terribly
> helpful: it's not clear what is "inside" and what is "outside" the sync
> block, and forcing your code into that model doesn't improve either
> clarity or performance. What y
On Friday 19 November 2010 20:03:27 Andres Freund wrote:
> Which means something like (in intel's terminology) can happen:
>
> initially x = 0
>
> P1: mov [_X], 1
> P1: lock xchg Y, 1
>
> P2. lock xchg [_Z], 1
> P2: mov r1, [_X]
>
> A valid result is that r1 on P2 is 0.
>
> I think that is not
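Andres's assembly example is the classic "store buffering" litmus test. A hedged C11 restatement (the function names are invented; `r1`/`r2`, `X`/`Y` follow the mail): with a full seq_cst fence between each thread's store and subsequent load, playing the role `lock xchg` plays in the assembly, the outcome where both loads return 0 is forbidden; without the fences, x86 store buffers do permit it.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

static atomic_int X, Y;
static int r1, r2;

static void *cpu1(void *arg)
{
    (void) arg;
    atomic_store_explicit(&X, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);  /* full fence, like mfence */
    r2 = atomic_load_explicit(&Y, memory_order_relaxed);
    return NULL;
}

static void *cpu2(void *arg)
{
    (void) arg;
    atomic_store_explicit(&Y, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);
    r1 = atomic_load_explicit(&X, memory_order_relaxed);
    return NULL;
}

/* Runs both threads once; returns 1 iff the forbidden outcome
 * r1 == 0 && r2 == 0 was observed. */
int forbidden_outcome(void)
{
    pthread_t a, b;
    atomic_store(&X, 0);
    atomic_store(&Y, 0);
    pthread_create(&a, NULL, cpu1, NULL);
    pthread_create(&b, NULL, cpu2, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return r1 == 0 && r2 == 0;
}
```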
On Friday 19 November 2010 18:46:00 Tom Lane wrote:
> I wrote:
> > Markus Wanner writes:
> >> Well, that certainly doesn't apply to full fences, that are not specific
> >> to a particular piece of memory. I'm thinking of 'mfence' on x86_64 or
> >> 'mf' on ia64.
> >
> > Hm, what do those do exactl
Tom Lane wrote:
> What you typically need is a guarantee about the order in which
> writes become visible.
> In some cases you also need to guarantee the order of reads.
Doesn't that suggest different primitives?
-Kevin
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
"Kevin Grittner" writes:
> Tom Lane wrote:
>> That's really entirely the wrong way to think about it. You need
>> a fence primitive, full stop. It's a sequence point, not an
>> operation in itself.
> I was taking it to mean something similar to the memory guarantees
> around synchronized block
Tom Lane wrote:
> Robert Haas writes:
>> I think it would be useful to try to build up a library of
>> primitives in this area. For this particular task, we really
>> only need a write-with-fence primitive and a read-with-fence
>> primitive.
>
> That's really entirely the wrong way to think abo
I wrote:
> Markus Wanner writes:
>> Well, that certainly doesn't apply to full fences, that are not specific
>> to a particular piece of memory. I'm thinking of 'mfence' on x86_64 or
>> 'mf' on ia64.
> Hm, what do those do exactly?
I poked around in the Intel manuals a bit. They do have mfence
* Andres Freund:
> I was never talking about 'locking the whole cache' - I was talking about
> flushing/fencing it like a "global" read/write barrier would. And "lock
> xchgb/xaddl" does not imply anything for other cachelines but its own.
My understanding is that once you've seen the result of
Robert Haas writes:
> I think it would be useful to try to build up a library of primitives
> in this area. For this particular task, we really only need a
> write-with-fence primitive and a read-with-fence primitive.
That's really entirely the wrong way to think about it. You need a
fence prim
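Tom's point, that you need a fence primitive as a sequence point rather than fenced read/write operations, might look like this minimal sketch. The macro names are illustrative only, not an actual PostgreSQL API; C11 fences stand in for the per-platform assembly:

```c
#include <stdatomic.h>

/* A fence is a sequence point between memory operations, not an
 * operation bound to one particular load or store. */
#define memory_fence()  atomic_thread_fence(memory_order_seq_cst) /* full   */
#define read_fence()    atomic_thread_fence(memory_order_acquire) /* loads  */
#define write_fence()   atomic_thread_fence(memory_order_release) /* stores */

/* Example use: publish data, then a flag, with a write fence between
 * them so no consumer can observe the flag before the data. */
static atomic_int data, flag;

void publish(int v)
{
    atomic_store_explicit(&data, v, memory_order_relaxed);
    write_fence();
    atomic_store_explicit(&flag, 1, memory_order_relaxed);
}

int consume(void)
{
    if (atomic_load_explicit(&flag, memory_order_relaxed)) {
        read_fence();
        return atomic_load_explicit(&data, memory_order_relaxed);
    }
    return -1;  /* nothing published yet */
}
```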
On Fri, Nov 19, 2010 at 10:44 AM, Tom Lane wrote:
> Robert Haas writes:
>> I completely agree, but I'm not too sure I want to drop support for
>> any platform for which we haven't yet implemented such primitives.
>> What's different about this case is that "fall back to taking the spin
>> lock" i
Andres Freund writes:
> I was never talking about 'locking the whole cache' - I was talking about
> flushing/fencing it like a "global" read/write barrier would. And "lock
> xchgb/xaddl" does not imply anything for other cachelines but its own.
If that's the case, why aren't the parallel regres
On Friday 19 November 2010 17:25:57 Tom Lane wrote:
> Andres Freund writes:
> > Locked statements like 'lock xaddl;' guarantee that the specific operands
> > (or their cachelines) are visible on all processors and are done
> > atomically - but it's not influencing the whole cache like mfence would.
Andres Freund writes:
> Locked statements like 'lock xaddl;' guarantee that the specific operands (or
> their cachelines) are visible on all processors and are done atomically - but
> it's not influencing the whole cache like mfence would.
Where is this "locking the whole cache" meme coming from?
On Friday 19 November 2010 16:51:00 Tom Lane wrote:
> Markus Wanner writes:
> > Well, that certainly doesn't apply to full fences, that are not specific
> > to a particular piece of memory. I'm thinking of 'mfence' on x86_64 or
> > 'mf' on ia64.
> Hm, what do those do exactly? We've never had any
On 11/19/2010 04:51 PM, Tom Lane wrote:
> Hm, what do those do exactly?
"Performs a serializing operation on all load-from-memory and
store-to-memory instructions that were issued prior the MFENCE
instruction." [1]
Given the memory ordering guarantees of x86, this instruction might only
be releva
Markus Wanner writes:
> Well, that certainly doesn't apply to full fences, that are not specific
> to a particular piece of memory. I'm thinking of 'mfence' on x86_64 or
> 'mf' on ia64.
Hm, what do those do exactly? We've never had any such thing in the
Intel-ish spinlock asm, but if out-of-orde
Robert Haas writes:
> I completely agree, but I'm not too sure I want to drop support for
> any platform for which we haven't yet implemented such primitives.
> What's different about this case is that "fall back to taking the spin
> lock" is not a workable option.
The point I was trying to make
Robert Haas writes:
> ... The reason memory
> barriers solve the problem is because they'll be atomically released
> when we jump into the signal handler, but that is not true of a
> spin-lock or a semaphore.
Hm, I wonder whether your concern is stemming from a wrong mental
model. There is nothi
On 11/19/2010 03:58 PM, Aidan Van Dyk wrote:
> Well, it's not quite enough just to call into the kernel to serialize
> on "some point of memory", because your point is to make sure that
> *this particular piece of memory* is coherent.
Well, that certainly doesn't apply to full fences, that are not
On Friday 19 November 2010 15:58:39 Aidan Van Dyk wrote:
> On Fri, Nov 19, 2010 at 9:49 AM, Andres Freund wrote:
> > Well, it's not generally true - you are right there. But there is a wide
> > range for syscalls available where it's inherently true (which is what I
> > sloppily referred to). And yo
On Fri, Nov 19, 2010 at 9:31 AM, Robert Haas wrote:
>> Just a small point of clarification - you need to have both that
>> unknown architecture, and that architecture has to have postgres
>> process running simultaneously on different CPUs with different
>> caches that are incoherent to have thos
On Fri, Nov 19, 2010 at 10:01 AM, Tom Lane wrote:
> Robert Haas writes:
>> If we're going to work on memory primitives, I would much rather see
>> us put that effort into, say, implementing more efficient LWLock
>> algorithms to solve the bottlenecks that the MOSBENCH guys found,
>> rather than s
Robert Haas writes:
> If we're going to work on memory primitives, I would much rather see
> us put that effort into, say, implementing more efficient LWLock
> algorithms to solve the bottlenecks that the MOSBENCH guys found,
> rather than spending it on trying to avoid a minor API complication
>
On Fri, Nov 19, 2010 at 9:49 AM, Andres Freund wrote:
> Well, it's not generally true - you are right there. But there is a wide range
> for syscalls available where it's inherently true (which is what I sloppily
> referred to). And you are allowed to call a, although quite restricted, set of
> syst
On Fri, Nov 19, 2010 at 9:51 AM, Andres Freund wrote:
> On Friday 19 November 2010 15:49:45 Robert Haas wrote:
>> If we're going to work on memory primitives, I would much rather see
>> us put that effort into, say, implementing more efficient LWLock
>> algorithms to solve the bottlenecks that the
On Friday 19 November 2010 15:49:45 Robert Haas wrote:
> If we're going to work on memory primitives, I would much rather see
> us put that effort into, say, implementing more efficient LWLock
> algorithms to solve the bottlenecks that the MOSBENCH guys found,
> rather than spending it on trying to
On Friday 19 November 2010 15:14:58 Robert Haas wrote:
> On Thu, Nov 18, 2010 at 11:38 PM, Tom Lane wrote:
> > Robert Haas writes:
> >> I'm all in favor of having some memory ordering primitives so that we
> >> can try to implement better algorithms, but if we use it here it
> >> amounts to a fai
On Friday 19 November 2010 15:38:37 Robert Haas wrote:
> Eh, really? If there's a workaround for platforms for which we don't
> know what the appropriate read-fencing incantation is, then I'd feel
> more comfortable about doing this. But I don't see how to make that
> work. The whole problem her
On Fri, Nov 19, 2010 at 9:35 AM, Aidan Van Dyk wrote:
> On Fri, Nov 19, 2010 at 9:31 AM, Robert Haas wrote:
>>> Just a small point of clarification - you need to have both that
>>> unknown architecture, and that architecture has to have postgres
>>> process running simultaneously on different CPU
On Fri, Nov 19, 2010 at 9:29 AM, Andres Freund wrote:
> On Friday 19 November 2010 15:16:24 Robert Haas wrote:
>> On Fri, Nov 19, 2010 at 3:07 AM, Andres Freund wrote:
>> > So the complicated case seems to be !defined(HAS_TEST_AND_SET) which uses
>> > spinlocks for that purpose - no idea where th
On Friday 19 November 2010 15:29:10 Andres Freund wrote:
> Besides, we can just jump into the kernel and back in that case (which the
> TAS implementation already does), that does more than just a fence...
Or if you don't believe that is enough initialize a lock on the stack, lock
and forget it..
On Fri, Nov 19, 2010 at 9:16 AM, Robert Haas wrote:
> On Fri, Nov 19, 2010 at 3:07 AM, Andres Freund wrote:
>> So the complicated case seems to be !defined(HAS_TEST_AND_SET) which uses
>> spinlocks for that purpose - no idea where that is true these days.
>
> Me neither, which is exactly the prob
On Fri, Nov 19, 2010 at 9:27 AM, Aidan Van Dyk wrote:
> On Fri, Nov 19, 2010 at 9:16 AM, Robert Haas wrote:
>> On Fri, Nov 19, 2010 at 3:07 AM, Andres Freund wrote:
>>> So the complicated case seems to be !defined(HAS_TEST_AND_SET) which uses
>>> spinlocks for that purpose - no idea where that i
On Friday 19 November 2010 15:16:24 Robert Haas wrote:
> On Fri, Nov 19, 2010 at 3:07 AM, Andres Freund wrote:
> > So the complicated case seems to be !defined(HAS_TEST_AND_SET) which uses
> > spinlocks for that purpose - no idea where that is true these days.
> Me neither, which is exactly the pr
On Fri, Nov 19, 2010 at 3:07 AM, Andres Freund wrote:
> So the complicated case seems to be !defined(HAS_TEST_AND_SET) which uses
> spinlocks for that purpose - no idea where that is true these days.
Me neither, which is exactly the problem. Under Tom's proposal, any
architecture we don't explic
On Thu, Nov 18, 2010 at 11:38 PM, Tom Lane wrote:
> Robert Haas writes:
>> I'm all in favor of having some memory ordering primitives so that we
>> can try to implement better algorithms, but if we use it here it
>> amounts to a fairly significant escalation in the minimum requirements
>> to comp
On Friday 19 November 2010 05:38:14 Tom Lane wrote:
> Robert Haas writes:
> > I'm all in favor of having some memory ordering primitives so that we
> > can try to implement better algorithms, but if we use it here it
> > amounts to a fairly significant escalation in the minimum requirements
> > to
Robert Haas writes:
> I'm all in favor of having some memory ordering primitives so that we
> can try to implement better algorithms, but if we use it here it
> amounts to a fairly significant escalation in the minimum requirements
> to compile PG (which is bad) rather than just a performance
> op
On Thu, Nov 18, 2010 at 5:17 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Nov 15, 2010 at 11:12 AM, Tom Lane wrote:
>>> Hmm ... I just remembered the reason why we didn't use a spinlock in
>>> these functions already. Namely, that it's unsafe for a signal handler
>>> to try to acquire a
Robert Haas writes:
> On Mon, Nov 15, 2010 at 11:12 AM, Tom Lane wrote:
>> Hmm ... I just remembered the reason why we didn't use a spinlock in
>> these functions already. Namely, that it's unsafe for a signal handler
>> to try to acquire a spinlock that the interrupted code might be holding.
>
On Mon, Nov 15, 2010 at 11:12 AM, Tom Lane wrote:
> Heikki Linnakangas writes:
>> In SetLatch, is it enough to add the SpinLockAcquire() call *after*
>> checking that is_set is not already set? Ie. still do the quick exit
>> without holding a lock. Or do we need a memory barrier operation before
Heikki Linnakangas writes:
> In SetLatch, is it enough to add the SpinLockAcquire() call *after*
> checking that is_set is not already set? Ie. still do the quick exit
> without holding a lock. Or do we need a memory barrier operation before
> the fetch, to ensure that we see if the other proce
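Heikki's question, keeping the quick exit in SetLatch but possibly adding a barrier before the fetch, can be sketched as follows. These are hedged, simplified stand-ins (`MyLatch`, `my_set_latch` are invented names; the real latch struct and spinlock handling are more involved):

```c
#include <stdatomic.h>

typedef struct
{
    atomic_int is_set;
    int        owner_pid;   /* the real code would signal this PID */
} MyLatch;

void my_set_latch(MyLatch *latch)
{
    /* The "memory barrier operation before the fetch" the mail asks
     * about: ensure we don't trust a stale cached view of is_set. */
    atomic_thread_fence(memory_order_seq_cst);
    if (atomic_load_explicit(&latch->is_set, memory_order_relaxed))
        return;             /* quick exit without taking any lock */

    /* SpinLockAcquire(&latch->mutex) would go here */
    atomic_store_explicit(&latch->is_set, 1, memory_order_seq_cst);
    /* kill(latch->owner_pid, SIGUSR1); SpinLockRelease(...) */
}

/* Tiny demo: first call sets the latch, second takes the quick exit. */
int demo_set_latch(void)
{
    MyLatch l;
    atomic_init(&l.is_set, 0);
    l.owner_pid = 0;
    my_set_latch(&l);
    my_set_latch(&l);
    return atomic_load(&l.is_set);
}
```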
On 15.11.2010 16:51, Tom Lane wrote:
> Heikki Linnakangas writes:
>> I believe it's safe to
>> assume that two operations using a volatile pointer will not be
>> rearranged wrt. each other.
> This is entirely wrong, so far as cross-processor visibility of changes
> is concerned.
Ok.
In SetLatch, is it en
On Mon, Nov 15, 2010 at 9:51 AM, Tom Lane wrote:
> Heikki Linnakangas writes:
>>>> Hmm, SetLatch only sets one flag, so I don't see how it could malfunction
>>>> all by itself. And I would've thought that declaring the Latch variable
>>>> "volatile" prevents rearrangements.
>>>
>>> It's not a que
Heikki Linnakangas writes:
>>> Hmm, SetLatch only sets one flag, so I don't see how it could malfunction
>>> all by itself. And I would've thought that declaring the Latch variable
>>> "volatile" prevents rearrangements.
>>
>> It's not a question of code rearrangement.
Precisely. "volatile" pre
On Mon, Nov 15, 2010 at 8:45 AM, Heikki Linnakangas wrote:
>> It's not a question of code rearrangement.
>
> Rearrangement of code, rearrangement of CPU instructions, or rearrangement
> of the order the changes in the memory become visible to other processes.
> The end result is the same.
I'll le
On 15.11.2010 15:22, Robert Haas wrote:
> On Mon, Nov 15, 2010 at 2:15 AM, Heikki Linnakangas wrote:
>>>> Can you elaborate?
>>> Weak memory ordering means that stores into shared memory initiated by
>>> one processor are not guaranteed to be observed to occur in the same
>>> sequence by another processor. Th
On Mon, Nov 15, 2010 at 2:15 AM, Heikki Linnakangas wrote:
>>> Can you elaborate?
>>
>> Weak memory ordering means that stores into shared memory initiated by
>> one processor are not guaranteed to be observed to occur in the same
>> sequence by another processor. This implies first that the latc
On 14.11.2010 22:55, Tom Lane wrote:
> Heikki Linnakangas writes:
>> On 13.11.2010 17:07, Tom Lane wrote:
>>> Robert Haas writes:
>>>> Come to think of it, I'm not really sure I understand what protects
>>>> SetLatch() against memory ordering hazards. Is that actually safe?
>>> Hmm ... that's a good question.
Heikki Linnakangas writes:
> On 13.11.2010 17:07, Tom Lane wrote:
>> Robert Haas writes:
>>> Come to think of it, I'm not really sure I understand what protects
>>> SetLatch() against memory ordering hazards. Is that actually safe?
>>
>> Hmm ... that's a good question. It certainly *looks* lik
On 13.11.2010 17:07, Tom Lane wrote:
> Robert Haas writes:
>> Come to think of it, I'm not really sure I understand what protects
>> SetLatch() against memory ordering hazards. Is that actually safe?
> Hmm ... that's a good question. It certainly *looks* like it could
> malfunction on machines with wea
On Sat, Nov 13, 2010 at 10:07 AM, Tom Lane wrote:
> Robert Haas writes:
>> One idea I've had is that we might want to think about defining an
>> operation that is effectively "store, with a memory barrier". For
>> example, on x86, this could be implemented using xchg. I think if you
>> have a s
Robert Haas writes:
> One idea I've had is that we might want to think about defining an
> operation that is effectively "store, with a memory barrier". For
> example, on x86, this could be implemented using xchg. I think if you
> have a single-word variable in shared memory that is always updat
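Robert's "store, with a memory barrier" primitive can be sketched portably with C11's `atomic_exchange`, since on x86 an `xchg` with a memory operand is implicitly locked and acts as a full fence. A hedged sketch (the names `store_with_barrier` and `demo_store` are invented for illustration):

```c
#include <stdatomic.h>

/* Store v into *p with full-barrier semantics.  The old value is
 * discarded; the exchange is what buys the fence. */
static inline void store_with_barrier(atomic_int *p, int v)
{
    (void) atomic_exchange_explicit(p, v, memory_order_seq_cst);
}

/* Tiny demo: a single-word shared flag always updated this way. */
static atomic_int shared_flag;

int demo_store(void)
{
    store_with_barrier(&shared_flag, 7);
    return atomic_load(&shared_flag);
}
```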
On Fri, Nov 12, 2010 at 11:27 PM, Tom Lane wrote:
> Bruce Momjian writes:
>> Right. I propose that we set max_wal_senders to unlimited when
>> wal_level = hot_standby.
>
> It's a memory allocation parameter ... you can't just set it to
> "unlimited", at least not without a nontrivial amount of w
Bruce Momjian writes:
> Right. I propose that we set max_wal_senders to unlimited when
> wal_level = hot_standby.
It's a memory allocation parameter ... you can't just set it to
"unlimited", at least not without a nontrivial amount of work.
regards, tom lane
Josh Berkus wrote:
>
> > None of us know. What I do know is that I don't want PostgreSQL to be
> > slower out of the box.
>
> Understandable. So it seems like the answer is getting replication down
> to one configuration variable for the common case. That eliminates the
> cycle of "oops, need
Andres Freund wrote:
> I guess you built both in the same place and just prefix installed
> it to different directories?
We always build in a directory tree with a name based on the
version, with a prefix based on the version. This is routine for
us. I have a hard time believing that they ma
Hi,
On Wednesday 03 November 2010 20:28:03 Kevin Grittner wrote:
> They said that except for the quirky path behavior, the installation
> went fine; the Wiki page instructions were clear and adequate and
> that installation process was not difficult or confusing.
>
> This path issue sounds like a
[going back on list with this]
Selena Deckelmann wrote:
> Kevin Grittner wrote:
>> the other three DBAs here implemented the HS/SR while I was out
>> They told me that it was working great once they figured it out,
>> but it was confusing; it took them a lot of time and a few false
>> starts to get it
On Thu, 2010-10-28 at 17:12 -0700, Josh Berkus wrote:
> > Sorry, didn't know... I have 122 responses so far, which I think will be
> > surprising (some of them certainly surprised me). I will keep it open
> > until next week and then post the results.
>
> Well, for any community site poll, I hope
> Sorry, didn't know... I have 122 responses so far, which I think will be
> surprising (some of them certainly surprised me). I will keep it open
> until next week and then post the results.
Well, for any community site poll, I hope you realize that there's a LOT
of sampling error. Here's anoth
On Thu, 2010-10-28 at 16:25 -0700, Josh Berkus wrote:
> > https://www.postgresqlconference.org/content/replication-poll
> >
> > You don't have to login to take it but of course it helps with validity
> > of results.
>
> Oh, I'd already put something up on http://www.postgresql.org/community
Sorr
> https://www.postgresqlconference.org/content/replication-poll
>
> You don't have to login to take it but of course it helps with validity
> of results.
Oh, I'd already put something up on http://www.postgresql.org/community
--
-- Josh Berkus
On Thu, 2010-10-28 at 07:05 -0500, Kevin Grittner wrote:
> "Joshua D. Drake" wrote:
> > On Wed, 2010-10-27 at 19:52 -0400, Robert Haas wrote:
> >> Josh Berkus wrote:
> >
> >>> *you don't know* how many .org users plan to implement
> >>> replication, whether it's a minority or majority.
> >>
> >
Alvaro Herrera writes:
> BTW I note that there are no elog(ERROR) calls in that code path at all,
> because syscall errors are ignored, so PANIC is not a concern (as the
> code stands currently, at least). ISTM it would be good to have a
> comment on SetLatch stating that it's used inside critica
Excerpts from Tom Lane's message of Wed Oct 27 19:01:38 -0300 2010:
> I don't know what Simon is thinking, but I think he's nuts. There is
> obvious extra overhead in COMMIT:
>
> /*
> * Wake up all walsenders to send WAL up to the COMMIT record
> * immediately if rep
"Joshua D. Drake" wrote:
> On Wed, 2010-10-27 at 19:52 -0400, Robert Haas wrote:
>> Josh Berkus wrote:
>
>>> *you don't know* how many .org users plan to implement
>>> replication, whether it's a minority or majority.
>>
>> None of us know. What I do know is that I don't want PostgreSQL to
>> b
> None of us know. What I do know is that I don't want PostgreSQL to be
> slower out of the box.
Understandable. So it seems like the answer is getting replication down
to one configuration variable for the common case. That eliminates the
cycle of "oops, need to set X and restart/reload" with
On Wed, 2010-10-27 at 19:52 -0400, Robert Haas wrote:
> On Wed, Oct 27, 2010 at 7:45 PM, Josh Berkus wrote:
> >> I would also agree that the minority of our users will want replication.
> >> The majority of CMD customers, PGX customers, EDB Customers will want
> >> replication but that is by far N
On Wed, Oct 27, 2010 at 7:45 PM, Josh Berkus wrote:
>> I would also agree that the minority of our users will want replication.
>> The majority of CMD customers, PGX customers, EDB Customers will want
>> replication but that is by far NOT the majority of our (.Org) users.
>
> That just means that
> I would also agree that the minority of our users will want replication.
> The majority of CMD customers, PGX customers, EDB Customers will want
> replication but that is by far NOT the majority of our (.Org) users.
That just means that *you don't know* how many .org users plan to
implement rep
On Wed, 2010-10-27 at 16:13 -0700, Josh Berkus wrote:
> > That's not even considering the extra WAL that is generated when you
> > move up from wal_level = "minimal". That's probably the bigger
> > performance issue in practice.
>
> Yeah, I think we've established that we can't change that.
>
>
> That's not even considering the extra WAL that is generated when you
> move up from wal_level = "minimal". That's probably the bigger
> performance issue in practice.
Yeah, I think we've established that we can't change that.
> I said, and meant, that you didn't make the case at all; you just
Josh Berkus writes:
>> You're assuming that we should set up the default behavior to support
>> replication and penalize those who aren't using it.
> What's the penalty? Simon just said that there isn't one.
I don't know what Simon is thinking, but I think he's nuts. There is
obvious extra
On Wed, Oct 27, 2010 at 12:33 PM, Tom Lane wrote:
> You're assuming that we should set up the default behavior to support
> replication and penalize those who aren't using it. Considering that
> we haven't even *had* replication until now, it seems a pretty safe
> bet that the majority of our use
> You're assuming that we should set up the default behavior to support
> replication and penalize those who aren't using it.
What's the penalty? Simon just said that there isn't one.
And there's a difference between saying that I "failed to make a case"
vs. "the cost is too great". Saying the
On Wed, 2010-10-27 at 15:33 -0400, Tom Lane wrote:
> Josh Berkus writes:
> >>> Josh has completely failed to make a case that
> >>> that should be the default.
> >>
> >> Agreed.
>
> > In what way have I failed to make a case?
>
> You're assuming that we should set up the default behavior to sup
Josh Berkus writes:
>>> Josh has completely failed to make a case that
>>> that should be the default.
>>
>> Agreed.
> In what way have I failed to make a case?
You're assuming that we should set up the default behavior to support
replication and penalize those who aren't using it. Considering
On Wed, 2010-10-27 at 10:05 -0700, Josh Berkus wrote:
> >> Josh has completely failed to make a case that
> >> that should be the default.
> >
> > Agreed.
>
> In what way have I failed to make a case?
I just removed a huge hurdle on the journey to simplification. That
doesn't mean I think you hav
On Tue, 2010-10-19 at 15:32 -0400, Tom Lane wrote:
> Robert Haas writes:
> > On Tue, Oct 19, 2010 at 12:18 PM, Josh Berkus wrote:
> >> On 10/19/2010 09:06 AM, Greg Smith wrote:
> >>> I think Magnus's idea to bump the default to 5 triages the worst of the
> >>> annoyance here, without dropping the
On Thu, Oct 21, 2010 at 8:33 PM, Bruce Momjian wrote:
> Robert Haas wrote:
>> On Thu, Oct 21, 2010 at 4:21 PM, Josh Berkus wrote:
>> > On 10/20/10 6:54 PM, Robert Haas wrote:
>> >> I find it impossible to believe that's
>> >> a good decision, and IMHO we should be focusing on how to make the
>> >
Robert Haas wrote:
> On Thu, Oct 21, 2010 at 4:21 PM, Josh Berkus wrote:
> > On 10/20/10 6:54 PM, Robert Haas wrote:
> >> I find it impossible to believe that's
> >> a good decision, and IMHO we should be focusing on how to make the
> >> parameters PGC_SIGHUP rather than PGC_POSTMASTER, which woul
Robert Haas wrote:
> On Wed, Oct 20, 2010 at 3:40 PM, Greg Stark wrote:
> > On Wed, Oct 20, 2010 at 6:29 AM, Robert Haas wrote:
> >> Exactly. It doesn't take many 3-7% slowdowns to add up to being 50%
> >> or 100% slower, and that sucks. In fact, I'm still not convinced that
> >> we were wise t
On Thu, Oct 21, 2010 at 4:21 PM, Josh Berkus wrote:
> On 10/20/10 6:54 PM, Robert Haas wrote:
>> I find it impossible to believe that's
>> a good decision, and IMHO we should be focusing on how to make the
>> parameters PGC_SIGHUP rather than PGC_POSTMASTER, which would give us
>> most of the same
On 10/20/10 6:54 PM, Robert Haas wrote:
> I find it impossible to believe that's
> a good decision, and IMHO we should be focusing on how to make the
> parameters PGC_SIGHUP rather than PGC_POSTMASTER, which would give us
> most of the same benefits without throwing away hard-won performance.
I'd
Josh Berkus wrote:
> If we could agree on some workloads, I could run some benchmarks. I'm
> not sure what those would be though, given that COPY and ALTER TABLE
> aren't generally included in most benchmarks.
You can usefully and easily benchmark this by timing a simple pgbench
initialization at a
On Wed, Oct 20, 2010 at 6:17 PM, Josh Berkus wrote:
>> Quite. Josh, have you got any evidence showing that the penalty is
>> only 10%? There are cases, such as COPY and ALTER TABLE, where
>> you'd be looking at 2X or worse penalties, because of the existing
>> optimizations that avoid writing WA
> Quite. Josh, have you got any evidence showing that the penalty is
> only 10%? There are cases, such as COPY and ALTER TABLE, where
> you'd be looking at 2X or worse penalties, because of the existing
> optimizations that avoid writing WAL at all for operations where a
> single final fsync can
On Wed, Oct 20, 2010 at 1:12 PM, Robert Haas wrote:
> On Wed, Oct 20, 2010 at 3:40 PM, Greg Stark wrote:
>> On Wed, Oct 20, 2010 at 6:29 AM, Robert Haas wrote:
>
>>> Actually, I think the best thing for default_statistics_target might
>>> be to scale the target based on the number of rows in the
On Wed, Oct 20, 2010 at 3:40 PM, Greg Stark wrote:
> On Wed, Oct 20, 2010 at 6:29 AM, Robert Haas wrote:
>> Exactly. It doesn't take many 3-7% slowdowns to add up to being 50%
>> or 100% slower, and that sucks. In fact, I'm still not convinced that
>> we were wise to boost default_statistics_ta
On Wed, Oct 20, 2010 at 6:29 AM, Robert Haas wrote:
> Exactly. It doesn't take many 3-7% slowdowns to add up to being 50%
> or 100% slower, and that sucks. In fact, I'm still not convinced that
> we were wise to boost default_statistics_target as much as we did. I
> argued for a smaller boost a
On Wed, Oct 20, 2010 at 10:53 AM, Alvaro Herrera wrote:
> Excerpts from Robert Haas's message of Wed Oct 20 10:29:04 -0300 2010:
>
>> Actually, I think the best thing for default_statistics_target might
>> be to scale the target based on the number of rows in the table, e.g.
>> given N rows:
>>
>>