From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Monday, September 12, 2011 9:31 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux
On Mon, Sep 12, 2011 at 11:07 AM, Amit Kapila
wrote:
>>If you know what transactions were running the last time a snapshot
>>summary was written and what transactions have ended since then, you
>>can work out the new xmin on the fly.
On Tue, Sep 13, 2011 at 7:49 AM, Amit Kapila wrote:
>>Yep, that's pretty much what it does, although xmax is actually
>>defined as the XID *following* the last one that ended, and I think
>>xmin needs to also be in xip, so in this case you'd actually end up
>>with xmin = 15, xmax = 22, xip = { 15,
>> 4. Won't it have an effect if we don't update xmin every time and just
>> note the committed XIDs? The reason I am asking is that xmin is used in
>> tuple visibility checks, so with the new idea, in some cases, instead of
>> returning early by checking xmin, it has to go through the committed
>> XIDs.
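To make the xmin/xmax/xip mechanics above concrete, here is a minimal
sketch of how such a snapshot is consulted during a visibility check
(simplified: the comparisons must really be wraparound-aware, as with
TransactionIdPrecedes in the actual code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;

    typedef struct
    {
        TransactionId  xmin;   /* every XID before xmin has ended */
        TransactionId  xmax;   /* XID following the last one that ended */
        TransactionId *xip;    /* XIDs in [xmin, xmax) still in progress */
        int            xcnt;   /* number of entries in xip */
    } Snapshot;

    /* true if xid was still running when the snapshot was taken */
    static bool
    XidInSnapshot(TransactionId xid, const Snapshot *snap)
    {
        if (xid < snap->xmin)
            return false;          /* ended before the snapshot */
        if (xid >= snap->xmax)
            return true;           /* began after it; counts as running */
        for (int i = 0; i < snap->xcnt; i++)
            if (snap->xip[i] == xid)
                return true;       /* explicitly listed as in progress */
        return false;              /* in range, but had already ended */
    }

So in the example above, the XIDs listed in xip (including 15) are treated
as in progress, anything at or above 22 likewise, and everything below 15
as finished.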
-----Original Message-----
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Thursday, September 08, 2011 7:50 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux
On Tue, Sep 6, 2011 at 11:06 PM, Amit Kapila wrote:
-----Original Message-----
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Monday, September 12, 2011 7:39 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux
On Sun, Sep 11, 2011 at 11:08 PM, Amit Kapila wrote:
On Mon, Sep 12, 2011 at 11:07 AM, Amit Kapila wrote:
>> If you know what transactions were running the last time a snapshot
>> summary was written and what transactions have ended since then, you can
>> work out the new xmin on the fly. I have working code for this and it's
>> actually quite simple.
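The "work out the new xmin on the fly" step might look something like this
(names and signature are mine, purely illustrative, not the actual patch):
the new xmin is simply the oldest XID from the last summary that has not
ended since.

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;

    static TransactionId
    compute_xmin_on_the_fly(const TransactionId *running_at_summary, int nrunning,
                            const TransactionId *ended_since, int nended,
                            TransactionId next_xid)
    {
        TransactionId xmin = next_xid;      /* fallback: nothing running */

        for (int i = 0; i < nrunning; i++)
        {
            bool ended = false;

            for (int j = 0; j < nended; j++)
                if (ended_since[j] == running_at_summary[i])
                {
                    ended = true;
                    break;
                }
            if (!ended && running_at_summary[i] < xmin)
                xmin = running_at_summary[i];
        }
        return xmin;
    }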
On Sun, Sep 11, 2011 at 11:08 PM, Amit Kapila wrote:
> In the approach mentioned in your idea, it is mentioned that once a
> snapshot is taken, afterwards only committed XIDs will be recorded, and
> sometimes the snapshot itself.
>
> So when will the xmin be updated, according to your idea, as the
> snapshot will not
On Tue, Sep 6, 2011 at 11:06 PM, Amit Kapila wrote:
> 1. With the above, you want to reduce/remove the concurrency issue between
> the GetSnapshotData() [used at the beginning of SQL command execution] and
> ProcArrayEndTransaction() [used at end of transaction]. The concurrency
> issue is mainly ProcArrayLock contention.
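For readers who haven't been in procarray.c: the contention Amit refers to
is that snapshot takers scan the shared array under a shared lock while
each committing backend needs it exclusively. A toy pthreads model of the
pattern (PostgreSQL uses its own LWLocks, not pthreads; this only
illustrates the shared-vs-exclusive interaction):

    #include <pthread.h>

    static pthread_rwlock_t proc_array_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* start of each SQL statement */
    static void
    get_snapshot_data(void)
    {
        pthread_rwlock_rdlock(&proc_array_lock);  /* shared: blocks committers */
        /* ... scan every backend's entry to build xmin/xmax/xip ... */
        pthread_rwlock_unlock(&proc_array_lock);
    }

    /* at commit or abort */
    static void
    proc_array_end_transaction(void)
    {
        pthread_rwlock_wrlock(&proc_array_lock);  /* exclusive: blocks snapshots */
        /* ... clear our XID so later snapshots don't see us as running ... */
        pthread_rwlock_unlock(&proc_array_lock);
    }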
From: [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Robert Haas
Sent: Sunday, August 28, 2011 7:17 AM
To: Gokulakannan Somasundaram
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux
On Sat, Aug 27, 2011 at 1:38 AM, Gokulakannan Somasundaram
wrote:
> First I respectfully disagree with you on the point of 80MB.
On Sun, Aug 28, 2011 at 4:33 AM, Gokulakannan Somasundaram
wrote:
>> No, I don't think it will all be in memory - but that's part of the
>> performance calculation. If you need to check on the status of an XID
>> and find that you need to read a page of data in from disk, that's
>> going to be many orders of magnitude slower than anything we do with
>> snapshots now.
> No, I don't think it will all be in memory - but that's part of the
> performance calculation. If you need to check on the status of an XID
> and find that you need to read a page of data in from disk, that's
> going to be many orders of magnitude slower than anything we do with
> snapshots now.
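To see where the orders of magnitude come from: transaction status is kept
two bits per XID on 8 KB pages (clog-style), so a probe is a couple of
memory references when the page is cached and a full disk read when it is
not. The helper names below are invented for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;

    #define XIDS_PER_PAGE 32768     /* 8192 bytes * 4 XIDs per byte */

    extern bool  page_in_cache(uint32_t pageno);        /* assumed helpers, */
    extern char *get_cached_page(uint32_t pageno);      /* not real APIs    */
    extern char *read_page_from_disk(uint32_t pageno);

    static int
    xid_status(TransactionId xid)
    {
        uint32_t pageno = xid / XIDS_PER_PAGE;
        char    *page = page_in_cache(pageno)
            ? get_cached_page(pageno)        /* fast path: nanoseconds */
            : read_page_from_disk(pageno);   /* slow path: a disk I/O  */

        uint32_t byteno = (xid % XIDS_PER_PAGE) / 4;   /* 4 XIDs per byte */
        uint32_t shift  = (xid % 4) * 2;               /* 2 bits per XID  */

        return (page[byteno] >> shift) & 0x3;
    }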
On Sat, Aug 27, 2011 at 1:38 AM, Gokulakannan Somasundaram
wrote:
> First I respectfully disagree with you on the point of 80MB. I would say
> that it's very rare that a small system (with <1 GB RAM) might have a long
> running transaction sitting idle, while 10 million transactions are sitting
>
On Tue, Aug 23, 2011 at 5:25 AM, Robert Haas wrote:
> I've been giving this quite a bit more thought, and have decided to
> abandon the scheme described above, at least for now. It has the
> advantage of avoiding virtually all locking, but it's extremely
> inefficient in its use of memory in the
On Thu, Aug 25, 2011 at 6:29 PM, Jim Nasby wrote:
> Actually, I wasn't thinking about the system dynamically sizing shared
> memory on its own... I was only thinking of providing the ability for a
> user to change something like shared_buffers and allow that change to take
> effect with a SIGHUP.
On Thu, Aug 25, 2011 at 6:24 PM, Jim Nasby wrote:
> On Aug 25, 2011, at 8:24 AM, Robert Haas wrote:
>> My hope (and it might turn out that I'm an optimist) is that even with
>> a reasonably small buffer it will be very rare for a backend to
>> experience a wraparound condition. For example, consider a buffer
>> with ~6500 entries, approximately 64 * MaxBackends.
On Aug 22, 2011, at 6:22 PM, Robert Haas wrote:
> With respect to a general-purpose shared memory allocator, I think
> that there are cases where that would be useful to have, but I don't
> think there are as many of them as many people seem to think. I
> wouldn't choose to implement this using a
On Aug 25, 2011, at 8:24 AM, Robert Haas wrote:
> My hope (and it might turn out that I'm an optimist) is that even with
> a reasonably small buffer it will be very rare for a backend to
> experience a wraparound condition. For example, consider a buffer
> with ~6500 entries, approximately 64 * MaxBackends.
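(Filling in the arithmetic: with the default max_connections = 100, plus
autovacuum workers and the launcher, 64 * MaxBackends lands in the ~6500
range quoted; a backend would have to sleep through thousands of
transaction completions between two looks at the buffer before it could
hit the wraparound fallback.)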
On Thu, Aug 25, 2011 at 11:15 AM, Markus Wanner wrote:
> On 08/25/2011 04:59 PM, Tom Lane wrote:
>> That's a good point. If the ring buffer size creates a constraint on
>> the maximum number of sub-XIDs per transaction, you're going to need a
>> fallback path of some sort.
>
> I think Robert envisions the same fallback path we already have:
> subxids.overflowed.
Tom,
On 08/25/2011 04:59 PM, Tom Lane wrote:
> That's a good point. If the ring buffer size creates a constraint on
> the maximum number of sub-XIDs per transaction, you're going to need a
> fallback path of some sort.
I think Robert envisions the same fallback path we already have:
subxids.overflowed.
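That fallback, for context, is the per-backend subtransaction cache: a
fixed array of subxids plus an overflowed flag, after which readers must
consult pg_subtrans. A sketch (modeled on the PGPROC cache, constants as
in 9.1; comments mine):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;

    #define PGPROC_MAX_CACHED_SUBXIDS 64

    typedef struct
    {
        int           nxids;
        bool          overflowed;    /* too many subxids to track exactly */
        TransactionId xids[PGPROC_MAX_CACHED_SUBXIDS];
    } XidCache;

    static void
    cache_subxid(XidCache *cache, TransactionId subxid)
    {
        if (cache->nxids < PGPROC_MAX_CACHED_SUBXIDS)
            cache->xids[cache->nxids++] = subxid;
        else
            cache->overflowed = true;   /* readers fall back to pg_subtrans */
    }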
Robert,
On 08/25/2011 04:48 PM, Robert Haas wrote:
> What's a typical message size for imessages?
Most message types in Postgres-R are just a couple bytes in size.
Others, especially change sets, can be up to 8k.
However, I think you'll have an easier job guaranteeing that backends
"consume" the
Robert Haas writes:
> Well, one long-running transaction that only has a single XID is not
> really a problem: the snapshot is still small. But one very old
> transaction that also happens to have a large number of
> subtransactions all of which have XIDs assigned might be a good way to
> stress
On Thu, Aug 25, 2011 at 10:19 AM, Markus Wanner wrote:
> Note, however, that for imessages, I've also had the policy in place
> that a backend *must* consume its message before sending any. And that
> I took great care for all receivers to consume their messages as early
> as possible. None the
Robert,
On 08/25/2011 03:24 PM, Robert Haas wrote:
> My hope (and it might turn out that I'm an optimist) is that even with
> a reasonably small buffer it will be very rare for a backend to
> experience a wraparound condition.
It certainly seems less likely than with the ring-buffer for imessages
On Thu, Aug 25, 2011 at 1:55 AM, Markus Wanner wrote:
>> One difference with snapshots is that only the latest snapshot is of
>> any interest.
>
> Theoretically, yes. But as far as I understood, you proposed the
> backends copy that snapshot to local memory. And copying takes some
> amount of time.
Robert,
On 08/25/2011 04:59 AM, Robert Haas wrote:
> True; although there are some other complications. With a
> sufficiently sophisticated allocator you can avoid mutex contention
> when allocating chunks, but then you have to store a pointer to the
> chunk somewhere or other, and that then requ
On Wed, Aug 24, 2011 at 4:30 AM, Markus Wanner wrote:
> I'm in respectful disagreement regarding the ring-buffer approach and
> think that dynamic allocation can actually be more efficient if done
> properly, because there doesn't need to be head and tail pointers, which
> might turn into a point of contention.
Robert, Jim,
thanks for thinking out loud about dynamic allocation of shared memory.
Very much appreciated.
On 08/23/2011 01:22 AM, Robert Haas wrote:
> With respect to a general-purpose shared memory allocator, I think
> that there are cases where that would be useful to have, but I don't
> think there are as many of them as many people seem to think.
Hello Dimitri,
On 08/23/2011 06:39 PM, Dimitri Fontaine wrote:
> I'm far from familiar with the detailed concepts here, but allow me to
> comment. I have two open questions:
>
> - is it possible to use a distributed algorithm to produce XIDs,
> something like Vector Clocks?
>
> Then each
Robert Haas writes:
> That's certainly a fair concern, and it might even be worse than
> O(n^2). On the other hand, the current approach involves scanning the
> entire ProcArray for every snapshot, even if nothing has changed and
> 90% of the backends are sitting around playing tiddlywinks, so I
Robert Haas writes:
> I think the real trick is figuring out a design that can improve
> concurrency.
I'm far from familiar with the detailed concepts here, but allow me to
comment. I have two open questions:
- is it possible to use a distributed algorithm to produce XIDs,
something like Vector Clocks?
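For concreteness, the textbook mechanism being asked about looks like this
(illustrative only, not a proposal for how PostgreSQL would use it):

    #include <stdint.h>

    #define NNODES 4

    typedef struct { uint64_t slot[NNODES]; } VectorClock;

    /* local event on node `me`: bump our own component */
    static void
    vc_tick(VectorClock *vc, int me)
    {
        vc->slot[me]++;
    }

    /* on receiving a remote clock: component-wise max, then tick */
    static void
    vc_merge(VectorClock *vc, const VectorClock *remote, int me)
    {
        for (int i = 0; i < NNODES; i++)
            if (remote->slot[i] > vc->slot[i])
                vc->slot[i] = remote->slot[i];
        vc_tick(vc, me);
    }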
On Tue, Aug 23, 2011 at 12:13 PM, Tom Lane wrote:
> I'm a bit concerned that this approach is trying to optimize the heavy
> contention situation at the cost of actually making things worse anytime
> that you're not bottlenecked by contention for access to this shared
> data structure. In particu
Robert Haas writes:
> With respect to the first problem, what I'm imagining is that we not
> do a complete rewrite of the snapshot in shared memory on every
> commit. Instead, when a transaction ends, we'll decide whether to (a)
> write a new snapshot or (b) just record the XIDs that ended. If we
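The (a)-or-(b) decision might be sketched like so (threshold and helper
names invented; the real criterion is whatever the patch settles on):

    #include <stdint.h>

    typedef uint32_t TransactionId;
    typedef struct RingBuffer RingBuffer;            /* opaque here */

    #define SUMMARY_REWRITE_THRESHOLD 128            /* made-up knob */

    extern int  xids_since_summary(RingBuffer *rb);          /* assumed */
    extern void write_new_snapshot_summary(RingBuffer *rb);  /* helpers */
    extern void append_ended_xids(RingBuffer *rb,
                                  const TransactionId *xids, int n);

    static void
    record_transaction_end(RingBuffer *rb, const TransactionId *xids, int n)
    {
        if (xids_since_summary(rb) + n > SUMMARY_REWRITE_THRESHOLD)
            write_new_snapshot_summary(rb);   /* (a) write a new snapshot */
        else
            append_ended_xids(rb, xids, n);   /* (b) just record the XIDs */
    }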
On Mon, Aug 22, 2011 at 10:25 PM, Robert Haas wrote:
> I've been giving this quite a bit more thought, and have decided to
> abandon the scheme described above, at least for now.
I liked your goal of O(1) snapshots and think you should go for that.
I didn't realise you were still working on this.
On Mon, Aug 22, 2011 at 6:45 PM, Jim Nasby wrote:
> Something that would be really nice to fix is our reliance on a fixed size of
> shared memory, and I'm wondering if this could be an opportunity to start in
> a new direction. My thought is that we could maintain two distinct shared
> memory segments.
On Aug 22, 2011, at 4:25 PM, Robert Haas wrote:
> What I'm thinking about
> instead is using a ring buffer with three pointers: a start pointer, a
> stop pointer, and a write pointer. When a transaction ends, we
> advance the write pointer, write the XIDs or a whole new snapshot into
> the buffer, and then advance the stop pointer.
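In code, the three-pointer scheme reads roughly like this (sizes and names
invented; the real thing needs locking and memory barriers, all omitted
here): readers may look at [start, stop), the writer stages into
[stop, write) and publishes by advancing stop.

    #include <stdint.h>

    typedef uint32_t TransactionId;

    #define RB_SIZE 8192                 /* entries; made-up size */

    typedef struct
    {
        uint64_t      start;   /* oldest entry any reader may still need */
        uint64_t      stop;    /* end of data readers are allowed to see */
        uint64_t      write;   /* end of data the writer has staged      */
        TransactionId buf[RB_SIZE];
    } SnapRing;

    static void
    rb_publish_xids(SnapRing *rb, const TransactionId *xids, int n)
    {
        for (int i = 0; i < n; i++)
            rb->buf[rb->write++ % RB_SIZE] = xids[i];   /* stage */
        rb->stop = rb->write;                           /* publish */
    }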
On Wed, Jul 27, 2011 at 10:51 PM, Robert Haas wrote:
> On Wed, Oct 20, 2010 at 10:07 PM, Tom Lane wrote:
>> I wonder whether we could do something involving WAL properties --- the
>> current tuple visibility logic was designed before WAL existed, so it's
>> not exploiting that resource at all. I