On Aug 4, 2011, at 8:40 AM, Kevin Grittner wrote:
> Robert Haas wrote:
>> On Wed, Aug 3, 2011 at 6:05 PM, Jim Nasby wrote:
>>> Not sure how much this relates to this discussion, but I have
>>> often wished we had AFTER FOR EACH STATEMENT triggers that
>>> provided OLD and NEW recordsets you could make use of. ...
Robert Haas wrote:
> On Wed, Aug 3, 2011 at 6:05 PM, Jim Nasby wrote:
>> Not sure how much this relates to this discussion, but I have
>> often wished we had AFTER FOR EACH STATEMENT triggers that
>> provided OLD and NEW recordsets you could make use of. Sometimes
>> it's very valuable to be able to look at *all* the ...
Excerpts from Jim Nasby's message of Wed Aug 03 18:05:21 -0400 2011:
> Not sure how much this relates to this discussion, but I have often wished we
> had AFTER FOR EACH STATEMENT triggers that provided OLD and NEW recordsets
> you could make use of. Sometimes it's very valuable to be able to look at ...
On Wed, Aug 3, 2011 at 6:05 PM, Jim Nasby wrote:
> Not sure how much this relates to this discussion, but I have often wished we
> had AFTER FOR EACH STATEMENT triggers that provided OLD and NEW recordsets
> you could make use of. Sometimes it's very valuable to be able to look at
> *all* the ...
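What Jim is describing is close to what the SQL standard calls transition tables: the statement-level trigger fires once and can see every row the statement touched. A minimal standalone C sketch of the idea, with every name invented for illustration (this is not PostgreSQL code): row-level changes accumulate into per-statement OLD and NEW row sets, and one statement-level callback receives both.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical row image: just an id and a value, for illustration. */
    typedef struct Row { int id; int val; } Row;

    /* Per-statement transition sets: all OLD and NEW row images. */
    typedef struct TransitionSets {
        Row *old_rows; int n_old;
        Row *new_rows; int n_new;
    } TransitionSets;

    /* Row-level bookkeeping: record each change instead of firing per row. */
    static void record_update(TransitionSets *ts, Row oldrow, Row newrow) {
        ts->old_rows = realloc(ts->old_rows, (ts->n_old + 1) * sizeof(Row));
        ts->new_rows = realloc(ts->new_rows, (ts->n_new + 1) * sizeof(Row));
        ts->old_rows[ts->n_old++] = oldrow;
        ts->new_rows[ts->n_new++] = newrow;
    }

    /* AFTER ... FOR EACH STATEMENT trigger body: sees *all* the rows at once. */
    static void statement_trigger(const TransitionSets *ts) {
        int delta = 0;
        for (int i = 0; i < ts->n_new; i++)
            delta += ts->new_rows[i].val - ts->old_rows[i].val;
        printf("statement changed %d rows, total delta = %d\n", ts->n_new, delta);
    }

    int main(void) {
        TransitionSets ts = {0};
        /* Simulate one UPDATE statement touching three rows. */
        record_update(&ts, (Row){1, 10}, (Row){1, 15});
        record_update(&ts, (Row){2, 20}, (Row){2, 18});
        record_update(&ts, (Row){3, 30}, (Row){3, 37});
        statement_trigger(&ts);    /* fires once, with the full recordsets */
        free(ts.old_rows);
        free(ts.new_rows);
        return 0;
    }

The memory question is the same one the rest of this thread wrestles with: those recordsets still have to be queued somewhere until the statement finishes.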
On Aug 2, 2011, at 7:09 AM, Simon Riggs wrote:
>> The best compression and flexibility in
>> that case is to store a bitmap since that will average out at about 1
>> bit per row, with variable length bitmaps. Which is about 8 times
>> better compression ratio than originally suggested, without ...
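A standalone C sketch of the bitmap idea, with invented names (not PostgreSQL code): for a run of events on one heap block, set one bit per affected line-pointer offset instead of storing a 6-byte CTID per row. One bit per row versus roughly a byte per row is where the 8x figure comes from.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical per-block event bitmap: one bit per line-pointer offset.
     * 256 offsets is generous; a real heap page holds fewer tuples. */
    #define MAX_OFFSETS 256

    typedef struct BlockBitmap {
        uint32_t block;                    /* heap block number */
        uint8_t  bits[MAX_OFFSETS / 8];    /* 1 bit per offset, 32 bytes max */
        int      max_set;                  /* highest offset seen, for trimming */
    } BlockBitmap;

    static void bitmap_set(BlockBitmap *bm, int offset) {
        bm->bits[offset / 8] |= (uint8_t) (1 << (offset % 8));
        if (offset > bm->max_set)
            bm->max_set = offset;
    }

    int main(void) {
        BlockBitmap bm;
        memset(&bm, 0, sizeof(bm));
        bm.block = 42;

        /* A statement touching offsets 1..100 of block 42. */
        for (int off = 1; off <= 100; off++)
            bitmap_set(&bm, off);

        /* Variable length: store only the bytes up to the highest bit set. */
        int stored = bm.max_set / 8 + 1;
        printf("100 rows: %d bitmap bytes, vs 100 at a byte per row, vs 600 as raw CTIDs\n",
               stored);
        return 0;
    }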
On Tue, Aug 2, 2011 at 12:28 PM, Dean Rasheed wrote:
> On 1 August 2011 21:02, Simon Riggs wrote:
>> I would prefer an approach where we store the events in a buffer,
>> which gets added to the main event queue when it fills/block number
>> changes/etc. That way we can apply intelligence to the actual ...
On 1 August 2011 21:02, Simon Riggs wrote:
> I would prefer an approach where we store the events in a buffer,
> which gets added to the main event queue when it fills/block number
> changes/etc. That way we can apply intelligence to the actual
> compression format used, yet retain all required ...
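A sketch of the buffering scheme, again in standalone C with invented names: events collect in a small staging buffer, and when the block number changes or the buffer fills, the batch is flushed to the main queue. The flush routine is then the single place where a representation (bitmap, offset deltas, raw CTIDs) can be chosen per batch.

    #include <stdint.h>
    #include <stdio.h>

    #define BUFSZ 64    /* staging buffer capacity, arbitrary for illustration */

    typedef struct EventBuffer {
        uint32_t block;             /* block all buffered offsets belong to */
        uint16_t offsets[BUFSZ];    /* line-pointer offsets of affected rows */
        int      n;
    } EventBuffer;

    /* Flush one batch to the main queue; this is where a compressed
     * format could be picked based on the batch's actual shape. */
    static void flush_to_queue(EventBuffer *buf) {
        if (buf->n == 0)
            return;
        printf("flush: block %u, %d events in one batch\n",
               (unsigned) buf->block, buf->n);
        buf->n = 0;
    }

    static void add_event(EventBuffer *buf, uint32_t block, uint16_t offset) {
        /* Block changed or buffer full: hand the batch to the main queue. */
        if (buf->n == BUFSZ || (buf->n > 0 && buf->block != block))
            flush_to_queue(buf);
        buf->block = block;
        buf->offsets[buf->n++] = offset;
    }

    int main(void) {
        EventBuffer buf = {0};
        for (uint16_t off = 1; off <= 80; off++)    /* 80 rows on block 7 */
            add_event(&buf, 7, off);
        add_event(&buf, 8, 1);     /* block change forces a flush */
        flush_to_queue(&buf);      /* final flush at end of statement */
        return 0;
    }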
On 1 August 2011 20:53, Tom Lane wrote:
> Dean Rasheed writes:
>> OK, so I should split this into 2 patches?
>> Even without the compression, it's probably worth the 16 -> 10 byte
>> reduction that would result from removing the 2nd CTID in the UPDATE
>> case, and that part would be a pretty small patch. ...
On Mon, Aug 1, 2011 at 7:56 PM, Tom Lane wrote:
> However, this means that Dean is not simply adding some self-contained
> compression logic; he's eliminating information from the data structure
> on the grounds that he can get it from elsewhere. I think that that
> ought to be treated as a separate ...
Dean Rasheed writes:
> OK, so I should split this into 2 patches?
> Even without the compression, it's probably worth the 16 -> 10 byte
> reduction that would result from removing the 2nd CTID in the UPDATE
> case, and that part would be a pretty small patch.
Yeah, my point exactly. The rest of ...
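The arithmetic behind 16 -> 10: a CTID is 6 bytes (a 4-byte block number plus a 2-byte line offset), so 4 bytes of flags plus two CTIDs is 16 bytes, and dropping the UPDATE's second CTID leaves 10. The packed structs below mirror those shapes for illustration only; they are a sketch, not the actual PostgreSQL declarations.

    #include <stdint.h>
    #include <stdio.h>

    /* 6-byte tuple identifier: 4-byte block number (held as two uint16s,
     * which is how ItemPointerData keeps its alignment at 2) + 2-byte offset. */
    typedef struct __attribute__((packed)) Ctid {
        uint16_t block_hi, block_lo;
        uint16_t offset;
    } Ctid;

    /* UPDATE event as currently stored: flags + old CTID + new CTID. */
    typedef struct __attribute__((packed)) EventTwoCtids {
        uint32_t flags;
        Ctid     old_ctid;
        Ctid     new_ctid;
    } EventTwoCtids;

    /* With the second CTID dropped (recoverable from the old tuple's
     * update chain), the event shrinks to 10 bytes. Packing matters:
     * unpacked, the compiler would pad this back out to 12. */
    typedef struct __attribute__((packed)) EventOneCtid {
        uint32_t flags;
        Ctid     old_ctid;
    } EventOneCtid;

    int main(void) {
        printf("ctid = %zu, two-ctid event = %zu, one-ctid event = %zu\n",
               sizeof(Ctid), sizeof(EventTwoCtids), sizeof(EventOneCtid));
        /* prints: ctid = 6, two-ctid event = 16, one-ctid event = 10 */
        return 0;
    }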
On 1 August 2011 19:56, Tom Lane wrote:
> Dean Rasheed writes:
>> On 1 August 2011 18:55, Tom Lane wrote:
>>> Robert Haas writes:
>>>> On Mon, Aug 1, 2011 at 1:42 PM, Dean Rasheed wrote:
>>>>>> Don't we already do that when pruning HOT chains?
>>>>> I thought that only happens after the transaction is committed, ...
Dean Rasheed writes:
> On 1 August 2011 18:55, Tom Lane wrote:
>> Robert Haas writes:
>>> On Mon, Aug 1, 2011 at 1:42 PM, Dean Rasheed wrote:
>>>>> Don't we already do that when pruning HOT chains?
>>>> I thought that only happens after the transaction is committed, and
>>>> old enough, whereas the trigger code only needs to follow the chain in ...
On 1 August 2011 18:55, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Aug 1, 2011 at 1:42 PM, Dean Rasheed wrote:
>>>> Don't we already do that when pruning HOT chains?
>>> I thought that only happens after the transaction is committed, and
>>> old enough, whereas the trigger code only needs to follow the chain in ...
Robert Haas writes:
> On Mon, Aug 1, 2011 at 1:42 PM, Dean Rasheed wrote:
>>> Don't we already do that when pruning HOT chains?
>> I thought that only happens after the transaction is committed, and
>> old enough, whereas the trigger code only needs to follow the chain in
>> the updating transaction ...
On Mon, Aug 1, 2011 at 1:42 PM, Dean Rasheed wrote:
>>> Hmm ... not sure. It seems a bit scary, but on the other hand we
>>> should be able to assume that the updating subtransaction hasn't been
>>> rolled back (else surely we shouldn't be firing the trigger). So in
>>> principle it seems like ...
On 1 August 2011 18:36, Robert Haas wrote:
> On Mon, Aug 1, 2011 at 1:31 PM, Tom Lane wrote:
>> Dean Rasheed writes:
>>> On 1 August 2011 17:49, Tom Lane wrote:
>>>> Ummm ... I only read the data structure comments, not the code, but I
>>>> don't see where you store the second CTID for an update event?
On Mon, Aug 1, 2011 at 1:31 PM, Tom Lane wrote:
> Dean Rasheed writes:
>> On 1 August 2011 17:49, Tom Lane wrote:
>>> Ummm ... I only read the data structure comments, not the code, but I
>>> don't see where you store the second CTID for an update event?
>
>> Ah yes, I forgot to mention that bit. I'm using ...
Dean Rasheed writes:
> On 1 August 2011 17:49, Tom Lane wrote:
>> Ummm ... I only read the data structure comments, not the code, but I
>> don't see where you store the second CTID for an update event?
> Ah yes, I forgot to mention that bit. I'm using
> &(tuple1.t_data->t_ctid) to get the second CTID ...
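The reason the second CTID is redundant: after an UPDATE, the old heap tuple's header field t_ctid points at the successor version of the row, so the new tuple's location can be recovered by fetching the old tuple rather than stored in the event. A toy model in C, deliberately simplified (this is not the real HeapTupleHeaderData):

    #include <stdint.h>
    #include <stdio.h>

    typedef struct Ctid { uint32_t block; uint16_t offset; } Ctid;

    /* Toy stand-in for a heap tuple header: after an UPDATE, t_ctid
     * points at the new version of the row (for the newest version it
     * points at the tuple itself). */
    typedef struct TupleHeader {
        Ctid t_ctid;
        /* xmin/xmax/infomask etc. omitted */
    } TupleHeader;

    /* Given the OLD tuple of an UPDATE event, the NEW tuple's location
     * need not be stored in the event: follow t_ctid instead. */
    static Ctid new_version_ctid(const TupleHeader *old_tuple) {
        return old_tuple->t_ctid;
    }

    int main(void) {
        /* The UPDATE left the successor of this tuple at (7,9). */
        TupleHeader old_tuple = { .t_ctid = { .block = 7, .offset = 9 } };
        Ctid newloc = new_version_ctid(&old_tuple);
        printf("new version at (%u,%u)\n",
               (unsigned) newloc.block, (unsigned) newloc.offset);
        return 0;
    }

The catch, as the sub-thread above shows, is proving it is always safe to follow that chain at trigger-firing time rather than capturing the new TID at queue time.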
On 1 August 2011 17:49, Tom Lane wrote:
> Dean Rasheed writes:
>> I've been thinking some more about the long-standing problem of the
>> AFTER TRIGGER queue using too much memory, and I think that the
>> situation can be improved by using some basic compression.
>
>> Currently each event added to the AFTER TRIGGER queue uses 10 bytes ...
Dean Rasheed writes:
> I've been thinking some more about the long-standing problem of the
> AFTER TRIGGER queue using too much memory, and I think that the
> situation can be improved by using some basic compression.
> Currently each event added to the AFTER TRIGGER queue uses 10 bytes
> per trigger per row for INSERTs and DELETEs ...
I've been thinking some more about the long-standing problem of the
AFTER TRIGGER queue using too much memory, and I think that the
situation can be improved by using some basic compression.
Currently each event added to the AFTER TRIGGER queue uses 10 bytes
per trigger per row for INSERTs and DELETEs ...
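To see the scale of the problem, the arithmetic is worth writing out (workload numbers invented for illustration): at 10 bytes per trigger per row, one statement touching 100 million rows on a table with a single AFTER trigger queues about 1 GB before the first trigger fires.

    #include <stdio.h>

    int main(void) {
        long long rows = 100000000LL;    /* rows touched by one statement */
        int triggers = 1;                /* AFTER triggers on the table */
        int bytes_per_event = 10;        /* per trigger per row (INSERT/DELETE) */

        long long total = rows * triggers * bytes_per_event;
        printf("queued: %lld bytes (~%.1f GB)\n", total, total / 1e9);
        return 0;
    }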