On Fri, Jan 6, 2012 at 12:45 PM, Robert Haas wrote:
> Oh, that's brilliant. OK, I'll go try that.
All right, that test does in fact reveal the broken-ness of the
current code, and the patch I posted upthread does seem to fix it,
so I've committed that.
After further reflection, I'm thinking

On Fri, Jan 6, 2012 at 12:34 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Fri, Jan 6, 2012 at 12:24 PM, Tom Lane wrote:
>>> So at this point I've got serious doubts as to the quality of testing of
>>> that whole patch, not just this part.
>
>> I tested the case where we skip a block during th

Robert Haas writes:
> On Fri, Jan 6, 2012 at 12:24 PM, Tom Lane wrote:
>> So at this point I've got serious doubts as to the quality of testing of
>> that whole patch, not just this part.
> I tested the case where we skip a block during the first pass, but I
> admit that I punted on testing the

On Fri, Jan 6, 2012 at 12:24 PM, Tom Lane wrote:
> I started to wonder how likely it would be that some other process would
> sit on a buffer pin for so long as to allow 134217727 iterations of
> ReadBufferExtended, even given the slowdowns associated with
> CLOBBER_CACHE_ALWAYS. This led to some

I started to wonder how likely it would be that some other process would
sit on a buffer pin for so long as to allow 134217727 iterations of
ReadBufferExtended, even given the slowdowns associated with
CLOBBER_CACHE_ALWAYS. This led to some fruitless searching for possible
deadlock conditions, but
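
As a rough cross-check of that figure (assuming, since the thread does not
say so, that each leaked pin costs one 4-byte Buffer entry in a
capacity-doubling tracking array), the arithmetic lands exactly on the
reported allocation size:

    /* Hypothetical arithmetic only, not PostgreSQL source. */
    #include <stdio.h>

    int
    main(void)
    {
        unsigned long pins = 134217727UL;       /* 2^27 - 1 leaked pins */
        unsigned long entries = (pins + 1) * 2; /* next doubling: 2^28 */
        unsigned long request = entries * 4UL;  /* 4 bytes per entry */

        /* prints 1073741824, i.e. 2^30 bytes, just past the 1 GB
         * palloc ceiling and matching the reported error size */
        printf("%lu\n", request);
        return 0;
    }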

On Fri, Jan 6, 2012 at 10:51 AM, Simon Riggs wrote:
> I *have* explained what is wrong. It "leaves unused tuples in the heap
> that would previously have been removed".
>
> More simply: lazy_vacuum_page() does some work and we can't skip that
> work just because we don't get the lock. Elsewhere in
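
For context, that pass-two work has roughly the shape sketched below. This
is a simplification with stand-in names (vacuum_page_sketch and its
arguments are illustrative, not the verbatim lazy_vacuum_page): pass one
collects the TIDs of dead tuples, and pass two frees their line pointers
and compacts the page, which is exactly the work a skipped block never
gets:

    #include "postgres.h"
    #include "storage/bufpage.h"
    #include "storage/itemptr.h"

    /* Sketch of the pass-two work on one heap page (simplified). */
    static int
    vacuum_page_sketch(Page page, BlockNumber blkno,
                       ItemPointer dead_tuples, int tupindex, int num_dead)
    {
        for (; tupindex < num_dead; tupindex++)
        {
            ItemPointer itemptr = &dead_tuples[tupindex];
            ItemId      itemid;

            if (ItemPointerGetBlockNumber(itemptr) != blkno)
                break;          /* remaining TIDs are on later blocks */

            itemid = PageGetItemId(page,
                                   ItemPointerGetOffsetNumber(itemptr));
            ItemIdSetUnused(itemid);    /* the removal being skipped */
        }
        PageRepairFragmentation(page);  /* compact the freed space */
        return tupindex;
    }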

On Fri, Jan 6, 2012 at 3:28 PM, Robert Haas wrote:
> On Fri, Jan 6, 2012 at 9:53 AM, Simon Riggs wrote:
>> On Fri, Jan 6, 2012 at 2:29 PM, Robert Haas wrote:
>>> On Thu, Jan 5, 2012 at 7:37 PM, Tom Lane wrote:
>>>> I suppose Robert had something more intelligent in mind than a tight
>>>> loop when the buffer can't be exclusively locked, so maybe there is
>>>> some other change that should be made here instead.

On Fri, Jan 6, 2012 at 9:53 AM, Simon Riggs wrote:
> On Fri, Jan 6, 2012 at 2:29 PM, Robert Haas wrote:
>> On Thu, Jan 5, 2012 at 7:37 PM, Tom Lane wrote:
>>> I suppose Robert had something more intelligent in mind than a tight
>>> loop when the buffer can't be exclusively locked, so maybe there is
>>> some other change that should be made here instead.

On Fri, Jan 6, 2012 at 2:29 PM, Robert Haas wrote:
> On Thu, Jan 5, 2012 at 7:37 PM, Tom Lane wrote:
>> I suppose Robert had something more intelligent in mind than a tight
>> loop when the buffer can't be exclusively locked, so maybe there is
>> some other change that should be made here instead.

On Thu, Jan 5, 2012 at 7:37 PM, Tom Lane wrote:
> I suppose Robert had something more intelligent in mind than a tight
> loop when the buffer can't be exclusively locked, so maybe there is
> some other change that should be made here instead.
My intention was to skip the tuple, but I failed to no

On Fri, Jan 6, 2012 at 12:37 AM, Tom Lane wrote:
> We could fix the direct symptom by inserting UnlockReleaseBuffer()
> in front of the continue, but AFAICS that just makes this into a
> busy-waiting equivalent of waiting unconditionally, so I don't really
> see why we should not revert the above
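
Putting the pieces together, the loop at issue has roughly the shape below
(a sketch with stand-in names, reusing the vacuum_page_sketch helper from
the note above; not the committed code). A bare "continue" on lock failure
neither released the pin nor advanced the index, so it leaked one pin per
iteration while spinning on the same block. Releasing the pin alone, as
proposed here, still busy-waits; actually skipping, per Robert's stated
intention, means releasing the pin and advancing past the tuple:

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "storage/itemptr.h"
    #include "utils/rel.h"

    /* Sketch of the pass-two loop (simplified, stand-in names). */
    static void
    vacuum_heap_sketch(Relation onerel, ItemPointer dead_tuples,
                       int num_dead, BufferAccessStrategy vac_strategy)
    {
        int         tupindex = 0;

        while (tupindex < num_dead)
        {
            BlockNumber tblk;
            Buffer      buf;

            tblk = ItemPointerGetBlockNumber(&dead_tuples[tupindex]);
            buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk,
                                     RBM_NORMAL, vac_strategy);

            if (!ConditionalLockBufferForCleanup(buf))
            {
                ReleaseBuffer(buf); /* no cleanup lock was taken, so a
                                     * plain release is enough */
                tupindex++;         /* advance so the loop makes progress */
                continue;
            }

            tupindex = vacuum_page_sketch(BufferGetPage(buf), tblk,
                                          dead_tuples, tupindex, num_dead);
            /* marking the buffer dirty and WAL-logging are omitted here */
            UnlockReleaseBuffer(buf);
        }
    }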

Lately I have noticed buildfarm member jaguar occasionally failing
regression tests with
ERROR: invalid memory alloc request size 1073741824
during a VACUUM, as for example at
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguar&dt=2012-01-04%2023%3A05%3A02
Naturally I supposed it to be