Andres Freund writes:
> On 2015-01-16 12:15:35 -0500, Tom Lane wrote:
>> It strikes me that this patch leaves some lookups on the table,
>> specifically that it fails to avoid repeated hash_search lookups
>> inside tbm_page_is_lossy() in the situation where we're adding
>> new TIDs to an already-lossy page.
On 2015-01-16 12:15:35 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2014-12-25 01:26:53 +1300, David Rowley wrote:
> >> So I think v3 is the one to go with, and I can't see any problems with it,
> >> so I'm marking it as ready for committer.
>
> > And committed.
>
> It strikes me that this patch leaves some lookups on the table,
> specifically that it fails to avoid repeated hash_search lookups
> inside tbm_page_is_lossy() in the situation where we're adding
> new TIDs to an already-lossy page.
Andres Freund writes:
> On 2014-12-25 01:26:53 +1300, David Rowley wrote:
>> So I think v3 is the one to go with, and I can't see any problems with it,
>> so I'm marking it as ready for committer.
> And committed.
It strikes me that this patch leaves some lookups on the table,
specifically that it fails to avoid repeated hash_search lookups
inside tbm_page_is_lossy() in the situation where we're adding
new TIDs to an already-lossy page.
On 2014-12-25 01:26:53 +1300, David Rowley wrote:
> So I think v3 is the one to go with, and I can't see any problems with it,
> so I'm marking it as ready for committer.
And committed.
Thanks Teodor and David.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadr
On 24 December 2014 at 00:24, Teodor Sigaev wrote:
>> I've also attached some benchmark results using your original table from
>> up-thread. It seems that the caching if the page was seen as lossy is not
>> much of a help in this test case. Did you find another one where you saw some
>> bette
Oh, that makes sense. Though I wonder if you need to clear the caches at all
when calling tbm_lossify(). Surely a page never becomes un-lossified, and
besides, at least for lossy_page it would never be set to the current page
anyway; it's either going to be set to InvalidBlockNumber or some other
previous page.
On 18 December 2014 at 04:56, Teodor Sigaev wrote:
>> You could well be right, but it would be good to compare the numbers just
>> so we know this for sure.
>
> I wasn't right :(
>
> # set work_mem='64kB';
> # set enable_seqscan = off;
> Patched: 1194.094 ms
> Master: 1765.338 ms
>
>> Are
> You could well be right, but it would be good to compare the numbers just
> so we know this for sure.
I wasn't right :(
# set work_mem='64kB';
# set enable_seqscan = off;
Patched: 1194.094 ms
Master: 1765.338 ms
> Are you seeing the same?
Fixed too, the mistake was in supposition that current p
On 17 December 2014 at 05:25, Teodor Sigaev wrote:
>> I've been having a look at this and I'm wondering about a certain scenario:
>>
>> In tbm_add_tuples, if tbm_page_is_lossy() returns true for a given block,
>> and on the next iteration of the loop we have the same block again, have you
>> b
I've been having a look at this and I'm wondering about a certain scenario:
In tbm_add_tuples, if tbm_page_is_lossy() returns true for a given block, and on
the next iteration of the loop we have the same block again, have you
benchmarked any caching code to store if tbm_page_is_lossy() returned
On 23 October 2014 at 00:52, Teodor Sigaev wrote:
>
> In a specific workload postgres could spend a lot of time in
> tbm_add_tuples, up to 50% of query time. The hash_search call is expensive
> and is called twice for each ItemPointer to insert. The suggested patch
> tries to cache the last PagetableEntry pointer