On Sat, Feb 8, 2014 at 7:16 AM, Jason Petersen wrote:
> Bump.
>
> I'm interested in many of the issues that were discussed in this thread. Was
> this patch ever wrapped up (I can't find it in any CF), or did this thread
> die off?
This and variants of this patch have been discussed multiple times
Bump.
I’m interested in many of the issues that were discussed in this thread. Was
this patch ever wrapped up (I can’t find it in any CF), or did this thread die
off?
—Jason
On Aug 6, 2013, at 12:18 AM, Amit Kapila wrote:
> On Friday, June 28, 2013 6:20 PM Robert Haas wrote:
>> On Fri, Jun 2
On Friday, June 28, 2013 6:20 PM Robert Haas wrote:
> On Fri, Jun 28, 2013 at 12:52 AM, Amit Kapila
> wrote:
> > Currently it wakes up based on the bgwriter_delay config parameter which is
> > by default 200ms, so you mean we should
> > think of waking up bgwriter based on allocations and number of
On Wednesday, July 03, 2013 6:10 PM Simon Riggs wrote:
On 3 July 2013 12:56, Amit Kapila wrote:
>>> My perspectives here would be
>>> * BufFreelistLock is a huge issue. Finding a next victim block needs to be
>>> an O(1) operation, yet it is currently much worse than that. Measuring
>>> contention o
On 3 July 2013 12:56, Amit Kapila wrote:
> > My perspectives here would be
>
> > * BufFreelistLock is a huge issue. Finding a next victim block needs to be
> > an O(1) operation, yet it is currently much worse than that. Measuring
> > contention on that lock hides that problem, since having share
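To make the O(1) point concrete, here is a minimal self-contained sketch (my illustration, not code from the patch; the structs and the names free_head, free_next and freelist_lock are invented): taking a victim from the head of a linked freelist is a constant-time pop under a short critical section, whereas the clock-sweep fallback may have to visit an arbitrary number of buffer headers before it finds one with a zero usage count.

    #include <stddef.h>
    #include <pthread.h>

    /* Toy buffer pool, not PostgreSQL's BufferDesc/StrategyControl. */
    typedef struct ToyBuf {
        int usage_count;
        int refcount;
        int free_next;     /* next free buffer, -1 = end of list, -2 = not on list */
    } ToyBuf;

    typedef struct ToyPool {
        ToyBuf         *bufs;
        int             free_head;       /* -1 when the freelist is empty */
        pthread_mutex_t freelist_lock;   /* stands in for BufFreelistLock */
    } ToyPool;

    /* O(1) victim selection: unlink the freelist head and return it.
     * A NULL result means the caller must fall back to the clock sweep. */
    static ToyBuf *
    get_victim_from_freelist(ToyPool *p)
    {
        ToyBuf *victim = NULL;

        pthread_mutex_lock(&p->freelist_lock);
        if (p->free_head >= 0)
        {
            victim = &p->bufs[p->free_head];
            p->free_head = victim->free_next;
            victim->free_next = -2;       /* mark as no longer on the list */
        }
        pthread_mutex_unlock(&p->freelist_lock);

        return victim;
    }

The useful property is that the time spent holding freelist_lock does not grow with shared_buffers, which is exactly what the clock sweep cannot guarantee.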
On Wednesday, July 03, 2013 12:27 PM Simon Riggs wrote:
On 28 June 2013 05:52, Amit Kapila wrote:
>> As per my understanding, a summary of the points raised by you and Andres
>> which this patch should address to have a bigger win:
>> 1. Bgwriter needs to be improved so that it can help in reduc
On 28 June 2013 05:52, Amit Kapila wrote:
> As per my understanding, a summary of the points raised by you and Andres
> which this patch should address to have a bigger win:
>
> 1. Bgwriter needs to be improved so that it can help in reducing usage
>    count and finding the next victim buffer
> (r
On Tuesday, July 02, 2013 12:00 AM Robert Haas wrote:
> On Sun, Jun 30, 2013 at 3:24 AM, Amit kapila
> wrote:
> > Do you think it will be sufficient to just wake bgwriter when the
> > buffers in freelist drop
> > below low watermark, how about its current job of flushing dirty
> > buffers?
>
> Well
On Sun, Jun 30, 2013 at 3:24 AM, Amit kapila wrote:
> Do you think it will be sufficient to just wake bgwriter when the buffers in
> freelist drop
> below low watermark, how about its current job of flushing dirty buffers?
Well, the only point of flushing dirty buffers in the background
writer
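A minimal sketch of the wakeup rule being discussed (my illustration; ToyLatch here is a toy built from a mutex and condition variable, and low_watermark, freelist_len and the helper names are placeholders, not the patch's identifiers): the allocating backend sets the bgwriter's latch only when the freelist has dropped below the low watermark, so the bgwriter can sleep on its latch instead of polling every bgwriter_delay milliseconds, and it still gets a chance to do its dirty-buffer writing each time it wakes.

    #include <pthread.h>
    #include <stdbool.h>

    /* Toy stand-in for a process latch: a flag guarded by a mutex + condvar. */
    typedef struct ToyLatch {
        pthread_mutex_t mtx;
        pthread_cond_t  cv;
        bool            is_set;
    } ToyLatch;

    static void
    latch_set(ToyLatch *l)
    {
        pthread_mutex_lock(&l->mtx);
        l->is_set = true;
        pthread_cond_signal(&l->cv);
        pthread_mutex_unlock(&l->mtx);
    }

    static void
    latch_wait_and_reset(ToyLatch *l)
    {
        pthread_mutex_lock(&l->mtx);
        while (!l->is_set)
            pthread_cond_wait(&l->cv, &l->mtx);
        l->is_set = false;
        pthread_mutex_unlock(&l->mtx);
    }

    /* Backend side: called right after a buffer is taken off the freelist. */
    static void
    maybe_wake_bgwriter(int freelist_len, int low_watermark, ToyLatch *bgwriter_latch)
    {
        if (freelist_len < low_watermark)
            latch_set(bgwriter_latch);     /* wake only when a refill is needed */
    }

    /* Bgwriter side: sleep until woken, then refill the freelist and keep
     * writing dirty buffers as it does today. */
    static void
    bgwriter_loop(ToyLatch *my_latch)
    {
        for (;;)
        {
            latch_wait_and_reset(my_latch);
            /* refill_freelist_to_high_watermark();   -- hypothetical helper */
            /* write_some_dirty_buffers();             -- existing duty       */
        }
    }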
On Friday, June 28, 2013 6:38 PM Robert Haas wrote:
On Fri, Jun 28, 2013 at 8:50 AM, Robert Haas wrote:
> On Fri, Jun 28, 2013 at 12:52 AM, Amit Kapila wrote:
>>> Currently it wakes up based on the bgwriter_delay config parameter which is by
>>> default 200ms, so you mean we should
>>> think of waki
On Friday, June 28, 2013 6:20 PM Robert Haas wrote:
On Fri, Jun 28, 2013 at 12:52 AM, Amit Kapila wrote:
>> Currently it wakes up based on the bgwriter_delay config parameter which is by
>> default 200ms, so you mean we should
>> think of waking up bgwriter based on allocations and number of elements
On Fri, Jun 28, 2013 at 12:10 PM, Greg Smith wrote:
> This refactoring idea will make that hard to keep around. I think this is
> OK though. Switching to a latch based design should eliminate the
> bgwriter_delay, which means you won't have this worst case of a 200ms stall
> while heavy activity
On 6/28/13 8:50 AM, Robert Haas wrote:
> On Fri, Jun 28, 2013 at 12:52 AM, Amit Kapila wrote:
>> 4. Separate processes for writing dirty buffers and moving buffers to
>> freelist
> I think this part might be best pushed to a separate patch, although I
> agree we probably need it.
This might be necessar
On Fri, Jun 28, 2013 at 8:50 AM, Robert Haas wrote:
> On Fri, Jun 28, 2013 at 12:52 AM, Amit Kapila wrote:
>> Currently it wakes up based on the bgwriter_delay config parameter which is by
>> default 200ms, so you mean we should
>> think of waking up bgwriter based on allocations and number of elemen
On Fri, Jun 28, 2013 at 12:52 AM, Amit Kapila wrote:
> Currently it wakes up based on the bgwriter_delay config parameter which is by
> default 200ms, so you mean we should
> think of waking up bgwriter based on allocations and number of elements left
> in freelist?
I think that's what Andres and I a
On Thursday, June 27, 2013 5:54 PM Robert Haas wrote:
> On Wed, Jun 26, 2013 at 8:09 AM, Amit Kapila
> wrote:
> > Configuration Details
> > O/S - Suse-11
> > RAM - 128GB
> > Number of Cores - 16
> > Server Conf - checkpoint_segments = 300; checkpoint_timeout = 15 min,
> > synchronous_commit = OFF,
Andres Freund wrote:
> I don't think I actually found any workload where the bgwriter
> actually wrote out a relevant percentage of the necessary pages.
I had one at Wisconsin Courts. The database which we targeted with
logical replication from the 72 circuit court databases (plus a few
others
On 2013-06-27 09:50:32 -0400, Robert Haas wrote:
> On Thu, Jun 27, 2013 at 9:01 AM, Andres Freund wrote:
> > Contention-wise I agree. What I have seen is that we have a huge
> > amount of cacheline bouncing around the buffer header spinlocks.
>
> How did you measure that?
perf record -e cache-m
On Thu, Jun 27, 2013 at 9:01 AM, Andres Freund wrote:
> Contention-wise I agree. What I have seen is that we have a huge
> amount of cacheline bouncing around the buffer header spinlocks.
How did you measure that?
> I have previously added some adhoc instrumentation that printed the
> amount of
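For what it's worth, a simplified C11 illustration of why those headers bounce (my toy model, not PostgreSQL's BufferDesc): the spinlock word and the counters it protects sit in the same cache line, so every pin, unpin or clock-sweep visit writes that line and forces it to migrate between CPU caches.

    #include <stdatomic.h>
    #include <stdint.h>

    /* Toy buffer header: the lock word and counters share one cache line. */
    typedef struct ToyBufHdr {
        atomic_flag lock;          /* stand-in for the buffer header spinlock */
        uint16_t    usage_count;
        uint32_t    refcount;
    } ToyBufHdr;

    static void
    hdr_lock(ToyBufHdr *h)
    {
        /* Each failed attempt still pulls the cache line in exclusive mode. */
        while (atomic_flag_test_and_set_explicit(&h->lock, memory_order_acquire))
            ;
    }

    static void
    hdr_unlock(ToyBufHdr *h)
    {
        atomic_flag_clear_explicit(&h->lock, memory_order_release);
    }

    /* Every backend that pins the buffer, and the clock sweep that decrements
     * usage_count, all write the same line under the same spinlock. */
    static void
    pin_buffer(ToyBufHdr *h)
    {
        hdr_lock(h);
        h->refcount++;
        if (h->usage_count < 5)    /* cap similar in spirit to BM_MAX_USAGE_COUNT */
            h->usage_count++;
        hdr_unlock(h);
    }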
On 2013-06-27 08:23:31 -0400, Robert Haas wrote:
> I'd like to just back up a minute here and talk about the broader
> picture here.
Sounds like a very good plan.
> So in other words,
> there's no huge *performance* problem for a working set larger than
> shared_buffers, but there is a huge *scal
On Wed, Jun 26, 2013 at 8:09 AM, Amit Kapila wrote:
> Configuration Details
> O/S - Suse-11
> RAM - 128GB
> Number of Cores - 16
> Server Conf - checkpoint_segments = 300; checkpoint_timeout = 15 min,
> synchronous_commit = OFF, shared_buffers = 14GB, AutoVacuum=off Pgbench -
> Select-only Scalefa
On Tuesday, June 25, 2013 10:25 AM Amit Kapila wrote:
> On Monday, June 24, 2013 11:00 PM Robert Haas wrote:
> > On Thu, Jun 6, 2013 at 3:01 AM, Amit Kapila
> > wrote:
> > > To avoid the above 3 factors in test readings, I used the below steps:
> > > 1. Initialize the database with scale factor such that
On Monday, June 24, 2013 11:00 PM Robert Haas wrote:
> On Thu, Jun 6, 2013 at 3:01 AM, Amit Kapila
> wrote:
> > To avoid the above 3 factors in test readings, I used the below steps:
> > 1. Initialize the database with scale factor such that database size +
> > shared_buffers = RAM (shared_buffers = 1/4
On Thu, Jun 6, 2013 at 3:01 AM, Amit Kapila wrote:
> To avoid the above 3 factors in test readings, I used the below steps:
> 1. Initialize the database with scale factor such that database size +
> shared_buffers = RAM (shared_buffers = 1/4 of RAM).
> For example:
> Example -1
> if
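To make that sizing rule concrete with illustrative numbers of my own (not the numbers from the elided example): on a 64GB machine the rule gives shared_buffers = 16GB and a target database size of about 64GB - 16GB = 48GB; at roughly 15MB per pgbench scale-factor unit, that works out to a scale factor somewhere around 3200.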
On Tuesday, May 28, 2013 6:54 PM Robert Haas wrote:
> >> Instead, I suggest modifying BgBufferSync, specifically this part right
> >> here:
> >>
> >>     else if (buffer_state & BUF_REUSABLE)
> >>         reusable_buffers++;
> >>
> >> What I would suggest is that if the BUF_REUSABLE flag
>> Instead, I suggest modifying BgBufferSync, specifically this part right
>> here:
>>
>>     else if (buffer_state & BUF_REUSABLE)
>>         reusable_buffers++;
>>
>> What I would suggest is that if the BUF_REUSABLE flag is set here, use
>> that as the trigger to do StrategyMoveBufferToFr
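My attempt to restate that suggestion as a stand-alone sketch (the flag values, the callback type and the function are invented for illustration; in bufmgr.c the real trigger would presumably be the quoted BUF_REUSABLE branch calling whatever StrategyMoveBufferToFreeList-style routine the patch provides):

    /* Sketch: when the bgwriter's scan classifies a buffer as reusable,
     * also hand it to the freelist instead of only counting it. */
    enum { SK_BUF_WRITTEN = 1, SK_BUF_REUSABLE = 2 };

    typedef void (*move_to_freelist_fn)(int buf_id);

    static void
    account_scanned_buffer(int buffer_state, int buf_id,
                           int *num_written, int *reusable_buffers,
                           move_to_freelist_fn move_to_freelist)
    {
        if (buffer_state & SK_BUF_WRITTEN)
            (*num_written)++;
        else if (buffer_state & SK_BUF_REUSABLE)
        {
            (*reusable_buffers)++;
            /* Proposed extra step: the buffer is clean, unpinned and has
             * usage_count 0, so make it findable via the freelist right away. */
            move_to_freelist(buf_id);
        }
    }

That way a backend needing a buffer can usually pop one from the freelist instead of re-running the clock sweep itself.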
On Friday, May 24, 2013 8:22 PM Jim Nasby wrote:
On 5/14/13 8:42 AM, Amit Kapila wrote:
>> In the attached patch, bgwriter/checkpointer moves unused (usage_count = 0 &&
>> refcount = 0) buffers to the end of the freelist. I have implemented a new API
>> StrategyMoveBufferToFreeListEnd() to
>>
>> move buf
On 5/14/13 8:42 AM, Amit Kapila wrote:
> In the attached patch, bgwriter/checkpointer moves unused (usage_count = 0 &&
> refcount = 0) buffers to the end of the freelist. I have implemented a new API
> StrategyMoveBufferToFreeListEnd() to
> move buffers to the end of the freelist.
Instead of a separate function,
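Since the thread keeps coming back to what that function should do, here is a toy version (my guess at the mechanics, not the patch's freelist.c code; names and sentinel values are invented): the buffer is appended at the tail, so whatever is already on the freelist gets consumed first, and a buffer that is already on the list is left untouched, which is the "don't screw up the list if so" case discussed just below.

    #include <pthread.h>

    #define NOT_IN_LIST  (-2)
    #define END_OF_LIST  (-1)

    typedef struct ToyBuf {
        int free_next;           /* NOT_IN_LIST while the buffer is off the freelist */
    } ToyBuf;

    typedef struct ToyFreelist {
        ToyBuf         *bufs;
        int             first;   /* END_OF_LIST when the list is empty */
        int             last;
        pthread_mutex_t lock;    /* stands in for BufFreelistLock */
    } ToyFreelist;

    /* Append buf_id at the end of the freelist; do nothing if it is already listed. */
    static void
    move_buffer_to_freelist_end(ToyFreelist *fl, int buf_id)
    {
        ToyBuf *b = &fl->bufs[buf_id];

        pthread_mutex_lock(&fl->lock);
        if (b->free_next == NOT_IN_LIST)
        {
            b->free_next = END_OF_LIST;
            if (fl->first == END_OF_LIST)
                fl->first = buf_id;                    /* list was empty */
            else
                fl->bufs[fl->last].free_next = buf_id; /* link after current tail */
            fl->last = buf_id;
        }
        pthread_mutex_unlock(&fl->lock);
    }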
On Friday, May 24, 2013 2:47 AM Jim Nasby wrote:
> On 5/14/13 2:13 PM, Greg Smith wrote:
> > It is possible that we are told to put something in the freelist that
> > is already in it; don't screw up the list if so.
> >
> > I don't see where the code does anything to handle that though. What
> > was
On Thursday, May 23, 2013 8:45 PM Robert Haas wrote:
> On Tue, May 21, 2013 at 3:06 AM, Amit Kapila
> wrote:
> >> Here are the results. The first field in each line is the number of
> >> clients. The second number is the scale factor. The numbers after
> >> "master" and "patched" are the median
On 5/14/13 2:13 PM, Greg Smith wrote:
> It is possible that we are told to put something in the freelist that
> is already in it; don't screw up the list if so.
>
> I don't see where the code does anything to handle that though. What was your
> intention here?
IIRC, the code that pulls from the freeli
On Tue, May 21, 2013 at 3:06 AM, Amit Kapila wrote:
>> Here are the results. The first field in each line is the number of
>> clients. The second number is the scale factor. The numbers after
>> "master" and "patched" are the median of three runs.
>>
>> 01 100 master 1433.297699 patched 1420.306
On Tuesday, May 21, 2013 12:36 PM Amit Kapila wrote:
> On Monday, May 20, 2013 6:54 PM Robert Haas wrote:
> > On Thu, May 16, 2013 at 10:18 AM, Amit Kapila
>
> > wrote:
> > > Further Performance Data:
> > >
> > > Below data is for average 3 runs of 20 minutes
> > >
> > > Scale Factor - 1200
> >
On Monday, May 20, 2013 6:54 PM Robert Haas wrote:
> On Thu, May 16, 2013 at 10:18 AM, Amit Kapila
> wrote:
> > Further Performance Data:
> >
> > Below data is for average 3 runs of 20 minutes
> >
> > Scale Factor - 1200
> > Shared Buffers - 7G
>
> These results are good but I don't get similar
On Thu, May 16, 2013 at 10:18 AM, Amit Kapila wrote:
> Further Performance Data:
>
> Below data is for average 3 runs of 20 minutes
>
> Scale Factor - 1200
> Shared Buffers - 7G
These results are good but I don't get similar results in my own
testing. I ran pgbench tests at a variety of client
On Wednesday, May 15, 2013 12:44 AM Greg Smith wrote:
> On 5/14/13 9:42 AM, Amit Kapila wrote:
> > In the attached patch, bgwriter/checkpointer moves unused
> > (usage_count = 0 && refcount = 0) buffers to the end of the freelist. I
> > have implemented a new API StrategyMoveBufferToFreeListEnd() to
>
>
On 5/14/13 9:42 AM, Amit Kapila wrote:
> In the attached patch, bgwriter/checkpointer moves unused (usage_count
> = 0 && refcount = 0) buffers to the end of the freelist. I have implemented a
> new API StrategyMoveBufferToFreeListEnd() to
There's a comment in the new function:
It is possible that we are tol
As discussed and concluded in the mail thread
(http://www.postgresql.org/message-id/006f01ce34f0$d6fa8220$84ef8660$@kapila
@huawei.com), for moving unused buffers to the freelist end,
I have implemented the idea and taken some performance data.
In the attached patch, bgwriter/checkpointer moves