On Wed, 2009-03-18 at 13:49 +, Matthew Wakeling wrote:
> On Wed, 18 Mar 2009, Jignesh K. Shah wrote:
> > I thought about that.. Except without putting a restriction, a huge queue
> > will cause a lot of time spent in manipulating the lock
> > list every time. One more thing will be to maintain t
On Wed, 2009-03-18 at 16:26 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > On Mon, 2009-03-16 at 16:26 +, Matthew Wakeling wrote:
> >> One possibility would be for the locks to alternate between exclusive and
> >> shared - that is:
> >>
> >> 1. Take a snapshot of all shared waits, an
On 03/18/09 17:25, Robert Haas wrote:
On Wed, Mar 18, 2009 at 1:43 PM, Scott Carey wrote:
It's worth ruling out given that even if the likelihood is small, the fix is
easy. However, I don't see the throughput drop from peak as more
concurrency is added that is the hallmark of this problem
On 03/18/09 17:16, Scott Carey wrote:
On 3/18/09 4:36 AM, "Gregory Stark" wrote:
"Jignesh K. Shah" writes:
In the next couple of weeks I plan to test the patch on a different x64-based
system to do sanity testing on a lower number of cores and also try out other
workloads ...
On Wed, Mar 18, 2009 at 1:43 PM, Scott Carey wrote:
>>> It's worth ruling out given that even if the likelihood is small, the fix is
>>> easy. However, I don't see the throughput drop from peak as more
>>> concurrency is added that is the hallmark of this problem - usually with a
>>> lot of contex
On 3/18/09 4:36 AM, "Gregory Stark" wrote:
>
>
> "Jignesh K. Shah" writes:
>
>> In the next couple of weeks I plan to test the patch on a different x64-based
>> system to do sanity testing on a lower number of cores and also try out other
>> workloads ...
>
> I'm actually more interested in the
Simon Riggs writes:
> On Mon, 2009-03-16 at 16:26 +, Matthew Wakeling wrote:
>> One possibility would be for the locks to alternate between exclusive and
>> shared - that is:
>>
>> 1. Take a snapshot of all shared waits, and grant them all -
>> thundering herd style.
>> 2. Wait until A
Ron Mayer writes:
> Oleg Bartunov wrote:
> OB:> it's not about short or long arrays, it's about small or big
> OB:> cardinality of the whole set (the number of unique elements)
> I'm re-reading the docs and it still wasn't obvious to me. A
> potential docs patch is attached below.
Done, though no
On 3/12/09 6:29 PM, "Robert Haas" wrote:
>> It's worth ruling out given that even if the likelihood is small, the fix is
>> easy. However, I don't see the throughput drop from peak as more
>> concurrency is added that is the hallmark of this problem - usually with a
>> lot of context switching an
Tom Lane wrote:
> Ron Mayer writes:
>> vm=# create index "gist7" on tmp_intarray_test using GIST (my_int_array
>> gist__int_ops);
>> CREATE INDEX
>> Time: 2069836.856 ms
>
>> Is that expected, or does it sound like a bug to take over
>> half an hour to index 7 rows of mostly 5 and 6-elem
On Wed, 18 Mar 2009, Jignesh K. Shah wrote:
I thought about that.. Except without putting a restriction, a huge queue will
cause a lot of time spent in manipulating the lock
list every time. One more thing will be to maintain two lists, shared and
exclusive, and round robin through them for every tim
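A minimal sketch of the two-list idea above: keep separate shared and exclusive wait queues and round-robin between them, draining the whole shared list as one batch. This only illustrates the grant bookkeeping (no real blocking, and none of these names come from PostgreSQL's lwlock.c):

#include <stdio.h>
#include <stdlib.h>

typedef struct waiter
{
    int            pid;            /* identifies the waiting backend */
    struct waiter *next;
} waiter;

typedef struct
{
    waiter *head, *tail;           /* simple FIFO wait list */
} waitq;

static void wq_push(waitq *q, int pid)
{
    waiter *w = malloc(sizeof(waiter));
    w->pid = pid;
    w->next = NULL;
    if (q->tail)
        q->tail->next = w;
    else
        q->head = w;
    q->tail = w;
}

static int wq_pop(waitq *q, int *pid)
{
    waiter *w = q->head;
    if (w == NULL)
        return 0;
    *pid = w->pid;
    q->head = w->next;
    if (q->head == NULL)
        q->tail = NULL;
    free(w);
    return 1;
}

int main(void)
{
    waitq shared = {0}, exclusive = {0};
    int shared_turn = 1;           /* whose batch is granted next */
    int pid;

    /* queue up some requests: even pids want shared, odd want exclusive */
    for (pid = 1; pid <= 8; pid++)
        wq_push(pid % 2 == 0 ? &shared : &exclusive, pid);

    /* round robin: drain the whole shared list as one batch, then let a
     * single exclusive waiter through, then the next shared batch, ... */
    while (shared.head != NULL || exclusive.head != NULL)
    {
        if (shared_turn && shared.head != NULL)
        {
            printf("grant shared batch:");
            while (wq_pop(&shared, &pid))
                printf(" %d", pid);
            printf("\n");
        }
        else if (wq_pop(&exclusive, &pid))
            printf("grant exclusive: %d\n", pid);
        shared_turn = !shared_turn;
    }
    return 0;
}

Because each mode has its own list, granting a shared batch never has to scan past queued exclusive requests, which is the list-manipulation cost being worried about above.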
On 03/18/09 08:06, Simon Riggs wrote:
On Wed, 2009-03-18 at 11:45 +, Matthew Wakeling wrote:
On Wed, 18 Mar 2009, Simon Riggs wrote:
I agree with that, apart from the "granting no more" bit.
The most useful behaviour is just to have two modes:
* exclusive-lock held - all other x
Hi,
>Has anyone done similar work in the light of upcoming many-core CPUs/systems?
>Any better results than 2x improvement?
Yes, in fact I've done a very similar thing on a quad CPU box a while back. In
my case the table in question had about 26 million rows. I did nothing special
to the table
On Wed, 18 Mar 2009, Simon Riggs wrote:
On Wed, 2009-03-18 at 11:45 +, Matthew Wakeling wrote:
The problem with making all other locks welcome is that there is a
possibility of starvation. Imagine a case where there is a constant stream
of shared locks - the exclusive locks may never actuall
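For what it's worth, the textbook answer to that starvation is to stop admitting new shared requests as soon as an exclusive request is queued. A minimal sketch of just that admission test (the LockState structure and both functions are hypothetical, not PostgreSQL code):

#include <stdbool.h>

/* Hypothetical bookkeeping for one lock, purely for illustration. */
typedef struct LockState
{
    bool exclusive_held;          /* an exclusive holder is running */
    int  shared_holders;          /* current shared holders */
    int  exclusive_waiters;       /* exclusive requests sitting in the queue */
} LockState;

/* "All shared welcome": a constant stream of readers keeps this true,
 * so a queued exclusive request can wait forever. */
static bool shared_admissible_unfair(const LockState *ls)
{
    return !ls->exclusive_held;
}

/* Queue new shared requests behind any waiting exclusive request: the
 * readers already holding the lock drain out, the writer runs, and only
 * then is the next group of readers admitted. */
static bool shared_admissible_fair(const LockState *ls)
{
    return !ls->exclusive_held && ls->exclusive_waiters == 0;
}

The cost of the fair version is reduced concurrency: readers that could have run alongside the current holders now sit behind the queued writer.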
On Wed, 2009-03-18 at 11:45 +, Matthew Wakeling wrote:
> On Wed, 18 Mar 2009, Simon Riggs wrote:
> > I agree with that, apart from the "granting no more" bit.
> >
> > The most useful behaviour is just to have two modes:
> > * exclusive-lock held - all other x locks welcome, s locks queue
> > *
On Wed, 18 Mar 2009, Heikki Linnakangas wrote:
A linked list or an array of in-progress writes was my first thought as well.
But the real problem is: how does the reader wait until all WAL up to X has
been written? It could poll, but that's inefficient.
Good point - waiting for an exclusive l
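One non-polling shape for that wait is a shared "written up to" position guarded by a lock and a condition variable: whoever finishes a write advances the position and broadcasts, and a reader sleeps until the position covers the location it needs. A rough pthread sketch (names are made up, this is not what xlog.c does, and it glosses over the hard part, which is only advancing the position past contiguously completed writes when they finish out of order):

#include <pthread.h>
#include <stdint.h>

typedef uint64_t XLogPos;          /* stand-in for a WAL location */

/* Hypothetical shared state: how far the WAL has been written out. */
static struct
{
    pthread_mutex_t mu;
    pthread_cond_t  advanced;
    XLogPos         written_up_to;
} wal = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 };

/* Called by whoever completes a write: advance the position and wake
 * every sleeper (each one re-checks its own target below). */
void wal_report_written(XLogPos upto)
{
    pthread_mutex_lock(&wal.mu);
    if (upto > wal.written_up_to)
    {
        wal.written_up_to = upto;
        pthread_cond_broadcast(&wal.advanced);
    }
    pthread_mutex_unlock(&wal.mu);
}

/* Called by a reader that must not proceed until all WAL up to 'target'
 * has been written: it sleeps instead of polling. */
void wal_wait_for(XLogPos target)
{
    pthread_mutex_lock(&wal.mu);
    while (wal.written_up_to < target)
        pthread_cond_wait(&wal.advanced, &wal.mu);
    pthread_mutex_unlock(&wal.mu);
}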
Matthew Wakeling wrote:
On Sat, 14 Mar 2009, Heikki Linnakangas wrote:
It's going to require some hard thinking to bust that bottleneck. I've
sometimes thought about maintaining a pre-calculated array of
in-progress XIDs in shared memory. GetSnapshotData would simply
memcpy() that to private memo
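Very roughly, that idea could look like the sketch below: keep a dense array of in-progress XIDs current at transaction start and end, so building a snapshot is one short lock hold and a memcpy() instead of a walk over every backend's state. Plain C with a pthread mutex standing in for ProcArrayLock; all of the names are invented for illustration:

#include <pthread.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t TransactionId;
#define MAX_BACKENDS 128

/* Hypothetical shared area: an always-current list of in-progress XIDs,
 * maintained as transactions start and finish. */
static struct
{
    pthread_mutex_t lock;                 /* stand-in for ProcArrayLock */
    int             nxids;
    TransactionId   xids[MAX_BACKENDS];
} shared = { PTHREAD_MUTEX_INITIALIZER, 0, {0} };

void xact_started(TransactionId xid)
{
    pthread_mutex_lock(&shared.lock);
    shared.xids[shared.nxids++] = xid;    /* append to the dense array */
    pthread_mutex_unlock(&shared.lock);
}

void xact_finished(TransactionId xid)
{
    pthread_mutex_lock(&shared.lock);
    for (int i = 0; i < shared.nxids; i++)
    {
        if (shared.xids[i] == xid)
        {
            /* keep the array dense: move the last entry into the hole */
            shared.xids[i] = shared.xids[--shared.nxids];
            break;
        }
    }
    pthread_mutex_unlock(&shared.lock);
}

/* The snapshot side becomes a short critical section plus one memcpy(). */
int get_snapshot(TransactionId *out, int outsize)
{
    pthread_mutex_lock(&shared.lock);
    int n = shared.nxids < outsize ? shared.nxids : outsize;
    memcpy(out, shared.xids, n * sizeof(TransactionId));
    pthread_mutex_unlock(&shared.lock);
    return n;
}

This trades a little extra work at transaction start and end for a much shorter lock hold when a snapshot is taken.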
On Wed, 18 Mar 2009, Simon Riggs wrote:
I agree with that, apart from the "granting no more" bit.
The most useful behaviour is just to have two modes:
* exclusive-lock held - all other x locks welcome, s locks queue
* shared-lock held - all other s locks welcome, x locks queue
The problem with
"Jignesh K. Shah" writes:
> In the next couple of weeks I plan to test the patch on a different x64-based
> system to do sanity testing on a lower number of cores and also try out other
> workloads ...
I'm actually more interested in the large number of cores but fewer processes
and lower max_conne
Joshua D. Drake wrote:
On Mon, 2009-03-16 at 12:11 -0400, Mark Steben wrote:
The issue is that during a restore on a remote site (Postgres 8.2.5)
8.2.5 is quite old. You should upgrade to the latest 8.2.X release.
archived logs are taking an average of 35 – 40 seconds apiece to
restore.
On Mon, 2009-03-16 at 16:26 +, Matthew Wakeling wrote:
> One possibility would be for the locks to alternate between exclusive and
> shared - that is:
>
> 1. Take a snapshot of all shared waits, and grant them all -
> thundering herd style.
> 2. Wait until ALL of them have finished,
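A sketch of how that alternation could be wired up with ordinary condition variables: releasing the exclusive lock grants the whole batch of queued shared waiters at once, the last of those to finish hands the lock to the next exclusive waiter, and shared requests that arrive while anything exclusive is running or queued simply wait for the next batch. All of this is hypothetical pthread code, not lwlock.c:

#include <pthread.h>

typedef struct
{
    pthread_mutex_t mu;
    pthread_cond_t  shared_granted;   /* broadcast: a new batch may run */
    pthread_cond_t  writer_granted;   /* signal: one exclusive may run  */
    int  active_shared;     /* granted shared holders still running       */
    int  waiting_shared;    /* shared requests waiting for the next batch */
    int  waiting_writers;   /* queued exclusive requests                  */
    int  writer_active;
    long batch;             /* bumped each time a shared batch is granted */
} altlock;                  /* zero-init plus PTHREAD_*_INITIALIZER works */

/* Caller holds mu: snapshot every queued shared waiter and grant them all. */
static void grant_shared_batch(altlock *l)
{
    l->active_shared += l->waiting_shared;
    l->waiting_shared = 0;
    l->batch++;
    pthread_cond_broadcast(&l->shared_granted);   /* thundering herd, on purpose */
}

void acquire_shared(altlock *l)
{
    pthread_mutex_lock(&l->mu);
    if (l->writer_active || l->waiting_writers > 0)
    {
        long my = l->batch;               /* wait for the next batch grant */
        l->waiting_shared++;
        while (l->batch == my)
            pthread_cond_wait(&l->shared_granted, &l->mu);
        /* grant_shared_batch() already counted us into active_shared */
    }
    else
        l->active_shared++;               /* nothing exclusive around: just go */
    pthread_mutex_unlock(&l->mu);
}

void release_shared(altlock *l)
{
    pthread_mutex_lock(&l->mu);
    if (--l->active_shared == 0)
    {
        if (l->waiting_writers > 0)
            pthread_cond_signal(&l->writer_granted);  /* batch done: writer's turn */
        else if (l->waiting_shared > 0)
            grant_shared_batch(l);
    }
    pthread_mutex_unlock(&l->mu);
}

void acquire_exclusive(altlock *l)
{
    pthread_mutex_lock(&l->mu);
    l->waiting_writers++;
    while (l->writer_active || l->active_shared > 0)
        pthread_cond_wait(&l->writer_granted, &l->mu);
    l->waiting_writers--;
    l->writer_active = 1;
    pthread_mutex_unlock(&l->mu);
}

void release_exclusive(altlock *l)
{
    pthread_mutex_lock(&l->mu);
    l->writer_active = 0;
    if (l->waiting_shared > 0)
        grant_shared_batch(l);            /* alternate back to the shared side */
    else if (l->waiting_writers > 0)
        pthread_cond_signal(&l->writer_granted);
    pthread_mutex_unlock(&l->mu);
}

Counting the batch into active_shared at grant time, rather than when each woken waiter actually runs, is what lets the exclusive side reliably see that "ALL of them have finished".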
On Sat, 2009-03-14 at 12:09 -0400, Tom Lane wrote:
> Heikki Linnakangas writes:
> > WALInsertLock is also quite high on Jignesh's list. That I've seen
> > become the bottleneck on other tests too.
>
> Yeah, that's been seen to be an issue before. I had the germ of an idea
> about how to fix th