From: pgsql-performance-ow...@postgresql.org
[pgsql-performance-ow...@postgresql.org] On Behalf Of Simon Riggs
[si...@2ndquadrant.com]
Sent: Wednesday, March 18, 2009 12:53 AM
To: Matthew Wakeling
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM]

From: Robert Haas [robertmh...@gmail.com]
Sent: Thursday, March 19, 2009 8:45 PM
To: Scott Carey
Cc: Jignesh K. Shah; Greg Smith; Kevin Grittner;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Proposal of tunable fix for scalability of 8.4
On Thu, Mar 19, 2009 at 5:43 PM, Scott Carey wrote:
>> Well, unless I'm misunderstanding something, waking all waiters every
>> time could lead to arbitrarily long delays for writers on mostly
>> read-only workloads... and by arbitrarily long, we mean to say
>> "potentially just about forever".

m...@bortal.de wrote:
Hi Greg,
thanks a lot for your hints. I changed my config and switched from raid6 to
raid10, but whatever I do, the benchmark breaks down at a scaling
factor of 75, where the database is "only" 1126MB big.
Here are my benchmark Results (scaling factor, DB size in MB, TPS) using

Scott Carey wrote:
> On 3/19/09 10:37 AM, "Bruce Momjian" wrote:
>
> > Robert Haas wrote:
> >>> The original poster's request is for a config parameter, for experimentation
> >>> and testing by the brave. My own request was for that version of the lock to
> >>> prevent possible star

Robert Haas wrote:
>> Actually the patch I submitted shows no overhead from what I have seen and I
>> think it is useful depending on workloads where it can be turned on even in
>> production.
> Well, unless I'm misunderstanding something, waking all waiters every
> time could lead to arbitrarily lo

On 3/19/09 2:25 PM, "m...@bortal.de" wrote:
>
> Here is my config (maybe with some odd setting):
> http://pastebin.com/m5d7f5717
>
> I played around with:
> - max_connections
> - shared_buffers
> - work_mem
> - maintenance_work_mem
> - checkpoint_segments
> - effective_cache_size
>
> ..but wh

On Thu, Mar 19, 2009 at 3:25 PM, m...@bortal.de wrote:
> Hi Greg,
>
> thanks a lot for your hints. I changed my config and switched from raid6 to
> raid10, but whatever I do, the benchmark breaks down at a scaling factor of 75,
> where the database is "only" 1126MB big.
>
> Here are my benchmark Results (sc

On 3/19/09 1:49 PM, "Robert Haas" wrote:
>> Actually the patch I submitted shows no overhead from what I have seen and I
>> think it is useful depending on workloads where it can be turned on even in
>> production.
>
> Well, unless I'm misunderstanding something, waking all waiters every
> tim

Hi Greg,
thanks a lot for your hints. I changed my config and switched from raid6 to
raid10, but whatever I do, the benchmark breaks down at a scaling factor
of 75, where the database is "only" 1126MB big.
Here are my benchmark Results (scaling factor, DB size in MB, TPS) using:
pgbench -S -c X -t

On 3/19/09 10:37 AM, "Bruce Momjian" wrote:
> Robert Haas wrote:
>>> The original poster's request is for a config parameter, for experimentation
>>> and testing by the brave. My own request was for that version of the lock to
>>> prevent possible starvation but improve performance by unlocking a

On 3/18/09 2:25 PM, "Robert Haas" wrote:
> On Wed, Mar 18, 2009 at 1:43 PM, Scott Carey wrote:
It's worth ruling out given that even if the likelihood is small, the fix is
easy. However, I don't see the throughput drop from peak as more
concurrency is added that is the hallmark of

> Actually the patch I submitted shows no overhead from what I have seen and I
> think it is useful depending on workloads where it can be turned on even in
> production.
Well, unless I'm misunderstanding something, waking all waiters every
time could lead to arbitrarily long delays for writers o

Hi,
We have the following 2 tables:
\d audit_change
          Table "public.audit_change"
     Column     |         Type          | Modifiers
----------------+-----------------------+-----------
 id             | character varying(32) | not null
 audit_entry_id | character varying(32) |
...
In

Robert Haas wrote:
> > The original poster's request is for a config parameter, for experimentation
> > and testing by the brave. My own request was for that version of the lock to
> > prevent possible starvation but improve performance by unlocking all shared
> > at once, then doing all exclusives
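
For readers trying to follow the locking argument above: the question being debated is whether, on release, the lock manager should wake every waiter (simple, but a steady stream of new shared lockers can then starve exclusive waiters) or release the queued shared waiters as one batch and then service exclusive waiters. What follows is only a rough pthreads sketch of that batching idea -- it is not the PostgreSQL lwlock code and not the patch under discussion, and every name in it (batch_rwlock and friends) is invented for illustration.

/*
 * Illustrative sketch only -- NOT the PostgreSQL lwlock code and NOT the
 * patch under discussion.  It shows the general "release all shared
 * waiters as one batch, then service exclusive waiters" idea using plain
 * pthreads.  All names here are invented for the example.
 */
#include <pthread.h>
#include <stdbool.h>

typedef struct
{
    pthread_mutex_t mutex;
    pthread_cond_t  readers;        /* broadcast: wake every shared waiter */
    pthread_cond_t  writers;        /* signal: wake one exclusive waiter   */
    int             active_readers;
    int             waiting_writers;
    bool            writer_active;
} batch_rwlock;

void
batch_rwlock_init(batch_rwlock *lock)
{
    pthread_mutex_init(&lock->mutex, NULL);
    pthread_cond_init(&lock->readers, NULL);
    pthread_cond_init(&lock->writers, NULL);
    lock->active_readers = 0;
    lock->waiting_writers = 0;
    lock->writer_active = false;
}

void
batch_rwlock_acquire_shared(batch_rwlock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    /* New readers queue up as soon as any writer is waiting, so a steady
     * stream of readers cannot hold the writer off forever. */
    while (lock->writer_active || lock->waiting_writers > 0)
        pthread_cond_wait(&lock->readers, &lock->mutex);
    lock->active_readers++;
    pthread_mutex_unlock(&lock->mutex);
}

void
batch_rwlock_release_shared(batch_rwlock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    /* The last reader out hands the lock to one waiting writer, if any. */
    if (--lock->active_readers == 0 && lock->waiting_writers > 0)
        pthread_cond_signal(&lock->writers);
    pthread_mutex_unlock(&lock->mutex);
}

void
batch_rwlock_acquire_exclusive(batch_rwlock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    lock->waiting_writers++;
    while (lock->writer_active || lock->active_readers > 0)
        pthread_cond_wait(&lock->writers, &lock->mutex);
    lock->waiting_writers--;
    lock->writer_active = true;
    pthread_mutex_unlock(&lock->mutex);
}

void
batch_rwlock_release_exclusive(batch_rwlock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    lock->writer_active = false;
    if (lock->waiting_writers > 0)
        pthread_cond_signal(&lock->writers);    /* next exclusive waiter */
    else
        pthread_cond_broadcast(&lock->readers); /* whole shared batch at once */
    pthread_mutex_unlock(&lock->mutex);
}

Note that this sketch deliberately errs in the other direction (it favours queued exclusive waiters), trading the writer-starvation risk described above for longer reader waits under heavy write traffic; the real lwlock code keeps its own wait queue and wakes backends through semaphores, so the mechanics differ, but the batching trade-off is the same.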

On Thu, 19 Mar 2009, Oleg Bartunov wrote:
On Thu, 19 Mar 2009, Tom Lane wrote:
Oleg Bartunov writes:
We usually say about 200 unique values as a limit for
gist_int_ops.
That seems awfully small ... should we make gist_intbig_ops the default,
or more likely, raise the signature size of both

On Thu, 19 Mar 2009, Tom Lane wrote:
Oleg Bartunov writes:
We usually say about 200 unique values as a limit for
gist_int_ops.
That seems awfully small ... should we make gist_intbig_ops the default,
or more likely, raise the signature size of both opclasses? Even at a
crossover point of 10

Oleg Bartunov writes:
> We usually say about 200 unique values as a limit for
> gist_int_ops.
That seems awfully small ... should we make gist_intbig_ops the default,
or more likely, raise the signature size of both opclasses? Even at a
crossover point of 1 I'm not sure that many real-world
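
Some context on the "signature size" being discussed: the intbig opclass represents each array not as the values themselves but as a fixed-width, lossy signature bitmap, so once the number of distinct values grows relative to the number of bits, different values start sharing bits and index probes return false matches that have to be rechecked. The toy sketch below shows only that idea -- it is not the contrib/intarray code, and all of the names in it (SIG_BITS, sig_add, sig_maybe_overlap) are invented for illustration.

/*
 * Rough sketch of a lossy "signature" for a set of integers, in the spirit
 * of the intbig opclass.  This is NOT the contrib/intarray implementation;
 * it only illustrates why a larger signature (more bits) keeps more unique
 * values distinguishable before false matches force rechecks.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define SIG_BITS   256                  /* signature width: the tunable in question */
#define SIG_WORDS  (SIG_BITS / 64)

typedef struct
{
    uint64_t bits[SIG_WORDS];
} signature;

static void
sig_add(signature *sig, int32_t value)
{
    /* Any reasonable hash works for the sketch; this is a toy mix step. */
    uint32_t h = (uint32_t) value * 2654435761u;
    uint32_t bit = h % SIG_BITS;

    sig->bits[bit / 64] |= UINT64_C(1) << (bit % 64);
}

static signature
sig_from_array(const int32_t *values, size_t n)
{
    signature sig = {{0}};

    for (size_t i = 0; i < n; i++)
        sig_add(&sig, values[i]);
    return sig;
}

/*
 * "Might these two sets overlap?"  A false positive happens whenever two
 * different values map onto the same bit, which becomes steadily more
 * likely as the number of distinct values approaches the signature width --
 * the practical limit of a few hundred unique values mentioned above.
 */
static bool
sig_maybe_overlap(const signature *a, const signature *b)
{
    for (int i = 0; i < SIG_WORDS; i++)
        if (a->bits[i] & b->bits[i])
            return true;
    return false;
}

Raising the signature width, as suggested above, pushes out the point where those collisions start to dominate, at the cost of larger index entries.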

--- On Thu, 19/3/09, Nimesh Satam wrote:
>
> We are receiving the following error in the postgres
> database logs:
>
> 2009-03-19 02:14:20 PDT [2547]: [79-1] LOG: duration:
> 0.039 ms statement:
> RESET ALL
> 2009-03-19 02:14:20 PDT [2547]: [80-1] LOG: duration:
> 0.027 ms statement:
> SE

Hi,
I am not sure if I am sending this to the right place. I did try to get the
answer from the pgpool mailing list, but no luck. I would appreciate it if someone
can help here.
We are receiving the following error in the postgres database logs:
2009-03-19 02:14:20 PDT [2547]: [79-1] LOG: duration: 0.039 ms

We usually say about 200 unique values as a limit for
gist_int_ops.

On Wed, 18 Mar 2009, Tom Lane wrote:
Ron Mayer writes:
Oleg Bartunov wrote:
OB:> it's not about short or long arrays, it's about small or big
OB:> cardinality of the whole set (the number of unique elements)
I'm re-reading