If, like me, you came from the Oracle world, you may be tempted to throw a
ton of RAM at this. Don't: PostgreSQL leans heavily on the OS filesystem
cache as well, so an oversized shared_buffers mostly double-caches data
rather than helping.
On Mon, May 24, 2010 at 12:25 PM, Merlin Moncure wrote:
> *) shared_buffers is one of the _least_ important performance settings
> in postgresql.conf

Yes, and no. It's usually REALLY helpful to make sure it's more than
the old 8MB or 24MB defaults. But it doesn't generally need to be huge
to make a difference.
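(An aside for readers of the archive: a minimal way to check what a server
is actually running with. The 25%-of-RAM figure is the usual rule-of-thumb
starting point from the documentation and wiki, not something asserted in
this thread.)

  -- Show the current value and where it came from (default, config file, ...)
  SELECT name, setting, unit, source
  FROM pg_settings
  WHERE name = 'shared_buffers';

  -- Changing it means editing postgresql.conf and restarting the server, e.g.:
  -- shared_buffers = 2GB    # ~25% of RAM on an 8GB box is a common start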
Merlin Moncure wrote:
> I'm of the opinion (rightly or wrongly) that the prevailing opinions
> on how to configure shared_buffers are based on special case
> benchmarking information or simply made up.

Well, you're wrong, but it's OK; we'll forgive you this time. It's true
that a lot of the earlier [...]
Anybody on the list have any experience with these drives? They get
good numbers but I can't find diddly on them on the internet for the
last year or so.
http://www.stec-inc.com/product/zeusiops.php
Merlin Moncure wrote:
> I would prefer to see the annotated performance-oriented .conf
> settings written in terms of trade-offs (too low? X. too high? Y. set
> it this way in order to get Z). For example, did you know that if you
> crank max_locks_per_transaction you also increase the duration of
> every query [...]
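(For context on that trade-off: per the PostgreSQL documentation, the shared
lock table is sized for about max_locks_per_transaction * (max_connections +
max_prepared_transactions) objects, so cranking it up grows a shared-memory
structure every backend has to deal with. A quick way to see the multipliers:)

  SELECT name, setting
  FROM pg_settings
  WHERE name IN ('max_locks_per_transaction',
                 'max_connections',
                 'max_prepared_transactions');
  -- e.g. defaults of 64 * (100 + 0) = room for ~6400 locked objects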
Thom Brown writes:
> I get this:

> Limit (cost=0.00..316895.11 rows=400 width=211) (actual
> time=3.880..1368.936 rows=400 loops=1)
>   ->  GroupAggregate (cost=0.00..41843621.95 rows=52817 width=211)
>       (actual time=3.872..1367.048 rows=400 loops=1)
>         ->  Index Scan using "binaryID_25[...]
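(What the plan is showing: the Limit node stops pulling from the
GroupAggregate after 400 groups, which is why the actual time of about
1.4 s is nowhere near the ~41-million total cost estimated for aggregating
everything. A hypothetical query shape that yields exactly this
Limit -> GroupAggregate -> Index Scan stack, reconstructed only from the
table and index names visible in these snippets:)

  -- Hypothetical reconstruction; the real query is not shown in the archive.
  SELECT "binaryID", count(*)
  FROM parts_2576
  GROUP BY "binaryID"
  ORDER BY "binaryID"
  LIMIT 400;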
I'm using PostgreSQL 9.0 beta 1. I've got the following table definition:

# \d parts_2576
     Table "public.parts_2576"
 Column | Type | Modifiers
--------+------+-----------
[...]
On Wed, May 26, 2010 at 12:41 PM, Eliot Gable wrote:
> Ah, that clears things up. Yes, the connections are more or less persistent.
> I have a connection manager which doles connections out to the worker
> threads and reclaims them when the workers are done with them. It
> dynamically adds new connections [...]
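(A side note, not from the thread: with a pool of persistent connections
like that, the server-side view pg_stat_activity shows what the manager is
holding open. The column names below are the 9.0-era ones; procpid and
current_query were later renamed pid and query.)

  -- One row per backend; '<IDLE>' in current_query marks a pooled,
  -- currently idle connection.
  SELECT procpid, usename, backend_start, current_query
  FROM pg_stat_activity
  WHERE datname = current_database();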
On 05/26/2010 06:03 PM, Joachim Worringen wrote:
> On 25.05.2010 12:41, Andres Freund wrote:
>> On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:
> Thanks. So, the Write-Ahead-Logging (being used or not) does not matter?

It does matter quite significantly in my experience. Both from an io and [...]
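(One concrete way to see the overhead Andres is describing, as a sketch
rather than anything from the truncated message; some_table is a
placeholder. Temporary tables are never WAL-logged, so the second copy
below skips that cost entirely, at the price of being session-private and
not crash-safe. For the comparison to be fair, wal_level must be above
minimal, since at minimal a CREATE TABLE AS can skip WAL too.)

  \timing
  -- WAL-logged when wal_level > minimal:
  CREATE TABLE logged_copy AS SELECT * FROM some_table;
  -- Never WAL-logged:
  CREATE TEMPORARY TABLE temp_copy AS SELECT * FROM some_table;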
2010/5/27 Cédric Villemain:
> Exactly. And the time to browse depends on the number of blocks already
> in core memory.
> I am interested in test results and benchmarks if you are going to do some :)

I am still thinking whether I want to do it on this prod machine.
Maybe on something less critical [...]
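(If the question is which blocks are already in memory, the contrib module
pg_buffercache answers it for shared_buffers specifically; the OS page
cache side, which this thread is more likely about, needs something like
Cédric's pgfincore instead. A standard pg_buffercache query, for anyone
wanting to benchmark on a less critical box first:)

  -- Top 10 relations by number of 8kB buffers cached in shared_buffers.
  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c
    ON b.relfilenode = pg_relation_filenode(c.oid)
  WHERE b.reldatabase = (SELECT oid FROM pg_database
                         WHERE datname = current_database())
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;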