On 9/26/07, James Williams <[EMAIL PROTECTED]> wrote:
> The last is based mostly on the observation that another tiddly,
> unrelated MySQL db, which normally runs fast, grinds to a halt when
> we're querying the Postgres db (while CPU and memory appear to have
> spare capacity).
Just a quick observation
On Wed, 26 Sep 2007, James Williams wrote:
> The box has 4 x Opterons, 4 GB RAM & five 15k rpm disks, RAID 5. We
> wanted fast query/lookup. We know we can get fast disk IO.
You might want to benchmark to prove that, if you haven't already. You
would not be the first person to presume you have fast disk I/O.
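A quick sanity check is possible from inside Postgres itself: time a full
scan of the big table twice and compare cold-cache vs. warm-cache speed.
A minimal sketch, assuming a psql session and using "main_table" as a
stand-in for the real table name (bonnie++ or plain dd will give you an
OS-level number):

  -- \timing makes psql print elapsed time per statement
  \timing on
  -- count(*) forces a full sequential scan on the 8.x releases, so the
  -- first run measures disk reads and the second shows caching effects
  SELECT count(*) FROM main_table;
  SELECT count(*) FROM main_table;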
Bill Moran <[EMAIL PROTECTED]> writes:
> Give it enough shared_buffers and it will do that. You're estimating
> the size of your table @ 3G (try a pg_relation_size() on it to get an
> actual size). If you really want to get _all_ of it in all the time,
> you're probably going to need to add RAM to the machine.
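A sketch of that size check, using "main_table" as a placeholder for the
real table name; pg_total_relation_size() also counts indexes and TOAST
data, which is what actually has to fit if you want everything cached:

  -- on-disk size of the table alone
  SELECT pg_size_pretty(pg_relation_size('main_table'));
  -- table plus its indexes and TOAST data
  SELECT pg_size_pretty(pg_total_relation_size('main_table'));

Note that shared_buffers is set in postgresql.conf and a change only
takes effect after a server restart.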
To: James Williams
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Help tuning a large table off disk and into RAM
In response to "James Williams" <[EMAIL PROTECTED]>:
> I'm stuck trying to tune a big-ish Postgres db and wondering if anyone
> has any pointers.
James Williams wrote:
> The box has 4 x Opterons, 4 GB RAM & five 15k rpm disks, RAID 5. We
> wanted fast query/lookup. We know we can get fast disk IO.
RAID 5 is usually advised against here. It's not particularly fast or
safe, IIRC. Try searching the ML archives for RAID 5 ;)
--
Alban Hertroy
In response to "James Williams" <[EMAIL PROTECTED]>:
> I'm stuck trying to tune a big-ish Postgres db and wondering if anyone
> has any pointers.
>
> I cannot get Postgres to make good use of plenty of available RAM and
> stop thrashing the disks.
>
> One main table. ~30 million rows, 20 columns, all integer, smallint
> or char(2).
I'm stuck trying to tune a big-ish Postgres db and wondering if anyone
has any pointers.
I cannot get Postgres to make good use of plenty of available RAM and
stop thrashing the disks.
One main table. ~30 million rows, 20 columns, all integer, smallint or
char(2). Most have an index. It's a table
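Given the goal of holding the whole table in RAM, the contrib
pg_buffercache module is one way to see how much of it is actually
resident in shared buffers at any moment. A sketch, assuming the module
is installed and again using the placeholder name "main_table":

  -- count the 8 kB shared buffers currently holding blocks of the table
  SELECT count(*) AS buffers, count(*) * 8 / 1024 AS approx_mb
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
  WHERE c.relname = 'main_table';

This only reports PostgreSQL's own shared buffers; pages held in the OS
page cache won't show up here.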