Re: [PERFORM] Large tables (was: RAID 0 not as fast as

2006-09-20 Thread Luke Lonergan
Markus, On 9/20/06 11:02 AM, "Markus Schaber" <[EMAIL PROTECTED]> wrote: > I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly > meant for this purpose? This is a good idea - I wasn't aware that this was possible. We'll do some testing and see if it works as advertised on Linux
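For reference, a minimal sketch of the call being discussed, assuming a Linux/glibc process that already has a heap file open (the path and the 1 MB prefetch window are illustrative, not taken from the thread):

    #define _XOPEN_SOURCE 600       /* expose posix_fadvise() */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Path is purely illustrative -- any large data file will do. */
        int fd = open("/var/lib/pgsql/data/base/16384/16385", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Hint that the next 1 MB will be needed; the kernel can start
         * reading it ahead while the process keeps working, so later
         * read() calls find the pages already in the OS cache. */
        int rc = posix_fadvise(fd, 0, 1024 * 1024, POSIX_FADV_WILLNEED);
        if (rc != 0)
            fprintf(stderr, "posix_fadvise: %s\n", strerror(rc));

        /* ... normal synchronous reads of those blocks follow ... */
        return 0;
    }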

Re: [PERFORM] running benchmark test on a 50GB database

2006-09-20 Thread Jim C. Nasby
On Wed, Sep 20, 2006 at 05:47:41PM +0200, Chris Mair wrote: > > > I am running a benchmark test on a 50 GB PostgreSQL database. > > I have the postgresql.conf with all parameters by default. > > In this configuration the database is very, very slow. > > > > Could you please tell which is the best co

Re: [PERFORM] Large tables (was: RAID 0 not as fast as

2006-09-20 Thread Ron
IMHO, AIO is the architecturally cleaner and more elegant solution. We in fact have a project on the boards to do this but funding (as yet) has not been found. My $.02, Ron At 02:02 PM 9/20/2006, Markus Schaber wrote: Hi, Luke, Luke Lonergan wrote: >> Do you think that adding some posix_f
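For comparison, a minimal sketch of a single POSIX AIO read via <aio.h> (file name and block size are illustrative assumptions; link with -lrt on older glibc). Keeping several such requests queued at once is what allows more than one seek to be in flight, unlike a synchronous read() loop:

    #define _POSIX_C_SOURCE 200112L
    #include <aio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[8192];                  /* one 8 KB PostgreSQL-sized block */
        int fd = open("/var/lib/pgsql/data/base/16384/16385", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct aiocb cb;
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;

        /* Queue the read and return immediately; more aiocbs could be
         * queued here to keep multiple seeks outstanding at once. */
        if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

        /* Wait for this one request to finish, then collect the result. */
        const struct aiocb *list[1] = { &cb };
        aio_suspend(list, 1, NULL);
        ssize_t n = aio_return(&cb);
        printf("read %zd bytes\n", n);
        return 0;
    }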

Re: [PERFORM] Large tables (was: RAID 0 not as fast as

2006-09-20 Thread Markus Schaber
Hi, Luke, Luke Lonergan wrote: >> Do you think that adding some posix_fadvise() calls to the backend to >> pre-fetch some blocks into the OS cache asynchronously could improve >> that situation? > > Nope - this requires true multi-threading of the I/O, there need to be > multiple seek operation

Re: [PERFORM] Large tables (was: RAID 0 not as fast as

2006-09-20 Thread Luke Lonergan
Markus, On 9/20/06 1:09 AM, "Markus Schaber" <[EMAIL PROTECTED]> wrote: > Do you think that adding some posix_fadvise() calls to the backend to > pre-fetch some blocks into the OS cache asynchronously could improve > that situation? Nope - this requires true multi-threading of the I/O, there ne

Re: [PERFORM] running benchmark test on a 50GB database

2006-09-20 Thread Chris Mair
> I am running a benchmark test on a 50 GB PostgreSQL database. > I have the postgresql.conf with all parameters by default. > In this configuration the database is very, very slow. > > Could you please tell which is the best configuration? > > My system: > Pentium D 3.0 GHz > RAM: 1GB > HD: 150GB S

Re: [PERFORM] running benchmark test on a 50GB database

2006-09-20 Thread Dave Dutcher
I would start by reading this web page: http://powerpostgresql.com/PerfList There are probably some other web pages out there with similar information, or you can check the mailing list archives for a lot of info. If those places don't help, then you should try to identify which queries are slow
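As a rough illustration of the kind of parameters that page covers, a possible postgresql.conf starting point for a 1 GB machine might look like the sketch below; the values use 8.1-era units (8 KB pages or KB) and are assumptions to tune from, not figures taken from this thread:

    shared_buffers = 16384          # 16384 x 8 KB pages = 128 MB
    effective_cache_size = 65536    # ~512 MB expected in the OS cache
    work_mem = 8192                 # 8 MB per sort/hash step, in KB
    maintenance_work_mem = 65536    # 64 MB for VACUUM/ANALYZE, in KB
    checkpoint_segments = 16        # fewer, larger checkpoints
    wal_buffers = 64                # 64 x 8 KB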

[PERFORM] running benchmark test on a 50GB database

2006-09-20 Thread Nuno Alves
Hi, I am running a benchmark test on a 50 GB PostgreSQL database. I have the postgresql.conf with all parameters by default. In this configuration the database is very, very slow. Could you please tell which is the best configuration? My system: Pentium D 3.0 GHz RAM: 1GB HD: 150GB SATA Thanks in

Re: [PERFORM] Update on high concurrency OLTP application and Postgres

2006-09-20 Thread Cosimo Streppone
Andrew wrote: On Wed, Sep 20, 2006 at 11:09:23AM +0200, Cosimo Streppone wrote: I scheduled a cron job every hour or so that runs an analyze on the 4/5 most intensive relations and sleeps 30 seconds between every analyze. This suggests to me that your statistics need a lot of updating. Agre
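A sketch of what such an hourly job might look like when driven from cron; the table names, database name, and script location are illustrative assumptions, not details given in the thread:

    #!/bin/sh
    # e.g. installed as /etc/cron.hourly/analyze-hot-tables
    # Refresh statistics on the handful of most heavily written relations,
    # pausing between each ANALYZE to spread out the load.
    for t in orders order_items sessions stock; do
        psql -U postgres -d appdb -c "ANALYZE $t"
        sleep 30
    done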

Re: [PERFORM] Update on high concurrency OLTP application and Postgres 8 tuning

2006-09-20 Thread Andrew Sullivan
On Wed, Sep 20, 2006 at 11:09:23AM +0200, Cosimo Streppone wrote: > > I scheduled a cron job every hour or so that runs an analyze on the > 4/5 most intensive relations and sleeps 30 seconds between every > analyze. > > This has optimized db response times when many clients run together. > I want

[PERFORM] Update on high concurrency OLTP application and Postgres 8 tuning

2006-09-20 Thread Cosimo Streppone
Hi all, I was searching for tips to speed up/reduce load on a Pg8 app. Thank you for all your suggestions on the matter. The thread is archived here: http://www.mail-archive.com/pgsql-performance@postgresql.org/msg18342.html After intensive application profiling and database workload analysis, I manage

Re: [PERFORM] Large tables (was: RAID 0 not as fast as

2006-09-20 Thread Markus Schaber
Hi, Luke, Luke Lonergan wrote: > Since PG's heap scan is single threaded, the seek rate is equivalent to a > single disk (even though RAID arrays may have many spindles), the typical > random seek rates are around 100-200 seeks per second from within the > backend. That means that as sequential