Re: [PERFORM] Decreasing BLKSZ

2006-09-25 Thread Marc Morin
> > The bottom line here is likely to be "you need more RAM" :-( Yup. Just trying to get a handle on what I can do if I need more than 16G of RAM... That's as much as I can put on the installed base of servers, 100s of them. > > I wonder whether there is a way to use table partitioning t

Re: [PERFORM] Decreasing BLKSZ

2006-09-25 Thread Mark Lewis
I'm not sure if decreasing BLKSZ is the way to go. It would allow you to have more, smaller blocks in memory, but the actual coverage of the index would remain the same; if only 33% of the index fits in memory with the 8K BLKSZ then only 33% would fit in memory with a 4k BLKSZ. I can see where you
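A quick back-of-the-envelope check of the point above, in Python with made-up sizes: halving the block size doubles both the number of blocks in the index and the number of blocks that fit in cache, so the cached fraction is unchanged.

```python
# Sketch with hypothetical sizes: the cached fraction of an index depends
# only on total index size vs. available RAM, not on the block size
# (per-page overhead ignored for simplicity).
def cached_fraction(index_bytes: int, cache_bytes: int, blksz: int) -> float:
    total_blocks = index_bytes // blksz     # blocks in the whole index
    cached_blocks = cache_bytes // blksz    # blocks that fit in cache
    return min(cached_blocks / total_blocks, 1.0)

index_bytes = 24 * 1024**3   # hypothetical 24 GB index
cache_bytes = 8 * 1024**3    # 8 GB of cache
for blksz in (8192, 4096):
    print(f"BLKSZ={blksz}: {cached_fraction(index_bytes, cache_bytes, blksz):.0%} cached")
```

Both block sizes print 33%: shrinking BLKSZ changes granularity, not coverage.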

Re: [PERFORM] Decreasing BLKSZ

2006-09-25 Thread Tom Lane
"Marc Morin" <[EMAIL PROTECTED]> writes: > No, an insert consists of roughly 10,000+ rows per transaction block. Perhaps it would help to pre-sort these rows by key? Like Markus, I'm pretty suspicious of lowering BLCKSZ ... you can try it but it's likely to prove counterproductive (more btree i

Re: [PERFORM] Decreasing BLKSZ

2006-09-25 Thread Marc Morin
> Would it be possible to change the primary key to > (logtime,key)? This could help keep the "working window" small. No, the application accessing the data wants all the rows between start and end time for a particular key value. > > Secondly, the real working set is smaller, as the rows

Re: [PERFORM] Decreasing BLKSZ

2006-09-25 Thread Markus Schaber
Hi, Marc, Marc Morin wrote: > The problem is, the insert pattern has low correlation with the > (key,logtime) index. In this case, we would need >1M blocks in my > shared_buffer space to prevent a read-modify-write type of pattern > happening during the inserts (given a large enough database). Wo
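The ">1M blocks" figure quoted above translates into a concrete amount of shared_buffers; a quick sanity check, assuming the default 8 KB block size:

```python
# How much RAM does 1M buffers of the default 8 KB block size cost?
blocks_needed = 1_000_000   # figure quoted in the message above
blksz = 8192                # default BLCKSZ in bytes
gib = blocks_needed * blksz / 1024**3
print(f"{gib:.1f} GiB of shared_buffers")  # ≈ 7.6 GiB
```

That is already close to the 16 GB ceiling mentioned earlier in the thread, before leaving anything for the OS cache or the heap itself.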

[PERFORM] Decreasing BLKSZ

2006-09-25 Thread Marc Morin
Our application has a number of inserters posting rows of network statistics into a database.  This is occurring continuously.  The following is an example of a stats table (simplified but maintains key concepts).     CREATE TABLE stats (   logtime timestamptz,   key int,   s

Re: [PERFORM] PostgreSQL and sql-bench

2006-09-25 Thread yoav x
Hi I am not comparing Postgres to MyISAM (obviously it is not a very fair comparison) and we do need ACID, so all comparisons are made against InnoDB (which now supports MVCC as well). I will try again with the suggestions posted here. Thanks. --- Tom Lane <[EMAIL PROTECTED]> wrote: > yoav x

Re: [PERFORM] Large tables (was: RAID 0 not as fast as

2006-09-25 Thread Luke Lonergan
Jim, On 9/22/06 7:01 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote: > There's been talk of adding code that would have a seqscan detect if > another seqscan is happening on the table at the same time, and if it > is, to start its seqscan wherever the other seqscan is currently > running. That wou

Re: [PERFORM] Multi-processor question

2006-09-25 Thread Markus Schaber
Hi, Kjell Tore, Kjell Tore Fossbakk wrote: > I got two AMD Opteron 885 processors (2.6ghz) and 8 gig of memory. > Harddrives are 4 scsi disks in 10 raid. > > I'm running gentoo, and the kernel finds and uses all of my 2 (4) cpu's. > > How can I actually verify that my PostgreSQL (or that my OS)

[PERFORM] Multi-processor question

2006-09-25 Thread Kjell Tore Fossbakk
Hello! I got two AMD Opteron 885 processors (2.6ghz) and 8 gig of memory. Harddrives are 4 scsi disks in 10 raid. I'm running gentoo, and the kernel finds and uses all of my 2 (4) cpu's. How can I actually verify that my PostgreSQL (or that my OS) actually gives each new query a fresh idle CPU) all of
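A hedged starting point for the question above, from Python (the /proc path assumes Linux): confirm how many logical CPUs the OS exposes. Watching which CPU each backend actually runs on is then a job for tools like top (press 1 for the per-CPU view). Note that each PostgreSQL connection is a separate OS process, so the kernel scheduler spreads concurrent queries across cores on its own; a single query still uses only one CPU.

```python
import os

# Logical CPUs the OS scheduler can use; on a dual dual-core Opteron
# box this should report 4.
print("logical CPUs:", os.cpu_count())

# Cross-check against /proc/cpuinfo where available (Linux only).
try:
    with open("/proc/cpuinfo") as f:
        procs = sum(line.startswith("processor") for line in f)
    print("/proc/cpuinfo processors:", procs)
except FileNotFoundError:
    pass  # non-Linux platform
```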