> IMHO the size of the DB is less relevant than the query workload. For
> example, if you're storing 100GB of data but only doing a single
> index scan on it every 10 seconds, any modern machine with enough HD
> space should be fine.
I agree that the workload is likely to be the main issue in most cases.
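A quick way to check whether a query like that is actually doing an index scan is EXPLAIN ANALYZE, which prints the plan the planner chose along with real timings. A minimal sketch (the database, table and column names here are made up for illustration; substitute your own):

```shell
# EXPLAIN ANALYZE runs the query and reports the chosen plan
# (Index Scan vs. Seq Scan) plus actual row counts and times.
# "microarray_db", "expression_data" and "probe_id" are hypothetical names.
psql microarray_db -c "EXPLAIN ANALYZE
  SELECT * FROM expression_data WHERE probe_id = 'AFFX-12345';"
```

If the output shows a Seq Scan where you expected an Index Scan, the usual suspects are a missing index or stale planner statistics.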
Hi,
I have done some performance tests on 1 GB and 4 GB databases on a
single-CPU Pentium 4 with 1 GB RAM, comparing an IDE disk, SCSI disks,
and a RAID0 LUN on a DAS 5300, all on Linux Red Hat 7.3.
In each case my tests run SELECT, UPDATE and INSERT statements.
One of them is pgbench. You can find it in Postgres/contrib/pgbench.
The othe
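For reference, a typical pgbench session looks like this: initialize the test tables once at a chosen scale factor, then run a timed mix of SELECT/UPDATE/INSERT transactions against them. The database name below is made up for illustration:

```shell
# Initialize pgbench's tables at scale factor 100
# (roughly 10 million rows in the main "accounts" table).
pgbench -i -s 100 testdb

# Run the standard TPC-B-like transaction mix:
# 10 concurrent clients, 1000 transactions per client.
pgbench -c 10 -t 1000 testdb
```

The TPS figure it reports at the end is what makes runs on different disk setups (IDE vs. SCSI vs. RAID0) directly comparable.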
Thanks for the reply. Actually our database will only
serve a few scientists (we predict that), so there
is no workload problem. There are only very
infrequent updates, and the queries are not complex. The
problem is that we have one table that stores most of the
data (with 200 million rows). In this table, ther
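With infrequent updates and simple queries, a single 200-million-row table is usually fine as long as the columns you filter on are indexed and the planner statistics are current. A sketch, again with hypothetical table and column names:

```shell
psql microarray_db <<'SQL'
-- "expression_data" / "probe_id" are made-up names; index whatever
-- column your queries filter on. Building the index on 200M rows
-- is slow, but rare updates mean its maintenance cost stays low.
CREATE INDEX idx_expression_probe ON expression_data (probe_id);

-- Refresh planner statistics so the new index actually gets used.
ANALYZE expression_data;
SQL
```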
----- Original Message -----
From: Neil Conway <[EMAIL PROTECTED]>
Date: Wednesday, November 26, 2003 10:03 pm
Subject: Re: [PERFORM] very large db performance
question
> LIANHE SHAO <[EMAIL PROTECTED]> writes:
> > We will have a very large database to store
microarray data (may
> > exce
LIANHE SHAO <[EMAIL PROTECTED]> writes:
> We will have a very large database to store microarray data (may
> exceed 80-100G some day). Now we have 1 GB RAM, a 2 GHz Pentium 4
> (1 CPU), and enough hard disk.
> Could anybody tell me whether our hardware is an issue or not?
IMHO the size of the DB is less relevant than the query workload.
Hello All,
We will have a very large database to store
microarray data (may exceed 80-100G some day). Now
we have 1 GB RAM, a 2 GHz Pentium 4 (1 CPU), and enough
hard disk.
I have never touched such a large database before. I asked
several DBAs whether the hardware is OK; some said it is
OK for the queries, but I am