On Tue, Apr 03, 2007 at 09:28:28AM +0100, Tim Perrett wrote:
> Hey all
> 
> I am possibly looking to use PostgreSQL in a project I am working on for a very
> large client. The upshot of this is that the throughput of data will be pretty
> massive, around 20,000 new rows in one of the tables per day. We also have to
> keep this data online for a set period, so after 5 or 6 weeks it could have
> nearly a million rows.
> 
> Are there any implications with possibly doing this? Will PG handle it? Are
> there real-world systems using PG that have a massive amount of data in them?

This is in no way massive for pg. Many millions of rows are not a problem at
all, given a proper schema and indexing, and reasonable hardware (hint: it
might be a bit slow on your laptop). 20,000 rows/day is still no more than
about 14 per minute, which is a very light load for any server-grade machine
to handle.
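For reference, the arithmetic behind those figures checks out in a couple of
lines (a back-of-the-envelope sketch; the numbers come from the thread, not
from any measurement):

```python
# Sanity-check the load figures quoted in the thread.
rows_per_day = 20_000

# Insert rate: 20,000 rows spread over 24 * 60 = 1,440 minutes.
rows_per_minute = rows_per_day / (24 * 60)
print(f"{rows_per_minute:.1f} rows/minute")  # about 14 per minute

# Retention: 6 weeks of data at that rate.
rows_after_six_weeks = rows_per_day * 7 * 6
print(f"{rows_after_six_weeks:,} rows")  # 840,000 -- "nearly a million"
```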

//Magnus

