I have the same kind of table as yours, with the potential to grow past 50
billion records once operational. But our hardware is currently very limited
(8GB RAM).

I concur with Tom Lane that partial indexes aren't really an option here,
but what about partitioning?

I read in the Postgres docs that "The exact point at which a table will
benefit from partitioning depends on the application, although a rule of
thumb is that the size of the table should exceed the physical memory of the
database server."
http://www.postgresql.org/docs/current/static/ddl-partitioning.html
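
(As an aside, for anyone wanting to check where a table stands against that
rule of thumb, the sizes are easy to query; 'readings' below is just a
placeholder name for the table:

  -- heap size alone, and heap plus all its indexes
  SELECT pg_size_pretty(pg_relation_size('readings'))       AS heap_only,
         pg_size_pretty(pg_total_relation_size('readings')) AS with_indexes;
)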

Now, a table with 500M records would already exceed our RAM, so I wonder
what impact a table of 50G records would have on simple lookup performance
(i.e. source = fixed, timestamp = range), taking into account that a global
index would exceed our RAM at around 1G records.
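
To make the question concrete, here is roughly the layout I have in mind,
following the inheritance-based scheme from that docs page (table and column
names are invented for illustration):

  -- Parent table holds no data itself; the children do.
  CREATE TABLE readings (
      source integer     NOT NULL,
      ts     timestamptz NOT NULL,
      value  double precision
  );

  -- One child per month; the CHECK constraint lets the planner
  -- skip partitions outside the queried timestamp range.
  CREATE TABLE readings_2010_03 (
      CHECK (ts >= DATE '2010-03-01' AND ts < DATE '2010-04-01')
  ) INHERITS (readings);

  -- Per-partition index, hopefully small enough to stay cached.
  CREATE INDEX readings_2010_03_source_ts
      ON readings_2010_03 (source, ts);

  -- constraint_exclusion must not be off (8.4 defaults to 'partition').
  SET constraint_exclusion = on;

  -- My typical lookup: fixed source, timestamp range.
  EXPLAIN SELECT *
    FROM readings
   WHERE source = 42
     AND ts >= '2010-03-10' AND ts < '2010-03-11';

The hope being that a query like the one above touches only
readings_2010_03, whose local index fits in RAM, rather than one global
index that doesn't.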

Did anyone do some testing? Is partitioning a viable option in such a
scenario?

"Adrian von Bidder" <avbid...@fortytwo.ch> wrote in message 
news:201003020849.19...@fortytwo.ch... 


