Roger Tannous wrote:

Hello,

I have a database table that is growing too big (a few hundred million rows) and needs to be optimized. Before I get into partitioning it, I thought I'd ask for suggestions.

Here is the usage:

  0. The table contains about 10 columns, each roughly 20 bytes long.

  1. INSERTs are performed at a rate of hundreds per second.

  2. SELECTs are performed on column 'a' (WHERE a = 'xxxx') a few
     times per hour.

  3. DELETEs are performed on a DATE column (deleting rows whose
     date is older than 1 year), usually once per day.

The key requirement is to speed up the INSERT and SELECT statements, and to keep one year of history without locking the whole table down while deleting.
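
In SQL terms, and assuming a hypothetical table named events with columns a and event_date (no names are given in the original message, so these are placeholders), the workload looks roughly like this:

    -- Hypothetical schema; real column names and types are assumptions.
    CREATE TABLE events (
        a          varchar(20),
        event_date date NOT NULL DEFAULT current_date
        -- ... plus roughly 8 more short columns in the real table
    );

    -- 1. INSERTs, hundreds per second:
    INSERT INTO events (a, event_date) VALUES ('xxxx', current_date);

    -- 2. SELECTs on column a, a few per hour:
    SELECT * FROM events WHERE a = 'xxxx';

    -- 3. the daily purge of rows older than one year:
    DELETE FROM events WHERE event_date < current_date - interval '1 year';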

I would guess that I need two indexes, one for column 'a' and the other for the date field. Or is it possible to optimize both?
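
For what it's worth, on the hypothetical events table sketched above, those would simply be two independent b-tree indexes; they don't conflict with each other, though each extra index adds a little work to every INSERT:

    -- One b-tree per access path (index names are placeholders):
    CREATE INDEX events_a_idx    ON events (a);
    CREATE INDEX events_date_idx ON events (event_date);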

Will there necessarily be a trade-off between SELECT speed and DELETE speed?

Is partitioning the only solution? What are good strategies for partitioning such a table?

I'm using a PostgreSQL 8.4 database.


We partition similar tables by date, typically by month. That way you can simply drop an old month's table, after updating the triggers to put new data into the new month's table.
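
On 8.4 that means inheritance-based partitioning: an empty parent, one child table per month with a CHECK constraint on the date range, and a trigger on the parent that routes inserts into the right child. A minimal sketch, reusing the assumed events schema from above with placeholder month boundaries:

    -- Child for one month; the CHECK constraint lets the planner skip
    -- this child when a query's date range rules it out.
    CREATE TABLE events_2010_01 (
        CHECK (event_date >= DATE '2010-01-01'
           AND event_date <  DATE '2010-02-01')
    ) INHERITS (events);

    -- Indexes are not inherited, so each child gets its own copies.
    CREATE INDEX events_2010_01_a_idx    ON events_2010_01 (a);
    CREATE INDEX events_2010_01_date_idx ON events_2010_01 (event_date);

    -- Route inserts on the parent into the current month's child.
    -- Regenerate (or extend the IF chain of) this function whenever a
    -- new month's child is created.
    CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS trigger AS $$
    BEGIN
        IF NEW.event_date >= DATE '2010-01-01'
           AND NEW.event_date < DATE '2010-02-01' THEN
            INSERT INTO events_2010_01 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for date %', NEW.event_date;
        END IF;
        RETURN NULL;  -- the row is already stored in the child
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_insert_before
        BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();

    -- Purging a year-old month is then near-instant and doesn't
    -- lock the rest of the data:
    DROP TABLE events_2009_01;

One caveat: constraint exclusion (constraint_exclusion = 'partition' is the default in 8.4) only prunes children for queries that filter on event_date; lookups on 'a' alone will still probe every child's index, which is typically acceptable at a dozen or so small index probes instead of one big one.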
