From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Roger Tannous
Sent: Thursday, March 04, 2010 11:33 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Optimal database table optimization method

 

Hello,

I have a database table that is growing too big (a few hundred million
rows) and needs to be optimized. Before I get into partitioning it,
I thought I'd ask for suggestions.

Here is the usage:

0. The table contains about 10 columns, each roughly 20 bytes wide.

1. INSERTs are performed at a rate of hundreds per second.

2. SELECTs are performed based on column 'a' (WHERE a = 'xxxx')
   a few times per hour.

3. DELETEs are performed based on a DATE column (delete rows whose
   date is older than 1 year), usually once per day.

The key requirement is to speed up the INSERT and SELECT statements,
and to be able to keep one year of history without locking the whole
table while deleting.

I would guess that I need two indexes: one on column 'a' and another
on the date column. Or is it possible to optimize for both?
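For reference, a minimal sketch of the two indexes under consideration (the table and column names here are hypothetical stand-ins for the real schema):

```sql
-- Hypothetical table matching the workload described above.

-- Index to serve: SELECT ... WHERE a = 'xxxx'
CREATE INDEX big_table_a_idx ON big_table (a);

-- Index to serve: DELETE FROM big_table
--                 WHERE date_col < now() - interval '1 year'
CREATE INDEX big_table_date_idx ON big_table (date_col);
```

Each additional index adds a maintenance cost to every INSERT, which matters at hundreds of inserts per second.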

Will there be an unavoidable trade-off between SELECT speed and
DELETE speed?

Is partitioning the only solution? What are good strategies for
partitioning such a table?

I'm using a PostgreSQL 8.4 database.
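On 8.4 there is no declarative partitioning; partitioning is built from table inheritance, CHECK constraints, and an insert-routing trigger. A minimal sketch with monthly partitions on the date column (all names are illustrative, and only one month's branch is shown):

```sql
CREATE TABLE events (
    a        text,
    date_col date
    -- ... remaining columns ...
);

-- One child table per month; the CHECK constraint lets
-- constraint_exclusion skip irrelevant partitions at SELECT time.
CREATE TABLE events_2010_03 (
    CHECK (date_col >= DATE '2010-03-01' AND date_col < DATE '2010-04-01')
) INHERITS (events);

CREATE INDEX events_2010_03_a_idx ON events_2010_03 (a);

-- Route inserts on the parent to the matching child
-- (simplified here to a single branch).
CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.date_col >= DATE '2010-03-01'
       AND NEW.date_col < DATE '2010-04-01' THEN
        INSERT INTO events_2010_03 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'No partition for date %', NEW.date_col;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_insert BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();

-- Expiring a year-old month then becomes a cheap metadata operation
-- instead of a long, lock-heavy DELETE:
-- DROP TABLE events_2009_03;
```

With this layout the daily purge is a DROP TABLE (or TRUNCATE) of the oldest partition rather than a scan of the whole table, and with constraint_exclusion enabled (the 8.4 default for inheritance queries is 'partition') SELECTs that filter on date_col only touch the relevant children.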

>> 

Adding indexes will slow down the INSERT statements, but both the
SELECT and DELETE statements should speed up.

I suggest adding the indexes in a test environment to see if the changes
are beneficial.
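In that test environment, EXPLAIN ANALYZE is the quickest way to confirm the planner actually uses a new index (names here are the same hypothetical ones as above):

```sql
EXPLAIN ANALYZE SELECT * FROM big_table WHERE a = 'xxxx';
-- Look for an Index Scan using big_table_a_idx in the plan
-- rather than a Seq Scan on big_table.
```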

<< 
