he timestamp index where it
is but I should probably get rid of the others.
Thank you again for your help, people.
Regards
Loic Petit
Quite a lot of answers!
> Are all INSERTs made by one application? Maybe PREPARED STATEMENTS
> will help you? What about optimization at the application level?
Yes, it's only one application (through JDBC); optimizations (grouped
transactions, prepared statements) will be done, but that's our very first step.
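To make the "grouped transactions" idea concrete, here is a minimal SQL sketch (table and column names are invented for illustration; the real schema is not shown in this thread). Instead of paying one implicit transaction, and thus one WAL flush, per INSERT, many rows are committed together:

```sql
-- Without grouping, each INSERT is its own transaction and forces
-- its own WAL flush; wrapping a batch in BEGIN/COMMIT amortizes
-- that cost over many rows. Names below are hypothetical.
BEGIN;
INSERT INTO sensor_0001 (ts, v1, v2, v3, packet_no)
    VALUES (now(), 1.0, 2.0, 3.0, 42);
INSERT INTO sensor_0001 (ts, v1, v2, v3, packet_no)
    VALUES (now(), 1.1, 2.1, 3.1, 43);
-- ... many more rows ...
COMMIT;
```

With JDBC the equivalent is `setAutoCommit(false)` plus `PreparedStatement.addBatch()`/`executeBatch()`, then a single `commit()`.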
What I described in the last mail is what I try to do.
But as I said earlier, I only manage about 3-4 inserts per second because of my
problem.
So that comes to roughly one insert every 30 minutes for each table.
One sensor (so one table) sends a packet every second (for 3000 sensors).
=> So we have: 1 insert per second into each of the 3000 tables (and their indexes).
Fortunately, there are no updates or deletes in it...
Each table has about 5 indexes: one on the timestamp, one for each of the 3
sensor value types, and one on the packet counter (to measure packet dropping).
(I reckon this is quite heavy, but at least the timestamp and the values
are really useful.)
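For concreteness, a per-sensor table with that index set might look like the sketch below (all names are hypothetical; the original DDL is not shown in the thread). The point is that every single-row INSERT has to update all five index structures in addition to the heap:

```sql
-- One table per sensor; names are invented for illustration.
CREATE TABLE sensor_0001 (
    ts        timestamp NOT NULL,
    v1        double precision,   -- one column per sensor value type
    v2        double precision,
    v3        double precision,
    packet_no integer             -- used to detect dropped packets
);

-- The 5 indexes described above; each INSERT touches all of them.
CREATE INDEX sensor_0001_ts_idx     ON sensor_0001 (ts);
CREATE INDEX sensor_0001_v1_idx     ON sensor_0001 (v1);
CREATE INDEX sensor_0001_v2_idx     ON sensor_0001 (v2);
CREATE INDEX sensor_0001_v3_idx     ON sensor_0001 (v3);
CREATE INDEX sensor_0001_packet_idx ON sensor_0001 (packet_no);
```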
I was a bit confused about the reads and writes, sorry! I understand what you
mean...
But do you think that the I/O cost (of only one page) needed to handle the
index write can be greater than 300 ms? Because each insert into any of these
tables is that slow.
NB: between my "small" and my "big" tests the
Actually, I've got another test system with only a few sensors (thus few
tables), and it works well (<10 ms per insert) with all the indexes.
I know they slow down my inserts, but I need them to query the big tables
(each one can reach millions of rows over time) really fast.
Regards
Loïc
Hi,
I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number
of sensors. In order to get good performance when querying by timestamp for
each sensor, I partitioned my measures table per sensor. Thus I create
a lot of tables.
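In 8.3, this kind of partitioning is done with table inheritance plus CHECK constraints. A minimal sketch of what one child table per sensor could look like (names invented here; the actual schema is not given in the thread):

```sql
-- Parent table; it holds no rows itself.
CREATE TABLE measures (
    sensor_id integer NOT NULL,
    ts        timestamp NOT NULL,
    value     double precision
);

-- One child table per sensor. The CHECK constraint lets the planner
-- skip irrelevant children when constraint_exclusion is enabled.
CREATE TABLE measures_0001 (
    CHECK (sensor_id = 1)
) INHERITS (measures);
```

With 3000 sensors, this yields 3000 child tables, each with its own set of indexes.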
I simulated a large sensor network with 3000 nodes