On Sat, 23 Aug 2008, Loic Petit wrote:
> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large
> number of sensors. In order to get good performance when querying by
> timestamp on each sensor, I partitioned my measures table per sensor.
> Thus I create a lot of tables.

As far as …
That's not a bad idea, at least for historical data.
But one of the most common tasks in sensor network monitoring is watching
the latest readings.
With the indexes, what I can do is: SELECT * FROM measures_xx ORDER BY
timestamp DESC LIMIT 1 => and I get the very last reading in a blink (one
page …
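For reference, a minimal sketch of that latest-reading pattern; the table
layout and all names below are assumptions for illustration, not the
actual schema from the thread:

  -- Hypothetical per-sensor partition; every name here is made up.
  CREATE TABLE measures_42 (
      packet_id   integer,
      timestamp   timestamp NOT NULL,
      temperature real,
      humidity    real,
      light       real
  );
  CREATE INDEX measures_42_ts_idx ON measures_42 (timestamp);

  -- A backward scan of the timestamp index finds the newest row after
  -- touching only a handful of pages, hence the "blink" response time.
  SELECT * FROM measures_42 ORDER BY timestamp DESC LIMIT 1;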
I don't know whether the max_fsm_relations issue will solve your problem
or not. I do know that you definitely want to increase it to a number
larger than the sum of all your tables and indexes -- preferably with room
to grow. Additionally, the max_fsm_pages value will likely need to be
increased as your …
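As a rough illustration (the numbers are placeholders, not
recommendations -- size them from VACUUM VERBOSE output), the pre-8.4 free
space map settings live in postgresql.conf:

  # postgresql.conf (8.3; both settings are gone in 8.4+, where the
  # free space map is managed automatically)
  max_fsm_relations = 20000     # > total count of tables + indexes
  max_fsm_pages     = 1000000   # > total pages with reclaimable space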
Quite a lot of answers!

> Are all the INSERTs made by one application? Maybe PREPARED STATEMENTS
> will help you? What about optimization at the application level?

Yes, it's only one application (through JDBC); optimizations (grouped
transactions, prepared statements) will be done, but that's our very
first …
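At the SQL level, grouping inserts into one transaction and reusing a
prepared plan looks roughly like this (JDBC's PreparedStatement with
addBatch amounts to the same thing; the table and values reuse the
hypothetical measures_42 sketched earlier):

  PREPARE ins_measure (integer, timestamp, real, real, real) AS
      INSERT INTO measures_42 VALUES ($1, $2, $3, $4, $5);

  BEGIN;
  EXECUTE ins_measure (1, '2008-08-23 18:00:01', 21.4, 55.0, 301.0);
  EXECUTE ins_measure (2, '2008-08-23 18:00:02', 21.5, 54.8, 298.5);
  -- ... many rows per transaction, then one COMMIT for the batch ...
  COMMIT;

This saves both the per-statement planning cost and the per-row
transaction commit (fsync) cost.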
Just a guess, but have you tried increasing max_fsm_relations? This
probably shouldn't matter, but you'll want it to be larger than the sum of
all your tables and indexes, and it doesn't take that much memory to
increase it.
My next suggestion would be to log in as the superuser and 'vacuum
analyze' …
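If memory serves, a database-wide verbose vacuum on 8.3 reports free
space map usage at the end, which is the usual way to size the two fsm
settings (the exact wording of the output may differ):

  -- Run as superuser, connected to the database in question.
  VACUUM VERBOSE ANALYZE;
  -- The final INFO/DETAIL lines summarize FSM usage, along the lines of:
  --   free space map contains N pages in M relations
  --   A total of K page slots are in use ...
  -- If M approaches max_fsm_relations, or K approaches max_fsm_pages,
  -- raise the corresponding setting.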
Are all the INSERTs made by one application? Maybe PREPARED STATEMENTS
will help you? What about optimization at the application level?
Hello to all,
I have a question regarding the following comments.
How do you estimate vacuum aggressiveness?
I find it very difficult to set up the autovacuum settings correctly. It
seems to me that it is not aggressive enough, but when I change the
settings the autovacuum process is almost always …
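For what it's worth, these are the main 8.3 knobs for autovacuum
aggressiveness; the values shown are only to illustrate the direction
(smaller scale factors and naptime, and a higher cost limit, all mean more
aggressive vacuuming), not tuned recommendations:

  # postgresql.conf (8.3 defaults in parentheses)
  autovacuum                      = on
  autovacuum_naptime              = 30s    # (1min) check more often
  autovacuum_vacuum_scale_factor  = 0.1    # (0.2) vacuum at 10% dead rows
  autovacuum_analyze_scale_factor = 0.05   # (0.1)
  autovacuum_vacuum_cost_limit    = 400    # (-1, i.e. vacuum_cost_limit)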
What I described in the last mail is what I'm trying to do.
But as I said earlier, I only manage about 3-4 inserts / second because of
my problem.
So it's about one insert every 30 minutes for each table.
On Sat, Aug 23, 2008 at 7:31 PM, Loic Petit <[EMAIL PROTECTED]> wrote:
> One sensor (so one table) sends a packet each second (for 3000 sensors).
> => So we have: 1 insert per second for 3000 tables (and their indexes).
> Fortunately there are no updates or deletes on them...

Wait, I'm confused, I thought…
One table contains about 5 indexes: one on the timestamp, one for each
sensor value type (3 of them), and one for packet counting (to measure
packet dropping).
(I reckon that this is quite heavy, but at least the timestamp and the
values are really useful.)
I was a bit confused about the reads and writes, sorry! I understand what
you mean...
But do you think that the I/O cost (of only one page) needed to handle the
index write can exceed 300 ms? Because each insert into any of these
tables is that slow.
NB: between my "small" and my "big" tests the …
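One way to see where those 300 ms go is to time a single insert in psql
(generic psql usage, not something shown in the thread; values are
placeholders):

  \timing
  INSERT INTO measures_42 VALUES (3, localtimestamp, 21.6, 54.5, 297.0);
  -- psql reports: Time: ... ms
  -- Repeat after dropping the indexes to isolate their share of the cost.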
On Sat, Aug 23, 2008 at 6:09 PM, <[EMAIL PROTECTED]> wrote:
> On this smaller test the indexes exceed the allowed memory size (I've
> got over 400,000 readings per sensor), so they are mostly written to
> disk. And in the big test I had small indexes (< page size) because I
> only had about 5-10 rows per table; thus it was 3000*8kB = 24MB, which
> is lower than the …

They're always written to disk. Just sometimes they're not read.
Note that the OS cache…
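To verify that kind of size arithmetic, 8.3 can report actual on-disk
sizes per relation (standard PostgreSQL functions, applied here to the
hypothetical partition from earlier):

  -- Size of one partition's indexes (plus TOAST, if any).
  SELECT pg_size_pretty(pg_total_relation_size('measures_42')
                      - pg_relation_size('measures_42')) AS index_bytes;

  -- The ten biggest indexes in the database.
  SELECT relname, pg_size_pretty(pg_relation_size(oid))
  FROM pg_class
  WHERE relkind = 'i'
  ORDER BY pg_relation_size(oid) DESC
  LIMIT 10;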
Actually, I've got another test system with only a few sensors (thus few
tables) and it's working well (<10 ms per insert) with all the indexes.
I know they slow down my inserts, but I need them to query the big tables
(each one can reach millions of rows over time) really fast.
Regards
Loïc
Each INDEX adds a delay to INSERT. Try measuring performance without any
indexes.
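A minimal way to run that experiment on the hypothetical partition from
above: drop its indexes, time a batch insert, then rebuild and compare.

  DROP INDEX measures_42_ts_idx, measures_42_temp_idx,
             measures_42_hum_idx, measures_42_light_idx,
             measures_42_pkt_idx;

  \timing
  INSERT INTO measures_42
  SELECT g, localtimestamp, 21.0, 55.0, 300.0
  FROM generate_series(1, 1000) AS g;

  -- Recreate the indexes afterwards and repeat the timing to see how
  -- much of the per-insert cost they account for.
  CREATE INDEX measures_42_ts_idx ON measures_42 (timestamp);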
Hi,
I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large
number of sensors. In order to get good performance when querying by
timestamp on each sensor, I partitioned my measures table per sensor.
Thus I create a lot of tables.
I simulated a large sensor network with 3000 nodes …
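For context, per-sensor partitioning on 8.3 is typically done with table
inheritance; a minimal sketch, with the schema details assumed since the
thread never shows the actual DDL:

  -- Parent table holds the common schema; one child table per sensor.
  CREATE TABLE measures (
      sensor_id   integer NOT NULL,
      packet_id   integer,
      timestamp   timestamp NOT NULL,
      temperature real,
      humidity    real,
      light       real
  );

  -- With constraint_exclusion = on, a query filtering on sensor_id
  -- touches only the matching child instead of all 3000 partitions.
  CREATE TABLE measures_1 (CHECK (sensor_id = 1)) INHERITS (measures);
  CREATE TABLE measures_2 (CHECK (sensor_id = 2)) INHERITS (measures);
  -- ... up to measures_3000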