I also forgot to note that I had no problems setting up replication via
londiste (skytools). The cronjob that creates the partition each week for
me also adds the table to the replication set. As simple as:
londiste.py londiste.ini provider add 'public.Data__WI'
londiste.py londiste.i
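A minimal sketch of the DDL such a weekly cron job might run before registering the new table with londiste (the column name, week format, and concrete table name are assumptions — the original post only shows the naming pattern Data__WI):

```sql
-- Hypothetical weekly partition creation, assuming a parent table "Data"
-- with a timestamp column "ts", partitioned by ISO week. The concrete
-- name Data_2009_22 and the date bounds are illustrative assumptions.
CREATE TABLE Data_2009_22 (
    CHECK (ts >= '2009-05-25' AND ts < '2009-06-01')
) INHERITS (Data);
```

With constraint_exclusion enabled, the CHECK constraint lets the planner skip partitions that cannot match a query's date range.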
I currently have a database doing something very similar. I set up partition
tables with predictable names based on the data's timestamp week number,
e.g. Data__WI.
I have a trigger on the parent partition table to redirect data to the
correct partition (tablename := 'Data_' || to_char('$NEW
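The truncated trigger fragment above might look something like this when complete — a sketch only, since the original function body is cut off; the column name "ts", the week format, and the function/trigger names are assumptions:

```sql
-- Hypothetical reconstruction of the redirect trigger on the parent
-- table "Data". Assumes a timestamp column "ts" and weekly child
-- tables named like Data_2009_22.
CREATE OR REPLACE FUNCTION data_insert_redirect() RETURNS trigger AS $$
DECLARE
    tablename text;
BEGIN
    tablename := 'Data_' || to_char(NEW.ts, 'IYYY_IW');
    -- EXECUTE ... USING requires PostgreSQL 8.4+; older releases
    -- would build the full statement with quote_literal() instead.
    EXECUTE 'INSERT INTO ' || quote_ident(tablename) || ' SELECT ($1).*'
        USING NEW;
    RETURN NULL;  -- the row went to the child; skip inserting into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER data_redirect
    BEFORE INSERT ON Data
    FOR EACH ROW EXECUTE PROCEDURE data_insert_redirect();
```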
On Thu, May 28, 2009 at 05:24:33PM +0200, Ivan Voras wrote:
> 2009/5/28 Kenneth Marshall :
>
> >
> > One big benefit of partitioning is that you can prune old data with
> > minimal impact to the running system. Doing a large bulk delete would
> > be extremely I/O impacting without partition support.
It depends on how soon you need to access the data after it's created.
The way I do it in my systems: I get data from 8 points, a bit less than
you, but I dump it to CSV and import it on the database host (a separate
server).
Now, you could go to BDB or whatever, but that's not the solution.
So,
On Thu, May 28, 2009 at 5:06 PM, Ivan Voras wrote:
>> If you require precise data with the ability to filter, aggregate and
>> correlate over multiple dimensions, something like Hadoop -- or one of
>> the Hadoop-based column database implementations, such as HBase or
>> Hypertable -- might be a be
2009/5/28 Kenneth Marshall :
>
> One big benefit of partitioning is that you can prune old data with
> minimal impact to the running system. Doing a large bulk delete would
> be extremely I/O impacting without partition support. We use this for
> a DB log system and it allows us to simply truncate a
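The pruning advantage Kenneth describes is that retiring a week of data becomes a metadata-level operation rather than a row-by-row delete; a sketch (table and partition names assumed, matching the weekly scheme discussed above):

```sql
-- Dropping (or truncating) an old weekly partition is nearly instant:
DROP TABLE Data_2009_01;

-- versus the equivalent bulk delete on an unpartitioned table, which
-- must find, delete, and WAL-log every affected row, then leave dead
-- tuples for VACUUM to reclaim:
DELETE FROM Data WHERE ts < '2009-01-12';
```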
2009/5/28 Alexander Staubo :
> On Thu, May 28, 2009 at 2:54 PM, Ivan Voras wrote:
>> The volume of sensor data is potentially huge, on the order of 500,000
>> updates per hour. Sensor data is a few numeric(15,5) numbers.
>
> The size of that dataset, combined with the apparent simplicity of
> your s
On Thu, May 28, 2009 at 04:55:34PM +0200, Ivan Voras wrote:
> 2009/5/28 Heikki Linnakangas :
> > Ivan Voras wrote:
> >>
> >> I need to store data about sensor readings. There is a known (but
> >> configurable) number of sensors which can send update data at any time.
> >> The "current" state needs to be kept but also all historical records.
2009/5/28 Nikolas Everett :
> Option 1 is somewhere between 2 and 3 times as much work for the database
> as option 2.
Yes, for writes.
> Do you need every sensor update to hit the database? In a situation like
We can't miss an update - they can be delayed but they all need to be recorded.
2009/5/28 Heikki Linnakangas :
> Ivan Voras wrote:
>>
>> I need to store data about sensor readings. There is a known (but
>> configurable) number of sensors which can send update data at any time.
>> The "current" state needs to be kept but also all historical records.
>> I'm trying to decide between these two designs:
On Thu, May 28, 2009 at 2:54 PM, Ivan Voras wrote:
> The volume of sensor data is potentially huge, on the order of 500,000
> updates per hour. Sensor data is a few numeric(15,5) numbers.
The size of that dataset, combined with the apparent simplicity of
your schema and the apparent requirement for
Option 1 is somewhere between 2 and 3 times as much work for the database
as option 2.
Do you need every sensor update to hit the database? In a situation like
this I'd be tempted to keep the current values in the application itself and
then sweep them all into the database periodically. If
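The periodic sweep Nikolas suggests amounts to batching many sensor readings into one statement instead of one round trip per update; a sketch of what each sweep might execute (table and column names are assumptions):

```sql
-- Hypothetical periodic flush of application-buffered readings.
-- One multi-row INSERT replaces hundreds of single-row round trips.
INSERT INTO Data (sensor_id, ts, value)
VALUES (1, now(), 12.34567),
       (2, now(), 23.45678),
       (3, now(), 34.56789);
```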
Ivan Voras wrote:
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
The "current" state needs to be kept but also all historical records.
I'm trying to decide between these two designs:
1) create a table for
Hi,
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
The "current" state needs to be kept but also all historical records.
I'm trying to decide between these two designs:
1) create a table for "current" data,