On Sat, Mar 5, 2011 at 6:09 AM, Rosser Schwarz wrote:
> On Fri, Mar 4, 2011 at 10:34 AM, Glyn Astill wrote:
>> I'm wondering (and this may be a can of worms) what people's opinions are on
>> these schedulers? I'm going to have to do some real world testing myself
>> with postgresql too, but ini
On Fri, Mar 4, 2011 at 4:18 PM, Landreville wrote:
> create temporary table deltas on commit drop as
> select * from get_delta_table(p_switchport_id, p_start_date,
> p_end_date);
>
> select round(count(volume_id) * 0.95) into v_95th_row from deltas;
> select in_rate into v_record.
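The quoted function cuts off mid-statement. One plausible continuation, assuming the goal is to fetch the row at the 95th-percentile rank of in_rate (v_95th_row, v_record, and get_delta_table() come from the snippet; the ORDER BY/OFFSET part is my guess, not the poster's actual code):

```sql
-- Hypothetical continuation: fetch the row sitting at the computed rank.
select in_rate into v_record
  from deltas
 order by in_rate
offset v_95th_row - 1
 limit 1;
```

On later PostgreSQL releases (9.4+), the ordered-set aggregate `percentile_cont(0.95) within group (order by in_rate)` computes the same value in a single statement.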
On 5 March 2011 12:59, Mark Thornton wrote:
> If your partitions are loosely time-based and you don't want to discard old
> data, then surely the number of partitions will grow without limit.
True, but is it relevant? With monthly table partitioning it takes
hundreds of years before having "thousands of partitions".
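The arithmetic behind that claim, as a quick sanity check (one partition per month, as described above):

```sql
-- One partition per month: even 2 000 partitions needs ~167 years of data.
select round(2000 / 12.0) as years_for_2000_partitions;
```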
On 05/03/2011 09:37, Tobias Brox wrote:
Sorry for not responding directly to your question and for changing
the subject ... ;-)
On 4 March 2011 18:18, Landreville wrote:
> That is partitioned into about 3000 tables by the switchport_id (FK to
> a lookup table), each table has about 30 000 rows currently (a row is
> inserted every 5 minutes)
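For readers following along: on the PostgreSQL versions current at the time (pre-10, before declarative partitioning), partitioning by switchport_id would typically use inheritance plus CHECK constraints. The table and column names below are modelled on the quoted description and are assumptions, not the poster's actual schema:

```sql
-- Hypothetical parent table, modelled on the description above.
create table volume (
    volume_id     bigserial,
    switchport_id integer not null,
    "timestamp"   timestamptz not null,
    in_octets     bigint,
    out_octets    bigint
);

-- One child table per switchport; with constraint_exclusion enabled,
-- the planner skips children whose CHECK constraint can't match.
create table volume_sp_42 (
    check (switchport_id = 42)
) inherits (volume);
```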
Any time the volume tables are queried it is to calculate the deltas
between each in_octets and out_octets from the previous row (ordered
by timestamp). The delta is used because the external system, where
the data is retrieved from, will roll over the value sometimes. I
have a function to do this.
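A sketch of the delta calculation described above, using a window function (available since PostgreSQL 8.4). The 32-bit counter range and the table name are assumptions for illustration, not taken from the poster's actual function:

```sql
-- lag() pairs each sample with the previous one (ordered by timestamp);
-- when the raw counter has wrapped, add back the 32-bit counter range.
select "timestamp",
       case when in_octets >= prev_in
            then in_octets - prev_in
            else in_octets - prev_in + 4294967296  -- counter rolled over
       end as in_delta
from (
    select "timestamp", in_octets,
           lag(in_octets) over (order by "timestamp") as prev_in
    from volume_sp_42  -- hypothetical per-switchport partition
) s
where prev_in is not null;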