I don't think this approach would work; the data keeps growing in the "hot"
table.
On Tue, Feb 10, 2015 at 6:01 AM, Melvin Davidson wrote:
> Well, without knowing too much about your application, it certainly sounds
> like using the metrics_MMDD is the way to go. As for modifying the
> constraint daily, couldn't you just use
> where timestamp > current_date - Interval '1 Day'
> ?
> Don't you have duplicate information within your UTC, location and
> local_time data? Maybe you can just attach a timezone to each location...
Yes, there is duplicate information, but dealing with time zones is a PITA,
and the easiest way to solve the myriad of problems I have is to store the
local time.
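For reference, Marc's alternative (deriving local time from a per-location timezone instead of storing it) could be sketched like this; the table and column names here are made up for illustration:

```sql
-- Hypothetical schema: each location carries its IANA timezone,
-- so local time can be derived from the UTC timestamp on demand.
CREATE TABLE locations (
    location_id int  PRIMARY KEY,
    tz          text NOT NULL   -- e.g. 'Pacific/Auckland'
);

CREATE TABLE metrics (
    location_id int         NOT NULL REFERENCES locations,
    utc_time    timestamptz NOT NULL,
    value       numeric
);

-- Derive local time at query time instead of storing it:
SELECT m.utc_time AT TIME ZONE l.tz AS local_time, m.value
FROM metrics m
JOIN locations l USING (location_id);
```

`timestamptz AT TIME ZONE text` yields a plain timestamp in that zone, which avoids keeping a redundant local_time column in sync.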
From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org]
on behalf of Melvin Davidson [melvin6...@gmail.com]
Sent: Tuesday, February 10, 2015 6:01 AM
To: Marc Mamin
Cc: Tim Uckun; pgsql-general
Subject: Re: [GENERAL] Partitioning with overlapping and non overlapping constraints
Well, without knowing too much about your application, it certainly sounds
like using the metrics_MMDD is the way to go. As for modifying the
constraint daily, couldn't you just use
where timestamp > current_date - Interval '1 Day'
?
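A minimal sketch of what Melvin describes, using the inheritance-style partitioning current at the time (the parent table and its `ts` timestamp column are assumed names):

```sql
-- One child table per day, with a fixed CHECK constraint; the
-- constraint itself never needs modifying once the child is created.
CREATE TABLE metrics_20150210 (
    CHECK (ts >= DATE '2015-02-10' AND ts < DATE '2015-02-11')
) INHERITS (metrics);

-- The query side stays fixed, as suggested above:
SET constraint_exclusion = partition;
SELECT * FROM metrics
WHERE ts > current_date - interval '1 day';
```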
On Mon, Feb 9, 2015 at 5:14 AM, Marc Mamin wrote:
>
> >
>I have two partitioning questions I am hoping somebody can help me with.
>
>I have a fairly busy metric(ish) table. It gets a few million records per day,
>the data is transactional for a while but then settles down and is used for
>analytical purposes later.
>
>When a metric is reported both t
Partitioning by day would result in fewer partitions, but of course it would
create a "hot" table where all the writes go.
Actually I have thought of an alternative and I'd be interested in your
opinion of it.
I leave the metrics table alone. The current code continues to read and
write from the
Perhaps I do not fully understand, but would it not be simpler
to just rearrange the key (and partition) by date & location?
EG: 2015_01_01_metrics_location_X
In that way, you would only have 365 partitions per year at most. But you
also have the option to break it down by week or
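The per-date, per-location layout suggested above could look something like this (illustrative only, again assuming an inheritance parent with `ts` and `location` columns):

```sql
-- One child per (day, location). Note that an identifier starting
-- with a digit must be double-quoted in PostgreSQL.
CREATE TABLE "2015_01_01_metrics_location_X" (
    CHECK (ts >= DATE '2015-01-01'
           AND ts <  DATE '2015-01-02'
           AND location = 'X')
) INHERITS (metrics);
```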
Tim Uckun wrote:
> 1. Should I be worried about having possibly hundreds of thousands of
> shards.
IIRC, yes.
> 2. Is PG smart enough to handle overlapping constraints on tables and limit
> its querying to only those tables that have the correct time constraint?
Probably yes, but seems easy enough
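It is easy to check the pruning behaviour with EXPLAIN; a sketch, assuming the usual inheritance setup with a `ts` column:

```sql
-- With constraint_exclusion = partition (the default since 8.4), the
-- planner compares the WHERE clause against each child's CHECK
-- constraint and skips children that cannot match; overlapping
-- constraints just mean more children survive the pruning.
SET constraint_exclusion = partition;
EXPLAIN SELECT *
FROM metrics
WHERE ts >= DATE '2015-02-01' AND ts < DATE '2015-02-02';
```

The plan output lists exactly which child tables will be scanned, so the overlap question can be answered empirically.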