On May 25, 2009, at 10:58 AM, Ramiro Diaz Trepat wrote:
The table with the atmosphere pixels currently has about 140MM
records, and the one with the values about 1000MM records. Both
should grow to about twice this size.
Did you tune postgres to use the available resources? By default it
ships with very conservative settings.
Hello,
I'd try PostGIS ( http://postgis.refractions.net ), which allows
you to geolocate your data. We use that extension here for the
same kind of data. It will reduce table size and speed up queries.
Otherwise, partitioning and cluster are good things to do. I would
increase defau
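A minimal sketch of what the PostGIS suggestion might look like, with hypothetical
table and column names, assuming a release where CREATE EXTENSION and the geometry
typmod are available (PostgreSQL 9.1+ / PostGIS 2.0+; older installs load the
contrib SQL scripts instead):

    CREATE EXTENSION IF NOT EXISTS postgis;

    -- Hypothetical pixel table with a geolocated point per pixel.
    CREATE TABLE atmosphere_pixel (
        pixel_id    bigserial PRIMARY KEY,
        geom        geometry(Point, 4326),   -- lon/lat of the pixel centre
        measured_at date NOT NULL
    );

    -- A GiST index makes spatial filters usable on a table this size.
    CREATE INDEX atmosphere_pixel_geom_idx
        ON atmosphere_pixel USING gist (geom);

    -- Example: pixels within ~1 degree of a point of interest.
    SELECT pixel_id
    FROM atmosphere_pixel
    WHERE ST_DWithin(geom,
                     ST_SetSRID(ST_MakePoint(-58.4, -34.6), 4326),
                     1.0);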
You might want to try CREATE CLUSTER. I had a 40M-row table that was
taking ages to access. I tried partitioning it into 12,000 sub-tables
and obtained a modest speed-up. Using CREATE CLUSTER on an
un-partitioned table resulted in an enormous speed-up, though.
You will need to choose the axis you want to cluster on.
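For reference, the actual PostgreSQL statement is CLUSTER (there is no CREATE
CLUSTER command). A minimal sketch with hypothetical table and index names,
using the 8.3+ syntax:

    -- Build an index on the "axis" you query along, then physically
    -- reorder the table to match it, so range scans touch far fewer pages.
    CREATE INDEX atmosphere_value_pixel_time_idx
        ON atmosphere_value (pixel_id, measured_at);

    CLUSTER atmosphere_value USING atmosphere_value_pixel_time_idx;

    -- CLUSTER is a one-off reordering; re-run it after large bulk loads,
    -- and ANALYZE afterwards so the planner sees the new physical order.
    ANALYZE atmosphere_value;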
Try partitioning. It should sort you out.
-Peter-
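At the time of this thread, partitioning meant table inheritance plus
constraint_exclusion; on PostgreSQL 10 and later the same idea is declarative.
A sketch of the modern form, with hypothetical names:

    CREATE TABLE atmosphere_value (
        pixel_id    bigint NOT NULL,
        measured_at date   NOT NULL,
        value       real   NOT NULL
    ) PARTITION BY RANGE (measured_at);

    CREATE TABLE atmosphere_value_2009
        PARTITION OF atmosphere_value
        FOR VALUES FROM ('2009-01-01') TO ('2010-01-01');

    -- Queries that filter on measured_at only scan the matching partitions.
    SELECT avg(value)
    FROM atmosphere_value
    WHERE measured_at BETWEEN '2009-03-01' AND '2009-03-31';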
On 5/25/09, Ramiro Diaz Trepat wrote:
>
> Hello list, I will try to make this as brief as possible.
> I have a brother who is a scientist studying atmospheric problems. He was
> trying to handle all of his data with flat files and MATLAB, when I
Grzegorz Jaśkiewicz wrote:
True, if you don't want to search on the values much, or at all, use
float[]. But otherwise, keep the values in tables as such.
It might be humongous in size, but at the end of the day the prime
concern when designing a db is the speed of queries.
If he's worried about speed
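To make the trade-off concrete, here is a hypothetical query against a float[]
column: the individual values are opaque to indexes, so searching on them means
expanding every array (explicit LATERAL needs PostgreSQL 9.3+):

    SELECT d.point_id, d.month_no, v.val
    FROM distribution d
    CROSS JOIN LATERAL unnest(d.vals) AS v(val)
    WHERE v.val > 100.0;   -- no index can help; every row's array is unnested

    -- In a fully normalized layout the same predicate could use a plain
    -- btree index on the value column, at the cost of ~216 billion rows.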
On Fri, Nov 28, 2008 at 5:46 PM, Simon Riggs <[EMAIL PROTECTED]> wrote:
>
> I would look carefully at the number of bits required for each float
> value. 4 bytes is the default, but you may be able to use fewer bits than
> that rather than rely upon the default compression scheme working in
> your favour.
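A hedged sketch of that idea: if, and only if, the science tolerates a fixed
precision and range, the values can be scaled into 2-byte smallints instead of
4-byte reals, halving the raw data size. Names and the two-decimal assumption
are hypothetical:

    CREATE TABLE distribution_scaled (
        point_id integer    NOT NULL,
        month_no smallint   NOT NULL,
        vals     smallint[] NOT NULL   -- value * 100, assuming 2 decimals suffice
    );

    -- Writing:  round(v * 100)::smallint for each element.
    -- Reading back as floats:
    SELECT point_id, month_no, unnest(vals)::real / 100 AS val
    FROM distribution_scaled;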
On Fri, 2008-11-28 at 15:40 +, William Temperley wrote:
> Hi all
>
> Has anyone any experience with very large tables?
>
> I've been asked to store a grid of 1.5 million geographical locations,
> fine. However, associated with each point are 288 months, and
> associated with each month are 500 float values (a distribution curve).
2008/11/28 William Temperley <[EMAIL PROTECTED]>
>
> Any more normalized and I'd have 216 billion rows! Add an index and
> I'd have - well, a far bigger table than 432 million rows each
> containing a float array - I think?
>
> Really I'm worried about reducing storage space and network overhead
>
William Temperley wrote:
> Really I'm worried about reducing storage space and network overhead
> - therefore a nicely compressed chunk of binary would be perfect for
> the 500 values - wouldn't it?
An array that large would likely be compressed by Postgres internally
for storage; see
http://w
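One way to check whether that internal (TOAST) compression actually kicks in,
assuming a hypothetical distribution table with a real[] column named vals:

    SELECT point_id,
           pg_column_size(vals)           AS stored_bytes,    -- size after any TOAST compression
           24 + array_length(vals, 1) * 4 AS approx_raw_bytes -- header + 500 float4 elements
    FROM distribution
    LIMIT 5;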
Really I'm worried about reducing storage space and network overhead
- therefore a nicely compressed chunk of binary would be perfect for
the 500 values - wouldn't it?
For storage space you might want to look at ZFS with compression
enabled, if you are using FreeBSD or Solaris.
That would s
On Fri, Nov 28, 2008 at 3:48 PM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> William Temperley wrote:
>> So a 216 billion row table is probably out of the question. I was
>> considering storing the 500 floats as bytea.
>
> What about a float array, float[]?
I guess that would be the obvious choice.
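A sketch of what that float[] layout might look like; names are hypothetical,
and one row per (location, month) gives 432 million rows instead of 216 billion:

    CREATE TABLE distribution (
        point_id integer  NOT NULL,     -- 1.5 million grid locations
        month_no smallint NOT NULL,     -- 1..288
        vals     real[]   NOT NULL,     -- the 500-value distribution curve, ~2 kB
        PRIMARY KEY (point_id, month_no)
    );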
On Fri, Nov 28, 2008 at 3:48 PM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> William Temperley wrote:
>
> > I've been asked to store a grid of 1.5 million geographical locations,
> > fine. However, associated with each point are 288 months, and
> > associated with each month are 500 float values
William Temperley wrote:
> I've been asked to store a grid of 1.5 million geographical locations,
> fine. However, associated with each point are 288 months, and
> associated with each month are 500 float values (a distribution
> curve), i.e. 1,500,000 * 288 * 500 = 216 billion values :).
>
>
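The arithmetic behind the two designs, checked in SQL:

    SELECT 1500000::bigint * 288 * 500 AS fully_normalized_rows,  -- 216,000,000,000
           1500000::bigint * 288       AS rows_with_float_array;  -- 432,000,000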