My only comment is: what is the layout of your data (just one table with
indexes)?
I found that on my data, with dozens of joins, the view speed was not good
enough for me to use, so I made a flat (denormalized) table with no joins and
it flies.
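For what it's worth, a rough sketch of the kind of flattened table I mean
(the table, column, and index names here are made up for illustration, not
the actual schema):

-- Illustrative only: collapse a many-join view into one wide reporting table.
-- All names (orders, customers, products, report_flat) are hypothetical.
CREATE TABLE report_flat AS
SELECT o.order_id,
       o.order_date,
       c.customer_name,
       p.product_name,
       o.quantity,
       o.quantity * p.unit_price AS line_total
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
JOIN   products  p ON p.product_id  = o.product_id;

-- Index whatever the reports filter on, and rebuild the table on a schedule.
CREATE INDEX report_flat_order_date_idx ON report_flat (order_date);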
Joel Fradkin
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-75
Quoting Christopher Kings-Lynne <[EMAIL PROTECTED]>:
> Another trick you can use with large data sets like this when you want
> results back in seconds is to have regularly updated tables that aggregate
> the data along each column normally aggregated against the main data set.
Christopher Kings-Lynne wrote:
Another trick you can use with large data sets like this when you want
results back in seconds is to have regularly updated tables that aggregate
the data along each column normally aggregated against the main data set.
Maybe some bright person will prove me wrong ...
On Wed, 2005-05-11 at 12:53 +0800, Christopher Kings-Lynne wrote:
> Another trick you can use with large data sets like this when you want
> results back in seconds is to have regularly updated tables that aggregate
> the data along each column normally aggregated against the main data set.
Another trick you can use with large data sets like this when you want
results back in seconds is to have regularly updated tables that aggregate
the data along each column normally aggregated against the main data set.
Maybe some bright person will prove me wrong by posting some working
information ...
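A minimal sketch of the kind of summary table meant here (all table and
column names are invented for the example; the refresh interval is whatever
your reports can tolerate):

-- Hypothetical names throughout: pre-aggregate a large "sales" table by region.
CREATE TABLE sales_by_region AS
SELECT region, sum(amount) AS total_amount, count(*) AS num_rows
FROM   sales
GROUP  BY region;

-- Refresh on a schedule (e.g. nightly from cron); queries that only need
-- per-region totals then read a handful of rows instead of scanning sales.
BEGIN;
TRUNCATE sales_by_region;
INSERT INTO sales_by_region
SELECT region, sum(amount), count(*)
FROM   sales
GROUP  BY region;
COMMIT;

With one such table per column you normally aggregate on, the expensive scans
happen once per refresh instead of once per query.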
Matt Olson wrote:
Other databases like Oracle and DB2 implement some sort of row prefetch. Has
there been serious consideration of implementing something like a prefetch
subsystem? Does anyone have any opinions as to why this would be a bad idea
for postgres?
Postgres is great for a multiuser environment ... kSQL.)
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Chris Browne
Sent: Tuesday, May 10, 2005 4:14 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Prefetch - OffTopic
[EMAIL PROTECTED] ("Mohan, Ross") writes:
> for time-series and "insane fast", nothing beats kdB, I believe ...
[EMAIL PROTECTED] ("Mohan, Ross") writes:
> for time-series and "insane fast", nothing beats kdB, I believe
>
> www.kx.com
... Which is all well and fine if you're prepared to require that all of
the staff who interact with the data are skilled APL hackers. Skilled
enough that they're all ready to leap ...
Greg Stark <[EMAIL PROTECTED]> writes:
> Actually forcing things to use indexes is the wrong direction to go if you're
> trying to process lots of data and want to stream it off disk as rapidly as
> possible. I would think about whether you can structure your data such that
> you can use sequential scans ...
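One way to act on that advice, as a sketch only (the table names, the
monthly split, and the use of plain inheritance are all assumptions for
illustration): keep rows that are read together stored together, so a
date-range report becomes a sequential scan of one modest table rather than
scattered index probes over a huge one.

-- Sketch with invented names: one child table per month under a parent.
CREATE TABLE price_history (
    stock_id    integer,
    trade_date  date,
    close_price numeric
);

CREATE TABLE price_history_2005_04 (
    CHECK (trade_date >= DATE '2005-04-01' AND trade_date < DATE '2005-05-01')
) INHERITS (price_history);

-- Load each day's rows into the current month's child; a monthly report can
-- then sequentially scan just that child table.
SELECT stock_id, avg(close_price)
FROM   price_history_2005_04
GROUP  BY stock_id;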
My postgres binaries and WAL are on a separate disk from the RAID array. The
table I'm doing the selects from is probably about 4GB in size and holds
18-20 million records. No concurrent or dependent inserts or deletes are
going on.
Tom's point and your points about optimizing the application are well taken ...
Matt Olson <[EMAIL PROTECTED]> writes:
> I've done other things that make sense, like using indexes, playing with the
> planner constants and turning up the postgres cache buffers.
>
> Even playing with extreme hdparm read-ahead numbers (e.g., 64738) yields no
> apparent difference in database performance ...
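For reference, a sketch of the kind of settings meant by "planner constants"
and "cache buffers" (the values shown are placeholders, not recommendations):

-- Placeholder values only; these normally live in postgresql.conf.
SET effective_cache_size = 131072;  -- counted in 8kB pages here, roughly 1GB of expected OS cache
SET random_page_cost = 2.0;         -- lower it if random reads are cheap on the array
SHOW shared_buffers;                -- raising this one needs a postgresql.conf edit and a restart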
> I've done other things that make sense, like using indexes, playing with the
> planner constants and turning up the postgres cache buffers.
After you load the new day's data, try running CLUSTER on the structure
using a key of (stockID, date) -- probably your primary key.
This should significantly ...
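For example, something along these lines (the table and index names are
assumed, and this is the CLUSTER syntax of that era; newer releases spell it
CLUSTER tablename USING indexname):

-- Assumed names: daily_prices with a primary key index on (stock_id, trade_date).
CLUSTER daily_prices_pkey ON daily_prices;

-- Re-run after each daily load, then ANALYZE so the planner knows about the
-- freshly ordered data.
ANALYZE daily_prices;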
Matt Olson <[EMAIL PROTECTED]> writes:
> Other databases like Oracle and DB2 implement some sort of row prefetch. Has
> there been serious consideration of implementing something like a prefetch
> subsystem?
No.
> Does anyone have any opinions as to why this would be a bad idea for
> postgres?