d to drill down
So, I'd like to have my cake and eat it too... :-)
I'd like to have my indexes built as rows are inserted into the
partition, to help with the drill down...
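A rough sketch of the idea (the child table, CHECK range, and index name
here are assumptions modeled on the stats tables described further down,
not something from the original post): create each child partition with its
index already in place, so rows are indexed as they are inserted and
drill-down queries can use the index immediately.

CREATE TABLE stats_2006_09
(
    CHECK (logtime >= '2006-09-01' AND logtime < '2006-10-01')
) INHERITS (stats);

-- the index exists before any rows arrive, so inserts maintain it as they go
CREATE INDEX stats_2006_09_key_logtime_idx ON stats_2006_09 (key, logtime);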
> -Original Message-
> From: Bucky Jordan [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, September 26,
>
> The bottom line here is likely to be "you need more RAM" :-(
Yup. Just trying to get a handle on what I can do if I need more than
16G of RAM... That's as much as I can put in the installed base of
servers, 100s of them.
>
> I wonder whether there is a way to use table partitioning t
> Would it be possible to change the primary key to
> (logtime, key)? This could help keep the "working window" small.
No, the application accessing the data wants all the rows between start
and end time for a particular key value.
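For reference, the access pattern looks roughly like this (a hedged
illustration; the literal key value and time window are made up):

SELECT *
FROM stats
WHERE key = 1234
  AND logtime >= '2006-09-26 00:00'
  AND logtime <  '2006-09-27 00:00'
ORDER BY logtime;

An index led by key, e.g. (key, logtime), matches this directly, whereas
(logtime, key) does not.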
>
> Secondly, the real working set is smaller, as the rows
Our application has
a number of inserters posting rows of network statistics into a database.
This is occurring continuously. The following is an example of a stats
table (simplified, but it maintains the key concepts).
CREATE TABLE stats
(
    logtime timestamptz,
    key int,
    s
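The definition above is clipped by the archive; a fuller version of what
such a stats table might look like (the counter column is an assumption,
loosely modeled on the other stats table shown further down):

CREATE TABLE stats
(
    logtime timestamptz,
    key int,
    s1 bigint          -- assumed; the original column list is clipped here
);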
An EXPLAIN ANALYZE would be nice ;-)
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Andrus
> Sent: Tuesday, April 04, 2006 3:37 PM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] Query runs too long for indexed tables
>
> I have relative
Wondering if

UPDATE firma1.rid SET toode = NULL
WHERE toode IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM firma1.toode WHERE toode = rid.toode);

would be faster... The problem appears to be a seqscan nested inside a
seqscan... no?
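If the inner seqscan is the problem, an index on the referenced column
usually turns it into an index probe (a sketch; whether firma1.toode
already has such an index, and the index name, are assumptions):

CREATE INDEX toode_toode_idx ON firma1.toode (toode);
ANALYZE firma1.toode;   -- refresh stats so the planner will pick the index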
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> [mailto:[EMAIL PROTECTED]
> Sent: Friday, March 10, 2006 1:31 PM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Trouble managing planner for
> timestamptz columns
>
> "Marc Morin" <[EMAIL PROTECTED]> writes:
> > We tend
We have large tables that hold statistics based on time. They are of the
following form:
CREATE TABLE stats
(
    id serial primary key,
    logtime timestamptz,
    d1 int,
    s1 bigint
);
CREATE INDEX idx ON stats (logtime);
Some of these tables have new data inserted at a rate of 500,000+
rows/hour.
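At 500,000+ rows/hour, how the rows arrive matters as much as the schema.
A hedged sketch of the kind of batching that keeps per-row overhead down
(the values are made up):

-- batching many rows per transaction avoids a WAL flush per row
BEGIN;
INSERT INTO stats (logtime, d1, s1) VALUES (now(), 17, 4096);
INSERT INTO stats (logtime, d1, s1) VALUES (now(), 18, 8192);
-- ... many more rows per transaction ...
COMMIT;

-- for bulk feeds, COPY is cheaper still
COPY stats (logtime, d1, s1) FROM STDIN;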
We've done a lot of testing on large DBs with a lot of "inserts" and
have a few comments.
The updates are "treated" as a large "insert", as we all know, from pg's
point of view.
We've run into 2 classes of problems: excessive WAL checkpoints and the
effects of low correlation.
WAL log writes full 8K
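The excessive-checkpoint side of this is usually attacked in
postgresql.conf; a hedged sketch (the values are illustrative assumptions,
not settings from this thread):

checkpoint_segments = 64    # default is 3; each segment is 16MB of WAL
checkpoint_timeout = 900    # seconds; fewer forced checkpoints means fewer
                            # bursts of full-page (8K) WAL writes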
All good ideas; unfortunately, we can't change the inserting application
code easily.
> -Original Message-
> From: Simon Riggs [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, February 07, 2006 5:09 PM
> To: Marc Morin
> Cc: Markus Schaber; pgsql-performance@postgresql.org
, 2006 7:44 AM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> Hi, Marc,
>
> Marc Morin wrote:
>
> > 1- long running report is running on view
> > 2- continuous inserters into v
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, February 01, 2006 12:50 AM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> "Marc Morin" <[EMAIL PROTECTED]> writes:
> > Would l
We have a large database system designed around partitioning. Our
application is characterized by
- terabytes of data
- billions of rows in dozens of base tables (and 100s of partitions)
- a 24x7 insert load of new data that cannot be stopped; the data is time
sensitive
- periodic reports that can h
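A hedged sketch of the partition pruning such a layout leans on (the table
name and date range are assumptions): with constraint_exclusion on, a
report bounded on the partitioning column only reads the partitions whose
CHECK constraints overlap the window, leaving the partitions taking the
24x7 insert load alone.

SET constraint_exclusion = on;

SELECT count(*)
FROM stats
WHERE logtime >= '2006-02-01'
  AND logtime <  '2006-03-01';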
We have a large DB
with partitioned tables in postgres. We have had trouble with an
ORDER/LIMIT type query: the ORDER BY and LIMIT are not pushed down to the
sub-tables.
CREATE TABLE base
(
    foo int
);
CREATE TABLE bar_0
(
    extra int
) INHERITS (base);
ALTER TABLE bar ADD PRIMA
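One workaround sketch (bar_1 and the ordering column are assumptions,
since the example above is clipped): push the ORDER BY/LIMIT into each
child by hand and merge the small per-child result sets.

SELECT foo
FROM (
    (SELECT foo FROM bar_0 ORDER BY foo DESC LIMIT 10)
    UNION ALL
    (SELECT foo FROM bar_1 ORDER BY foo DESC LIMIT 10)
) AS merged
ORDER BY foo DESC
LIMIT 10;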
I have run into this type of query problem as well. I solved it in my
application with the following type of query.
SELECT tlid
FROM completechain AS o
WHERE NOT EXISTS (
    SELECT 1
    FROM completechain
    WHERE tlid = o.tlid AND ogc_fid != o.ogc_fid
);
Assumes of course that you hav
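Whatever the clipped assumption was, a query like that typically wants a
supporting index on the join columns (a sketch; the index name is made up,
and whether this is what the original sentence referred to is a guess):

CREATE INDEX completechain_tlid_ogc_fid_idx
    ON completechain (tlid, ogc_fid);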