visualize
the problem in 2D or 3D, but I'm not sure that that geometric intuition
holds up in such a high-dimensional space as we have here.
I will say that I don't understand the problem well enough to grasp
the multi-dimensional nature of this problem.
David Lang
--
Hi,
I have a unique requirement. I have a feed of 2.5-3 million rows of
data which arrives every half hour. Each row has 2 small string
values (about 50 chars each) and 10 int values. I need searchability
and the ability to run arbitrary queries on any of these values.
This means I have to create an i
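For arbitrary single-column filters, one b-tree index per searchable column
is the usual starting point; the planner can then combine them with bitmap
scans for multi-column filters. A minimal sketch, assuming a hypothetical
table named feed with columns s1, s2 and i1 through i10:

  -- Hypothetical schema for the half-hourly feed described above.
  CREATE TABLE feed (
      s1 varchar(50),
      s2 varchar(50),
      i1 int, i2 int, i3 int  -- ... through i10
  );
  -- One index per column that must be searchable.
  CREATE INDEX feed_s1_idx ON feed (s1);
  CREATE INDEX feed_s2_idx ON feed (s2);
  CREATE INDEX feed_i1_idx ON feed (i1);
  -- ... and so on for the remaining int columns.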
ff (a==b) )
if you could drop that constraint (the cost of which would be extra 'real'
compares within a bucket) then a helper function per datatype could work
as you suggest.
David Lang
Andre Felipe Machado wrote:
It seems that Firebird on Windows can adequately use as much RAM as it
finds, while PostgreSQL on Windows cannot.
PostgreSQL relies on the OS cache to utilize RAM. Make sure that most of
the RAM is 'available' so Windows can do its thing.
effective_cache_size should be set
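A hedged illustration (the value is made up; in this era the setting is
expressed in 8 kB pages, and the right number depends on how much RAM
Windows actually keeps free for its filesystem cache):

  -- Illustrative only; normally set in postgresql.conf rather than per session.
  SET effective_cache_size = 131072;  -- roughly 1 GB expressed in 8 kB pages
  SHOW effective_cache_size;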
rite (which you then wait for), so you can only do one transaction per
rotation.
David Lang
On Sat, 11 Mar 2006, Joost Kraaijeveld wrote:
Date: Sat, 11 Mar 2006 09:17:09 +0100
From: Joost Kraaijeveld <[EMAIL PROTECTED]>
To: David Lang <[EMAIL PROTECTED]>
Cc: Richard Huxton , pgsql-performance@postgresql.org
Subject: Re: [PERFORM] x206-x225
On Fri, 2006-03-10 at 23:57
e you get on it.
David Lang
uery approach no
faster than the loop approach? I assume that if my tests managed 100
or 1000 rows in the collection table, the batch approach would show
an improvement, but why would it not when I was using it to manage
8-10 rows? Have I missed something here?
Thanks,
David
Actually, that was from an article from this last month that compared
the dual-core Intel to the AMD. For every dollar spent on the Intel,
you got about half the performance of the AMD. Not bigotry; fact.
But don't believe me or the other people who've seen the difference. Go
buy the I
My personal favorite pg platform at this time is one based on a 2 socket, dual
core ready mainboard with 16 DIMM slots combined with dual core AMD Kx's.
Right. We've been buying Tyan bare-bones boxes like this.
It's better to go with bare-bones than building boxes from bare metal
because th
h use. Is there no way to EXPLAIN ANALYZE this stuff?
Thanks Tom.
Best,
David
On Apr 25, 2006, at 19:36, Tom Lane wrote:
It looks like you had something trivial as the definition of foo().
Yeah, the function call. :-)
Try one of the actual queries from the plpgsql function.
Oh. Duh. Will do. Tomorrow.
Best,
David
The reasons AMD has held off from supporting DDR2 until now are:
1. DDR is EOL. JEDEC is not ratifying any DDR faster than 200x2 while DDR2
standards as fast as 333x4 are likely to be ratified (note that Intel pretty
much avoided DDR, leaving it to AMD, while DDR2 is Intel's main RAM tech
>While in general there may not be that much of a % difference between
>the 2 chips, there's a huge gap in Postgres. For whatever reason,
>Postgres likes Opterons. Way more than Intel P4-architecture chips.
It isn't only Postgres. I work on a number of other server applications
that also run
ime=3.210 calls=7
Trigger for constraint entry_coll_tag_tag_id_fkey: time=4.412 calls=7
Total runtime: 158.672 ms
(8 rows)
Actually looks pretty good to me. Although is generate_series() being
rather slow?
Thanks,
David
On May 2, 2006, at 16:49, David Wheeler wrote:
On Apr 25, 2006, at 19:36, Tom Lane wrote:
Try one of the actual queries from the plpgsql function.
Here we go:
try=# PREPARE foo(int, int[], int) AS
try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
try-# SELECT $1, $2[gs.ser], gs.ser
On May 2, 2006, at 16:52, David Wheeler wrote:
Actually looks pretty good to me. Although is generate_series()
being rather slow?
Scratch that:
Bah, dammit, there were no rows in that relevant table. Please
disregard my previous EXPLAIN ANALYZE posts.
I've re-run my scrip
2006-05-04 18:04:58 EDT USER=postgres DB=FIX1 [12427] PORT
= [local] ERROR: invalid memory alloc request size 18446744073709551613
Perhaps I'm off beam here, but any
time I've seen an app try to allocate a gazillion bytes, it's
due to some code incorrectly calculating the size of something
TABLE. But
the performance was not much better:
try=# select fib(28);
fib
317811
(1 row)
Time: 8505.668 ms
try=# select fib_cached(28);
fib_cached
317811
(1 row)
So, what gives? Am I missing something, or not understanding how
IMMUTABLE works?
Many TIA,
David
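IMMUTABLE only licenses the planner to fold a call with constant arguments
at plan time; it is not a memoization cache across calls. A table is the
usual substitute. A minimal sketch, assuming a hypothetical fib_cache table
next to the fib() above:

  -- Hypothetical memoization table; not from the original thread.
  CREATE TABLE fib_cache (n int PRIMARY KEY, fib bigint NOT NULL);

  CREATE OR REPLACE FUNCTION fib_memo(p_n int) RETURNS bigint AS $$
  DECLARE
      result bigint;
  BEGIN
      SELECT fib INTO result FROM fib_cache WHERE n = p_n;
      IF FOUND THEN
          RETURN result;       -- cache hit: skip the recursion entirely
      END IF;
      result := fib(p_n);      -- cache miss: call the slow function once
      INSERT INTO fib_cache VALUES (p_n, result);
      RETURN result;
  END;
  $$ LANGUAGE plpgsql;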
1.
rst time I called fib(28), that the next call to fib(28)
would be lightning fast, even if fib(27) wasn't.
So, uh, would you mind telling me what I'm missing? I'm happy to turn
that knowledge into a documentation patch to help future boneheads
like mys
y helps. And it's easy to tell for a given function
whether or not it really helps by simply trying it with CACHED and
without.
Would this be a simple thing to implement?
Best,
David
be a simple thing to implement?
It's called a "table" :)
http://www.justatheory.com/computers/databases/postgresql/higher_order_plpgsql.html
Yes, I know. :-P But it'd be easier to have a CACHED keyword, of course.
Best,
David
---
the benchmarking function from the code actually being tested. So I
guess my question is, how important is it to have the control there,
and, if it is important, how should it actually work?
Many TIA,
David
queries.
Per our conversation I eliminated the EXECUTE 'string' +
generate_series. Check it out.
http://theory.kineticode.com/computers/databases/postgresql/benchmarking_functions.html
(Temporary URL; justatheory.com seems to have disappeared from DNS...)
Best,
David
-
I cannot scale beyond that value, and the funny thing is that none of the
servers is swapping or heavily loaded; neither postgres nor apache is
refusing connections.
Hearing a story like this (throughput hits a hard limit, but
hardware doesn't appear to be 100% utilized), I'd suspect
insuffi
Tom Lane wrote:
[EMAIL PROTECTED] writes:
I'm using httperf/autobench for measurements and the best result I can get
is that my system can handle traffic of almost 1600 new connections/sec.
As per PFC's comment, if connections/sec is a bottleneck for you then
the answer is to
Anthony Presley wrote:
I had an interesting discussion today w/ an Enterprise DB developer and
sales person, and was told, twice, that the 64-bit linux version of
Enterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)
is SIGNIFICANTLY SLOWER than the 32-bit version. Since the gu
Linux distribution.
:-)
David
My suggestion is to look at something like this:
http://www.abmx.com/1u-supermicro-amd-opteron-rackmount-server-p-210.html
1U rackmount opteron from Supermicro that can have two dual core
opterons and 4 drives and up to 16 gigs of ram. Supermicro server
motherboards have always treated me wel
Hello!
I am trying to delete an entire table. There are about 41,000 rows in
the table (based on count(*)).
I am using the SQL command: delete from table;
The operation seems to take in the order of hours, rather than seconds
or minutes.
"Explain delete from table" gives me:
2006-06-16 at 12:52 +0530, Gourish Singbal wrote:
>
> David,
>
> Truncate table would be a good idea if you want to delete all the data
> in the table.
> You need not perform vacuum in this case since there are no dead rows
> created.
>
> ~gourish
>
>
> On
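Side by side, the difference under discussion (TRUNCATE reclaims the table's
storage outright instead of marking each row dead, at the price of a
stronger lock):

  DELETE FROM mytable;     -- marks all 41,000 rows dead; needs VACUUM afterwards
  TRUNCATE TABLE mytable;  -- drops and recreates the storage; near-instant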
against text data. Thoughts?
Many TIA,
David
'pages generated' ought to be G times the
number of (virtual) processors, where G is the gradient of the graph. In
such a case the graph will go through the origin (0,0), but your graph
does not show this.
I'm a bit confused, what is the 'perfect' supposed to be?
Thanks
David
t for something like a simple
INSERT ... VALUES into a domain column, the setup overhead is still
bad.
I assume that there's no domain thingy that you already have that
could cache it, eh?
Sorry, I ask this as someone who knows no C and less about
PostgreSQL's internals.
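For context, a made-up example of the pattern being discussed; the domain's
CHECK expression has to be set up and evaluated for each inserted value:

  -- Made-up domain and table, purely illustrative.
  CREATE DOMAIN posint AS integer CHECK (VALUE > 0);
  CREATE TABLE t (id posint);
  INSERT INTO t VALUES (42);  -- the domain's CHECK runs on this value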
'm sure. I should bug potential future SoC students about it. ;-)
Best,
David
or.
Yes, I've seen it. I haven't tried it, either. It'd be nice if it had
a compatible license with PostgreSQL, though.
Best,
David
choice in my project since the beginning but
before putting all the money (that I don't have ... :-)) to buy some
hardware I just want to be sure I'm not crazy!
Thanks for your help I really appreciate it!!
Best Regards
/David
d=re_hphqiss/Ovw_Buy/DL385
I will look more deeply into them in detail trying to understand
something ...
Thanks for your help!
Best Regards
/David
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David
Gagnon
Sent: 3 July 2006 13:42
To: pgsql-
ng the list right now and
there is not a lot of info... :-( Any Idea?
So thanks for the info about the DL385 I will look deeply into it !
Best Regards
/David
/Mikael
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David
Gagnon
Sent: den 3 ju
Guoping Zhang wrote:
a) SERVER A to SERVER B: 0.35ms
SERVER A to itself (Local host): 0.022ms
0.35ms seems rather slow. You might try investigating what's in the path.
For comparison, between two machines here (three GigE switches in the
path), I see 0.10ms RTT. Between two machines on
hing ubuntu with great interest, it's debian under the covers,
but they're starting to get the recognition from the support groups of
companies)
David Lang
On Wed, 9 Aug 2006, Stephen Frost wrote:
* David Lang ([EMAIL PROTECTED]) wrote:
there's a huge difference between 'works on debian' and 'supported on
debian'. I do use debian extensively (along with slackware on my personal
machines), so I am comfortable getting thing
would these be? As a representative of the most
prominent one in the US I can tell you that you are not speaking from a
knowledgeable position.
Note I said many, not all. I am aware that your company does not fall into
this category.
David Lang
ease note, when I'm talking about support, it's not just postgresql
support, but also hardware/driver support that can run into these problems
David Lang
tial corruption.
Note that the ext3, reiserfs, jfs, and xfs developers (at least) consider
fsck necessary even for journaling filesystems; they just let you get away
without it being mandatory after an unclean shutdown.
David Lang
I learned the hard way that just raising it can lead to a hard
performance loss :)
I looked back in the list archives to try to find your post on the
underlying problem, but could only find this rather terse sentence.
If you have more detailed information please post or point me at it.
But...m
really makes me think that that area is just a comfortable way to
access files on disk as memory areas, with the hope of probably better
caching than non-memory-mapped files.
No, absolutely not. CreateFileMapping() does much the same thing
as mmap() in Unix.
That would explain my disturbing i
Unfortunately, operating system virtual memory and filesystem caching
code often does exactly the opposite of what a database application
would like.
For some reason the kernel guys don't see it that way ;)
Over the years there have been various kernel features added
with the overall goal of
the speed more or less normal for a 'dual G5 with 2 GHz and 4 GB of
RAM and a SATA hd', or do I miss something?
regards David
I have 2 servers both with the exact same data, the same O.S., the same
version of Postgres (7.4.5) and the exact same db schema's (one production
server, one development server). One server is using the correct index for
SQL queries resulting in extremely slow performance, the other server is
pro
ng. After correcting it and re-running the script, the poorly
performing SQL query takes only a few seconds as opposed to 15 minutes.
Thank you for your help!
- David
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Tom Lane
Sent: Wednesday, August 25, 2004
at the 4th line of the plan, but I don't know why it
would be different for the same schema, same dataset.
What factors go into the planner's decision to choose a nested loop over a hash
join? Should I be looking at adjusting my runtime configuration on the sparc
box somehow?
Thanks.
- DAP
--
David Parker, Tazz Networks, (401) 709-5130
Oh, I didn't realize that analyze gave that much more info. I've got a
lot to learn about this tuning stuff ;-)
I've attached the output. I see from the new output where the slow query
is taking its time (the nested loop at line 10), but I still have no
idea why this plan is getting chosen
T
>If they are the same and PostgreSQL are the same, are the
>intel machines Xeons?
Yup, dual 3.06-GHz Intel Xeon Processors.
I'm not sure off the top of my head what the sparcs are exactly. We're
in the process of moving completely to intel, but we still have to
support our app on sparc, and we a
>From: Jeff [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, November 17, 2004 9:01 AM
>To: David Parker
>Cc: Russell Smith; [EMAIL PROTECTED]
>Subject: Re: [PERFORM] query plan question
>
>
>On Nov 17, 2004, at 7:32 AM, David Parker wrote:
>
>> Oh, I didn
isn't analyzing those
tables.
- DAP
>-Original Message-
>From: David Parker
>Sent: Wednesday, November 17, 2004 9:44 AM
>To: 'Jeff'
>Cc: Russell Smith; [EMAIL PROTECTED]
>Subject: RE: [PERFORM] query plan question
>
>I've got pg_autovacuum running o
existing autovacuum threads would be greatly appreciated!
Thanks.
- DAP
>-Original Message-
>From: Tom Lane [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, November 17, 2004 10:46 AM
>To: David Parker
>Cc: Jeff; Russell Smith; [EMAIL PROTECTED]
>Subject: Re: [PERFORM]
I'm doing some performance profiling with a simple two-table query:
SELECT L."ProductID", sum(L."Amount")
FROM "drinv" H
JOIN "drinvln" L ON L."OrderNo" = H."OrderNo"
WHERE
("OrderDate" between '2003-01-01' AND '2003-04-30')
GROUP BY L."ProductID"
drinv and drinvln have about 100,000 and 3,500,00
--
David Parker, Tazz Networks, (401) 709-5130
yze at the end of a
long import - but this "mysterious" behavior was bugging me!
Thanks.
- DAP
>-Original Message-
>From: Matthew T. O'Connor [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, November 17, 2004 2:02 PM
>To: David Parker
>Cc: Tom Lane; Jeff; Russe
t know if there are any special issues with defining a tablespace that way.
Thanks.
- DAP
--
David Parker, Tazz Networks, (401) 709-5130
[EMAIL PROTECTED]
>Cc: David Parker
>Subject: Re: [PERFORM] tablespace + RAM disk?
>
>David,
>
>> We have a couple tables (holding information about network sessions,
>> for
>> instance) which don't need to persist beyond the life of the server,
>> b
..I should stop talking and go read about tablespaces
and memcached.
I'd be interested to hear if anybody has tried this. And I will also
check out memcached, too, of course. Thanks for the pointer.
- DAP
>-Original Message-
>From: [EMAIL PROTECTED]
>[mailto:[EMAIL PROTECTED
uning
3) more backend hardware
But I would be grateful to hear any tips/anecdotes/experiences that others might
have from tuning similar applications.
Thanks!
- DAP
------
David Parker, Tazz Networks, (401) 709-51
ce the need for client
caching).
Thanks.
- DAP
>-Original Message-
>From: Rod Taylor [mailto:[EMAIL PROTECTED]
>Sent: Friday, November 26, 2004 1:29 PM
>To: David Parker
>Cc: Postgresql Performance
>Subject: Re: [PERFORM] time to stop tuning?
>
>On Fri, 2004-11-26 a
DBMS. The archives
show there has been talk for years about this but, it seems, no solution. The
obvious thing to consider is the block size, but people have tried increasing
this in the past with only marginal success.
Regards
David
Summary: Doing a two or three table join for a date range performs
worse than doing the same query individually for each date in the
range.
What works: Doing a query just on a single date or a date range
(against just one table) runs quick; 'explain' says it uses an index
scan. Doing a query on a
'explain analyze' output is below. I have done analyze recently, and
am using pg 7.4.2 on SuSE 9.1. I'd be curious to know how to force "a
nestloop indexscan" to try it out.
Thanks,
Dave
mydb=> explain analyze select * from line_items t, sales_tickets s
where writtenDate >= '12/01/2002' and writtenDa
Although larger offsets have some effect, your real problem is the sort
(of 42693 rows).
Try:
SELECT r.id_order
FROM report r
WHERE r.id_order IN
(SELECT id
FROM orders
WHERE id_status = 6
ORDER BY 1
LIMIT 10 OFFSET 1000)
ORDER BY 1
The subquery doesn't *have* to sort because the table
You don't mention if you have run VACUUM or VACUUM ANALYZE
lately. That's generally one of the first things that folks will suggest. If you
have a lot of updates then VACUUM will clean up dead tuples; if you have a lot
of inserts then VACUUM ANALYZE will update statistics so that the planner
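That is, as a first diagnostic step (this is stock advice, not specific to
the poster's schema):

  VACUUM ANALYZE;  -- reclaim dead tuples and refresh the planner's statistics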
I'm hoping someone can shed some light on these results. The 'factor'
compares the ratios of cost to actual for different plans. Perhaps
nested loops should be given a discount in the planner? The estimates
seem to be out by one and a half orders of magnitude. :(
== QUERY ==
SEL
Tom Lane wrote:
What exactly did you do to force the various plan choices? (I see some
ridiculous choices of indexscans, for instance, suggesting improper use
of enable_seqscan in some cases.)
Except for forcing a hash with indexes (to show that increased use of
indexes is not necessarily good),
Tom Lane wrote:
However: the reason the second plan wins is because there are zero rows
fetched from sat_request, and so the bulk of the plan is never executed
at all. I doubt the second plan would win if there were any matching
sat_request rows.
That's what I thought at first, but if you look mor
Gaetano Mendola wrote:
I think it is due to the fact that the first queries were performed in peak hours.
A memory intensive operation takes 7.5 times longer during heavy loads.
Doesn't this suggest that swapping of working memory is occurring?
Hi -
This is based on a discussion I was having with neilc on IRC. He
suggested I post it here. Sorry for the length - I'm including
everything he requested.
I'm comparing the speeds of the following two queries on 7.4.5. I was
curious why query 1 was faster than query 2:
query 1:
Select layer_nu
to be moved from Oracle)
Tx,
David
Assuming your system isn't starved for memory, shouldn't repeated page
fetches be hitting the cache?
I've also wondered about the conventional wisdom that read ahead doesn't
help random reads. I may well be missing something, but *if* the OS has
enough memory to cache most of the table, surely
Jim C. Nasby wrote:
Ahh, I was thinking of a high correlation factor on the index. I still
question 5% though... that seems awfully low.
Not really. It all depends on how many records you're packing into each
page. 1% may well be the threshold for small records.
Tom mentioned this in the last
Karim Nassar wrote:
Context: I have a two disk server that is about to become dedicated to
postgresql (it's a sun v40z running gentoo linux).
What's "theoretically better"?
1) OS and pg_xlog on one disk, rest of postgresql on the other? (if I
understand the above correctly)
2) Everything stripe
So
the process doesn't jam but takes time ... any idea what postgresql is doing
during this time?
If you have any idea on what the problem could be... I'd really appreciate
it.
Thanks for any help!
/David
Gregory Stark wrote:
The "this day and age" argument isn't very convincing. Hard drive capacity
growth has far outstripped hard drive seek time and bandwidth improvements.
Random access has more penalty than ever.
In point of fact, there haven't been noticeable seek time improvements
for years.
ut if I drop
indexes do I have to reload all my stored procedures (to reset the
planner-related info)? I remember having read that somewhere... (was it
in the PostgreSQL General Bits newsletter? ...anyway)
Thanks for your help, I really appreciate it :-)
/David
LOG: duration: 144.000 ms
LOG:
s the code from
the class. I must say that I run a script that contains create table,
alter table, and insert statements with the same runner.
If I'm wrong please tell me... I like to be wrong when the result is
eliminating a misunderstanding on my part :-)
Thanks for your help!
/David
public void
to have a big
performance impact... in this case. It means it's not really feasible
to empty the contents of a schema. The table has only 10k rows... with a
huge table it's not feasible just because of the checks on itself!
Can someone explain why there is this extra check? Is that c
Stephan Szabo wrote:
On Wed, 16 Mar 2005, David Gagnon wrote:
Hi
I re-ran the example with the debug info turned on in postgresql. As you
can see all dependent tables (that have a foreign key on table IC) are
emptied before the DELETE FROM IC statement is issued. From what I
understand the
Whoa Josh! I don't believe you're going to reduce the cost by 10 times
through a bit of tweaking - not without lowering the sequential scan
cost as well.
The only thing I can think of is perhaps his primary index drastically
needs repacking. Otherwise, isn't there a real anomaly here? Halving t
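Repacking, for reference, amounts to rebuilding the index; with a
hypothetical index name:

  REINDEX INDEX mytable_pkey;  -- rebuilds the index compactly, discarding bloat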
In my experience, if you are concerned about filesystem performance, don't
use ext3. It is one of the slowest filesystems I have ever used
especially for writes. I would suggest either reiserfs or xfs.
--David Dougall
On Fri, 11 Mar 2005, Arshavir Grigorian wrote:
> Hi,
>
>
, when I saw all read
activity cease for 2/3 seconds whilst the disk wrote, then disk read
resumed. This occurred repeatedly during a read/edit/write of a 3Gb file.
Performance not critical here so on the "hmm, that's odd" todo list :)
David
actor = 0.04
autovacuum_analyze_scale_factor = 0.02
autovacuum_vacuum_cost_delay = 10
autovacuum_vacuum_cost_limit = -1
I was suspicious that stats_row_level might not work because
stats_block_level is off. But I see pg_stat_user_tables.n_tup_ins,
pg_stat_user_tables.n_tup_upd and pg_st
James Mansion wrote:
Jakub Ouhrabka wrote:
How can we diagnose what is happening during the peaks?
Can you try forcing a core from a bunch of the busy processes? (Hmm -
does Linux have an equivalent to the useful Solaris pstacks?)
There's a 'pstack' for Linux, shipped at least in Red Hat distr
that would make adding an
additional server instead a better idea?
Also are there any settings that might boost our TPS on the existing
hardware (sync=off isn't an option.. (-: ). I have already
significantly increased the various buffers, but this was mainly to
t need anything like SERIALIZATION transaction isolation of the
online system from the entire load job.
Thanks for any ideas,
David Crane
DonorsChoose.org
good, though.
Thanks,
David Crane
-Original Message-
From: Ow Mun Heng [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 14, 2008 8:31 PM
To: [EMAIL PROTECTED]
Cc: pgsql-performance@postgresql.org; David Crane
Subject: Re: [PERFORM] Avoid long-running transactions in a
long-runnings
tables is text with length of up to 900 characters. My
question is whether I could build PostgreSQL with NAMEDATALEN (which
controls the max size of Enums) equal to 900? Or do you see anything going
wrong then (like stability problems or whatever)?
Regards,
David
Hi Tom,
Thanks a lot for the tip! I will try this. You probably saved me a few days
of work!
Regards,
David
On Sat, Jul 26, 2008 at 11:44 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "David Andersen" <[EMAIL PROTECTED]> writes:
> > One of the side tables i
Thanks in advance for your help.
Regards,
David
On Sat, Jul 26, 2008 at 11:48 PM, David Andersen <[EMAIL PROTECTED]> wrote:
> Hi Tom,
>
> Thanks a lot for the tip! I will try this. You probably saved me a few days
> of work!
>
> Regards,
>
> David
>
>
sults for your
queries, would help here.
--
- David T. Wilson
[EMAIL PROTECTED]
he users while postgresql clusters the
table.
If you're inserting in order of the start_time column (which I was) then the
cluster should almost maintain itself (I think), provided you're not
updating or deleting; anyway, I'd assume that since it looks like a log
table.
David.
rows in the where clause. A bit of a hack but it changed a 2 min query into
one that ran in under a second.
Of course this is not helping with your problem but at least may trigger
some more feedback.
David.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Beha