I have a user who is trying to match overlapping duplicate phone info but
for different customer_ids.
The intended join condition could be expressed:
IF the intersection of the sets
{c.main_phone, c.secondary_phone}
and
{c1.main_phone, c1.secondary_phone}
is not empty
THEN join
EXCEPT where the inter
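A sketch of that condition in plain SQL (assuming a customers table; the
truncated EXCEPT clause is omitted):

    SELECT c.customer_id, c1.customer_id AS other_customer_id
    FROM customers c
    JOIN customers c1
      ON c.customer_id < c1.customer_id   -- different customers, each pair once
     AND (c.main_phone IN (c1.main_phone, c1.secondary_phone)
       OR c.secondary_phone IN (c1.main_phone, c1.secondary_phone));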
On Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe wrote:
> So do iostat or iotop show you if / where your disks are working
> hardest? Or is this CPU overhead that's killing performance?
>
Sorry for the delayed reply. I took a more detailed look at the query
plans from our problem query during t
>
> > Not so.
> >
> > This system has no defined temp_tablespace however spillage due to
> > sorting/hashing that exceeds work_mem goes to base/pgsql_tmp which we
> > have symlinked out to a local SSD drive.
>
> Which is also where temp tables are created.
>
This isn't true, at least in our enviro
On Mon, Aug 14, 2017 at 3:01 PM, Scott Marlowe wrote:
> On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel wrote:
> > This particular db is on 9.3.15. Recently we had a serious performance
> > degradation related to a batch job that creates 4-5 temp tables and 5
> > indexes.
g
the root cause.
Thanks,
Jeremy
Normally, I find that in these situations, it makes sense to index the
primary key of the table WHERE col is not null, because it will usually
cover the largest number of cases, and is much better than a two-value
boolean index, for example.
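For instance, a minimal sketch (table and column names hypothetical):

    CREATE INDEX foo_id_col_not_null_idx
        ON foo (id)
        WHERE col IS NOT NULL;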
On Thu, Jun 8, 2017 at 9:58 AM, Tom Lane wrote:
> Merl
I have found some examples of people tweaking the track_activity_query_size
parameter to various settings such as 4000, 1, 15000, but little discussion
as to the performance impact on memory usage.
What I don't have a good sense of is how significant this would be for a
high traffic system with rap
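For reference, a hedged sketch of checking and raising it (ALTER SYSTEM
assumes 9.4+, and the setting only takes effect after a restart):

    SHOW track_activity_query_size;                     -- default is 1024 bytes
    ALTER SYSTEM SET track_activity_query_size = 4096;  -- takes effect on restart

The shared-memory cost is roughly this many bytes per allowed backend, so even
15000 bytes across a few hundred connections is only a few MB.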
t method takes advantage of
partially-sorted input, be cheaper (by log(num-buckets)) to sort/uniq
each bucket separately (and it would parallelize, too).
--
Cheers,
Jeremy
--
Jeremy
On 22/11/2012 00:08, Craig Ringer wrote:
WITH
FENCE foo AS (SELECT ...),
bar AS (SELECT ...)
SELECT * FROM bar;
Are we fencing just foo? Or all expressions?
WITH foo AS (FENCED SELECT ...),
bar AS (SELECT ...),
SELECT ... ;
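(As it turned out much later, PostgreSQL 12 settled on per-CTE keywords rather
than a FENCE clause; a trivial example of that spelling:)

    WITH foo AS MATERIALIZED (SELECT 1 AS x),
         bar AS NOT MATERIALIZED (SELECT x FROM foo)
    SELECT * FROM bar;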
--
Jeremy
prospect of the planner/executor being taught to
merge each of those groups of three index scans,
to aid this sort of poor query?
--
Jeremy
files
which is nonetheless accounted to user-mode cpu time,
but some will be the plain inefficiency of the external
version having to effectively do work over many times
because it can't have a complete view of the problem at
hand at any one time.
--
Jeremy
realise you've had helpful answers by now, but that reads
as 16 hours of cpu time to me; mostly user-mode but with 6 minutes
of system-mode. 98% cpu usage for the 16 hours elapsed.
--
Jeremy
t autoanalyze hasn't been taught to
gather those. The manual command on the parent table does gather them,
though.
Is stats-gathering significantly more expensive than an FTS? Could an FTS
update stats as a matter of course (or perhaps only if enough of the table has changed)?
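That is, the manual command in question (parent table name hypothetical):

    ANALYZE parent_table;  -- collects the inheritance-tree statistics autoanalyze skips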
--
Jeremy
still in the development and tuning
process, so I will do some analysis of the index stats to see if they are
indeed redundant.
Cheers,
Jeremy
From: Bob Lunney [bob_lun...@yahoo.com]
Sent: Friday, 1 April 2011 3:54 a.m.
To: pgsql-performance@postgresql.org;
d = 100)
I'm running PostgreSQL 9.0.2 on Ubuntu 10.04
Cheers
Jeremy
The row estimate is way off. Is autovacuum disabled?
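A quick way to check when the table was last (auto)analyzed (relname
hypothetical):

    SELECT last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'my_table';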
In normal circumstances, does locking a table in access exclusive mode improve
insert, update and delete performance on that table?
Is MVCC disabled or somehow has less work to do?
Cheers
Jeremy
ing estimate of such things.
--
Jeremy
oaded size has become "large".
--
Jeremy
Thanks heaps for the advice. I will do some benchmarks to see how long it takes
to cluster all of the database tables.
Cheers,
Jeremy
-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: Tuesday, 25 January 2011 1:02 p.m.
To: Jeremy Palmer; Tom Lane
Cc
Index Cond: ((_revision_created > 16) AND
(_revision_created <= 40))
Total runtime: 985.671 ms
Thanks heaps,
Jeremy
Thanks, that seems to make the query 10-15% faster :)
Cheers
jeremy
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Tuesday, 18 January 2011 9:24 a.m.
To: Jeremy Palmer
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Possible to improve query plan
It fits a Data Warehousing type application.
Apart from work_mem, my other parameters are pretty close to these numbers. I
had the work_mem down a little because I noticed some clients were getting out
of memory errors with large queries which involved lots of sorting.
Thanks
Jeremy
Here's the new plan with work_mem = 50MB:
http://explain.depesz.com/s/xwv
And here is another plan with work_mem = 500MB:
http://explain.depesz.com/s/VmO
Thanks,
Jeremy
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Monday, 17 January 2011 5:57 p.m.
To: Jeremy
5.872..15.872 rows=43258 loops=1)
Index Cond: ((_revision_created > 16) AND
(_revision_created <= 40))
Total runtime: 14359.747 ms
http://explain.depesz.com/s/qpL says that the bitmap heap scan is bad. Not sure
what to do about it.
Thanks,
Jeremy
-Original Message-
F
Index Cond: ((_revision_created > 16) AND
(_revision_created <= 40))
One thought I have is that maybe the
idx_crs_coordinate_revision_expired_created index could be used instead of
idx_crs_coordinate_revision_expired.
Does anyone have
esn't routinely account for all that info
per-process. I'd expect IBM to have equivalent
facilities.
--
Jeremy
istribution of response times, rather than "cached" vs.
not?
That a) avoids the issue of discovering what was a cache hit b) deals neatly
with
multilevel caching c) feeds directly into cost estimation.
Cheers,
Jeremy
is a question for the PostGIS guys and a quick test could tell me
anyway! My memory is that the GIST r-tree index is slow for points at the
moment, and that a good implementation of a kd-tree index over GIST is required
for better speed.
Regards,
Jeremy Palmer
Geodetic Surveyor
National Geo
is inlining does seem to be
happening...
At this stage I have a work around by putting the query into a plpgsql function
and using dynamic SQL. But it is still frustrating that the planner seems to be
working in a far from optimal fas
statistics?
Thanks,
Jeremy
On 01/11/2010 02:53 AM, Robert Haas wrote:
On Sun, Jan 10, 2010 at 9:04 AM, Jeremy Harris wrote:
Needing to use an external (on-disk) sort method, when taking
only 90MB, looks odd.
[...]
> Well, you'd need to have work_mem > 90 MB for that not to happen, and
very few people can affor
2 rows=1548520 width=338)
(actual time=0.022..2195.527 rows=1551923 loops=1)
Filter: (event_timestamp > (now() - '1 year'::interval))
Total runtime: 6407.377 ms
Needing to use an external (on-disk) sort method, when taking
only 90MB, looks odd.
- Jeremy
On 12/24/2009 05:12 PM, Richard Neill wrote:
Of course, with a server machine, it's nearly impossible to use mdadm
raid: you are usually compelled to use a hardware raid card.
Could you expand on that?
- Jeremy
On 12/02/2009 11:31 PM, Ashish Kumar Singh wrote:
While importing this dump into the database, I have noticed that initially
query response time is very slow, but it improves with time.
Any suggestions to improve performance after the dump is imported into the
database would be highly appreciated!
Ana
On 11/03/2009 07:16 PM, Subbiah Stalin-XCGF84 wrote:
All,
I'm trying to understand the free memory usage and why it falls below
17G sometimes and what could be causing it. Any pointers would be
appreciated.
r...@prod1 # prtconf
System Configuration: Sun Microsystems sun4u
Memory size: 32768 M
on queries to guide the automatic
creation of indices? Or to set up a partitioning scheme on a previously
monolithic table?
- Jeremy
are, in the SQL/Codd worldview?
Or, is there more to it?
I appreciate the "Simple Matter Of Programming" problem.
Thanks,
Jeremy
I'm attempting to implement full-text search and am torn between two techniques:
1) Create multiple GIN indexes on columns I'm going to search against and UNION
the results
or
2) Create one concatenated column GIN index consisting of the columns that will
be searched.
Is there any performance c
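A sketch of both techniques (table, column, and index names hypothetical):

    -- 1) one GIN index per column, UNION the matches
    CREATE INDEX docs_title_fts ON docs USING gin (to_tsvector('english', title));
    CREATE INDEX docs_body_fts  ON docs USING gin (to_tsvector('english', body));

    SELECT id FROM docs
     WHERE to_tsvector('english', title) @@ to_tsquery('english', 'foo')
    UNION
    SELECT id FROM docs
     WHERE to_tsvector('english', body) @@ to_tsquery('english', 'foo');

    -- 2) one GIN index over the concatenation (queries must use the same expression)
    CREATE INDEX docs_all_fts ON docs USING gin
        (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, '')));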
I believe this is exactly what is happening. I see that the TOP output lists a
large amount of VIRT & RES size being used, but the kernel does not report this
memory as being reserved and instead lists it as free memory or cached.
If this is indeed the case, how does one determine if a PostgreSQ
olumn "-/+ buffers/cache:". That shows 46Gb Free RAM.
I cannot be the only person that has asked this question.
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Saturday, August 15, 2009 10:25 AM
To: Jeremy Carroll
Cc: Scott Carey; pgsql-performance@postgresql.
[mailto:sc...@richrelevance.com]
Sent: Friday, August 14, 2009 3:38 PM
To: Jeremy Carroll; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Memory reporting on CentOS Linux
On 8/14/09 11:00 AM, "Jeremy Carroll"
wrote:
> I am confused about what the OS is reporting for memory usag
But the kernel can take back any of the cache memory if it wants to. Therefore
it is free memory.
This still does not explain why top is reporting ~9GB of resident
memory while free does not show that much physical memory in use.
On 8/14/09 2:43 PM, "Reid Thomps
I am confused about what the OS is reporting for memory usage on CentOS 5.3
Linux. Looking at the resident size of all postgres processes, the system
should be using around 30GB of physical RAM. I know that it states that it is
using a lot of
Scott Carey wrote:
Well, what does a revolution like this require of Postgres? That is the
question.
[...]
#1 Per-Tablespace optimizer tuning parameters.
... automatically measured?
Cheers,
Jeremy
be 50 rows returned then the estimates from the
planner are way out.
If that doesn't help, we'll need version info, and (if you can afford
the time) an "explain analyze"
Cheers,
Jeremy
your operations (and then run analyze)?
Cheers,
Jeremy
it though.
A thread maintaining a pool of assigned and cleared pg_clog pages, ahead
of the immediate need? Possibly another job for an existing daemon
thread.
- Jeremy
Bill Moran wrote:
This is a FAQ, it comes up on an almost weekly basis.
I don't think so. "where".
- select count(*) from gene_prediction_view where gene_ref = 523
Cheers,
Jeremy
Paul Lambert wrote:
"-> Merge Join
(cost=35.56..19732.91 rows=12 width=48) (actual time=0.841..2309.828
rows=206748 loops=1)"
I'm no expert, but in the interests of learning: why is the
rows estimate so far out for this join?
Tilmann Singer wrote:
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [20070728 21:05]:
Let's try putting the sort/limit in each piece of the UNION to speed them up
separately.
SELECT * FROM (
(SELECT * FROM large_table lt
WHERE lt.user_id = 12345
ORDER BY created_at DESC LIMIT 10) AS q1
UNION
(S
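The quoted snippet is cut off; presumably the second arm applies the same
ORDER BY/LIMIT to the other half of the original condition, and the outer
query re-sorts and re-limits. A reconstruction along those lines (everything
beyond large_table is a guess, not the original mail):

    SELECT * FROM (
      (SELECT * FROM large_table lt
       WHERE lt.user_id = 12345
       ORDER BY created_at DESC LIMIT 10)
      UNION
      (SELECT * FROM large_table lt
       WHERE lt.user_id IN (SELECT contact_id FROM contacts WHERE user_id = 12345)
       ORDER BY created_at DESC LIMIT 10)
    ) AS q
    ORDER BY created_at DESC LIMIT 10;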
asn't watching it *all* the time)
So - as far as I could tell it wasn't running.
On Thu, 18 Jan 2007 16:30:17 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile" <[EMAIL PROTECTED]> writes:
> > No tables have been vacuumed or analyzed t
> Unless it's just a bug, my only guess is that autovacuum may be getting
> busy at times (vacuuming large tables for example) and hasn't had a
> chance to even look at that table for a while, and by the time it gets
> to it, there have been tens of thousands of inserts. Does that sound
> pla
tables have been updated with tens of thousands of inserts
and the table has still not been analyzed (according to
pg_stat_user_tables). Does a scale factor of 0 cause the table to never
be analyzed? What am I doing wrong?
I'm using PG 8.2.1.
Thanks,
Jeremy
> I once had a query which would operate on a recordlist and
> see whether there were any gaps larger than 1 between consecutive
> primary keys.
Would you mind sharing the query you described? I am attempting to do
something similar now.
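One way to express that, which works even without window functions (table and
column names hypothetical):

    SELECT t.id + 1 AS gap_starts_after
    FROM recordlist t
    WHERE NOT EXISTS (SELECT 1 FROM recordlist t2 WHERE t2.id = t.id + 1)
      AND t.id < (SELECT max(id) FROM recordlist);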
Interesting - I haven't seen that tool before. I'll have to check it
out when I get a chance. Thanks!
On Wed, 17 Jan 2007 20:32:37 +0100, "Tomas Vondra" <[EMAIL PROTECTED]> said:
> > That's about 32% dead rows. Might be worth scheduling a vacuum full,
> > but it's not like I was afraid it migh
I'm going to reindex the table tonight.
Jeremy Haile
> It would be nice if the database could
> learn to estimate these values, as newer versions of Oracle do.
That would be really nice since it would take some of the guess work out
of it.
> Yes, cluster would rebuild the table for you. I wouldn't do anything too
> intrusive, run with the random
> That's about 32% dead rows. Might be worth scheduling a vacuum full,
> but it's not like I was afraid it might be. It looks to me like you
> could probably use a faster I/O subsystem in that machine though.
I'll try to schedule a full vacuum tonight. As far as I/O - it's using
SAN over fiber.
> How much memory does the box have?
2GB
> Yes, it takes up space
Well, I upped max_fsm_pages to 200 because vacuums were failing
with it set to 150. However, I'm now autovacuuming, which might be
keeping my fsm lower. I didn't realize that setting it too high had
negative effects, so
h better with a partitioned table setup.
Also, since I usually delete old data one day at a time, I could simply
drop the old day's partition. This would make vacuuming much less of an
issue.
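With the inheritance-style partitioning of that era, dropping a day is a
single cheap statement (child table name hypothetical):

    DROP TABLE my_data_20070110;  -- removes the old day's rows instantly, nothing to vacuum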
But I won't be making any changes immediately, so I'll continue to run
tests give
> I still keep wondering if this table is bloated with dead tuples. Even
> if you vacuum often if there's a connection with an idle transaction,
> the tuples can't be reclaimed and the table would continue to grow.
I used to vacuum once an hour, although I've switched it to autovacuum
now. It de
"comment";0.0052;29;7885;0.0219167
"archived";0;1;2;0.84623
"response_code";0.9942;4;3;0.905409
"transaction_source";0;4;2;0.983851
"location_dim_id";0;4;86;0.985384
"success";0;4;2;0.981072
Just curious - what does that tell us?
Jeremy Haile
worth trying to defragment the drive on a regular basis
in Windows?
Jeremy Haile
On Tue, 16 Jan 2007 16:39:07 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile" <[EMAIL PROTECTED]> writes:
> > Running PostgreSQL 8.2.1 on Win32. The query planner i
Running PostgreSQL 8.2.1 on Win32. The query planner is choosing a seq
scan over index scan even though index scan is faster (as shown by
disabling seqscan). Table is recently analyzed and row count estimates
seem to be in the ballpark.
Another tidbit - I haven't done a "vacuum full" ever, alth
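For anyone reproducing that comparison, the usual session-local test looks
like this (the query is a hypothetical stand-in):

    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM big_table
     WHERE created_at > now() - interval '1 day';
    RESET enable_seqscan;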
Well - whether or not MySQL's implementation of partitioning has some
deficiency, it sure is a lot easier to set up than PostgreSQL. And I
don't think there is any technical reason that setting up partitioning
on Postgres couldn't be very easy and still be robust.
On Thu, 11 Jan 2007 13:59:20 +01
do that usually to lower the scale factors? Is it
ever a good approach to lower the scale factor to zero and just set the
thresholds to a pure number of rows? (when setting it for a specific
table)
Thanks,
Jeremy Haile
endly syntax in the future similar
to MySQL partitioning support. Having first-class citizen support of
partitions would also allow some nice administrative GUIs and views to
be built for managing them.
Jeremy Haile
On Wed, 10 Jan 2007 15:09:31 -0600, "Jim C. Nasby" <[EMAIL PROTEC
Hey Jim -
Thanks for the feedback. The server has dual Xeons with HyperThreading
enabled - so perhaps I should try disabling it. How much performance
boost have you seen by disabling it? Of course, the bottleneck in my
case is more on the I/O or RAM side, not the CPU side.
Jeremy Haile
On
I really wish that PostgreSQL supported a "nice" partitioning syntax
like MySQL has.
Here is an example:
CREATE TABLE tr (id INT, name VARCHAR(50), purchased DATE)
PARTITION BY RANGE( YEAR(purchased) ) (
PARTITION p0 VALUES LESS THAN (1990),
PARTITION p1 VALUES LESS THAN (1995),
PARTITIO
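(PostgreSQL did eventually grow comparable first-class syntax: declarative
partitioning arrived in version 10. Roughly the same table in that spelling:)

    CREATE TABLE tr (id int, name varchar(50), purchased date)
        PARTITION BY RANGE (purchased);
    CREATE TABLE tr_p0 PARTITION OF tr
        FOR VALUES FROM ('1900-01-01') TO ('1990-01-01');
    CREATE TABLE tr_p1 PARTITION OF tr
        FOR VALUES FROM ('1990-01-01') TO ('1995-01-01');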
than once) Anyways - I'll let you know if something
similar happens again.
Thanks,
Jeremy Haile
On Wed, 10 Jan 2007 14:22:35 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile" <[EMAIL PROTECTED]> writes:
> > Another random idea - does Pos
like that. And of
course, if PostgreSQL doesn't cache query plans - this idea is bogus =)
On Wed, 10 Jan 2007 13:38:24 -0500, "Jeremy Haile" <[EMAIL PROTECTED]>
said:
> I'm pretty sure it didn't analyze in between - autovac is turned off
> and I ran the test
ased on an unrelated join condition.
If I ever get it to happen again, I'll be more careful and repost if it
is a real issue. Thanks for pointing me in the right direction!
On Wed, 10 Jan 2007 13:38:15 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile&quo
'2007-01-09 09:30:00'::timestamp without time zone))
Total runtime: 675.638 ms
On Wed, 10 Jan 2007 12:15:44 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile" <[EM
I have a query made by joining two subqueries where the outer query
performing the join takes significantly longer to run than the two
subqueries.
The first subquery runs in 600ms. The seconds subquery runs in 700ms.
But the outer query takes 240 seconds to run! Both of the two
subqueries onl
back on?
On Tue, 09 Jan 2007 19:02:25 +0100, "Florian Weimer" <[EMAIL PROTECTED]>
said:
> * Jeremy Haile:
>
> > I'd like any performance advice, but my main concern is the amount of
> > time vacuum/analyze runs and its possible impact on the overall DB
&
I am developing an application that has very predictable database
operations:
-inserts several thousand rows into 3 tables every 5 minutes. (table
contain around 10 million rows each)
-truncates and rebuilds aggregate tables of this data every 5 minutes.
(several thousand rows each)
-regu
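A minimal sketch of that truncate-and-rebuild step (table and column names
hypothetical):

    BEGIN;
    TRUNCATE agg_by_hour;
    INSERT INTO agg_by_hour (bucket, n)
    SELECT date_trunc('hour', created_at), count(*)
    FROM raw_rows
    GROUP BY 1;
    COMMIT;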
I'm using 8.2. I don't know when I'll get a chance to run my own
benchmarks. (I don't currently have access to a Windows and Linux
server with similar hardware/configuration) But when/if I get a chance
to run them, I will post the results here.
Thanks for the feedback.
Jeremy
s of connections is
not a huge issue. However - it does have very large tables and
regularly queries and inserts into these tables. I insert several
million rows into 3 tables every day - and also delete about the same
amount.
On Thu, 04 Jan 2007 00:18:23 +0100, "Magnus Hagander"
Thanks for the recommendations. I wasn't familiar with those packages!
On Thu, 4 Jan 2007 00:46:32 +0100, "Dimitri Fontaine" <[EMAIL PROTECTED]>
said:
> On Thursday 4 January 2007 00:18, Magnus Hagander wrote:
> > But to get a good answer on if the difference is
> > significant enough to matter,
I am sure that this has been discussed before, but I can't seem to find
any recent posts. (I am running PostgreSQL 8.2)
I have always ran PostgreSQL on Linux in the past, but the company I am
currently working for uses Windows on all of their servers. I don't
have the luxury right now of running
> So, on a 4 Gig machine you could divide 1G (25%) by the total possible
> connections, then again by the average number of sorts you'd expect per
> query / connection to get an idea.
Thanks for the advice. I'll experiment with higher work_mem settings,
as I am regularly doing sorts on large da
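Worked through, that rule of thumb gives, for example: 25% of 4 GB = 1 GB;
with 100 possible connections and an average of 2 sorts per query,
1 GB / 100 / 2 ≈ 5 MB as a starting point for work_mem.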
iety of factors
- but I'd love some more advice on what good rule-of-thumb starting
points are for experimentation and how to evaluate whether the values
are set correctly. (in the case of temp_buffers and work_mem especially)
On Tue, 02 Jan 2007 18:49:54 +0000, "Richard Huxton"
sai
:19 +, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > I'm curious what parameters you guys typically *always* adjust on new
> > PostgreSQL installs.
>
> > The parameters that I almost always change when installing a new system
> > is shared_buf
I'm curious what parameters you guys typically *always* adjust on new
PostgreSQL installs.
I am working with a database that contains several large tables (10-20
million) and many smaller tables (hundreds of rows). My system has 2 GB
of RAM currently, although I will be upping it to 4GB soon.
More specifically, you should set the noatime,data=writeback options in
fstab on ext3 partitions for best performance. Correct?
> it doesn't really belong here but ext3 has
> data=journal (both data and metadata journaled)
> data=ordered (metadata journaled, data written before metadata; the default)
> data=writeback (only metadata journaled)
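A hypothetical fstab entry combining the two options (device and mount point
are placeholders; whether data=writeback is safe enough for a data partition
was debated, so test before trusting it):

    /dev/sdb1   /var/lib/postgresql   ext3   noatime,data=writeback   0   2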
> > Once you free some space on the data partition and restart, you should
> > be good to go --- there will be no loss of committed transactions, since
> > all the operations are in pg_xlog. Might take a little while to replay
> > all that log though :-(
>
> Amazing that all works. What I did no
(roughly 80% of the rows). The transaction ran for a long
time, and I assume that is what caused the pg_xlog to fill up.
On Fri, 22 Dec 2006 17:36:39 +, "Simon Riggs"
<[EMAIL PROTECTED]> said:
> On Fri, 2006-12-22 at 12:30 -0500, Jeremy Haile wrote:
> > The archive_status
3000ED
12/18/2006 08:52 PM archive_status
28 File(s)  469,762,048 bytes
3 Dir(s) 10,206,756,864 bytes free
On Fri, 22 Dec 2006 17:02:43 +, "Simon Riggs"
<[EMAIL PROTECTED]> said:
> On Fri, 2006-12-22 at 11:52 -0500, Jeremy Hail
. It seems like it can vary considerably depending on
how intensive your current transactions are. Is there a way to
determine a maximum?
On Fri, 22 Dec 2006 11:06:46 -0500, "Jeremy Haile" <[EMAIL PROTECTED]>
said:
> I created a 10GB partition for pg_xlog and ran out of disk s
I created a 10GB partition for pg_xlog and ran out of disk space today
during a long running update. My checkpoint_segments is set to 12, but
there are 622 files in pg_xlog. What size should the pg_xlog partition
be?
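For reference, the 8.x rule of thumb: pg_xlog normally holds at most about
2 * checkpoint_segments + 1 segments of 16 MB each, so checkpoint_segments = 12
should mean (2*12 + 1) * 16 MB = 400 MB. The 622 files seen here are
622 * 16 MB, roughly 9.7 GB, which is exactly why the 10GB partition filled;
WAL generation outrunning checkpoints, or archiving falling behind, can push
it far past the rule-of-thumb figure.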
Postmaster is currently not starting up (critical for my organization)
and re
different plan?
On Tue, 19 Dec 2006 20:02:35 +, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > Here's the query and explain analyze using the result of the sub-query
> > substituted:
> >
> > QUERY
> > explain analyze select min(nlogid) as
Makes sense. It is NOT executing the subquery more than once, is it?
On Tue, 19 Dec 2006 20:02:35 +, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > Here's the query and explain analyze using the result of the sub-query
> > substituted:
> >
> > Q
t the best at
interpreting the explains. Why is this explain so much simpler than the
other query plan (with the subquery)?
On Tue, 19 Dec 2006 18:23:06 +, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > Here is the explain analyze output:
>
> Well, the row estimate
NOT NULL) AND (nlogid > $1))
Total runtime: 156589.605 ms
On Tue, 19 Dec 2006 16:31:41 +, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > I have the following query which performs extremely slow:
> > select min(nlogid) as start_nlogid,
> >
easiest to read and wish
it performed well.
Jeremy Haile
Ingres R3's built-in clustering
support with a SAN, but am interested to know other people's experiences
before we start toying with this possibility. Any experience with the
Ingres support from Computer Associates? Good/bad?
Jeremy
I am running a query that joins against several large tables (~5 million
rows each). The query takes an extremely long time to run, and the
explain output is a bit beyond my level of understanding. It is an
auto-generated query, so the aliases are fairly ugly. I can clean them
up (rename them)
wise would be
appreciated!
Thanks for all of the responses!
On Wed, 15 Feb 2006 14:53:28 -0500, "Ron" <[EMAIL PROTECTED]> said:
> At 11:21 AM 2/15/2006, Jeremy Haile wrote:
> >We are a small company looking to put together the most cost effective
> >solution for our pr