32K
L1i cache:            32K
L2 cache:             256K
L3 cache:             12288K
NUMA node0 CPU(s):    0,1
2 users, load average: 0.00, 0.12, 0.37
Please see the following for the explain analysis: http://explain.depesz.com/s/I3SL
I'm trying to understand why I'm getting the yellow, orange, and red on the inclusive, and the yellow on the exclusive (referring to the explain.depesz.com/s/I3SL page). I'm relatively new to PostgreSQL, but I've been an Oracle DBA for some time. I suspect the I/O may be dragging, but I don't know how to dig that information out from here. Please point out anything else you can decipher from this.
Thanks,
John
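For digging the I/O information out: depesz colors a node yellow, orange, or red simply when its inclusive or exclusive time is a large share of total runtime, so the colored nodes are where the time goes, not errors in themselves. A minimal sketch of one way to see the I/O component, assuming a server new enough (9.0+) for the BUFFERS option and using a stand-in query:

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM orders WHERE created > now() - interval '1 day';
-- "shared read=N" on a node counts blocks fetched from outside shared_buffers;
-- large read counts on the slow (red/orange) nodes suggest the plan is I/O-bound.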
I don't know how to do it on Linux, but you should be able
to change TIME_WAIT to a shorter value. For the archives, here is a
pointer on changing TIME_WAIT on Windows:
http://www.winguides.com/registry/display.php/878/
John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
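For Linux, a minimal sketch, with the caveat that TIME_WAIT itself is hard-coded at 60 seconds in the kernel; these are the usual related sysctls, and whether they are appropriate depends on the workload:

sysctl -w net.ipv4.tcp_tw_reuse=1      # allow new outbound connections to reuse TIME_WAIT sockets
sysctl -w net.ipv4.tcp_fin_timeout=30  # shorten the FIN-WAIT-2 timeout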
character varying(2000) |
fulfillment_status_id | numeric(38,0) |
Indexes:
"lead_requests_pkey" primary key, btree (id)
"lead_requests_contact_id_idx" btree (contact_id)
"lead_requests_request_id_idx" btree (request_id)
Check constraints:
"
000  # in milliseconds
#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each
#---------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#---------------------------------------------------------------------------
# - Previous Postgres Versions -
#add_missing_from = true
#re
Dennis,
On Fri, 01 Jul 2005, Dennis Bjorklund wrote:
> On Thu, 30 Jun 2005, John Mendenhall wrote:
>
> > Our setting for effective_cache_size is 2048.
> >
> > random_page_cost = 4, effective_cache_size = 2048 time approximately
> > 4500ms
> > random
--
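A sketch of the kind of per-session experiment being compared above; the query is a stand-in, and SET only affects the current session:

SET random_page_cost = 2;
SET effective_cache_size = 20000;   -- counted in 8kB pages on this era of PostgreSQL
EXPLAIN ANALYZE SELECT count(*) FROM lead_requests WHERE contact_id = 12345;
-- then repeat with other values and compare plans and timings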
There is definitely a difference in the query plans.
I am guessing this difference is the cause of the performance decrease.
However, nothing was changed in the postgresql.conf file.
I may have run something in the psql explain analyze session
a week ago, but I can't figure out what I changed.
So, the bo
On Tue, 19 Jul 2005, John Mendenhall wrote:
> I tuned a query last week to obtain acceptable performance.
> Here is my recorded explain analyze results:
>
> LOG: duration: 826.505 ms statement: explain analyze
> [cut for brevity]
>
> I rebooted the database machine la
can get this to repeat consistently?
Please let me know if any of you have any pointers as to
the cause of the different query plans.
Thank you very much in advance for any pointers you can provide.
JohnM
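One way to rule out a setting that silently changed between sessions, assuming 8.0 or later where pg_settings carries a source column:

SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default';
-- anything with source = 'session' (or an unexpected config-file value)
-- would explain a plan that differs from a fresh connection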
On Tue, 19 Jul 2005, John Mendenhall wrote:
> I tuned a query last week to obt
query is slow.
My second and more important question is, does anyone have
any ideas or suggestions as to how I can increase the speed
for this query?
Things I have already done: modified the joins and conditions
so it starts with smaller tables, thus the join set is smaller,
modify the
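A sketch of pinning the written join order so the smaller tables really are joined first; the table names are hypothetical, and join_collapse_limit exists from 7.4 on:

SET join_collapse_limit = 1;   -- planner keeps JOINs in the order written
SELECT count(*)
FROM small_dim s
JOIN mid_fact m ON m.s_id = s.id
JOIN big_fact b ON b.m_id = m.id;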
On Sat, 20 Aug 2005, John Mendenhall wrote:
> I need to improve the performance for the following
> query.
I have run the same query in the same database under
different schemas. Each schema has pretty much the same
tables and indices. One has an extra backup table and
an extra index whi
open lead_request.
Would it be best to attempt to rewrite it for IN?
Or, should we try to tie it in with a join? I would
probably need to GROUP so I can just get a count of those
contacts with open lead_requests. Unless you know of a
better way?
Thanks for your assistance. Thi
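A sketch of both shapes, reusing the lead_requests columns shown earlier; treating a NULL fulfillment_status_id as "open" is an assumption:

-- join + count form
SELECT count(DISTINCT lr.contact_id)
FROM lead_requests lr
WHERE lr.fulfillment_status_id IS NULL;

-- EXISTS form: count contacts having at least one open request
SELECT count(*)
FROM contacts c
WHERE EXISTS (SELECT 1 FROM lead_requests lr
              WHERE lr.contact_id = c.id
                AND lr.fulfillment_status_id IS NULL);

On releases before 7.4, IN (subselect) planned poorly, which is why EXISTS or an explicit join was usually the suggestion.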
-> Seq Scan on partners p (cost=0.00..24.35 rows=435 width=30) (actual
time=0.039..9.383 rows=435 loops=1)
Total runtime: 3241.139 ms
(43 rows)
-
The DISTINCT ON condition was about the same amount of time,
statistically. Removing the DISTINCT entirely only gave a
very s
However, what is the max number of database I can create before
performance goes down?
I know I'm not directly answering your question, but you might want to
consider why you're splitting things up into different logical
databases. If security is a big concern, you can create different
da
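A minimal sketch of schema-per-application separation inside one database; all names are hypothetical:

CREATE SCHEMA app_a;
CREATE USER user_a PASSWORD 'secret';
REVOKE ALL ON SCHEMA app_a FROM PUBLIC;      -- keep other users out
GRANT USAGE ON SCHEMA app_a TO user_a;
ALTER USER user_a SET search_path = app_a;

This keeps one cluster (one shared_buffers pool, one set of background processes) while still walling the applications off from each other.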
above
-shmmax - how large dare I set this value on dedicated postgres servers?
-checkpoint_segments - this is crucial as one of the server is
transaction heavy
-vacuum_cost_delay
Of course some values can only be estimated after the database has been fed
data and queries have been run in a production l
[http://www.enterprisedb.com/documentation/kernel-resources.html]
// John
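A hedged starting point only; the numbers below are placeholders to be sized against actual RAM and write load, not recommendations:

# /etc/sysctl.conf
kernel.shmmax = 1073741824        # 1GB; must exceed shared_buffers plus overhead

# postgresql.conf
checkpoint_segments = 32          # more WAL between checkpoints for the transaction-heavy server
vacuum_cost_delay = 10            # milliseconds; throttles vacuum I/O

As noted above, the real values only fall out of watching a production-like load.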
Hello All
I sent this message to the admin list and it never got through so I
am trying the performance list.
We moved our application to a new machine last night. It is a Dell
PowerEdge 6950, 2x dual-core AMD Opteron 8214 2.2GHz, 8GB memory. The
machine is running R
What about kernel buffers on RHEL4?
Thanks
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Frost
Sent: Thursday, April 05, 2007 3:24 PM
To: John Allgood
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dave Cramer
Sent: Thursday, April 05, 2007 4:01 PM
To: John Allgood
Cc: 'Jeff Frost'; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server
On 5-Apr-07, at 3:33 PM, John Allgood wrote:
hearing about what other people on the list have their kernel tuned to.
Best Regards
John Allgood
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dave Cramer
Sent: Thursday, April 05, 2007 4:27 PM
To: John Allgood
Cc: 'Jeff Frost'; pg
Seq Scan on sequence_alignment sa
(cost=1.00..110379294.60 rows=467042560
width=4)
15 record(s) selected [Fetch MetaData: 0/ms] [Fetch Data: 0/ms]
Thanks in advance!
John Major
sequence_alignment)
group by sf.library_id, fio.clip_type
The index is used... but the cost gets worse!
it goes from:
11831119
-TO-
53654888
Actually... The new query executes in ~ 15 minutes... which is good
enough for me for now.
Thanks Nis!
john
Heikki Linnakangas wrote:
John Major wrote
=16
John
Nis Jørgensen wrote:
John Major skrev:
I am trying to join three quite large tables, and the query is
unbearably slow(meaning I can't get results in more than a day of
processing).
I've tried the basic optimizations I understand, and nothing has
improved the execute spe
Dear PostgreSQL Creators, I am frequently using PostgreSQL server to manage
data, but I am now stuck with a problem of deleting large objects, namely, it
works too slowly. E.g., deleting 900 large objects of 1 MB each takes around
2.31 minutes. This dataset is not the largest one which I am w
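Two things that usually help with this, sketched with a hypothetical table docs(lo_oid oid) holding the object OIDs: unlink all the objects in one transaction instead of one transaction per object, and use the contrib utility vacuumlo for orphans. Per-object commits pay the fsync cost 900 times, which alone can account for minutes.

BEGIN;
SELECT lo_unlink(lo_oid) FROM docs;   -- one commit for all 900 objects
COMMIT;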
Ah, you're right. Thanks Hannu, that's a good solution.
Hannu Krosing wrote:
On Fri, 2010-04-02 at 16:28 -0400, Beaver, John E wrote:
...
I know that the query used here could have been a COPY statement, which I assume would
be better-behaved, but I'm more conc
entirely to local memory before printing to
standard out.
I think it grabs the whole result set to calculate the display column
widths. I think there is an option to tweak this, but I don't remember which;
have a look at the psql commands (\?), formatting section.
--
John E. B
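The psql variable in question is FETCH_COUNT (available from psql 8.2 on); a sketch:

\set FETCH_COUNT 1000
-- psql now fetches and prints 1000 rows at a time through a cursor instead
-- of buffering the entire result set to compute column widths

Column widths are then computed per chunk, so alignment can vary down the page, but memory use stays flat.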
tions I get:
postgres=# show debug_assertions;
 debug_assertions
------------------
 on
(1 row)
Can you let us know when the corrected packages have become available?
Regards,
John
output to make sure the BBU is enabled and the cache is turned on
for the raid array u0 or u1 ...?
--
-- rouilj
John Rouillard System Administrator
Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111
ne plpgsql functions to
UPDATE or DELETE the leaf tables directly, but using such an interface isn't
terribly elegant.
I therefore tried writing the plpgsql functions for UPDATE and DELETE anyway,
with the idea of linking to a TRIGGER on the parent ptest table. This didn't
On 12/3/10 10:20 PM, Tom Lane wrote:
> John Papandriopoulos writes:
>> I've found that a k-ary table inheritance tree works quite well to
>> reduce the O(n) CHECK constraint overhead [1] in the query planner
>> when enabling partition constraint exclusion.
>
>
On 12/4/10 8:42 AM, Tom Lane wrote:
John Papandriopoulos writes:
I've recreated the same example with just one parent table, and 4096 child
tables.
SELECT query planning is lightning fast as before; DELETE and UPDATE cause my
machine to swap.
What's different about DELETE
you use the "ONLY" keyword,
they work again: see my original post of this thread. In that case, the
application SQL still retains some simplicity. On this topic, I think there's
quite a bit of confusion and updates to the documentation would help greatly.
John
s at any level
in the inheritance hierarchy. You've been a great help in improving my
understanding of PostgreSQL inheritance.
Best,
John
you're using INSERT triggers, you'd want to make sure your plpgsql
function is fast: I'm partitioning by power-of-two, so I can right-shift
n bits to quickly compute the insertion table name, rather than using an
if-else-if chain.
John
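A minimal sketch of that power-of-two routing for 9.0-era plpgsql; the parent name ptest follows the thread, while the key column and the 10-bit shift are assumptions:

CREATE OR REPLACE FUNCTION ptest_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- right-shift the key to compute the child table number directly,
    -- instead of walking an IF/ELSIF chain over thousands of tables
    EXECUTE 'INSERT INTO ptest_' || (NEW.id >> 10) || ' SELECT ($1).*'
        USING NEW;
    RETURN NULL;   -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ptest_insert
    BEFORE INSERT ON ptest
    FOR EACH ROW EXECUTE PROCEDURE ptest_insert_trigger();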
whether the inheritance_planner(...) can be avoided if the
rowtypes of children are the same as the parent? (I'm not yet sufficiently
familiar with the source to determine on my own.) If that's the case, is there
a simple test (like cardinality of columns) that can be used to differentiate
On 12/6/10 10:03 AM, Tom Lane wrote:
> John Papandriopoulos writes:
>> I am still wondering whether the inheritance_planner(...) can be avoided if
>> the rowtypes of children are the same as the parent?
>
> Possibly, but it's far from a trivial change. The difficul
unexpected power failure.)
When a write() to a Fusion-io device has been acknowledged, the data is
guaranteed to be stored safely. This is a strict requirement for any
enterprise-ready storage device.
Thanks,
John Cagle
Fusion-io, Inc.
need to test your raid card batteries (nothing like having a
battery with only a 6 hour runtime when it takes you a couple of days
MTTR), can your database app survive with that low a commit rate? As
you said you are expecting something almost 4-5x faster with 7200 rpm
disks.
--
much of the table
and our application will not work. What are we doing wrong?
Cheers now,
John
> >> When we 'EXPLAIN' this query, PostgreSQL says it is using the index
> >> idx_receiveddatetime. The way the application is designed means
that
> >> in virtually all cases the query will have to scan a very long way
> >> into idx_receiveddatetime to find the first record where userid =
> 311369
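The usual fix for this shape of query is an index that leads with userid, so the scan starts at the right rows instead of wading through idx_receiveddatetime; the table name here is an assumption:

CREATE INDEX idx_userid_received ON messages (userid, receiveddatetime);
-- equality on the leading column plus a range or ORDER BY on the second
-- lets the planner jump straight to userid = 311369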
between separate browser requests
(and to give it time to live)?
Sure, but you need to add a lot of connection management to do this.
You would need to keep track of the cursors and make sure a
subsequent request uses the right connection.
John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
.vcf
cell: 512-569-9461
--
John E. Vincent
[EMAIL PROTECTED]
On 6/13/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Tue, Jun 13, 2006 at 05:40:58PM -0400, John Vincent wrote:
> Maybe from a postgresql perspective the cpus may be useless but the memory
> on the pSeries can't be beat. We've been looking at running our warehouse
> (P
ing our DBAs for not realizing the 18GB instance memory thing though ;)
--
Jim C. Nasby, Sr. Engineering Consultant   [EMAIL PROTECTED]
Pervasive Software   http://pervasive.com   work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461
--
John E. Vincent
[EMAIL PROTECTED]
On 6/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> -- this is the third time I've tried sending this and I never saw it get
> through to the list. Sorry if multiple copies show up.
>
> Hi all,
BUNCHES SNIPPED
> work_mem
Out of curiosity, does anyone have any idea what the ratio of actual datasize to backup size is if I use the custom format with -Z 0 compression or the tar format? Thanks.
On 6/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> -- this is
use their own logins when developing new reports. Only when they get published do they convert to the Actuate user.
--
John E. Vincent
[EMAIL PROTECTED]
time gzip -6 claDW_PGSQL.test.bak
real    3m4.360s
user    1m22.090s
sys     0m6.050s
Which is still less time than it would take to do a compressed pg_dump.
On 6/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
How long does gzip take to compress this backup?
On Wed, 2006-06-14 at 15:59, John V
On 6/14/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
> Out of curiosity, does anyone have any idea what the ratio of actual
> datasize to backup size is if I use the custom format with -Z 0 compression
> or the tar format?
I'm not a programmer so understanding the optimizer code is WAY beyond my limits.
My question, that I haven't seen answered elsewhere, is WHAT things can affect the choice of an index scan over a sequence scan. I understand that sometimes a sequence scan is faster and that you still have to get the
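A sketch of the standard experiment for seeing why the planner prefers one over the other; the query is a stand-in:

SET enable_seqscan = off;   -- for testing only, never in production
EXPLAIN ANALYZE SELECT * FROM mytable WHERE col = 'myval';
-- if the forced index scan is really slower, the planner was right;
-- if it is faster, look at random_page_cost, effective_cache_size, and
-- whether ANALYZE has run recently enough for sane row estimates

The main inputs to the choice are the estimated fraction of the table visited (from the statistics), random_page_cost, and the physical correlation of the column.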
On 6/15/06, Mark Lewis <[EMAIL PROTECTED]> wrote:
DB2 can satisfy the query using only indexes because DB2 doesn't do MVCC. Although MVCC is generally a win in terms of making the database easier to use and applications less brittle, it also means that the database
must inspect the visibility informat
On 6/15/06, Mark Lewis <[EMAIL PROTECTED]> wrote:
Unfortunately SUM is in the same boat as COUNT; in order for it to return a meaningful result it must inspect visibility information for all of the rows.
-- Mark
Well this is interesting news to say the least. We went with PostgreSQL for our warehouse
Any suggestions? FYI the original question wasn't meant as a poke at comparing PG to MySQL to DB2. I'm not making any value judgements either way. I'm just trying to understand how we can use it the best way possible.
If anyone from the bizgres team is watching, have they done any work in this area
On 6/15/06, Tim Allen <[EMAIL PROTECTED]> wrote:
Is that expected performance, anyone? It doesn't sound right to me. Does anyone have any clues about what might be going on? Buggy kernel drivers? Buggy kernel, come to think of it? Does a SAN just not provide
adequate performance for a large database?
decibel=# create index test on i ( sum(i) );
ERROR: cannot use aggregate function in index expression
decibel=#
BTW, there have been a number of proposals to negate the effect of not having visibility info in indexes. Unfortunately, none of them have come to fruition yet, mostly because it's a very
queue_depth off on each host. Contrast that with the Turbo controller option which does 1024 I/Os per sec, and I can duplicate what I have now or add a second LUN per host. I can't even find how much our DS6800 supports.
Thanks for all the suggestions, John. I'll keep trying to follow some of
I'd have to agree with you about the specific SAN/setup you're working
with there. I certainly disagree that it's a general property of SANs, though. We've got a DS4300 with FC controllers and drives; hosts are generally dual-controller load-balanced and it works quite decently.
How are you guys
Any suggestions? FYI the original question wasn't meant as a poke at comparing PG to MySQL to DB2. I'm not making any value judgements either way. I'm just trying to understand how we can use it the best way possible.
Actually we just thought about something. With PG, we can create an index that is
Hello,
I'm working out specs for a new database server to be
purchased for our organization. The applications the
server will handle are mainly related to network
operations (monitoring, logging, statistical/trend
reports, etc.). Disk I/O will be especially high with
relation to processing netwo
> The thing I would ask is would you not be better
> with SAS drives?
>
> Since the comments on Dell, and the highlighted
> issues I have been
> looking at HP and the the Smart Array P600
> controller with 512 BBWC.
> Although I am looking to stick with the 8 internal
> disks, rather than
>
Chromosome help a little, but I can't think of a way to
avoid full table scans for each of the position range queries.
Any advice on how I might be able to improve this situation would be
very helpful.
Thanks!
John
pair in my auto-generated SQL into a double
pair of "(column = value and column is not null)" It's redundant and looks
pretty silly, IMO.
Thanks for your consideration :)
-John
found is large compared to the number of
total entries (I don't know the percentages, but probably >30-40%), then
it is faster to just load the data and scan through it, rather than
doing a whole bunch of indexed lookups.
John
=:->
Tom Lane wrote:
John Meinel <[EMAIL PROTECTED]> writes:
... However, if I try to
bundle this query up into a server side function, it runs very slow (10
seconds). I'm trying to figure out why, but since I can't run EXPLAIN
ANALYZE inside a function, I don't really kno
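One workaround from this era: plpgsql plans its statements like a prepared statement with unknown parameters, so you can reproduce the slow plan outside the function; names here are hypothetical:

PREPARE myget(int) AS
    SELECT * FROM mytable WHERE col = $1 LIMIT 1;
EXPLAIN ANALYZE EXECUTE myget(3);
-- the plan shown is the generic one chosen without knowing $1, which is
-- usually why the function is slower than the same query with a literal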
Richard Huxton wrote:
John Meinel wrote:
So notice that when doing the actual select it is able to do the index
query. But for some reason with a prepared statement, it is not able
to do it.
Any ideas?
In the index-using example, PG knows the value you are comparing to. So,
it can make a
Tom Lane wrote:
[ enlarging on Richard's response a bit ]
John Meinel <[EMAIL PROTECTED]> writes:
jfmeinel=> explain analyze execute myget(3);
QUERY PLAN
Seq
Like it is thinking it is looking
at the time of day when it plans the queries (hence why so many rows),
but really it is looking at the date. Perhaps a cast is in order to make
it work right. I don't really know.
Interesting problem, though.
John
=:->
ch where botnumber = '1-7'
limit 1);
Just some thoughts about where *I've* found performance to change
between functions versus raw SQL.
You probably should also mention what version of postgres you are
running (and possibly what your hardware is)
John
=:->
Rod Dutton wrote:
Thanks John,
I am running Postgres 7.3.7 on a Dell PowerEdge 6600 Server with Quad Xeon
2.7GHz processors with 16GB RAM and 12 x 146GB drives in Raid 10 (OS, WAL,
Data all on separate arrays).
You might want to think about upgrading to 7.4, as I know it is better at
quite a few
for any
value of botnum. I would have hoped that using LIMIT 1 would have fixed
that.
John
=:->
-> Index Scan using finst_t_store_id_idx on finst_t
(cost=0.00..140417.22 rows=88217 width=4) (actual time=0.000..0.000
rows=1 loops=1)
Index Cond: (store_id = 539960)
Total runtime: 0.000 ms
Could being aware of LIMIT be added to the planner? Is there a better
way to check for existe
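For a pure existence test, a hedged alternative using the names from the plan above:

SELECT EXISTS (SELECT 1 FROM finst_t WHERE store_id = 539960);
-- returns true/false; with an index on store_id the subplan can stop
-- at the first matching row, much like the LIMIT 1 form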
Tom Lane wrote:
John Meinel <[EMAIL PROTECTED]> writes:
I was looking into another problem, and I found something that surprised
me. If I'm doing "SELECT * FROM mytable WHERE col = 'myval' LIMIT 1.".
Now "col" is indexed, by mytable has 500,000 rows
Curt Sampson wrote:
On Sun, 24 Oct 2004, John Meinel wrote:
I was looking into another problem, and I found something that surprised
me. If I'm doing "SELECT * FROM mytable WHERE col = 'myval' LIMIT 1.".
Now "col" is indexed...
The real purpose of this query i
e useful.
(That's one thing with DB tuning. It seems to be very situation
dependent, and it's hard to plan without a real dataset.)
John
=:->
something that was universally agreed
upon. Or maybe the man-pages were all wrong, and only got updated recently.
John
=:->
ORDER BY col
LIMIT 1;
regards,
Jaime Casanova
Thanks for the heads up. This actually worked. All queries against that
table have turned into index scans instead of sequential.
John
=:->
this format:

create function test(int) returns int as '
declare
    x alias for $1;
    y int;
begin
    execute ''select ... from ... where id = ''
        || quote_literal(x)
        || '' limit ...''
        into y;
    return y;
end;
' language plpgsql;
I think that will point you in the right direction.
John
=:->
patrick ~ wrote:
Hi John,
Thanks for your reply and analysis.
No problem. It just happens that this is a problem we ran into recently.
--- John Meinel <[EMAIL PROTECTED]> wrote:
patrick ~ wrote:
[...]
Hmm... The fact is I am selecting (in this example anyway) over all
values in pkk_offer
patrick ~ wrote:
Hi John,
Thanks for your reply and analysis.
--- John Meinel <[EMAIL PROTECTED]> wrote:
patrick ~ wrote:
[...]
pkk=# explain analyze execute pkk_01(241 );
QUERY PLAN
One other thing that I just thought of. I think it is actually possible
to add an index on a fu
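For reference, an expression (functional) index looks like the following; pkk_offer is from the thread, the column is an assumption:

CREATE INDEX pkk_offer_lower_idx ON pkk_offer (lower(offer_name));
-- matches queries whose WHERE clause uses lower(offer_name) = '...'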
accomplish the best performance, or
would it be better to use all the drives in one huge raid five with a
couple of failovers. I have looked around in the archives and found some
info but I would like to hear about some of the configs other people are
running and how they have them setup.
Thanks
John Allgood
Thanks
John Allgood - ESC
Tom Lane wrote:
"Bruno Almeida do Lago" <[EMAIL PROTECTED]> writes:
Is there a real limit for max_connections? Here we've an Oracle server with
up to 1200 simultaneous conections over it!
[ shrug... ] If your machine has the beef to run 1200 simulta
into one cluster. Also I might mention that I am running
clustering using Redhat Clustering Suite.
John Arbash Meinel wrote:
John Allgood wrote:
I think maybe I didn't explain myself well enough. At most we will
service 200-250 connections across all the 9 databases mentioned. The
database w
correctly, and will restart them automatically if
they fail.
John Arbash Meinel wrote:
John Allgood wrote:
This some good info. The type of attached storage is a Kingston 14 bay
Fibre Channel Infostation. I have 14 36GB 15,000 RPM drives. I think
the way it is being explained that I should build a
pg_xlog for database group 2
MIRROR5 - Database Group 3
MIRROR6 - pg_xlog for database group 3
This will take about 12 disk drives. I have a 14-bay storage unit, and I can
use two of the drives for hot spares.
Thanks
John Allgood - ESC
Systems Administrator
I can ask Postgres to
report an Out-of-memory the moment it tries to consume greater than a
certain percentage of memory (instead of letting it hit the swap and
eventually die or thrash) ?
Thanks!
- John
> > computation was being pipelined with the tuples returned from
> > generate_series().
>
> It's pipelined either way. But int8 is a pass-by-reference data type,
> and it sounds like we have a memory leak for this case.
Thanks for your reply. How easy is it to fix this? Which por
Hi, I've started my first project with Postgres (after several years of
using Mysql), and I'm having an odd performance problem that I was
hoping someone might be able to explain the cause of.
My query
- select count(*) from gene_prediction_view where gene_ref = 523
- takes 26 sec
Thanks a lot, all of you - this is excellent advice. With the data
clustered and statistics at a more reasonable value of 100, it now
reproducibly takes even less time - 20-57 ms per query.
After reading the section on "Statistics Used By the Planner" in the
manual, I was a little concerned th
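A sketch of the two changes described, using the names from the query above; the index name is an assumption:

ALTER TABLE gene_prediction_view ALTER COLUMN gene_ref SET STATISTICS 100;
ANALYZE gene_prediction_view;
CLUSTER gene_prediction_view USING gene_ref_idx;   -- 8.3 syntax; rewrites the table in index order
ANALYZE gene_prediction_view;                      -- refresh correlation stats after clustering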
er in a few hours to see if it affects anything cache-wise.
Gaetano Mendola wrote:
John Beaver wrote:
- Trying the same constant a second time gave an instantaneous result,
I'm guessing because of query/result caching.
AFAIK no query/result caching is in place in post
ns when you
re-add the index?
Does anybody have any things to check/ideas on why loading a 100Mb sql
file using psql would take 3 hours?
Thanks in advance for any ideas.
--
-- rouilj
John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-
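Common culprits for a 3-hour psql load are one-commit-per-statement and index/trigger maintenance; a hedged sketch, assuming psql 8.1+ for the -1 flag:

psql -1 -f dump.sql mydb     # -1 wraps the whole file in a single transaction

One transaction means one fsync instead of one per INSERT; dropping indexes before the load and recreating them afterward in one pass is the other big lever, and COPY beats row-by-row INSERTs by a wide margin.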
On Mon, Apr 28, 2008 at 06:53:09PM +0100, Heikki Linnakangas wrote:
> John Rouillard wrote:
> >We are running postgresql-8.1.3 under Centos 4
> You should upgrade, at least to the latest minor release of the 8.1
> series (8.1.11), as there has been a bunch of important bug and se
On Tue, Apr 29, 2008 at 05:19:59AM +0930, Shane Ambler wrote:
> John Rouillard wrote:
>
> >We can't do this as we are backfilling a couple of months of data
> >into tables with existing data.
>
> Is this a one off data loading of historic data or an ongoing thing?
On Mon, Apr 28, 2008 at 02:16:02PM -0400, Greg Smith wrote:
> On Mon, 28 Apr 2008, John Rouillard wrote:
>
> > 2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds
> > apart)
> > so I changed:
> > checkpoint_segments = 30
> > checkpoint_war
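For the archives, the relevant 8.1-era settings as a sketch; the values are illustrative:

checkpoint_segments = 30     # WAL segments (16MB each) between checkpoints
checkpoint_timeout = 300     # seconds; force a checkpoint at least this often
checkpoint_warning = 300     # log a warning when checkpoints come closer
                             # together than this many seconds

checkpoint_warning is what produced the "(27 seconds apart)" log line; raising checkpoint_segments spaces the checkpoints out.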
Hi,
I'm trying to make use of a cluster of 40 nodes that my group has,
and I'm curious if anyone has experience with PgPool's parallel query
mode. Under what circumstances could I expect the most benefit from
query parallelization as implemented by PgPool?
are being freed too early? Any other
ideas as to what's going on here?
Thanks,
John
On Tue, Jan 8, 2008 at 8:51 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "John Smith" <[EMAIL PROTECTED]> writes:
>>> It's pipelined either way. But int8 is a pass-by-refer
I'm having a strange problem with a query. The query is fairly simple,
with a few constants and two joins. All relevant columns should be
indexed, and I'm pretty sure there aren't any type conversion issues.
But the query plan includes a fairly heavy seq scan. The only possible
complication is
Jeremy Harris wrote:
John Beaver wrote:
I'm having a strange problem with a query. The query is fairly
simple, with a few constants and two joins. All relevant columns
should be indexed, and I'm pretty sure there aren't any type
conversion issues. But the query plan includes
Oh, and the version is 8.3.3.
Jeremy Harris wrote:
John Beaver wrote:
I'm having a strange problem with a query. The query is fairly
simple, with a few constants and two joins. All relevant columns
should be indexed, and I'm pretty sure there aren't any type
conversion issues
-> Seq Scan on
functional_linkage_scores fls (cost=0.00..3928457.08 rows=232365808
width=20) (actual time=14.221..86455.902 rows=232241678 loops=1)
Total runtime: 24183346.271 ms
(18 rows)
Jeremy Harris wrote:
John Beaver wrote:
I'm having a strange problem with a query. T
You're right - for some reason I was looking at the (18
rows) at the bottom. Pilot error indeed - I'll have to figure out
what's going on with my data.
Thanks!
Tom Lane wrote:
John Beaver <[EMAIL PROTECTED]> writes:
Ok, here's the explain analyze resul