-Original Message-
From: k...@rice.edu [mailto:k...@rice.edu]
Sent: October 8, 2015 1:00 PM
To: Carlo
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] One long transaction or multiple short transactions?
On Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:
> >> Soun
ware and OS configuration, and that's
why I don't want to get into a general optimization discussion because the
client is concerned with just this question.
-Original Message-
From: Graeme B. Bell [mailto:graeme.b...@nibio.no]
Sent: October 8, 2015 4:55 AM
To: Carlo
ntention (or anything else) over longer vs. shorter single-row-write
transactions under the same conditions might explain this.
Carlo
From: Igor Neyman [mailto:iney...@perceptron.com]
Sent: October 6, 2015 9:10 AM
To: Carlo; pgsql-performance@postgresql.org
Subject: RE: [PERFORM] One long
degradation for longer transactions. Would the
operators be advised to rewrite the feeds to result in more, smaller
transactions rather than fewer, longer ones?
Carlo
ays, thanks.
From: Jeff Janes [mailto:jeff.ja...@gmail.com]
Sent: March 5, 2013 4:21 PM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Are bitmap index scans slow to start?
On Wed, Feb 27, 2013 at 1:38 PM, Carlo Stonebanks
wrote:
>> Is the original
<>
Technically, yes. That would really help, but the issue is scheduling.
Although the logs are closed off for writes, they aren't closed off for
reads, ref PG documentation: "When a table is being clustered, an ACCESS
EXCLUSIVE lock is acquired on it. This prevents any other database
operations (both reads and writes) from operating on the table until the
CLUSTER is finished."
<<
I had thought you were saying that any one ETL procedure into one database
used 14 concurrent threads. But really, each ETL procedure is
single-threaded, and there can be up to 5 (or theoretically up to 14) of
them running at a time into different databases?
>>
Sorry, just caught this.
<< pgbouncer is more for making connections line up single-file when the
line is moving at a very fast clip, say 0.01 second per turn. If I were
trying to make tasks that can each last for hours or days line up and take
turns, I don't think pgbouncer would be the way to go.
>>
The recommendati
<>
All writes are single row. All DB's have exactly the same structure; only
the content is different. Currently the server is hosting five active DB's -
although there are 14 DB's actually on the host, the balance are backups
and/or testing environments.
When a feed comes in, it can be anythi
egy that will not
interfere with normal operations?
3) By PG standards, is this a busy DB - and does that explain why the
general caches expire?
Thanks,
Carlo
ke 36 seconds to set up the general index caches?
2) What can I do about it (what stats do I need to look at)?
3) How can I force these caches to expire so I can tell if the strategy
worked?
From: Nikolas Everett [mailto:nik9...@gmail.com]
Sent: February 22, 2013 2:05 PM
To
very
month). There are 89 session_id values in the January log (log_2013_01) so
this would quickly get out of control. But - like I said - an interesting
idea for more specific challenges.
From: Marc Mamin [mailto:m.ma...@intershop.de]
Sent: February 21, 2013 2:41 PM
To: Jeff Janes; Carlo Sto
ing else expires it and the problem is
back. Is there a way around this problem, can I force the expiration of a
cache?
Carlo
eries are built dynamically and
called rarely, so their first-time performance is important. I'd prefer a
strategy that allowed fast performance the first time, rather than slow the
first time and extremely fast subsequently.
Thanks,
Carlo
SELECT
l.session_id,
l.log_id,
s.se
Pavel, thank you very much for your explanation.
Is it possible to define under what conditions sql procs will
outperform plpgsql ones, and vice-versa?
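A minimal illustration of the usual rule of thumb (function names here are
hypothetical, not from this thread): a simple SQL function can be inlined
into the calling query, so the planner optimizes it in context, while a
plpgsql function is executed as an opaque call.
CREATE FUNCTION add_one_sql(int) RETURNS int AS
'SELECT $1 + 1' LANGUAGE sql IMMUTABLE;
-- inlinable: the planner can fold this into the calling query
CREATE FUNCTION add_one_plpgsql(int) RETURNS int AS
'BEGIN RETURN $1 + 1; END;' LANGUAGE plpgsql IMMUTABLE;
-- opaque: invoked per row through the function manager
Once the body needs loops, branching, or statement reuse, plpgsql tends to
win instead.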
-Original Message-
From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
Sent: January 30, 2012 2:57 AM
To: Carlo Stonebanks
Cc
insight would be much appreciated.
Carlo
CREATE OR REPLACE FUNCTION mdx_lib.lex_compare_candidate3(character varying,
character varying)
RETURNS numeric AS
$BODY$
/*
Rate two strings' candidacy for lex_compare.
param 1: first string to compare
param 2: 2nd string to compare
returns: numeric
THEN true ELSE false END
) AS result?
2) Does that not bypass the benefits of IMMUTABLE?
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Pavel Stehule
Sent: January 28, 2012 1:38 AM
To: Carlo Stonebanks
C
Was I even right in thinking I would gain any performance by converting to
SQL?
-Original Message-
From: Deron [mailto:fecas...@gmail.com]
Sent: January 27, 2012 2:29 PM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] pl/pgsql functions outperforming sql
e SQL is emulating
the straight non-set-oriented procedural logic of the original plpgsql.
-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: January 27, 2012 10:47 AM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] pl/pgsql func
details, here they are. If you happen to think
this behavior is expected, I needn't bore you - just let me know!
Thanks,
Carlo
This was all triggered during the optimization of a query like this:
SELECT myVar
FROM myTable
WHERE myFunc(myVar);
Looking at EXPLAIN ANALYSE
his was written in C, the function would be data-bound
as it read from the abbreviation table - unless you guys tell me that there
is a not inconsiderable cost involved in type conversion from PG to internal
vars.
Carlo
-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Se
te it so I can tailor my
tests accordingly.
Thanks,
Carlo
the left. Correct?
From: Dave Crooke [mailto:dcro...@gmail.com]
Sent: October 11, 2011 9:28 PM
To: Claudio Freire
Cc: Carlo Stonebanks; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Composite keys
Claudio is on point, I'll be even more pointed
If pkey
tion (more generally, is there a performance impact for
column ordering in btree composite keys?)
Thanks,
Carlo
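A minimal sketch of the principle in question (table, index, and columns
hypothetical): a btree composite index helps most when its leading column is
constrained.
CREATE TABLE t (a int, b int);
CREATE INDEX t_a_b_idx ON t (a, b);
-- efficient: WHERE a = 1, or WHERE a = 1 AND b = 2 (leading column bound)
-- inefficient: WHERE b = 2 alone (leading column unbound)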
or will things just quietly go wrong,
with one session getting a side-effect from another session's state?
Carlo
> Date: Wed, 14 Sep 2011 09:52:07 +0800
> From: ring...@ringerc.id.au
> To: stonec.regis...@sympatico.ca
> CC: kevin.gritt...@wicourts.gov; pgsql-performance@postgr
commendation on the forum?
Carlo
> Date: Tue, 13 Sep 2011 16:13:00 -0500
> From: kevin.gritt...@wicourts.gov
> To: pgsql-performance@postgresql.org; stonec.regis...@sympatico.ca
> Subject: RE: [PERFORM] Migrated from 8.3 to 9.0 - need to update config
>
> Date: Fri, 9 Sep 2011 13:16:28 -0500
> From: kevin.gritt...@wicourts.gov
> To: pgsql-performance@postgresql.org; stonec.regis...@sympatico.ca
> Subject: Re: [PER
then the imports behave as if they are suddenly taking a deep
breath, slowing down. Sometimes so much so that we cancel the import and
restart (the imports pick up where they left off).
What would the bg_writer settings be in this case?
Thanks again for your time,
Carlo
-Original Message-
Fr
config settings.
The specifics of the DB and how it is used are below, but in general let me
say that this is a full-time ETL system, with only a handful of actual
“users” and automated processes over 300 connections running “import”
programs 24/7.
I appreciate the help,
Carlo
The host
didn't want to throw too much
info as my concern was actually whether views were as clunky as on other DB
platforms.
Carlo
-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: August 13, 2010 9:29 AM
To: pgsql-performance@postgresql.org; Carlo Stonebanks
Subjec
ernal UNION.
Any explanation as to why this happens?
The actual view is listed at the very bottom, if relevant.
Carlo
QUERY 1 PLAN
"Unique (cost=25.48..25.69 rows=2 width=417) (actual time=0.180..0.190
rows=2 loops=1)"
" -> Sort (cost=25.48..25.48 rows=2 width=417) (actua
ircuit the AND return false right away, or would it still execute
MySlowFunc('foo') ?
Thanks!
Carlo
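For what it's worth, a hedged sketch of the usual workaround: PostgreSQL
does not guarantee left-to-right evaluation of AND, but CASE does fix the
evaluation order, so the cheap test can be forced to run first (table and
column names hypothetical; MySlowFunc is from the question above).
SELECT *
FROM some_table
WHERE CASE WHEN cheap_col <> 'foo' THEN false
ELSE MySlowFunc('foo') END;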
setup that makes our PG servers so hard to
tune, but I think it's time to call the cavalry (gotta find serious PG
server tuning experts in NJ).
Carlo
SLOW PLAN
Sort (cost=42869.40..42869.59 rows=77 width=18) (actual
time=26316.495..26322.102 rows=9613 loops=1)
Sort Key: p.provider_id
table)
This DB is a copy of another DB, on the same server host, same drive but
different tablespace. The original query has good performance, and is hit
often by the live web server. With the copy - which performs poorly - the
query is hit infrequently.
Is there any evidence for why the nested
sions, picking seq
scan over index scan, where index scan has a better result.
Can anyone see any obvious faults?
Carlo
autovacuum = on
autovacuum_analyze_scale_factor = 0.05
autovacuum_analyze_threshold = 1000
autovacuum_naptime = 1min
autovacuum_vacuum_cost_delay = 50
autovacuum_vacuum_scale_f
requires restart)
max_fsm_relations = 1000 # min 100, ~70 bytes each
max_locks_per_transaction = 128 # min 10
port = 5432 # (change requires restart)
shared_buffers = 4096MB
shared_preload_libraries = '$libdir/plugins/plugin_debugger.so' # (change
requires restart)
track
e DBs?
Thanks,
Carlo
p. On Windows see the "System Cache" in the Windows Task Manager's
Performance tab.
Are these values to look at BEFORE starting PG? If so, how do I relate the
values returned to setting the effective_cache_size values?
Carlo
PS Loved your 1995 era pages. Being a musician, it w
erator_cost = 0.0025 # same scale as above
#effective_cache_size = 128MB
Thanks for the help,
Carlo
needed (4090224) exceeds max_fsm_pages
(204800)
HINT: Consider increasing the configuration parameter "max_fsm_pages" to
a value over 4090224.
Gee, only off by a factor of 20. What happens if I go for this number (once
again, what's the down side)?
Carlo
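For what it's worth, the downside is small: each max_fsm_pages slot costs
about six bytes of shared memory (per the 8.x docs), so following the hint
is cheap. A sketch for postgresql.conf (pre-8.4 only; the FSM settings were
removed in 8.4):
max_fsm_pages = 4100000 # ~4.1M slots * ~6 bytes = roughly 24MB shared memory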
me it is taking to THAT degree?
4) Any other way to get max_fsm_pages settings and max_fsm_relations?
Carlo
know that after some screwing around they got the
server to the point that it wouldn't restart and tried to back out until it
would.
max_connections = 200
work_mem = 512MB
This is a frightening combination, by the way: work_mem applies per sort or
hash operation, per connection, so 200 connections at 512MB each can demand
far more RAM than the server has.
Looks like it's connected to the above issue. The real max
My bad - I thought I had, so it has been re-posted with a (v2) disclaimer in
the title... like THAT will stop the flaming!
Thanks for your patience!
"Craig James" wrote in message
news:4b4f8a49.7010...@emolecules.com...
Carlo Stonebanks wrote:
Guys, I want to thank you for
2) Which Windows OS would you recommend? (currently 2008 x64 Server)
3) If we were to port to a *NIX flavour, which would you recommend? (which
support trouble-free PG builds/makes please!)
4) Is this the right PG version for our needs?
Thanks,
Carlo
The details of our use:
. The DB hosts is a data
-in-hand, looking for advice under the PERFORM post: "New server to
improve performance on our large and busy DB - advice?"
Thanks again!
Carlo
"Scott Marlowe" wrote in message
news:dcc563d11001071740q572cdae2re410788fe790d...@mail.gmail.com...
On Thu, Jan 7, 20
ulted).
Carlo
autovacuum = on
autovacuum_analyze_scale_factor = 0.1
autovacuum_analyze_threshold = 250
autovacuum_naptime = 1min
autovacuum_vacuum_scale_factor = 0.2
autovacuum_vacuum_threshold = 500
bgwriter_lru_maxpages = 100
checkpoint_segments = 64
checkpoint_warning = 290
datestyle =
crank it up more and delay the checkpoints as much as possible during
these updates. 64 segments is already 1024M.
We have 425M rows, total table size is 78GB, so we can imagine a worst case
UPDATE write is less than 200 bytes * number of rows specified in the update
(is that logic correct?).
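Working that through as a sanity check: 200 bytes * 425M rows is roughly
85GB of new row versions for a full-table UPDATE (before WAL), so an update
touching only a fraction of the rows scales down proportionally.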
runtime: 372.141 ms
""Kevin Grittner"" wrote in message
news:4b46256302250002d...@gw.wicourts.gov...
"Carlo Stonebanks" wrote:
An interesting idea, if I can confirm that the performance problem
is because of the WHERE clause, not the UPDATE.
If you cou
It might well be checkpoints. Have you tried cranking up checkpoint
segments to something like 100 or more and seeing how it behaves then?
No I haven't, although it certainly makes sense - watching the process run,
you get this sense that the system occasionally pauses to take a deep, long
breath.
e new_column is null' to the conditions.
Already being done, albeit with a coalesce(val, '') = '' - it's quite
possible that this is hurting the WHERE clause; the EXPLAIN shows the table
using the pkey and then filtering on the COALESCE as one would expect.
Ca
If it is possible to lock this audit table exclusively (may be during
off peak hours) I would look into
- create new_audit_table as select col1, col2, col3 ... col9,
'new_col_value' from old_audit_table;
- create all indexes
- drop old_audit_table
- rename new_audit_table to old_audit_table
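A minimal SQL sketch of that suggestion (names kept from the example above;
the explicit lock guards the swap against concurrent writers):
BEGIN;
LOCK TABLE old_audit_table IN ACCESS EXCLUSIVE MODE;
CREATE TABLE new_audit_table AS
SELECT col1, col2, col3, /* ... */ col9, 'new_col_value'::text AS new_col
FROM old_audit_table;
-- recreate each index on new_audit_table here
DROP TABLE old_audit_table;
ALTER TABLE new_audit_table RENAME TO old_audit_table;
COMMIT;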
That
Got an explain analyze of the delete query?
UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 31941 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = ''
Index Scan using audit_impt_pkey on audit_impt (cost=0.00..92.63 rows=1
width=
eks to complete on an 8-core CPU with
more than enough memory.
As a ballpark estimate - is this sort of performance for 500M updates
what one would expect of PG given the table structure (detailed below) or
should I dig deeper to look for performance issues?
As always, thanks!
Carlo
16MB) = 3.6GB total RAM eaten up under peak load for these
two values alone.
If we wanted to get more aggressive, we can raise work_mem.
Carlo
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: November 1, 2007 5:39 PM
To: Carlo Stonebanks
Cc: pgsql-perfor
e to the fact that there was something
suspicious. We try to optimize our memory settings (based on various tuning
docs, advice from here, and good old trial-and-error). Since the new config
had barely any changes, I knew something was wrong.
Carlo
-Original Message-
From: Tom Lane [mailto
'off';
The plans were now similar, using nested loops and bitmapped heap scans. Now
the Linux query outperformed the Windows query.
Question: Can anyone tell me which config values would have made PG select
hash join and merge joins when the nested loop/bitm
a great way to place the enterprise's
db-centric business logic at the server.
Carlo
-Original Message-
From: Ow Mun Heng [mailto:[EMAIL PROTECTED]
Sent: September 24, 2007 8:51 PM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] REPOST: Nested loops
Is there a rule of thumb about when the planner's row estimates are too
high? In particular, when should I be concerned that planner's estimated
number of rows estimated for a nested loop is off? By a factor of 10? 100?
1000?
Carlo
Has anyone offered any answers to you? No one else has replied to this post.
"Ow Mun Heng" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:
(SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR A
But I
assumed that VACUUM ANALYZE and ANALYZE have the same net result. Am I
wrong?
If I am not wrong (i.e. VACUUM ANALYZE and ANALYZE should produce the same
results) why would the performance improve on a DB that has seen no
transactional activity only after the SECOND try?
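For reference, the two commands side by side (table name hypothetical); the
only difference is that VACUUM ANALYZE reclaims dead row versions before
gathering the same planner statistics:
ANALYZE my_table; -- statistics only
VACUUM ANALYZE my_table; -- reclaim dead tuples, then gather statistics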
PG
e the over-estimations below significant, and if so, is this an indication
of a general configuration problem?
Carlo
PG 8.2.4 on RH LINUX 1GB RAM SCSI RAID 1
Carlo
?
Carlo
select
pp.provider_id,
pp.provider_practice_id,
nearby.distance
from mdx_core.provider_practice as pp
join mdx_core.facility as f
on f.facility_id = pp.facility_id
join (select * from mdx_core.zips_in_mile_range('
ep, I have a dream -- and in this dream Tom writes a
brilliant three line code sample that makes it all clear to me, and I wake
up a PostgreSQL guru)
;-)
Carlo
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: September 17, 2007 11:30 PM
To: Merlin Moncure
Cc: Carlo
Thanks, it worked. Client happy. Big bonus in the mail.
-Original Message-
From: Merlin Moncure [mailto:[EMAIL PROTECTED]
Sent: September 17, 2007 8:18 PM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query works when kludged, but would prefer "
Well, there goes my dream of getting a recommendation that will deliver a
blinding insight into how to speed up all of my queries a thousand-fold.
Thanks Merlin!
-Original Message-
From: Merlin Moncure [mailto:[EMAIL PROTECTED]
Sent: September 17, 2007 8:03 PM
To: Carlo Stonebanks
Cc
ence the facility_address_id is
NULL).
PG is 8.2.4 on RH linux server with 1GB ram, HDD is RAID 1.
I don't like kludging like this - so any and all help or advice is
appreciated!
Carlo
ORIGINAL QUERY
select
pp.provider_id,
pp.provider_practice_id,
nearby.distance
from mdx_cor
Can anyone answer this for me: Although I realize my client's disk subsystem
(SCSI/RAID Smart Array E200 controller using RAID 1) is less than
impressive - is the default setting of 4.0 realistic or could it be lower?
Thanks!
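(The 4.0 default in question is random_page_cost. A hedged experiment is to
lower it for one session and compare plans:
SET random_page_cost = 2.0; -- try a lower value for this session only
-- then EXPLAIN ANALYZE the affected queries and compare
On a RAID 1 pair with little cache, the true value may well sit between 2
and 4.)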
<<
If what you mean is that pg has a design that's heavily oriented towards
things that tend to
be cheap on POSIX and doesn't use the core Win32 features effectively,
then let's track
that as an optimisation opportunity for the Win32 port.
>>
Isn't it just easier to assume that Windows Server ca
>= 'smith' and lower(last_name) < 'smiti' on 8.2.4
systems, but a slow sequence scan and filter on 8.1.9 - is this related to
the version difference (8.1.9 vs 8.2.4) or is this related to something
like the operator classes that have been installed?
Carlo
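For anyone hitting the same thing, a minimal sketch of the expression-index
approach that makes this range scan indexable regardless (table and index
names hypothetical):
CREATE INDEX person_last_name_lower_idx ON person (lower(last_name));
SELECT * FROM person
WHERE lower(last_name) >= 'smith' AND lower(last_name) < 'smiti';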
Wow - it's nice to hear someone say that... out loud.
Thanks, you gave me hope!
-Original Message-
From: James Mansion [mailto:[EMAIL PROTECTED]
Sent: September 6, 2007 4:55 PM
To: Carlo Stonebanks
Cc: Scott Marlowe; Alvaro Herrera; pgsql-performance@postgresql.org
Subject: Re: [PE
>> Large shared_buffers and Windows do not mix. Perhaps you should leave
the shmem config low, so that the kernel can cache the file pages.
<<
Is there a problem BESIDES the one that used to cause windows to fail to
allocate memory in blocks larger than 1.5GB?
The symptom of this problem was tha
Right, additionally NTFS is really nothing to use on any serious disc
array.
Do you mean that I will not see any big improvement if I upgrade the disk
subsystem because the client is using NTFS (i.e. Windows)?
ing RAID 1.
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: September 4, 2007 7:15 PM
To: Alvaro Herrera
Cc: Carlo Stonebanks; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance on 8CPU's and 32GB of RAM
On 9/4/07, Alvaro Herrera <[EMAIL PROTECTE
bers they would change, and what the
recommendations would be?
Thanks!
Carlo
max_connections = 100
shared_buffers = 10
work_mem = 100
max_fsm_pages = 204800
max_fsm_relations = 1500
vacuum_cost_delay = 40
bgwriter_lru_maxpages = 100
bgwriter_all_maxpages = 100
checkpoint_segments = 64
che
see.
As you say, the problem is in the idea - but no matter what, I need to be
able to match phrases that will have all sorts of erratic abbreviations and
misspellings - and I have to do it at very high speeds.
I would appreciate any suggestions you might have.
Carlo
select similarity(
trigram matching on phrases (as opposed to words)?
Carlo
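For context, similarity() comes from the contrib pg_trgm module; a minimal
sketch (CREATE EXTENSION exists only from 9.1 on - older releases load the
contrib SQL script instead):
CREATE EXTENSION pg_trgm;
SELECT similarity('community health centre', 'community health center');
-- returns a 0..1 score based on shared trigrams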
was oriented towards word matching,
not phrase matching.
Carlo
""Steinar H. Gunderson"" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
On Wed, Aug 22, 2007 at 12:02:54PM -0400, Carlo Stonebanks wrote:
Any suggestions on where to go with this project to im
l
vectors used gist(text tsvector) and an on insert/update trigger to populate
the vector field.
Any suggestions on where to go with this project to improve performance
would be greatly appreciated.
Carlo
""Matthew O'Connor"" wrote in message
news:[EMAIL PROTECTED]
> Just a wild guess, but the performance problem sounds like maybe as your
> data changes, eventually the planner moves some query from an index scan
> to a sequential scan, do you have any details on what queries are taking
> so lon
tedious.
Could these two problems - the weird slowdowns after a VACUUM/ANALYZE and
the frequent lockups when the import process is running quickly - be
related?
Carlo
imported records crawl by. I have tried vacuuming the entire DB as well as
rebuilding indexes. Nothing. Any ideas what could have happened? What is the
right thing to do?
Carlo
ause it's one of the most
mature interfaces postgres has. So, here's a thought: is there any way for
me to inspect the state of a postgres process to see if it's responsive -
even if it's serving another connection?
Carlo
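One way to peek at a backend from another connection is pg_stat_activity -
a hedged sketch (column names changed over the years: procpid and
current_query before 9.2, pid/state/query after):
SELECT procpid, current_query FROM pg_stat_activity; -- 8.x through 9.1
-- SELECT pid, state, query FROM pg_stat_activity; -- 9.2 and later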
I wonder if the Woodcrest Xeons resolved
this? Have these problems been experienced on both Linux and Windows (we are
running Windows 2003 x64)
Carlo
re compiling postgresql.
Are these associated with any type of CPU?
Carlo
This is pretty interesting - where can I read more on this? Windows isn't
actually hanging, one single command line window is - from its behaviour, it
looks like the TCL postgresql package is waiting for pg_exec to come back
from the commit (I believe the commit has actually gone through).
It c
memory usage is below the max available. Each postgresql process takes
up 500MB, there are four running and I have 4GB of RAM.
Carlo
pplications themselves: short transactions never lasting more than
a fraction of a second.
Carlo
eeds. But try to see if
> you can find something not granted in pg_locks that it may be stuck on.
Looking at the pgadmin server status pages, no locks or transactions are
pending when this happens.
Carlo
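For completeness, the pg_locks check being suggested looks something like
this; any row returned is a backend stuck waiting on a lock:
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE NOT granted;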
g locks or transactions.
I am not doing any explicit locking, all transaction settings are set to
default.
Any thoughts on the cause and possible solutions would be appreciated.
Carlo
Alpha - is there an "official" release I
should be waiting for? It's not clear to me whether this is a commercial
product or not.
Carlo
> carlo: please, please, get your mail server to quit telling me your
> mailbox is full :)
Merlin, sorry about that. This is the first I've heard of it.
Carlo
rything appears to be at its default setting.
Is this the reason for the rather depressing performance from/to Access and
can anything be done about it?
Carlo
ry is one of the victims of a db
structure corruption I suffered when transferring the schema over from
development into production.
(Well, that's my excuse and I'm sticking to it!)
Thanks for all the help - I've reduced the execution time to 1/10 of its
original time.
Carlo
of the
union and you mentioned it. I was just under the impression that getting
this sub-query to work would have produced the clearest, most
straightforward ANALYZE results.
Carlo
"Shaun Thomas" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Monday 16 Octobe
ions.
>
> and f.default_postal_code LIKE '14224%'
I did try this - nothing significant came from the results (see below)
thanks,
Carlo
explain analyze select
f.facility_id,
null as facility_address_id,
null as address_id,
f.facility_type_code,
f.name,
below. If you see redundancy, this was from
vain attempts to please the optimiser gods.
Carlo
ALTER TABLE mdx_core.facility
ADD CONSTRAINT facility_pkey PRIMARY KEY(facility_id);
CREATE INDEX facility_country_state_city_idx
ON mdx_core.facility
USING btree
(default_country_code, default_
ted
4) VACUUM VERBOSE on the tables involved
5) Original SQL with original EXPLAIN to show the code that started this.
Carlo
1) Bitmap scan off, but seq scan enabled. It created a surprisingly
expensive seq scan.
"Nested Loop Left Join (cost=0.00..34572.43 rows=109 width=71) (actual
time=
Hey Tom, thanks for jumping in. Nothing on TV on a Sunday afternoon? ;-)
Appreciate the input.
Here is vacuum verbose output for both the tables in question.
Carlo
INFO: vacuuming "mdx_core.facility"
INFO: index "facility_pkey" now contains 832399 row versions in 3179 pa