On Wed, Sep 20, 2017 at 6:05 PM, David Rowley
wrote:
> On 21 September 2017 at 04:15, Mike Broers wrote:
> > Ultimately I think this is just highlighting the need in my environment to
> > set random_page_cost lower (we are on an SSD SAN anyway..), but I don't
> > think I have a satisfactory reason why the row estimates are so bad in the
> > QA planner and why it doesn't use that partition index there.
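For reference, the kind of change being discussed is roughly the following; the
value is illustrative only, not a recommendation:
    -- try it for one session first and re-check the plans
    SET random_page_cost = 1.1;             -- default is 4.0; SSD storage justifies far less
    -- persist it cluster-wide once satisfied (PostgreSQL 9.4+)
    ALTER SYSTEM SET random_page_cost = 1.1;
    SELECT pg_reload_conf();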
On Fri, Sep 15, 2017 at 3:59 PM, Mike Broers wrote:
> That makes a lot of sense, thanks for
exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as
rankings;
Based on the difference in row estimates I am attempting an ANALYZE with a
higher default_statistics_target (currently 100) to see if that helps.
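Concretely, that means something like the following; the target value is
illustrative, and the per-column variant borrows the table/column names from
the plan above, which may not match the real schema:
    -- raise the sample size for this session and re-gather stats
    SET default_statistics_target = 500;    -- 100 is the default
    ANALYZE;
    -- or, more surgically, only for the column whose estimate is off (assumed names)
    ALTER TABLE event ALTER COLUMN landing_id SET STATISTICS 500;
    ANALYZE event;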
On Fri, Sep 15, 2017 at 3:42 PM, Tom Lane wrote:
> Mike Broers wri
Buckets: 1024  Batches: 1  Memory Usage: 10kB
->  HashAggregate  (cost=41.88..43.88 rows=200 width=4) (actual time=0.054..0.067 rows=45 loops=1)
      Group Key: t_sap.landing_id
      ->  Seq Sca
2017 at 08:28, Mike Broers wrote:
> > I have a query of a partitioned table that uses the partition index in
> > production but uses sequential scans in qa. The only major difference I
> > can tell is the partitions are much smaller in qa. In production the
> > partitions
ing_id)
              Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
        ->  Index Scan using ix_event__00076729_landing_id on event__00076729 e_18  (cost=0.56..4593.36 rows=11599 width=796)
              Index Cond: (landing_id = t_sap.landing_id)
              Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
        ->  Index Scan using ix_event__00078600_landing_id on event__00078600 e_19  (cost=0.56..4940.39 rows=13528 width=804)
              Index Cond: (landing_id = t_sap.landing_id)
              Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
        ->  Index Scan using ix_event__00080741_landing_id on event__00080741 e_20  (cost=0.56..4105.25 rows=6846 width=760)
              Index Cond: (landing_id = t_sap.landing_id)
              Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
Any ideas for how to convince postgres to choose the faster plan in qa?
Thanks!
Mike
Thanks, Tom.
Wouldn't this mean that cursors are noticeably non-optimal even for normal
data sizes, since the entire data to be streamed from the table is always
duplicated into another buffer and then streamed?
> if you want the whole query result at once, why are you bothering with a
cursor?
T
The generated buffer is 140MB, not 14MB. At 14 bytes per row, that makes
sense.
I have done another test.
If I execute `FETCH ALL FROM cursor` I get a 140MB disk buffer file, on the
PostgreSQL server, reported in its log.
If I execute `FETCH 500 FROM cursor` (exactly half the rows), I see a
> Seems odd. Is your cursor just on "SELECT * FROM table", or is there
> some processing in there you're not mentioning? Maybe it's a cursor
> WITH HOLD and you're exiting the source transaction?
Hi Tom,
I've deleted my own Stack Overflow answer in favour of Laurenz Albe's one.
New TL;DR (I'm
I meant to say: "the `FETCH 1 FROM cursor` until exhausted pattern will
always be safe". Nasty typo, sorry!
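To spell out the pattern meant here (cursor and table names are made up, and
the batch size is arbitrary):
    BEGIN;
    DECLARE big_cur NO SCROLL CURSOR FOR
        SELECT * FROM big_table;              -- hypothetical table
    -- repeat the FETCH until it returns zero rows; each batch is streamed,
    -- so the result is not pulled into one big buffer at once
    FETCH FORWARD 1000 FROM big_cur;
    CLOSE big_cur;
    COMMIT;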
clearly not as simple as that! I don't know if you can offer any more
helpful insight on this last aspect?
Many thanks,
Mike
I asked the same question at the same time on Stack Overflow (sincere
apologies if this is a breach of etiquette - I really needed an answer, and
I thought the two communities might not overlap).
Stack Overflow now has an answer, by me:
http://stackoverflow.com/q/42292341/#42297234 - which is based
**Short version of my question:**
If I hold a cursor reference to an astronomically huge result set in my
client code, would it be ridiculous (i.e. completely defeats the point of
cursors) to issue "FETCH ALL FROM cursorname" as my next command? Or would
this slowly stream the data back to me as I
looks like it’s having to do a table scan for all the rows above
the id cutoff to see if any meet the filter requirement. “not in” can be very
expensive. An index might help on this column. Have you tried that?
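As a sketch of both suggestions, with made-up table and column names (not the
poster's schema):
    -- index the filtered column so the planner doesn't have to scan every row above the cutoff
    CREATE INDEX CONCURRENTLY idx_orders_status ON orders (status);
    -- and a NOT IN (SELECT ...) is often cheaper rewritten as NOT EXISTS,
    -- which PostgreSQL can plan as an anti-join
    SELECT o.*
    FROM   orders o
    WHERE  o.id > 1000000
      AND  NOT EXISTS (SELECT 1 FROM excluded e WHERE e.order_id = o.id);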
Your rowcounts aren’t high enough to require partitioning or any other changes
to your table that I can see right now.
Mike Sofen (Synthetic Genomics)
From: Mike Sofen Sent: Tuesday, September 27, 2016 8:10 AM
From: Greg Spiegelberg Sent: Monday, September 26, 2016 7:25 AM
I've gotten more responses than anticipated and have answered some questions
and gotten some insight but my challenge again is what should I capture along
the w
can easily do 500
million rows per bucket before approaching anything close to the 30ms max query
time.
Mike Sofen (Synthetic Genomics)
(with guaranteed IOPS), performance against even 100m row tables should still
stay within your requirements.
So Rick’s point about not needing millions of tables is right on. If there’s a
way to create table “clumps”, at least you’ll have a more modest table count.
Mike Sofen (Synthetic Genomics)
scale, auto-shard, fault tolerant,
etc…and I’m not a Hadoopie.
I am looking forward to hearing how this all plays out, it will be quite an
adventure! All the best,
Mike Sofen (Synthetic Genomics…on Postgres 9.5x)
ther issues/requirements that are creating other performance
concerns that aren’t obvious in your initial post?
Mike Sofen (Synthetic Genomics)
it may become a
more frequent ad-hoc need so if there is something else I can do it would
be appreciated.
On Wed, Sep 21, 2016 at 9:11 PM, Tom Lane wrote:
> Mike Broers writes:
> > Hello, I am curious about the performance of queries against a master
> table
> > that seem to do
plan by using a subquery on the indexed
partition and using those results to scan for the unindexed value.
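Roughly the shape of that rewrite, with hypothetical names (the real tables are
not shown in this excerpt):
    -- pull the matching keys out of the partition that has the index,
    -- then use those keys to look up the value that isn't indexed
    SELECT p.*
    FROM   parent_table p
    WHERE  p.key IN (SELECT key
                     FROM   child_partition_2016_09     -- the indexed partition
                     WHERE  indexed_col = 'some value');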
On Wed, Sep 21, 2016 at 12:37 PM, Mike Broers wrote:
> Thanks for your response - Is 'selectively choosing what partition'
> different from utilizing each partition's index wh
uld help, but its just my wish.
>
>
>
> Regards,
> Ganesh Kannan
>
>
>
> ------
> *From:* pgsql-performance-ow...@postgresql.org on behalf of Mike Broers
> *Sent:* Wednesday, September 21, 2016 12:53 PM
> *To:* pgsql-performance@postgresql.org
Hello, I am curious about the performance of queries against a master table
that seem to do seq scans on each child table. When the same query is
issued at a partition directly it uses the partition index and is very
fast.
The partition constraint is in the query criteria. We have non overlappin
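The contrast being described looks roughly like this, with assumed names and an
illustrative key value:
    -- same predicate, once against the parent (seq scans on every child)
    -- and once against a single child (uses that child's index)
    SELECT * FROM event              WHERE landing_id = 12345;
    SELECT * FROM event_partition_42 WHERE landing_id = 12345;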
From: Jim Nasby [mailto:jim.na...@bluetreble.com] Sent: Wednesday, September
07, 2016 12:22 PM
On 9/4/16 7:34 AM, Mike Sofen wrote:
> You raise a good point. However, other disk activities involving
> large data (like backup/restore and pure large table copying), on both
> plat
From: Claudio Freire Sent: Friday, September 02, 2016 1:27 PM
On Thu, Sep 1, 2016 at 11:30 PM, Mike Sofen <mso...@runbox.com> wrote:
> It's obvious the size of the batch exceeded the AWS server memory,
> resulting in a profoundly slower pro
son between Pass 1 and Pass 2: average row lengths were within 7% of
each other (1121 vs 1203) using identical table structures and processing
code, the only difference was the target server.
I'm happy to answer questions about these results.
Mike Sofen (USA)
oops=38831)
> Index Cond: (feature_id = f.id)
> Buffers: shared hit=181429 read=426
> -> Index Scan using warning_feature_id_index on "Warning"
> warning (cost=0.00..20.88 rows=1 width=16) (actual time=0.007..0.007 rows=0
> loops=26265)
> Index Cond: (f.id = feature_id)
> Buffers: shared hit=81151 read=42
> Total runtime: 6273.312 ms
>
>
> ---Version---
> PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit
>
>
> ---Table sizes---
> Extra_info 1223400 rows
> Feature 185436000 rows
> Measurement 50 rows
> Point 124681000 rows
> Warning 11766800 rows
>
> ---Hardware---
> Intel Core i5-2320 CPU 3.00GHz (4 CPUs)
> 6GB Memory
> 64-bit Operating System (Windows 7 Professional)
> WD Blue 500GB HDD - 7200 RPM SATA 6 Gb/s 16MB Cache
>
> ---History---
> Query gets slower as more data is added to the database
>
> ---Maintenance---
> Autovacuum is used with default settings
>
> Tommi Kaksonen wrote:
I don’t see a reason to partition such small data. What I do see is you
attempting to run a big query on what looks like a small desktop PC. 6GB of
RAM, especially under Windows 7, isn’t enough for a database server. Run the
query on a normal small server of, say, 16GB and it should perform fine. IMO.
Mike
h 20 cores, it can
only support 40 active users?
I come from the SQL Server world where a single 20 core server could support
hundreds/thousands of active users and/or many dozens of background/foreground
data processes. Is there something fundamentally different between the two
platforms rela
o haven't needed to pull
on that lever.
My latest model is placing large genomic data onto the AWS S3 file system,
keeping all of
the metadata inside the database. It's working very well so far, but we're
still in development.
Mike
question, feel free to clarify for me.
Mike
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Dave Stibrany
Sent: Thursday, March 17, 2016 1:45 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Disk Benchmarking Question
I
increase IOPs/distribute them) or switch to SSDs and
forget about almost everything…
Mike
--
From: Dave Stibrany [mailto:dstibr...@gmail.com]
Sent: Friday, March 18, 2016 7:48 AM
Hey Mike,
Thanks for the response. I think where I'm confused is that I thought v
After bumping up work_mem from 12MB to 25MB that last materialize is indeed
hashing and this cut the query time by about 60%. Thanks, this was very
helpful and gives me something else to look for when troubleshooting
explains.
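For reference, the change was simply the following (value as stated above; it
can also go in postgresql.conf or be set per role once proven out):
    SET work_mem = '25MB';   -- was 12MB; enough for the materialize step to switch to hashing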
On Tue, Jul 7, 2015 at 11:10 AM, Mike Broers wrote:
> Tha
Thanks, very informative! I'll experiment with work_mem settings and report
back.
On Tue, Jul 7, 2015 at 11:02 AM, Tom Lane wrote:
> Mike Broers writes:
> > I had a query that was filtering with a wildcard search of a text field
> for
> > %SUCCESS%. The query took
I had a query that was filtering with a wildcard search of a text field for
%SUCCESS%. The query took about 5 seconds and was running often so I wanted
to improve it. I suggested that the engineers include a new boolean column
for successful status. They implemented the requested field, but the q
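The excerpt is cut off here; assuming the new boolean column is used as a
filter, a partial index is the usual follow-on (names are hypothetical):
    -- keeps only the interesting rows, so it stays small and cheap to maintain
    CREATE INDEX idx_events_successful ON events (created_at) WHERE successful;
    -- queries must repeat the predicate for the planner to consider it
    SELECT count(*) FROM events WHERE successful AND created_at > now() - interval '1 day';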
.
Mike Wilson
> On Nov 2, 2014, at 12:33 PM, Greg Spiegelberg wrote:
>
> Hi Mike,
>
> Sounds very familiar. Our master fans out to 16 slaves (cascading) and we
> had great success with segregating database queries to different slaves and
> some based on network late
and when the purchase traffic increases that is when we
start to see extreme lag develop on the slave.
CPU utilization on the slave during extreme lag is similar to normal operation
even if the slave is lagging more than usual.
Thanks for the info on max_wal_senders. That’s good to know.
Mike
heavily hit databases and maintain small lag times? I would like to believe
that the fault is something we have done though and that there is some
parameter we could tune to reduce this lag.
Any recommendations would be very helpful.
Mike Wilson
Predicate Logic Consulting
Thanks for your reply. Adding '-c fast' does seem to improve the initial
delay. I'm still seeing delays of several minutes between write bursts.
The server has light OLTP loading.
__
*Mike Blackw
be missing?
__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com
to setup DB cluster to handle 1000 concurrent users.
>
Ok. That's a start. Can you tell us more about what these users are
doing? What kind of queries are being issued to the database? How often
(per user or total per time)?
_
On May 13, 2013, at 4:24 PM, Jeff Janes wrote:
> On Mon, May 13, 2013 at 3:36 PM, Mike McCann wrote:
>
> Increasing work_mem to 355 MB improves the performance by a factor of 2:
>
> stoqs_march2013_s=# set work_mem='355MB';
> SET
> stoqs_march2013_s=
On May 7, 2013, at 4:21 PM, Jeff Janes wrote:
> On Thu, May 2, 2013 at 6:35 PM, Scott Marlowe wrote:
> On Thu, May 2, 2013 at 5:11 PM, Mike McCann wrote:
> > Hello,
> >
> > We are in the fortunate situation of having more money than time to help
> > solve our Post
.4GHz 4-core E5-2609 CPUs
64GB RAM
2x146GB 15K SAS hard drives
3x200GB SATA SLC SSDs
+ the usual accessories (optical drive, rail kit, dual power supplies)
Opinions?
Thanks in advance for any suggestions you have.
-Mike
--
Mike McCann
Software Engineer
Monterey Bay Aquarium Research Institute
On Thu, Nov 29, 2012 at 12:09 PM, Jeff Janes wrote:
>
> But If you do keep the drop index inside the transaction, then you
> would probably be better off using truncate rather than delete, and
> rebuild the index non-concurrently and move that inside the
> transaction as well.
>
>
Hmm From
Ah. So it does. Testing with two psql sessions locks as you said, and
moving the DROP INDEX to a separate transaction give the results I was
looking for.
Thanks,
Mike
__
*Mike Blackwell | Technical Analyst
OP INDEX outside the transaction?
The access pattern for the table is such that I can afford the occasional
stray hit without an index during the reload time. It's been pretty quick
using the above.
Mike
heard it either.
>
Ah. That must have been what I'd half-remembered. Thanks for the
clarification.
Mike
ger. As that apparently is not an issue I went ahead and tried the
DELETE and COPY in a transaction. The load time is quite reasonable this
way.
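The shape of what was tried, with made-up table name and file path:
    BEGIN;
    DELETE FROM staging_table;     -- TRUNCATE is faster but takes a heavier lock
    COPY staging_table FROM '/path/to/reload.csv' WITH (FORMAT csv);
    COMMIT;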
Thanks!
Mike
here.
Any suggestions are welcome.
Mike
ts.
Thanks for your help, BTW!
__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com
nable?
______
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com
On
6124 393344000 0 146 9586 9825 13 3 83 1
______
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com
f not, where should I look next?
______
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
Did you check your read-ahead settings (blockdev --getra)?
Mike DelNegro
Sent from my iPhone
On Apr 3, 2012, at 8:20 AM, Cesar Martin wrote:
> Hello there,
>
> I am having performance problem with new DELL server. Actually I have this
> two servers
>
> Server A
n archived?
Mike
.
This is a good start. If there are any other suggestions please let me know
- is there any query to check the estimated time remaining on long-running
transactions?
On Mon, Jun 6, 2011 at 3:37 PM, Tom Lane wrote:
> Mike Broers writes:
> > I am in the process of implementing cascade
duration.
Let me know if there is anything else I can supply that will help the
review, thanks!
One additional question - is there any way to check how long postgres is
estimating an operation will take to complete while it is running?
Thanks again,
Mike
Thanks Robert, this is what I was looking for. I will try these suggestions
and follow up if any of them are the silver bullet.
On Fri, Jan 14, 2011 at 7:11 AM, Robert Haas wrote:
> On Thu, Jan 6, 2011 at 4:36 PM, Mike Broers wrote:
> > Thanks for the assistance.
> > Here is an
, table
Try
order by created_at+0
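That is, keeping the query as it was but rewriting the ORDER BY; the LIMIT
value is illustrative and the original WHERE clause is omitted here:
    -- created_at+0 no longer matches the index on created_at, so the planner
    -- stops trying to walk that index to satisfy a small LIMIT
    SELECT *
    FROM   landing_page.messages
    ORDER  BY created_at + 0
    LIMIT  10;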
On Thu, Jan 6, 2011 at 3:36 PM, Mike Broers wrote:
> Thanks for the assistance.
>
> Here is an explain analyze of the query with the problem limit:
>
> production=# explain analyze select * from landing_page.messages where
> ((messages.topi
, this particular table is about 1.6GB
and growing. Currently there are jobs that query from this table every
minute.
Thanks again
Mike
On Wed, Jan 5, 2011 at 5:10 PM, Kevin Grittner
wrote:
> Mike Broers wrote:
>
> > Hello performance, I need help explaining the performance of
aused this to be set in a non default manner if possible.
#seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 3.0 # same scale as above
Why does the smaller limit cause it to skip the index?
Is there a way to help the planner choose the better plan?
Much appreciated,
Mike
Hi All,
I have a poorly performing SQL query, as follows. The table has about 200M
records, and each employee has on average 100 records. The query lasts about
3 hours. All I want is to update the flag for the highest version of each
client's record. Any suggestion is welcome!
Thanks,
Mike
==
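The statement itself is cut off above; one common shape for this kind of flag
update, with hypothetical table and column names, is:
    UPDATE records r
    SET    is_latest = true
    FROM  (SELECT DISTINCT ON (client_id) id
           FROM   records
           ORDER  BY client_id, version DESC) latest
    WHERE  r.id = latest.id;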
schemas. It's not a conventional web
app we run :-)
I'm not arguing - this is a somewhat extreme approach, but if one is forced to
go down this path, it's quite walkable ;-)
Mike
such a layout :-)
However, expect very slow (hours) pg_dump, \dt and everything else that
requires reading schema information for the whole db.
Mike
Hi Thomas,
How is 'messungen_v_dat_2009_04_13_gtyp_minute_tag_idx' defined? What is
the row count for the table?
Mike
Hi,
subject is the following type of query needed in a function to select
data:
SELECT ' 13.04.2009 12:
using straight
joins instead. Also, try to filter things before joining, not after.
Correct me if I'm wrong, but in this particular case this seems pretty
much possible.
Cheers,
Mike
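As a sketch of "filter before joining", with invented names (modern planners
often push predicates down on their own, but writing it explicitly makes the
intent clear):
    SELECT b.id, p.total
    FROM  (SELECT id FROM bookings WHERE booked_on >= date '2009-01-01') b
    JOIN  (SELECT booking_id, sum(amount) AS total
           FROM   payments
           GROUP  BY booking_id) p ON p.booking_id = b.id;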
ning wrote:
Hi Mike,
Thank you for your explanation.
The "explain analyze" command used is as
hing is really badly broken.
I agree with you that PostgreSQL is doing a different level of caching;
I just wonder if there is any way to speed up PostgreSQL in this
scenario.
This is what EXPLAIN ANALYZE is for. Could you post the results please?
Cheers,
Mike
re is no alternative, as Scott said.
Mike
e this
type of explosive row multiplication.
Although I don't quite understand the purpose of the query, I don't
think you need all those OUTER joins.
Regards,
Mike
Hi Rui,
i have this query (i think is a simple one)
Could you EXPLAIN ANALYZE the query and show the results please?
Thanks,
Mike
kpoint fsync to finish.
Thanks a lot, this is invaluable information.
Regards,
Mike
_ratio = dirty_background_ratio / 2 if that's not the case. Also,
how could dirty_ratio be less than 5 if 5 is the minimum value?
Regards,
Mike
for clearing that up.
It's normal, and shouldn't worry anybody. In fact it's a good sign
that you're not using way too much memory for any one process
It says exactly the opposite.
This time I agree :-)
Cheers,
Mike
Scott Marlowe wrote:
Also think about it, the OP has 8G of swap and 30Gig cached. How /
why would you be caching 30Gigs worth of data when there's only 8G to
cache anyway?
You're right, I have misread it again :-)
Cheers,
Mike
production servers and many batch
jobs I run put all 8 cores at 90% for extended periods. Since that
machine is normally doing a lot of smaller cached queries, it hardly
even notices.
The OP's machine is doing a lot of write ops, which is different.
Yes, more hard drives / better cac
Scott Marlowe wrote:
The postgres shared cache is at 4G, is that too big?
Not for a machine with 32Gig of ram.
He could even add some more.
Mike
case 4G
for shared buffers is good.
Actually, I take back my words on swap, too. 200M swapped is less
important when you have plenty of memory.
Regards,
Mike
hould help.
Cheers,
Mike
n CPU load?
You can use /usr/bin/top to obtain most of this information.
Mike
I'd be glad to hear any opinions/suggestions.
Many thanks to everyone who responded!
Mike
is the performance/memory penalty on switching users in the
same connection (connections are reused of course)?
- will it hurt the cache?
- are prepared statements kept per user or per connection?
- is the query planner global or somehow tied to users?
I'd be glad to hear any opinions/su
So your update doesn't take long to run during off-peak times; basically
your options are:
1. Optimize your postgresql.conf settings or upgrade to the latest
version of PostgreSQL.
2. Redesign your forum code so it can scale better.
3. Upgrade your server's hardware as it may be overloaded.
I
[presumably the empty-disk effect could also be achieved by partitioning, say
25% of the drive for the database, and 75% empty partition. But in fact, you
could use that "low performance 75%" for rarely-used or static data, such as
the output from pg_dump, that is written during non-peak times]
ey also only pay for used capacity from their disk
vendor.
This is not very green as you need to buy more disks for the same amount
of data and it's liable to upset your purchasing department, who won't
understand why you don't want to fill your disks up.
Mike
3
> AND "PrintSamples"."TestPatternName" LIKE 'IQAF-TP8%';
The query I really want to run is several times larger than this. I
didn't think people would want to wade through pages and pages worth of
SQL and then explain analyze results - espec
I'm have the following view as part of a larger, aggregate query that is
running slower than I'd like. There are 4 views total, each very
similar to this one. Each of the views is then left joined with data
from some other tables to give me the final result that I'm looking for.
I'm hoping that
> >>> On Mon, Aug 13, 2007 at 1:48 PM, in message
> <[EMAIL PROTECTED]>, "Relyea, Mike" <[EMAIL PROTECTED]> wrote:
> > I've increased shared_buffers to 128MB, and restarted the server. My
> > total run time didn't
> >>> On Mon, Aug 13, 2007 at 10:35 AM, in message
> <[EMAIL PROTECTED]>, "Relyea, Mike" <[EMAIL PROTECTED]> wrote:
> > I'm running 8.2.4 on Windows XP with 1.5 GB memory.
> > shared_buffers = 12288
> > effective_cache_size =
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Friday, August 10, 2007 5:44 PM
> To: Relyea, Mike
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Help optimize view
>
> Try increasing join_collapse_limit --- you have ju
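The quoted advice is cut off, but the knob itself is simply the following
(value illustrative; the default is 8):
    SET join_collapse_limit = 12;   -- let the planner reorder joins across more relations
    SET from_collapse_limit = 12;   -- companion setting for subqueries flattened into the FROM list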
Oops. Realized I posted the wrong SQL and EXPLAIN ANALYZE results.
Also forgot to mention that my "server" has 1.5 GB memory.
SELECT "PrintSamples"."MachineID", "PrintSamples"."PrintCopyID",
"tblColors"."ColorID", avg("ParameterValues"."ParameterValue") AS
"Mottle_NMF"
FROM "AnalysisModules"
project. It would have saved us a lot of time by having a
> configuration tool in the beginning. I am willing to make this a very
> high priority for my team.
>
>
>
> Thanks,
>
>
>
> Lance Campbell
>
> Project Manager/Software Architect
>
> Web Services at Public Affairs
>
> University of Illinois
>
> 217.333.0382
>
> http://webservices.uiuc.edu
>
>
>
>
--
Mike Benoit <[EMAIL PROTECTED]>
o force it into a filter, but it's a messy hack. I've tried
ordering the joins in the most efficient way with a
join_collapse_limit of 1, but it still uses this index in
parallel with searching an index on another table (I guess the
planner figures it's saving some ti
be any more elegant.
-Mike
On Feb 16, 2007, at 9:46 AM, Alvaro Herrera wrote:
Gauri Kanekar wrote:
I want the planner to ignore a specific index.
I am testing some query output. For that purpose I don't want the
index.
Is it possible for the planner to ignore an index?
Sure:
BEGIN
DROP
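The reply is truncated above; presumably it continues along these lines (the
index name is hypothetical):
    BEGIN;
    DROP INDEX some_index;   -- takes an ACCESS EXCLUSIVE lock on the table, so only for testing
    -- run the query under test here and look at the plan
    ROLLBACK;                -- the index is back, untouched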
8 | Sets the number of disk-page buffers in shared memory for WAL.
wal_sync_method | fsync | Selects the method used for forcing WAL updates out to disk.
work_mem | 1024 | Sets the maximum memory to be used for query workspaces.
zero_damaged_pages | off
Thanks in advance for any help you can offer on this problem.
-Mike
0:00.19 postmaster
Thanks for your help in advance,
Mike
dex Scan using crops_loct on crops (cost=0.00..118.53
rows=42 width=24) (actual time=0.007..0.018 rows=8 loops=9)
Index Cond: (crops.loct = "outer".number)
-> Index Scan using commtypes_number_key on commtypes
(cost=0.00..4.19 rows=1 width=26) (actual time=0.005..0.0
The datatype of the join columns is a user defined type and there are no
commutators defined. I will fix that and retest. Thanks for the
insight.
Mike Quinn
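For anyone following along, the commutator is declared when the operator is
(re)created; a sketch with hypothetical type and function names (for an
equality operator, the commutator is the operator itself):
    CREATE OPERATOR = (
        LEFTARG    = grower_num,      -- hypothetical user-defined type
        RIGHTARG   = grower_num,
        PROCEDURE  = grower_num_eq,   -- hypothetical equality function
        COMMUTATOR = =,
        HASHES,
        MERGES
    );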
is second.
-- Mike Quinn
-- the worst way --
EXPLAIN ANALYZE
SELECT
Locts.id,
Commtypes.name
FROM
Growers
,
Locts
,
Crops
,
Commtypes
WHERE
Growers.id = '0401606'
AND
-- Commtypes.number = Crops.Commtype
Crops.Commtype = Commtypes.number
AND
Locts.number = Crops.Loct
-- Crops.Loc
process.
I can confirm this behavior after looking at my multipathed fibre
channel SAN. To the best of my knowledge, the multipathing code uses
the same underlying I/O code as the Linux SW RAID logic.
--
Mike Rylander
[EMAIL PROTECTED]
GPLS -- PINES Development
Database Developer
http