Josh,
Thanks so much for your comments. They are incredibly insightful and you
clearly know your stuff. It's so great that I'm able to learn so much from
you. I really appreciate it.
Do you need the interior sort? It's taking ~93ms to get 7k rows from
shipment_status, and then another 30ms to
Josh,
I did everything you said and my query does perform a bit better. I've been
getting times of 203, 219 and 234 milliseconds now. I tried increasing
work_mem and effective_cache_size beyond the values you provided, but
I didn't see any further improvement. I've tried looking in
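(For anyone following along, trying those settings for a single session looks
roughly like this; the values below are purely illustrative, and in 8.0
work_mem is given in kB and effective_cache_size in 8 kB pages:)

SET work_mem = 16384;              -- ~16 MB, illustrative value only
SET effective_cache_size = 65536;  -- ~512 MB of assumed OS cache, illustrative only
-- then re-run the query under EXPLAIN ANALYZE and compare the timings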
I'm not sure what the answer is but maybe I can help? Would clustering the
name index make this faster? I thought that would bunch up the pages so the
names were more or less in order, which would improve search time. Just a
guess though.
Ken
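(For reference, physically clustering a table on a name index would look
something like this; the index name ix_person_name is hypothetical:)

CLUSTER ix_person_name ON person;  -- 8.0 syntax: CLUSTER indexname ON tablename
ANALYZE person;                    -- refresh statistics after the rows are reordered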
- Original Message -
From: "Jeremiah Jahn
2) Force PG to drop the merge join via SET ENABLE_MERGEJOIN = FALSE;
Actually, it was 312 milliseconds, so it got worse.
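(For completeness, the usual pattern for that kind of test is to disable the
planner option only for the current session, re-run the query, and then reset
it:)

SET enable_mergejoin = false;
EXPLAIN ANALYZE SELECT s.* FROM shipment s ...;  -- re-run the problem query here
RESET enable_mergejoin;                          -- restore the default planner behaviour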
Josh,
1) To determine your query order a la Dan Tow and drive off of person, please
SET JOIN_COLLAPSE_LIMIT = 1 and then run Mark Kirkwood's version of the
query. (Not that I believe in Dan Tow ... see previous message ... but it
would be interesting to see the results.)
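(Mark Kirkwood's version of the query isn't reproduced in this excerpt. As a
sketch, with JOIN_COLLAPSE_LIMIT = 1 the planner follows the textual join
order, so a person-driven form would be written roughly like this, reusing the
join conditions quoted later in the thread; the shipment_status join column
and the person filter are assumptions:)

SET join_collapse_limit = 1;
SELECT s.*
FROM person p
JOIN carrier_to_person ctp ON ctp.person_id = p.id
JOIN carrier c             ON c.id = ctp.carrier_id
JOIN carrier_code cc       ON cc.carrier_id = c.id
JOIN shipment s            ON s.carrier_code_id = cc.id
JOIN shipment_status cs    ON cs.id = s.current_status_id  -- column name assumed
WHERE p.id = 1234;                                          -- hypothetical person id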
Unfortunately, the query st
John,
Why are you now left joining driver and carrier code, but inner joining
shipment_status? I assume this is the *real* query that you are executing.
Well, the old and new versions are both real queries. I changed the query a
bit because I noticed that, for some users, the listing was pulling out many di
I took John's advice and tried to work with sub-selects. I tried this
variation, which actually seems like it would make a difference conceptually
since it drives on the person table quickly. But to my surprise, the query
runs at about 375 milliseconds. I think it's because it's going over that
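(The sub-select variation itself is cut off in this excerpt. A sketch of a
person-driven sub-select, using the table names from the thread and a
hypothetical person filter, would be:)

SELECT s.*
FROM shipment s
WHERE s.carrier_code_id IN (
    SELECT cc.id
    FROM person p
    JOIN carrier_to_person ctp ON ctp.person_id = p.id
    JOIN carrier c             ON c.id = ctp.carrier_id
    JOIN carrier_code cc       ON cc.carrier_id = c.id
    WHERE p.id = 1234          -- hypothetical person id
);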
If so, I'd have to take the EXPLAIN ANALYZE results with a big grain of
salt, because what it's trying to do is add up a lot of
mostly-sub-millisecond intervals. What would essentially happen is that
whichever plan node had control at a particular millisecond boundary
would get charged for the whole millisecond.
"Ken Egervari" <[EMAIL PROTECTED]> writes:
Okay, here is the explain analyze I managed to get from work.
What platform is this on? It seems very strange/fishy that all the
actual-time values are exact integral milliseconds.
regards, tom lane
My machine is WinXP professional, atho
It might help the planner better estimate the number of cs rows
affected; whether this improves performance depends on whether
the best plans are sensitive to this.
I managed to try this to see if it did anything. Unfortunately, it made no
difference. It's still 250 milliseconds. It was a good
The left join is for eager loading so that I don't have to run a separate
query to fetch the children for each shipment. This really does improve
performance because otherwise you'll have to make N+1 queries to the
database, and that's just too much overhead.
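(To illustrate the N+1 pattern being avoided, the shape of the two approaches
is roughly the following; the shipment_id column name and the filters are
assumptions:)

-- N+1 pattern: one query for the shipments, then one per shipment for its children
SELECT * FROM shipment WHERE carrier_code_id = 1234;      -- hypothetical filter
SELECT * FROM shipment_status WHERE shipment_id = 5678;   -- repeated once per shipment returned

-- eager loading: one query that brings the children back alongside each shipment
SELECT s.*, ss.*
FROM shipment s
LEFT JOIN shipment_status ss ON ss.shipment_id = s.id
WHERE s.carrier_code_id = 1234;                           -- hypothetical filter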
are you saying that you are actually doing
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id = ctp.person_id
inner join shipment_status cs on s.current_status
First, what version of postgres, and have you run VACUUM ANALYZE recently?
Also, please attach the result of running EXPLAIN ANALYZE.
(eg, explain analyze select s.* from shipment ...)
I'm using postgres 8.0. I wish I could paste explain analyze, but I won't
be at work for a few days. I was hopi
I've tried to use Dan Tow's tuning method and
created all the right indexes from his diagramming method, but the query still
performs quite slowly, both inside the application and directly in pgAdmin
III. Can anyone be kind enough to help me tune it so that it performs
better in postgres? I do
Thanks again for your response. I'll try to clarify some of the metrics; it
took me a few days to figure out what the best join order would be.
By running some count queries on the production database, I noticed that there
were only 8 rows in release_code. The filtered column is unique, so that
means th
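(The count queries in question would have been along these lines; only the
release_code figure is quoted in the message, the other tables are listed
just to show the idea:)

SELECT count(*) FROM release_code;      -- 8 rows, per the message above
SELECT count(*) FROM shipment;          -- the larger tables (figures not quoted here)
SELECT count(*) FROM shipment_status;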
Well, postgres does what you asked. It will be slow, because you have a
full table join. LIMIT does not change this because the rows have to be
sorted first.
I am aware that LIMIT doesn't really affect the execution time all that
much. It does speed up the ORM, though, and keeps the rows to a managea
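(The point about sorting is that in a query shaped like the sketch below,
every joined row still has to be produced and sorted before the LIMIT can trim
the result; the sort column is hypothetical:)

SELECT s.*
FROM shipment s
INNER JOIN carrier_code cc ON s.carrier_code_id = cc.id
ORDER BY s.date_received DESC   -- hypothetical sort column
LIMIT 100;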
Yes, I'm very well aware of VACUUM and VACUUM
ANALYZE. I've even clustered the date index and so on to ensure faster
performance.
- Original Message -
From: David Parker
To: Ken Egervari; pgsql-performance@postgresql.org
Sent: Saturday, January 29, 2005
? I'm not really interested in this specific
case (I just made it up). I'm more interested in general solutions to this
general problem of big tables with poorly selective filters, where join order
doesn't seem to help much.
Thank you very much for your help.
Best Regards,
Ken Egervari