-> Seq Scan on partners p (cost=0.00..24.35 rows=435 width=30) (actual
time=0.039..9.383 rows=435 loops=1)
Total runtime: 3241.139 ms
(43 rows)
-
The DISTINCT ON condition took about the same amount of
time, statistically. Removing the DISTINCT entirely only gave a
very s
pen lead_request.
Would it be best to attempt to rewrite it to use IN?
Or should we try to tie it in with a join? I would
probably need a GROUP BY so I can just get a count of those
contacts with open lead_requests. Unless you know of a
better way?
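For illustration, the two rewrites being considered could look roughly like this. This is a sketch only: the full query isn't shown in the thread, so the contacts table, its columns, and the "open" predicate are assumptions; only lead_requests and contact_id appear in the original.

```sql
-- Sketch; names other than lead_requests and contact_id are assumed.

-- Variant 1: rewrite with IN (the planner can often turn this
-- into a single hashed subplan rather than a per-row subquery)
SELECT c.id, c.name
FROM contacts c
WHERE c.id IN (SELECT lr.contact_id
               FROM lead_requests lr
               WHERE lr.status = 'open');   -- "open" predicate assumed

-- Variant 2: plain join plus GROUP BY, which also yields the
-- per-contact count of open lead_requests mentioned above
SELECT c.id, c.name, count(*) AS open_lead_requests
FROM contacts c
JOIN lead_requests lr ON lr.contact_id = c.id
WHERE lr.status = 'open'
GROUP BY c.id, c.name;
```

Comparing EXPLAIN ANALYZE output for both variants against the DISTINCT ON version would show which plan the optimizer actually prefers.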
Thanks for your assistance. Thi
On Sat, 20 Aug 2005, John Mendenhall wrote:
> I need to improve the performance for the following
> query.
I have run the same query in the same database under
different schemas. Each schema has pretty much the same
tables and indices. One has an extra backup table and
an extra index whi
ery is slow.
My second and more important question is, does anyone have
any ideas or suggestions as to how I can increase the speed
for this query?
Things I have already done are: modifying the joins and conditions
so the query starts with smaller tables, thus the join set is smaller,
and modifying the
can get this to repeat consistently?
Please let me know if any of you have any pointers as to
the cause of the different query plans.
Thank you very much in advance for any pointers you can provide.
JohnM
On Tue, 19 Jul 2005, John Mendenhall wrote:
> I tuned a query last week to obtain acceptable performance.
> Here is my recorded explain analyze results:
>
> LOG: duration: 826.505 ms statement: explain analyze
> [cut for brevity]
>
> I rebooted the database machine la
--
There is definitely a difference in the query plans.
I am guessing this difference is the cause of the performance decrease.
However, nothing was changed in the postgresql.conf file.
I may have run something in the psql explain analyze session
a week ago, but I can't figure out what I changed.
So, the bo
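One routine pointer, not from the thread itself: when plans change even though postgresql.conf is unchanged, stale planner statistics (or table bloat since the last vacuum) are a common cause. A sketch, with table names assumed:

```sql
-- Refresh planner statistics and reclaim dead tuples, then
-- re-capture the plan to see whether it returns to the fast form.
VACUUM ANALYZE lead_requests;
VACUUM ANALYZE contacts;          -- table name assumed
EXPLAIN ANALYZE SELECT ...;       -- rerun the problem query here
```

Comparing the row-count estimates in the two EXPLAIN ANALYZE outputs (estimated vs. actual rows per node) usually shows whether statistics drift is to blame.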
Dennis,
On Fri, 01 Jul 2005, Dennis Bjorklund wrote:
> On Thu, 30 Jun 2005, John Mendenhall wrote:
>
> > Our setting for effective_cache_size is 2048.
> >
> > random_page_cost = 4, effective_cache_size = 2048: time approximately
> > 4500 ms
> > random
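For what it's worth, these planner settings can be tried per-session before committing anything to postgresql.conf; a sketch (the values are illustrative, and on PostgreSQL of this era effective_cache_size is measured in 8 kB pages):

```sql
-- Per-session experiment; nothing here persists past the connection.
SET random_page_cost = 2;
SET effective_cache_size = 20000;  -- in 8 kB pages; size to actual RAM
EXPLAIN ANALYZE SELECT ...;        -- rerun the problem query here
RESET random_page_cost;
RESET effective_cache_size;
```

This makes it cheap to bisect which setting actually changes the plan before editing the config file and reloading.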
000# in milliseconds
#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each
#---
# VERSION/PLATFORM COMPATIBILITY
#---
# - Previous Postgres Versions -
#add_missing_from = true
#re
er varying(2000) |
fulfillment_status_id | numeric(38,0) |
Indexes:
"lead_requests_pkey" primary key, btree (id)
"lead_requests_contact_id_idx" btree (contact_id)
"lead_requests_request_id_idx" btree (request_id)
Check constraints:
"