On Thu, 26 Apr 2012, Tom Lane wrote:
Date: Thu, 26 Apr 2012 15:17:18 -0400
From: Tom Lane
To: Richard Kojedzinszky
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] query optimization
Richard Kojedzinszky writes:
Dear list,
We have a database schema, which
On 04/26/2012 04:08 PM, Tom Lane wrote:
Thomas Kellerer writes:
Tom Lane wrote on 26.04.2012 21:17:
Um ... did you analyze all the tables, or just some of them? I get
sub-millisecond runtimes if all four tables have been analyzed, but it
does seem to pick lousy plans if, say, only a and b have been analyzed.
Thomas Kellerer writes:
> Tom Lane wrote on 26.04.2012 21:17:
>> Um ... did you analyze all the tables, or just some of them? I get
>> sub-millisecond runtimes if all four tables have been analyzed, but it
>> does seem to pick lousy plans if, say, only a and b have been analyzed.
> Here it's sim
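Tom's diagnosis above can be acted on directly. A minimal sketch, assuming the four tables follow the attached script's a/b/c/d naming (which is not shown in full here):

```sql
-- Analyze every table involved in the join, not just some of them;
-- a table with no entry in pg_stats forces the planner onto default
-- estimates, which is how the lousy plans arise.
ANALYZE a;
ANALYZE b;
ANALYZE c;
ANALYZE d;

-- Or simply refresh statistics for the whole database:
ANALYZE;
```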
Tom Lane wrote on 26.04.2012 21:17:
Richard Kojedzinszky writes:
Dear list,
We have a database schema, which looks the same as the attached script.
When filling the tables with data, and skipping analyze on the table (so
pg_stats contains no records for table 'a'), the first select in the
script runs fast, but after an
Richard Kojedzinszky writes:
> Dear list,
> We have a database schema, which looks the same as the attached script.
> When filling the tables with data, and skipping analyze on the table (so
> pg_stats contains no records for table 'a'), the first select in the
> script runs fast, but after an
Richard Kojedzinszky wrote:
> tuning our installation not to use sequence scans in this case?
Make sure effective_cache_size is set to the sum of shared_buffers
and whatever your OS shows as usable for caching. Try adjusting
cost factors: maybe random_page_cost between 1 and 2, and
cpu_tuple_cost.
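Kevin's advice translates to settings like the following. The values are illustrative only, not recommendations; on releases before ALTER SYSTEM existed (9.4) the same settings go in postgresql.conf:

```sql
-- effective_cache_size: roughly shared_buffers plus what the OS
-- reports as usable filesystem cache. Illustrative value.
ALTER SYSTEM SET effective_cache_size = '6GB';

-- Lower random_page_cost toward 1 when the working set is mostly
-- cached, making index scans look cheaper relative to seq scans.
ALTER SYSTEM SET random_page_cost = 1.5;

SELECT pg_reload_conf();
```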
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Yeah, it's spending quite a lot of time finding the first matching row
>> in each child table. I'm curious why that is though; are the child
>> tables not set up with nonoverlapping firstloadtime ranges?
> The issue here is that t
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Yeah, it's spending quite a lot of time finding the first matching row
> in each child table. I'm curious why that is though; are the child
> tables not set up with nonoverlapping firstloadtime ranges?
They are set up w/ nonoverlapping firstloadtime ranges
Stephen Frost writes:
> What I think is happening here is that PG is pushing down that filter
> (not typically a bad thing..), but with that condition, it's going to
> scan the index until it finds a match for that filter before returning
> back up only to have that result cut out due to the limit
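One common remedy for the behavior Stephen describes is an index that covers the pushed-down filter column first and the ordering column second, so each child table finds its first matching row immediately instead of walking the index. A sketch; the table and column names mirror the thread, and the exact index shape is an assumption:

```sql
-- Equality column first, sort key second, so an
-- ORDER BY firstloadtime ... LIMIT can stop early per child.
CREATE INDEX ON osmoduleloads_2011_09_14 (hosts_guid, firstloadtime);
```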
Stephen,
Yes, I couldn't agree more. The next two things I will be looking at very
carefully are the timestamps and indexes. I will reply to this post if
either dramatically helps.
Thanks again for all your help. My eyes were starting to bleed from staring
at explain logs!
Mike
On Thu, Sep 2
Mike,
* Michael Viscuso (michael.visc...@getcarbonblack.com) wrote:
> I spent the better part of the day implementing an application layer
> nested loop and it seems to be working well. Of course it's a little
> slower than a Postgres only solution because it has to pass data back
> and forth for each daily table
Stephen,
I spent the better part of the day implementing an application layer
nested loop and it seems to be working well. Of course it's a little
slower than a Postgres only solution because it has to pass data back
and forth for each daily table
* Michael Viscuso (michael.visc...@getcarbonblack.com) wrote:
> Adding the final condition hosts_guid = '2007075705813916178' is what
> ultimately kills it http://explain.depesz.com/s/8zy. By adding the
> host_guid, it spends considerably more time in the older tables than
> without this condition
Thanks Ken,
I'm discussing with my coworker how to best make that change *as we
speak*. Do you think this will also resolve the original issue I'm
seeing where the query doesn't "limit out properly" and spends time in
child tables that won't yield any results? I was hoping that by using
the check constraints
On Wed, Sep 21, 2011 at 11:22:53PM -0400, Tom Lane wrote:
> Michael Viscuso writes:
> > Greg/Tom, you are correct, these columns should be modified to whatever
> > is easiest for Postgres to recognize 64-bit unsigned integers. Would
> > you still recommend bigint for unsigned integers? I likely
Michael Viscuso writes:
> Greg/Tom, you are correct, these columns should be modified to whatever
> is easiest for Postgres to recognize 64-bit unsigned integers. Would
> you still recommend bigint for unsigned integers? I likely read the
> wrong documentation that suggested bigint for signed 64
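For context: bigint is a signed 64-bit type (range -2^63 to 2^63-1), so a full unsigned 64-bit value does not fit; numeric(20,0) does, at some performance cost. A hedged sketch of both options discussed here (table names are hypothetical):

```sql
-- Option 1: numeric(20,0) holds the full unsigned 64-bit range,
-- but comparisons are slower than 8-byte integer comparisons.
CREATE TABLE events_numeric (firstloadtime numeric(20,0) NOT NULL);

-- Option 2: keep bigint and shift values by 2^63 on the application
-- side (an assumed convention, not from the thread) to preserve
-- ordering while staying in the signed range.
CREATE TABLE events_bigint (firstloadtime bigint NOT NULL);
```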
Thanks guys,
First of all, I should have included my postgres.conf file with the
original submission. Sorry about that. It is now attached.
Based on a recommendation, I also should have shown the parent child
relationship between osmoduleloads and its daily partitioned tables. To
reduce clutter
Greg Smith writes:
> That weird casting can't be helping. I'm not sure if it's your problem
> here, but the constraint exclusion code is pretty picky about matching
> the thing you're looking for against the CHECK constraint, and this is a
> messy one. The bigint conversion in the middle ther
On 09/21/2011 07:14 PM, Michael Viscuso wrote:
Check constraints:
"osmoduleloads_2011_09_14_event_time_check" CHECK (event_time =
'2011-09-14 00:00:00'::timestamp without time zone)
"osmoduleloads_2011_09_14_firstloadtime_check" CHECK
(firstloadtime >= 1296044640::bigint::numeric
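Greg's point about the messy cast suggests writing the CHECK constraint in the column's own type, so constraint exclusion can match it against the query's predicate. A sketch; the upper bound literal is illustrative, not taken from the thread:

```sql
-- Constraint exclusion matches plain, same-type comparisons most
-- reliably; avoid chained casts like 1296044640::bigint::numeric.
ALTER TABLE osmoduleloads_2011_09_14
  ADD CONSTRAINT osmoduleloads_2011_09_14_firstloadtime_check
  CHECK (firstloadtime >= 1296044640 AND firstloadtime < 1296131040);
```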
Hi,
2011/8/30 Szymon Kosok :
> 2011/8/30 Ondrej Ivanič :
>> Could you please re-post your explain using this web site:
>> http://explain.depesz.com/ and post links to Stackoverflow question?
>
> Here it is: http://explain.depesz.com/s/Iaa
>
>> - try to disable nested loop join (set enable_nestloop
2011/8/30 Ondrej Ivanič :
> Could you please re-post your explain using this web site:
> http://explain.depesz.com/ and post links to Stackoverflow question?
Here it is: http://explain.depesz.com/s/Iaa
> - try to disable nested loop join (set enable_nestloop=off)
Even worse performance (http://e
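The experiment in this exchange looks like the following; such planner toggles are diagnostic tools, not production settings:

```sql
-- Disable nested loop joins for this session only, then re-run the
-- plan to see what the planner falls back to.
SET enable_nestloop = off;
EXPLAIN ANALYZE
SELECT 1;  -- placeholder: substitute the original query here
RESET enable_nestloop;
```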
Hi,
On 30 August 2011 15:36, Szymon Kosok wrote:
> Hello,
>
> I asked that question on StackOverflow, but didn't get any valuable
> response, so I'll ask it here. :)
>
> I have such query:
Could you please re-post your explain using this web site:
http://explain.depesz.com/ and post links to Stackoverflow question?
Hello Zotov,
Somehow the equivalence d2.basedon=d1.id is not used in the slow query,
probably because the equivalence constant value would be used inside a
not-base expression (the OR). You can see that the equivalence values
*are* used by changing the or to an and and compare both queries. Th
In response to Kaloyan Iliev Iliev :
> Hi,
> Can anyone suggest why this query is so slow?
>-> Index Scan using
> ms_commands_history_ms_device_id_idx on ms_commands_history ch
> (cost=0.00..4.33 rows=1 width=8) (actual time=0.163..25254.004 rows=9807
> loops=1)
Estimat
Sorry for the noise.
I make vacuum analyze and the problem is solved.
Kaloyan Iliev
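The resolution here amounts to refreshing the planner's statistics; the huge gap between the estimate (rows=1) and reality (rows=9807) is the signature of stale stats:

```sql
-- VACUUM ANALYZE reclaims dead tuples and rebuilds statistics in one
-- pass, correcting the row estimates the bad plan was built on.
VACUUM ANALYZE ms_commands_history;
```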
Kaloyan Iliev Iliev wrote:
Hi,
Can anyone suggest why this query is so slow?
SELECT version();
version
On Fri, Nov 27, 2009 at 4:47 PM, Faheem Mitha wrote:
If not, you might want to look at some way of pre-marking the
non-duplicate rows so that you don't have to recompute that each time.
>>>
>>> What are the options re pre-marking?
>>
>> Well, what I usually do is - if I'm going to do the
On Wed, 25 Nov 2009, Robert Haas wrote:
On Wed, Nov 25, 2009 at 5:54 PM, Faheem Mitha wrote:
Well, any method of DISTINCT-ifying is likely to be somewhat slow, but
I've had good luck with SELECT DISTINCT ON (...) in the past, as
compared with other methods. YMMV - the only way to find out
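SELECT DISTINCT ON, mentioned above, keeps exactly one row per group, chosen by the ORDER BY. A generic sketch; the table and column names are assumptions, not from the thread:

```sql
-- One row per id: the first by ts descending, i.e. the latest sample.
SELECT DISTINCT ON (id) id, ts, payload
FROM samples
ORDER BY id, ts DESC;
```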
On Wed, Nov 25, 2009 at 5:54 PM, Faheem Mitha wrote:
>
> Hi Robert,
>
> Thanks very much for your suggestions.
>
>>> Hi everybody,
>>>
>>> I've got two queries that need optimizing. Actually, there are others,
>>> but these are pretty representative.
>>>
>>> You can see the queries and the corresponding plans at
Hi Robert,
Thanks very much for your suggestions.
On Wed, 25 Nov 2009, Robert Haas wrote:
On Mon, Nov 23, 2009 at 5:47 PM, Faheem Mitha wrote:
Hi everybody,
I've got two queries that need optimizing. Actually, there are others,
but these are pretty representative.
You can see the queries and the corresponding plans at
On Mon, Nov 23, 2009 at 5:47 PM, Faheem Mitha wrote:
>
> Hi everybody,
>
> I've got two queries that need optimizing. Actually, there are others, but
> these are pretty representative.
>
> You can see the queries and the corresponding plans at
>
> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf
On Tue, 24 Nov 2009, Sebastian Jörgensen wrote:
How often are the tables you query from updated?
Quite rarely. Once in a while. The large tables, eg. geno, are basically
static.
Regards, Faheem.
Rgds
Sebastian
On Tue, Nov 24, 2009
How often are the tables you query from updated?
Rgds
Sebastian
On Tue, Nov 24, 2009 at 12:52 AM, marcin mank wrote:
> On Tue, Nov 24, 2009 at 12:49 AM, Faheem Mitha
> wrote:
> >
> > Yes, sorry. I'm using Postgresql 8.4. I guess I should go through
> diag.pdf
> > and make sure all the information is current.
On Tue, Nov 24, 2009 at 12:49 AM, Faheem Mitha wrote:
>
> Yes, sorry. I'm using Postgresql 8.4. I guess I should go through diag.pdf
> and make sure all the information is current. Thanks for pointing out my
> error.
>
excellent report!
About the copy problem: you seem to have created the primary
On Mon, 23 Nov 2009, Thom Brown wrote:
Hi Faheem,
There appears to be a discrepancy between the 2 PDFs you provided. One
says you're using PostgreSQL 8.3, and the other shows you using common
table expressions, which are only available in 8.4+.
Yes, sorry. I'm using Postgresql 8.4. I guess I should go through diag.pdf
2009/11/23 Faheem Mitha
>
> Hi everybody,
>
> I've got two queries that need optimizing. Actually, there are others, but
> these are pretty representative.
>
> You can see the queries and the corresponding plans at
>
> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf
>
> or
>
> http://bulldog.
I am struggling with the following query, which fetches a random subset
of 200 questions that match certain tags within certain languages.
However, the query takes forever to evaluate, even though I have a
"limit 200" appended. Any ideas on how to optimize it?
QUERY: ===
On Sunday 30 November 2008 19:45:11, tmp wrote:
> I am struggling with the following query, which fetches a random subset
> of 200 questions that match certain tags within certain languages.
> However, the query takes forever to evaluate, even though I have a
> "limit 200" appended. An
On 5/7/08, Tarcizio Bini <[EMAIL PROTECTED]> wrote:
> I'm working on optimizing queries using the Kruskal algorithm
> (http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4318118).
That paper looks very interesting. I would love to hear what the
PostgreSQL committers think of this algorithm.
Ale
It's not necessarily the join order that's an issue; it could also be
due to the merge join that it does in the first case. I've also run into
situations where the cost estimate for a merge join is way off the mark.
Rather than forcing the join order, you might try setting
enable_mergejoin=false.
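The suggested experiment, scoped to a single transaction so the setting cannot leak into other work:

```sql
BEGIN;
SET LOCAL enable_mergejoin = off;  -- steer the planner away from merge joins
EXPLAIN ANALYZE
SELECT 1;  -- placeholder: substitute the problematic query here
ROLLBACK;  -- SET LOCAL reverts automatically with the transaction
```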
Reinhard Vicinus <[EMAIL PROTECTED]> writes:
> PostgreSQL version: 8.1.6
> The following query needs a lot of time because the query planner
> reorders the joins:
Try reducing random_page_cost, increasing effective_cache_size, and/or
updating to PG 8.2. Any of these are likely to make it like th
Yes, that helps a great deal. Thank you so much.
- Original Message -
From: "Richard Huxton"
To: <[EMAIL PROTECTED]>
Cc:
Sent: Thursday, January 26, 2006 11:47 AM
Subject: Re: [PERFORM] Query optimization with X Y JOIN
[EMAIL PROTECTED] wrote:
If I want my datab
[EMAIL PROTECTED] wrote:
If I want my database to go faster, due to X then I would think that the
issue is about performance. I wasn't aware of a particular constraint on X.
You haven't asked a performance question yet though.
I have more than a rudimentary understanding of what's going on here
[EMAIL PROTECTED] wrote:
If I want my database to go faster, due to X then I would think that
the issue is about performance. I wasn't aware of a particular
constraint on X.
I have more than a rudimentary understanding of what's going on here,
I was just hoping that someone could shed some lig
Subject: Re: [PERFORM] Query optimization with X Y JOIN
First, this isn't really the right place to ask -- this forum is about
performance, not SQL syntax.
Second, this isn't a question anyone can answer in a reasonable length of
time. What you're asking for usuall
First, this isn't really the right place to ask -- this forum is about
performance, not SQL syntax.
Second, this isn't a question anyone can answer in a reasonable length of time.
What you're asking for usually is taught in a class on relational database
theory, which is typically a semester
James G Wilkinson wrote:
All,
...
Firstly, I am frankly mystified on how to interpret all this. If anyone
could point me to a document or two that will help me decipher this,
I will greatly appreciate it.
I assume you have looked at:
http://www.postgresql.org/docs/8.0/static/performance-tips.html
Hi,
Thanks for the help. I actually got around with it by doing the following.
I created a temporary table:
create table statesnew as select distinct state,state_code from postalcode where lower(country)='us';
And then changed the query to :
SELECT (SELECT sn.state FROM statesnew sn WHERE U
Please post in plaintext, not html where possible.
Your group by clause was 'myst'...was that supposed to be mystate?
Here is something you might try...use the original query form and create a
function which resolves the state code from the input data...you are already
doing that with upper.
So,
sarlav kumar wrote:
Hi all,
Can someone please help me optimize this query? Is there a better way to
write this query? I am generating a report of transactions ordered by
time and with details of the sender and receiver etc.
SELECT distinct a.time::date ||''||substring(a.time::time::text,1,8)
Stephan, Tom-
Thanks. I now see that DISTINCT can't be moved within the plan as I thought. This is exactly the thinko that I was hoping someone would expose.
I've decided to abandon the DISTINCT clause. The view is more general and sufficiently fast without it, and callers can always add it t
On Thu, 2004-01-29 at 23:23, Dennis Bjorklund wrote:
> On Thu, 29 Jan 2004, Jack Coates wrote:
>
> > > Probably better to repost it as a gzip'd attachment. That should
> >
> > complete with a picture of the GUI version. 26k zipped, let's see if
> > this makes it through.
>
> Are you sure you attached it?
Dennis Bjorklund <[EMAIL PROTECTED]> writes:
> Getting rid of the group by would not give that kind of speedup?
No. Getting rid of the per-row subqueries (or at least finding a way to
make 'em a lot cheaper) is the only way to make any meaningful change.
regards, tom lane
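Tom's point, replacing a per-row subquery with a single aggregation joined back in, in schematic form (all names are illustrative, not from the thread):

```sql
-- Slow: the subquery runs once per outer row.
SELECT m.id,
       (SELECT count(*) FROM orders o WHERE o.member_id = m.id) AS n
FROM members m;

-- Faster: aggregate once, then join the result back in a single pass.
SELECT m.id, coalesce(o.n, 0) AS n
FROM members m
LEFT JOIN (SELECT member_id, count(*) AS n
           FROM orders
           GROUP BY member_id) o
       ON o.member_id = m.id;
```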
Tom Lane <[EMAIL PROTECTED]> writes:
> Jack Coates <[EMAIL PROTECTED]> writes:
> > yup -- here it is. It will probably be a nasty mess after linewrap gets
> > done with it,
>
> yup, sure is :-( If I was familiar with the layout I could probably
> decipher where the line breaks are supposed to be, but right now I'm just confused.
On Thu, 29 Jan 2004, Jack Coates wrote:
> > Probably better to repost it as a gzip'd attachment. That should
>
> complete with a picture of the GUI version. 26k zipped, let's see if
> this makes it through.
Are you sure you attached it?
At least when it got here there was no attachment.
On Thu, 29 Jan 2004, Tom Lane wrote:
> > jackdb-# GROUP BY memberid_ HAVING (
>
> Um, that's not what I had in mind at all. Does GROUP BY actually do
> anything at all here? (You didn't answer me as to whether memberid_
> is a unique identifier or not, but if it is, this GROUP BY is just an
>
Stephan Szabo <[EMAIL PROTECTED]> writes:
> On Thu, 29 Jan 2004, Reece Hart wrote:
>> I have a large query which I would like to place in a view. The explicit
>> query is sufficiently fast, but the same query as a view is much slower
>> and uses a different plan. I would appreciate an explanation o
On Thu, 29 Jan 2004, Reece Hart wrote:
> I have a large query which I would like to place in a view. The explicit
> query is sufficiently fast, but the same query as a view is much slower
> and uses a different plan. I would appreciate an explanation of why this
> is, and, more importantly whether
[EMAIL PROTECTED]> create view v1 as
select distinct on (AH.p2gblataln_id)
    AH.p2gblataln_id, H.pseq_id,
    min(H.pstart) as "pstart", max(H.pstop) as "pstop",
    A.ident, (A.ident/Q.len::float*100)::int as "pct_ident",
    sum(H.pstop-H.pstart+1) as "aln_length",
    H.genasm_id, H.chr, H.plus_strand, min(H.gstart) as
On Thu, 2004-01-29 at 14:01, Tom Lane wrote:
> Probably better to repost it as a gzip'd attachment. That should
> protect the formatting and get it into the list archives.
>
> regards, tom lane
complete with a picture of the GUI version. 26k zipped, let's see if
this makes
Jack Coates <[EMAIL PROTECTED]> writes:
> yup -- here it is. It will probably be a nasty mess after linewrap gets
> done with it,
yup, sure is :-( If I was familiar with the layout I could probably
decipher where the line breaks are supposed to be, but right now I'm
just confused.
> so let me know
On Thu, 2004-01-29 at 11:31, Tom Lane wrote:
> Jack Coates <[EMAIL PROTECTED]> writes:
> > jackdb=# explain SELECT DISTINCT members_.memberid_
> > jackdb-# FROM members_
> > jackdb-# WHERE ( members_.List_='list1'
> > jackdb(# AND members_.MemberType_='normal'
> > jackdb(# AND members_.SubType_='mail'
Jack Coates <[EMAIL PROTECTED]> writes:
> jackdb=# explain SELECT DISTINCT members_.memberid_
> jackdb-# FROM members_
> jackdb-# WHERE ( members_.List_='list1'
> jackdb(# AND members_.MemberType_='normal'
> jackdb(# AND members_.SubType_='mail'
> jackdb(# AND members_.emailaddr_ IS NOT NULL )
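Tom's later question in this thread is whether memberid_ is unique in members_; if it is, both the DISTINCT and the GROUP BY variants are pure overhead and can simply be dropped:

```sql
-- With a unique memberid_, no de-duplication step is needed at all.
SELECT members_.memberid_
FROM members_
WHERE members_.List_ = 'list1'
  AND members_.MemberType_ = 'normal'
  AND members_.SubType_ = 'mail'
  AND members_.emailaddr_ IS NOT NULL;
```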
On Thu, 2004-01-29 at 10:05, Tom Lane wrote:
> Jack Coates <[EMAIL PROTECTED]> writes:
> > That completed in 3.5 minutes on MS-SQL. I killed the query this morning
> > after 15 hours on PostgreSQL 7.4. I tried a GROUP BY memberid_ HAVING
> > variation, which completed in 59 seconds on MS-SQL. I kil
Jack Coates <[EMAIL PROTECTED]> writes:
> That completed in 3.5 minutes on MS-SQL. I killed the query this morning
> after 15 hours on PostgreSQL 7.4. I tried a GROUP BY memberid_ HAVING
> variation, which completed in 59 seconds on MS-SQL. I killed it after 35
> minutes on PostgreSQL.
Hm. I'd li
On Wed, 2004-01-28 at 18:04, Tom Lane wrote:
> Jack Coates <[EMAIL PROTECTED]> writes:
> > I've got a query that needs some help, please. Is there a way to avoid
> > all the looping? I've got freedom to work with the double-indented
> > sections below ) AND (, but the initial select distinct wrapper is much
Jack Coates <[EMAIL PROTECTED]> writes:
> I've got a query that needs some help, please. Is there a way to avoid
> all the looping? I've got freedom to work with the double-indented
> sections below ) AND (, but the initial select distinct wrapper is much
> more difficult to change. This is auto-generated