On Sun, Dec 9, 2012 at 7:16 PM, Jeff Janes wrote:
> The obvious difference is that this one finds all 5 buffers it needs
> in buffers already, while the first one had to read them in. So this
> supports the idea that your data has simply grown too large for your
> RAM.
>
> Cheers,
>
> Jeff
>
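To make the distinction Jeff is drawing concrete (the figures below are invented, not taken from this thread), the Buffers line that EXPLAIN (ANALYZE, BUFFERS) prints for each plan node separates pages found in shared_buffers from pages that had to be read in:

EXPLAIN (ANALYZE, BUFFERS)
select massive_expansion(ctn, the_range) from critical_visitors;
-- a node reporting
--   Buffers: shared hit=5
-- found everything already cached, while one reporting
--   Buffers: shared hit=1 read=4
-- had to pull four of its five pages from disk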
On Mon, Dec 3, 2012 at 5:56 AM, Henry Drexler wrote:
> On Sun, Dec 2, 2012 at 12:44 AM, Jeff Janes wrote:
>>
>> Could you do it for the recursive
>> SQL (the one inside the function) like you had previously done for the
>> regular explain?
>>
>> Cheers,
>>
>> Jeff
>
>
> Here they are:
>
> for the
On Sun, Dec 2, 2012 at 12:44 AM, Jeff Janes wrote:
> Could you do it for the recursive
> SQL (the one inside the function) like you had previously done for the
> regular explain?
>
> Cheers,
>
> Jeff
>
Here they are:
for the 65 million row table:
"Index Scan using ctn_source on massive (cost=0
On Fri, Nov 30, 2012 at 12:22 PM, Henry Drexler wrote:
> On Fri, Nov 30, 2012 at 1:42 PM, Jeff Janes wrote:
>>
>> Can you report the EXPLAIN (ANALYZE, BUFFERS) instead?
>
>
> Thanks, here they are:
>
> for the approx 65 million row approx 50 min version:
>
> EXPLAIN (ANALYZE, BUFFERS)
> select
>
On Fri, Nov 30, 2012 at 1:23 PM, Kevin Grittner wrote:
> Henry Drexler wrote:
>
> > why would the query time go from 4 minutes to over 50, for an
> > increase in table rows from 30 million to 65 million?
>
> Did the active (frequently referenced) portion of the database go
> from something which
On Fri, Nov 30, 2012 at 1:42 PM, Jeff Janes wrote:
> Can you report the EXPLAIN (ANALYZE, BUFFERS) instead?
Thanks, here they are:
for the approx 65 million row approx 50 min version:
EXPLAIN (ANALYZE, BUFFERS)
select
massive_expansion(ctn,the_range)
from
critical_visitors;
"Seq Scan on crit
On Fri, Nov 30, 2012 at 5:22 AM, Henry Drexler wrote:
> Hello, and thank you in advance.
>
>
> Beyond the date vs timestamp troubleshooting I did, I am not sure what else
> to look for. I know the increase of rows will have some effect, but I just
> don't think the query should go from 4 minutes to
Henry Drexler wrote:
> why would the query time go from 4 minutes to over 50, for an
> increase in table rows from 30 million to 65 million?
Did the active (frequently referenced) portion of the database go
from something which fit in cache to something which didn't? Did
any hash table or sort no
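One rough way to answer Kevin's question is to compare the size of what the queries actually touch against shared_buffers and physical RAM; a sketch using the size functions available since 8.1 ('massive' is the table named elsewhere in this thread):

select pg_size_pretty(pg_database_size(current_database()));
select pg_size_pretty(pg_total_relation_size('massive'));  -- table plus its indexes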
On Fri, Nov 30, 2012 at 8:22 AM, Henry Drexler wrote:
> Hello, and thank you in advance.
>
>
> Beyond the date vs timestamp troubleshooting I did,
>
I realize this could be confusing - since I ruled out that difference, the
real question is - given this setup, why would the query time go from 4
On 14 Sep 2011, at 20:45, Brian Fehrle wrote:
>> That is only about 1/30th of your table. I don't think a seqscan makes sense
>> here unless your data is distributed badly.
>>
> Yeah, the more I look at it, the more I think it's postgres _thinking_ that
> it's faster to do a sequential scan. I'll
On 09/14/2011 01:10 AM, Alban Hertroys wrote:
On 13 Sep 2011, at 23:44, Brian Fehrle wrote:
These queries basically do a 'select max(primary_key_column) from table group by
column1, column2." Because of the group by, we would result in a sequential
scan of the entire table which proves to be
On 13 Sep 2011, at 23:44, Brian Fehrle wrote:
> These queries basically do a 'select max(primary_key_column) from table group
> by column1, column2." Because of the group by, we would result in a
> sequential scan of the entire table which proves to be costly.
That seems to suggest a row where
Hi,
On 14 September 2011 07:44, Brian Fehrle wrote:
> 2. I have appropriate indexes where they need to be. The issue is in the
> query planner not using them due to it (i assume) just being faster to scan
> the whole table when the data set it needs is as large as it is.
Try to reduce random_pag
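A sketch of testing that suggestion per session before changing postgresql.conf (the table and column names below are the placeholders used elsewhere in the thread, not real objects):

set random_page_cost = 2;   -- default is 4; lower values make index scans look cheaper to the planner
explain analyze
select max(primary_key_column) from some_table group by column1, column2;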
Hi all,
I've got a large table that has 15 million + rows in it, and a set
of queries I've been trying to speed up. The table has a primary key
column, and a couple hundred other columns.
These queries basically do a 'select max(primary_key_column) from table
group by column1, column2." B
Hello,
I have a question regarding query performance from two pgsql applications,
pgAdmin III 1.8 and Navicat for PostgreSQL 8. Both are connected to the same
server (8.3) on the local machine and run the same query for approx. 160,000 rows
with one min, one max and addition of the min and max values on two table
> [ scratches head... ] Your example command works as expected for me.
> [ rereads thread... ] Oh, you're running 8.1. I think you have to
> do the command as a superuser to get that output in 8.1. Later versions
> are less picky.
Yes, with the right incantations, the FSM information does ap
Brian Cox <[EMAIL PROTECTED]> writes:
> I've already posted the tail of this output previously.
> I conclude that these lines are not in this file. Where
> did they go?
[ scratches head... ] Your example command works as expected for me.
[ rereads thread... ] Oh, you're running 8.1. I think yo
Tom Lane [EMAIL PROTECTED] wrote:
At the very end ... you're looking for these messages:
ereport(elevel,
(errmsg("free space map contains %d pages in %d relations",
storedPages, numRels),
errdetail("A total of %.0f page slots are in use (including
overhe
Brian Cox <[EMAIL PROTECTED]> writes:
> Any hints as to where the FSM info is in this file?
At the very end ... you're looking for these messages:
ereport(elevel,
(errmsg("free space map contains %d pages in %d relations",
storedPages, numRels),
errdetail("
Scott Marlowe [EMAIL PROTECTED] wrote:
There's bits spread throughout the file, but the summary is at the bottom.
Here's a tail of the 'vacuum verbose' output:
INFO: vacuuming "pg_toast.pg_toast_797619965"
INFO: index "pg_toast_797619965_index" now contains 0 row versions in 1 pages
DETAIL
On Thu, Apr 24, 2008 at 6:23 PM, Brian Cox <[EMAIL PROTECTED]> wrote:
> Tom Lane [EMAIL PROTECTED] wrote:
>
> > You need a database-wide vacuum verbose (not just 1 table) to get that
> > output ...
> >
>
> I ran:
>
> > psql -U admin -d cemdb -c 'vacuum verbose' > /tmp/pgvac.log 2>&1
>
> the out
Tom Lane [EMAIL PROTECTED] wrote:
You need a database-wide vacuum verbose (not just 1 table) to get that
output ...
I ran:
> psql -U admin -d cemdb -c 'vacuum verbose' > /tmp/pgvac.log 2>&1
the output file has 2593 lines and, while I haven't looked at all of
them, a:
> fgrep -i fsm /tmp/p
Brian Cox <[EMAIL PROTECTED]> writes:
> I read in another thread that vacuum verbose would tell me how much FSM
> is needed, but I ran it and didn't see any output about this.
You need a database-wide vacuum verbose (not just 1 table) to get that
output ...
regards, tom l
Tom Lane [EMAIL PROTECTED] wrote:
I suspect that your FSM settings are too low, causing free space found
by VACUUM to be forgotten about.
I read in another thread that vacuum verbose would tell me how much FSM
is needed, but I ran it and didn't see any output about this. What is
the way to de
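For anyone following along on 8.3 or earlier, the limits in question are set in postgresql.conf and need a restart to change; a sketch of checking the current values to compare against what a database-wide VACUUM VERBOSE reports at the end:

show max_fsm_pages;       -- must be at least the "page slots needed" figure
show max_fsm_relations;   -- must be at least the number of relations reported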
Brian Cox <[EMAIL PROTECTED]> writes:
> I have a largish (pg_dump output is 4G) database. The query:
> select count(*) from some-table
> was taking 120 secs to report that there were 151,000+ rows.
> This seemed very slow. This db gets vacuum'd regularly (at least once
> per day). I also did a manu
I have a largish (pg_dump output is 4G) database. The query:
select count(*) from some-table
was taking 120 secs to report that there were 151,000+ rows.
This seemed very slow. This db gets vacuum'd regularly (at least once
per day). I also did a manual 'vacuum analyze', but after it completed,
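A rough way to see whether that 120-second count(*) is wading through dead space is to look at the planner statistics for the table (the name below is a stand-in, since the post only says "some-table"):

select relname, relpages, reltuples
from pg_class
where relname = 'some_table';
-- many pages relative to the number of live tuples suggests bloat that plain VACUUM will not shrink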
On Jan 14, 2008, at 3:58 AM, pepone.onrez wrote:
I have this query in a table with 150 thousand tuples and it takes
too long
t_documentcontent._id AS _id
FROM t_documentcontent LIMIT 50 OFFSET 8
You want an ORDER BY there. Not only will it probably speed things
up, without it
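A sketch of the suggestion, assuming _id is the primary key and therefore indexed; with an ORDER BY on an indexed column the planner can walk the index and stop after the first 58 rows instead of returning an arbitrary slice:

SELECT t_documentcontent._id AS _id
FROM t_documentcontent
ORDER BY _id
LIMIT 50 OFFSET 8;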
Oh - if you do this then make sure that you have the primary key index on
overview too.
Alex
On Jan 14, 2008 12:53 AM, Alex Turner <[EMAIL PROTECTED]> wrote:
> If you combine it with Tom Lane's suggestion - it will go even better,
> something like:
>
> select * from t_documentcontent where _id i
If you combine it with Tom Lane's suggestion - it will go even better,
something like:
select * from t_documentcontent where _id in (select _id from overview where
_id>x order by _id limit 50);
Alex
On Jan 13, 2008 11:59 PM, pepone. onrez <[EMAIL PROTECTED]> wrote:
>
> Thanks Alex
>
> I test yo
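A usage sketch of Alex's combined approach: x stands for the largest _id already shown, and on the first page the WHERE _id > x condition is simply dropped:

SELECT *
FROM t_documentcontent
WHERE _id IN (SELECT _id FROM overview WHERE _id > x ORDER BY _id LIMIT 50);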
Thanks Alex
I tested your solution and it is really much faster.
Nested Loop (cost=1743.31..2044.58 rows=50 width=908) (actual time=101.695..106.178 rows=50 loops=1)
  -> HashAggregate (cost=1743.31..1743.31 rows=50 width=108) (actual time=101.509..101.567 rows=50 loops=1)
    -> Subquery Sca
Here is a table I threw together to demonstrate the approximate speed of a
materialized view in this case:
trend=# explain analyze select property_id from overview order by
property_id limit 50 offset 5;
                                  QUERY PLAN
--
If you have to access the data this way (with no where clause at all - which
sometimes you do) then I have already provided a solution that will work
reasonably well. If you create what is essentially a materialized view of
just the id field, the sequence scan will return much fewer pages than whe
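Because overview here is an ordinary table rather than a true view, it has to be refreshed whenever t_documentcontent changes; a minimal sketch:

TRUNCATE overview;
INSERT INTO overview SELECT _id FROM t_documentcontent;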
Sorry Alex, I forgot to mention that I had seqscan off in my last test.
Now I have set seqscan on and indexscan on and added order by _id.
The table has an index on the _id field
CREATE INDEX i_documentcontent_document
ON t_documentcontent
USING btree
(_document);
The database was recently v
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> On Jan 13, 2008 8:58 PM, pepone. onrez <[EMAIL PROTECTED]> wrote:
>> t_documentcontent._id AS _id
>> FROM t_documentcontent LIMIT 50 OFFSET 8
> with no order by, and possibly no index on t_documentcontent._id,
> there's no choice but a seq scan.
M
On Jan 13, 2008 8:58 PM, pepone. onrez <[EMAIL PROTECTED]> wrote:
> I have this query in a table with 150 thousand tuples and it takes too long
>
> t_documentcontent._id AS _id
> FROM t_documentcontent LIMIT 50 OFFSET 8
>
> here is the explain output
>
> "Limit (cost= 100058762.30..1000
If you haven't already, make sure you've done a vacuum full recently. When
in doubt, pg_dump the db, and reload it, and see if that helps, but this
works for me:
create table overview as select _id from t_documentcontent;
alter table overview add constraint overview_pkey primary key (_id);
selec
I have this query in a table with 150 thousand tuples and it takes too long
t_documentcontent._id AS _id
FROM t_documentcontent LIMIT 50 OFFSET 8
here is the explain output
"Limit (cost=100058762.30..100058799.02 rows=50 width=58) (actual time=
19433.474..19433.680 rows=50 loops=1)"
> So, what can I do to execute it as if it were the first
> time again?
Reboot.
As Lew pointed out, that might not actually be a good idea, because caching
means that most queries will most of the time not run with that "first time"
performance.
--
Scott Ribe
[EMAIL PROTECTED]
http://www.killerb
Hello, All.
I have a query that I want to test twice to see its performance with
and without another program running (anti-virus program). I know that
if you run the same query for the second time, it will be faster than
the first time. So, how can I do to execute it as if it was the first
time ag
dcrespo wrote:
I have a query that I want to test twice to see its performance with
and without another program running (anti-virus program). I know that
if you run the same query for the second time, it will be faster than
the first time. So, what can I do to execute it as if it were the first
tim
Steve Spicklemire wrote:
Here is the function body... the data is stored in an XML "pickle". I
had hoped that it would only be called in building the index.
Since the query uses it in the 'filter' step... I'm not sure if it's
using the index or not.
Does marking the function immutable hel
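A hedged sketch of what is being asked: an expression index can only be built on a function declared IMMUTABLE, and declaring it that way is only safe if the result really depends on nothing but the argument. The table name, argument type and dummy body below are assumptions, not taken from the thread:

CREATE OR REPLACE FUNCTION get_cem_for_directBurial(text)
RETURNS text AS $$
    -- placeholder body; the real function digs the code out of the XML "pickle"
    SELECT 'SOME_CEMETERY_CODE'::text;
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX people_cem_idx ON people (get_cem_for_directBurial(personid));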
Hi Richard,
On Jul 19, 2007, at 12:49 AM, Richard Huxton wrote:
Steve Spicklemire wrote:
I also have a function "get_cem_for_directBurial(personid)" that
is expensive to call, but it's also indexed, so I hoped that the
index would normally be used (essentially as a cache). It returns
a 'c
Steve Spicklemire wrote:
I also have a function "get_cem_for_directBurial(personid)" that is
expensive to call, but it's also indexed, so I hoped that the index
would normally be used (essentially as a cache). It returns a 'cemetery
code' so I can search for folks buried in a particular cemeter
Hi Folks,
I'm new to this list, but I've been using postgresql for a few years.
In general I've been able to figure things out by reading various
docs. I've hit something now that I haven't been able to sort out at
all. It may be that there's some document that explains all this...
if so,
Matthew Schumacher <[EMAIL PROTECTED]> writes:
> Here is the proc that has very inconsistent (anywhere from 25ms to
> 8000ms) performance:
> ...
> This setup, with concurrency, is returning very inconsistent query
> performance. Sometimes it's very fast, other times it's slow and waits.
> This mak
Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
>> From what you described, I would not expect many locking problems. Are
>> there any other types of queries you run that may cause a lock?
>
> Row locks (SELECT FOR UPDATE/SHARE) are a possible problem, particularly
> if this is a pre-8.1
Matthew Schumacher <[EMAIL PROTECTED]> writes:
> I have "autovacuum = on" in the config file with a pretty frequent
> autovacuum_naptime, but I'm unsure if that does a vacuum or vacuum full.
autovacuum *never* does a vacuum full, because that would lead to
unexpected blockages of foreground querie
Jeff Davis <[EMAIL PROTECTED]> writes:
> From what you described, I would not expect many locking problems. Are
> there any other types of queries you run that may cause a lock?
Row locks (SELECT FOR UPDATE/SHARE) are a possible problem, particularly
if this is a pre-8.1 Postgres where exclusive r
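A hedged way to check whether those row locks (or any others) are actually making sessions wait is the pg_locks system view; the exact column set varies a little between releases:

SELECT locktype, relation::regclass, mode, granted, pid
FROM pg_locks
WHERE NOT granted;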
Jeff Davis wrote:
> http://www.postgresql.org/docs/8.1/static/mvcc.html
>
> In the "Explicit Locking" section it details the locks acquired by
> UPDATE, etc.
>
> From what you described, I would not expect many locking problems. Are
> there any other types of queries you run that may cause a lock
On Thu, 2006-08-31 at 11:04 -0800, Matthew Schumacher wrote:
> Tom Lane wrote:
> > Matthew Schumacher <[EMAIL PROTECTED]> writes:
> >> I have been having performance problems with my DB so this morning I
> >> added some config to log queries that take more than 250ms. The result
> >> is surprising
Tom Lane wrote:
> Matthew Schumacher <[EMAIL PROTECTED]> writes:
>> I have been having performance problems with my DB so this morning I
>> added some config to log queries that take more than 250ms. The result
>> is surprising because some queries will take as long as 10 seconds, but
>> then you
Matthew Schumacher <[EMAIL PROTECTED]> writes:
> I have been having performance problems with my DB so this morning I
> added some config to log queries that take more than 250ms. The result
> is surprising because some queries will take as long as 10 seconds, but
> then you do an explain analyze o
I have been having performance problems with my DB so this morning I
added some config to log queries that take more than 250ms. The result
is surprising because some queries will take as long as 10 seconds, but
then you do an explain analyze on them they show that indexes are being
used and they r
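The usual knob for this kind of logging is log_min_duration_statement, which is presumably what the 250ms threshold above refers to; a sketch of the per-session form (needs superuser), with the server-wide form going in postgresql.conf:

SET log_min_duration_statement = 250;  -- log every statement that runs longer than 250 ms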
> I have a table with around 57 million tuples, with the following columns:
> pid(varchar), crit(varchar), val1(varchar), val2(varchar). Example:
> pid  crit  val1  val2
> p1   c1    x     y
> p1   c2    x     z
> p1   c3    y     x
> ...
> What i am doing is to quer
49 (0) 221 59 68 88 98
> Email: [EMAIL PROTECTED]
>
>
>
>> -Original Message-
>> From: Christian Rengstl
>> [mailto:[EMAIL PROTECTED]
>> Sent: Thursday, August 03, 2006 11:18 AM
>> To: Richard Huxton; Hakan Kocaman
>> Cc: pgsql-general@po
From: Christian Rengstl
> [mailto:[EMAIL PROTECTED]
> Sent: Thursday, August 03, 2006 11:18 AM
> To: Richard Huxton; Hakan Kocaman
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Query performance
>
>
> Hi,
>
> i would rather compare int4 too, b
CTED]
>> Sent: Thursday, August 03, 2006 11:00 AM
>> To: Christian Rengstl
>> Cc: Hakan Kocaman; pgsql-general@postgresql.org
>> Subject: Re: [GENERAL] Query performance
>>
>>
>> Christian Rengstl wrote:
>> > Hi,
>> >
>> > th
Hi,
> -Original Message-
> From: Richard Huxton [mailto:[EMAIL PROTECTED]
> Sent: Thursday, August 03, 2006 11:00 AM
> To: Christian Rengstl
> Cc: Hakan Kocaman; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Query performance
>
>
> Christian Rengstl
Christian Rengstl wrote:
Hi,
the complete query is the one I posted, but here comes the schema for mytable:
entry_no int8 NOT NULL DEFAULT nextval('entry_no_seq'::regclass),
pid varchar(15) NOT NULL,
crit varchar(13) NOT NULL,
val1 varchar(1),
val2 varchar(1),
aendat text,
aennam
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Query performance
>
>
> Hi everyone,
>
> I have a table with around 57 million tuples, with the
> following columns: pid(varchar), crit(varchar),
> val1(varchar), val2(varchar). Example:
> pidcritva
> Tel.: +49 (0) 221 59 68 88 31
> Fax: +49 (0) 221 59 68 88 98
> Email: [EMAIL PROTECTED]
>
>
>
>> -Original Message-
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of
>> Christian Rengstl
>> Sent: Thursday, August 03, 2006 1
Hi everyone,
I have a table with around 57 million tuples, with the following columns:
pid(varchar), crit(varchar), val1(varchar), val2(varchar). Example:
pid  crit  val1  val2
p1   c1    x     y
p1   c2    x     z
p1   c3    y     x
...
What I am doing is to query a
On 8/3/06, Christian Rengstl <[EMAIL PROTECTED]> wrote:
...
Unfortunately the query takes pretty long for the big table, so maybe one of
you has a suggestion on how to make it faster.
try something like this:
select val1, val2, crit from mytable as a where pid='somepid' and
exists(select 1 from m
Paul Tillotson <[EMAIL PROTECTED]> writes:
> >Total runtime: 12.241 ms
> >
> > Still this is a third of the time of the sub-query route but 4 times longer
> > than mysql - this must be an install issue?
>
> Just about any query will usually take a few milliseconds (try SELECT 1; to see
> the a
Phil,
Just about any query will usually take a few milliseconds (try SELECT 1;
to see the absolute lowest), and so 12 ms is probably about as good as
you can get. For my own part, I consider 50 ms good enough for any
query that is not run inside of a loop. If you want to write suitably
effic
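A sketch of the baseline Paul describes, using psql's timing display; the handful of milliseconds it reports is overhead every query pays, and the floor to compare the 12 ms figure against:

\timing
SELECT 1;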
I can also do the same thing without sub-queries - I messed about some more
since I was keen to ensure backward compatibility with prior versions of
mysql that have left/right joins but no subqueries ... quite a bit quicker
still!
Query took 0.0037 sec - 1/10th of the sub-query time.
SELECT ch
The first time I ran it, it took 5127.243 ms ... then I did a full vacuum.
then ...
SQL executed.
Total runtime: 33.707 ms
I am keen to just have the one lot of code all in the scripts ... so I was
pleased when the identical sql also worked on mysql!!!
Your SQL-query has been executed success
Phil Daintree wrote:
Appreciated your help, Paul - many thanks for taking the time.
I view this as merely passing on all the pearls of wisdom I have gleaned
from this list. : )
Advice:
Are you running this inside a transaction? Do so, because if you don't,
then each UPDATE or INSERT or SELECT
Note: If you want to know WHY this takes so long, please tell us how
many times each loop executes and how long each query takes.
Be sure to post an EXPLAIN ANALYZE for each of your queries that you are
running. This will show what plans the planner is using and how long
they are actually taki
See the syntax for INSERT ... SELECT shown here:
http://www.postgresql.org/docs/8.0/static/sql-insert.html
Instead of doing a nested loop to INSERT new records, do it like this:
For ($period = start; $period < end; $period++)
{
INSERT INTO chartdetails (accountcode, period)
SELECT accou
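A hedged sketch of the full pattern being suggested, with the inner per-account loop replaced by one set-based statement per period ($period is supplied by the surrounding PHP loop, and chartmaster as the source of account codes is an assumption based on the webERP schema discussed in the thread):

INSERT INTO chartdetails (accountcode, period)
SELECT c.accountcode, $period
FROM chartmaster c                -- assumed source of account codes
WHERE NOT EXISTS (SELECT 1
                  FROM chartdetails d
                  WHERE d.accountcode = c.accountcode
                    AND d.period = $period);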
Phil Daintree wrote:
There are 2 tables used in the SQL we need to optimise.
CREATE TABLE chartdetails (
CREATE TABLE gltrans (
So there is a chartdetail record for every period for every general ledger
account. So if there are 5 years x 12 periods (months) and 200 general
ledger accounts
Dear psqlers,
I need your help!
I administer/develop an open source PHP accounting software project (webERP)
that was originally mysql only. Since Christmas I and another member of the
team lower cased all the sql and changed some elements of the SQL to allow it
to use postgres as well. All ap
On Wed, Jan 07, 2004 at 02:31:22 -0500,
Tom Lane <[EMAIL PROTECTED]> wrote:
>
> I just a couple days ago added some logic to CVS tip to notice that the
> sub-select has a DISTINCT clause, and not add unnecessary unique-ifying
> processing on top of it. So in 7.5, writing a DISTINCT clause will
PROTECTED] Behalf Of Tom Lane
> Sent: 6 January 2004 21:04
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED] (E-mail)
> Subject: Re: [GENERAL] Query performance question on a large table
>
>
> =?iso-8859-2?Q?Egy=FCd_Csaba?= <[EMAIL PROTECTED]> writes:
> > here is
Sean Shanny <[EMAIL PROTECTED]> writes:
> Sort of piggybacking on this thread but why the suggestion to drop the
> use of DISTINCT in 7.4?
Because the 7.4 planner can decide for itself whether DISTINCT'ifying
the sub-select output is the best way to proceed or not. There is more
than one good wa
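A sketch of the change being described, with invented table and column names: on 7.4 the planner can decide for itself whether de-duplicating the sub-select pays off, so

SELECT * FROM orders WHERE customer_id IN (SELECT DISTINCT customer_id FROM order_log);

can simply be written as

SELECT * FROM orders WHERE customer_id IN (SELECT customer_id FROM order_log);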
Tom,
Sort of piggybacking on this thread but why the suggestion to drop the
use of DISTINCT in 7.4? We use DISTINCT all over the place to eliminate
duplicates in sub select statements. Running 7.4.0 currently on
FreeBSD5.1 Dell 2650 4GB RAM 5 disk SCSI array hardware RAID 0
Example:
explain