Hi.
I have a strange situation where generating the query plan takes 6s+ and
executing it takes very little time.
2013-09-03 09:19:38.726 db=# explain select table.id from db.table left
join db.tablepro on table.id = tablepro.table_id where table.fts @@
to_tsquery('english','q12345') ;
On 09/03/2013 03:46 PM, jes...@krogh.cc wrote:
> Hi.
>
> I have a strange situation where generating the query plan takes 6s+ and
> executing it takes very little time.
How do you determine that it's planning time at fault here?
Please take separate timing for:
PREPARE testq AS select table.id
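(The snippet cuts off here. A minimal sketch of that separate-timing approach
in psql, reusing the identifiers from the original query; they look anonymized,
and "table" is a reserved word, hence the quoting:)

\timing on

-- PREPARE does the parse work (and, on older releases, the planning) up
-- front; depending on version, planning may instead happen at the first
-- EXECUTE, so compare the first and later EXECUTE timings.
PREPARE testq AS
select "table".id from db."table"
left join db.tablepro on "table".id = tablepro.table_id
where "table".fts @@ to_tsquery('english','q12345');

EXECUTE testq;  -- first run: may include planning
EXECUTE testq;  -- later runs: execution time only
DEALLOCATE testq;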
Dear All
I'm running Postgres 8.4 on an Ubuntu 10.04 Linux server (64-bit).
I have a big table that contains product information: during the day we run
a process that continuously imports new products into this table with COPY
statements from files.
As a result, the table's disk space is growing fast,
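(The snippet cuts off here. A quick way to watch that growth and whether it is
dead-row bloat; "products" is a hypothetical name for the import target:)

-- total on-disk footprint: heap + indexes + TOAST
SELECT pg_size_pretty(pg_total_relation_size('products'));

-- live vs. dead rows, and when autovacuum last ran
SELECT n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'products';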
Torsten Förtsch wrote:
> Is there another way to make the planner generate the 1st
> plan?
The planner cost factors are based on the assumption that a
moderate percentage of random page reads will need to actually go
out to disk. If a high percentage of pages are in cache, you may
want to
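(The advice cuts off here. The knobs it is presumably heading toward are the
standard ones for a well-cached database; the values below are illustrative
only, not a recommendation for any particular workload:)

-- With most pages in cache, random reads are far cheaper than the
-- default cost model assumes.
SET random_page_cost = 1.5;        -- default 4.0 assumes real disk seeks
SET effective_cache_size = '8GB';  -- roughly shared_buffers + OS cache
-- then re-run EXPLAIN on the query to see whether the 1st plan is chosen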
Well, in older versions of Hibernate it was a little tricky to handle
PostgreSQL enums. Dunno if it's supported out of the box now.
Also, adding a new value is an explicit operation (much like with a lookup
table). I've had quite complex code that opened a second connection to
support lookup-table filling without
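(For contrast, a minimal sketch of the two approaches; the type and table
names are hypothetical. ALTER TYPE ... ADD VALUE exists since 9.1; before
that, adding an enum value meant recreating the type:)

-- Enum: adding a value is an explicit DDL step.
CREATE TYPE order_status AS ENUM ('new', 'paid', 'shipped');
ALTER TYPE order_status ADD VALUE 'cancelled';

-- Lookup table: new values are plain inserts, no DDL needed.
CREATE TABLE order_status_lu (
    id   serial PRIMARY KEY,
    name text NOT NULL UNIQUE
);
INSERT INTO order_status_lu (name) VALUES ('cancelled');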
On 03/09/13 09:47, Craig Ringer wrote:
> On 09/03/2013 03:46 PM, jes...@krogh.cc wrote:
>> Hi.
>> I have a strange situation where generating the query plan takes 6s+ and
>> executing it takes very little time.
> How do you determine that it's planning time at fault here?
Not that I'm sure, but the timing
Roberto Grandi wrote:
> I'm running Postgres 8.4 on an Ubuntu 10.04 Linux server (64-bit).
> I have a big table that contains product information: during the day
> we run a process that continuously imports new products into this
> table with COPY statements from files.
>
> As a result, the table's disk
This is postgres 9.1.9.
I'm getting a very weird case in which a simple range query over a PK
picks the wrong... the very very wrong index.
The interesting thing is that I've got no idea why PG is so grossly
mis-estimating the number of rows scanned by the wrong plan.
I've got this table that's
Claudio Freire writes:
> So, I've got this query with this very wrong plan:
> explain SELECT min(created) < ((date_trunc('day',now()) - '90
> days'::interval)) FROM "aggregated_tracks_daily_full" WHERE id BETWEEN
> 34979048 AND 35179048
> ;
> QUERY PLAN
>
On Tue, Sep 3, 2013 at 8:11 PM, Tom Lane wrote:
> Claudio Freire writes:
>> So, I've got this query with this very wrong plan:
>
>> explain SELECT min(created) < ((date_trunc('day',now()) - '90
>> days'::interval)) FROM "aggregated_tracks_daily_full" WHERE id BETWEEN
>> 34979048 AND 35179048
>> ;
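(One way to see where the estimate goes wrong, sketched against the query from
the thread: EXPLAIN the bare range predicate without the min() aggregate, then
look at the statistics the planner derived the estimate from:)

EXPLAIN
SELECT * FROM "aggregated_tracks_daily_full"
WHERE id BETWEEN 34979048 AND 35179048;

SELECT null_frac, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'aggregated_tracks_daily_full'
  AND attname = 'id';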
Hi Kevin,
first of all, thanks for your help. I made a mistake: we are using Postgres 8.3.
I didn't expect COPY itself to free space, but I was expecting autovacuum to
delete dead rows as soon as possible. In fact my scenario is:
- Delete all product records for a vendor
- Reload all product records (from ne
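(The message cuts off here. A sketch of that delete-and-reload cycle with an
explicit VACUUM, since on 8.3 autovacuum may lag behind it; the table name,
column, and file path are hypothetical:)

DELETE FROM products WHERE vendor_id = 42;        -- leaves dead rows behind

COPY products FROM '/path/to/vendor_42.csv' CSV;  -- reload

-- Space from the DELETE only becomes reusable after a vacuum; an
-- explicit VACUUM after each cycle keeps the table from growing.
VACUUM products;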
On 08/28/2013 09:08 PM, Tom Lane wrote:
[..]
>
> If you don't want to do any major rewriting, you could probably
> stick an OFFSET 0 into the outer EXISTS sub-select (and/or the
> inner one) to get something similar to the 9.1 plan.
>
Thank
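(The reply cuts off here. A sketch of the OFFSET 0 trick on a hypothetical
query shape, since the thread's original query isn't shown:)

-- OFFSET 0 changes no results but acts as an optimization fence:
-- the planner will not flatten the sub-select it is attached to.
SELECT o.*
FROM orders o
WHERE EXISTS (
    SELECT 1 FROM order_items i
    WHERE i.order_id = o.id
    OFFSET 0
);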