Hi all,
I am facing a strange problem when I run EXPLAIN against a table
having more than 10 records. The query has a lot of OR conditions,
and when parts of the query are removed it uses an index. To analyse
it I created a table with a single column and inserted 10
records (random numbers) in
On further investigation I found that an index scan is not used if the
query has a function in it, like lower(), even when an index exists on
the lower() column.
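(A minimal sketch of the shape of the problem; the table and column names
here are hypothetical. Note that before 8.0, ANALYZE collected no statistics
for expression indexes, so the planner had to guess at the selectivity of
lower(name) = '...' and often guessed badly.)

CREATE TABLE people (name text);
CREATE INDEX people_lower_idx ON people (lower(name));

-- The predicate must match the indexed expression exactly
-- for an index scan to be considered:
EXPLAIN SELECT * FROM people WHERE lower(name) = 'antony';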
rgds
Antony Paul
On Mon, 7 Feb 2005 14:37:15 +0530, Antony Paul <[EMAIL PROTECTED]> wrote:
> Hi all,
> I am facing a strange problem when I run EXPLAIN against a table
> having more than 10 records.
Hi everyone,
Why does this take forever (each query is sub-second when run separately)?
Is it because I cannot open two cursors in the same transaction?
begin;
declare SQL_CUR01 cursor for
SELECT A.ordernummer, B.klantnummer FROM "orders" A LEFT OUTER JOIN "klt_alg" B
ON A.Klantnummer=B.Klantnummer
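(For what it's worth, opening multiple cursors in one transaction is
allowed; a trivial sketch:)

begin;
declare cur1 cursor for select 1;
declare cur2 cursor for select 2;
fetch cur1;  -- both cursors can be fetched from independently
fetch cur2;
commit;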
On Mon, Feb 07, 2005 at 04:44:07PM +0530, Antony Paul wrote:
> On further investigation I found that an index scan is not used if the
> query has a function in it, like lower(), even when an index exists on
> the lower() column.
What version are you using? 8.0 had fixes for this situation.
/* Steinar */
--
Homep
It depends on many circumstances, but first a simple question: did
you run VACUUM ANALYZE?
I am satisfied with functional indexes; they work in my pg 7.4.x.
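(For reference, the command in question; the table name is illustrative.
Without current statistics the planner misestimates row counts and may
prefer a seq scan even when an index would win.)

VACUUM ANALYZE people;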
Antony Paul wrote:
On further investigation I found that an index scan is not used if the
query has a function in it, like lower(), even when an index exists on
the lower() column.
Sorry, I forgot to mention it. I am using 7.3.3. I will try it on 8.0.0.
rgds
Antony Paul
On Mon, 7 Feb 2005 12:46:05 +0100, Steinar H. Gunderson
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 07, 2005 at 04:44:07PM +0530, Antony Paul wrote:
> > On further investigation I found that an index scan is not used if the
> > query has a function in it, like lower(), even when an index exists on
> > the lower() column.
I ran ANALYZE several times.
rgds
Antony Paul
On Mon, 07 Feb 2005 12:53:30 +0100, Jan Poslusny wrote:
> It depends on many circumstances, but first a simple question: did
> you run VACUUM ANALYZE?
> I am satisfied with functional indexes; they work in my pg 7.4.x.
>
> Antony Paul wrote:
>
Hi, Tom,
Tom Lane schrieb:
> Markus Schaber <[EMAIL PROTECTED]> writes:
>> [Query optimizer misestimation using lossy GIST on TOASTed columns]
>
> What I would be inclined to do is to extend ANALYZE to make an estimate
> of the extent of toasting of every toastable column, and then modify
> cost_q
Hi all,
Greg Stark schrieb:
> "Leeuw van der, Tim" <[EMAIL PROTECTED]> writes:
>
>>I don't think EXPLAIN ANALYZE puts that much overhead on a query.
>
> EXPLAIN ANALYZE does indeed impose a significant overhead.
Additional note:
In some rare cases, you can experience just the opposite effect,
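(You can measure the overhead directly in psql with \timing; the table name
below is hypothetical. EXPLAIN ANALYZE wraps every plan node in timing
calls, typically two gettimeofday() calls per row produced, so the cost is
most visible on plans that push many rows through.)

\timing
SELECT count(*) FROM big_table;                  -- plain execution
EXPLAIN ANALYZE SELECT count(*) FROM big_table;  -- same query, instrumented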
Hi all,
A retry of the question asked before. All tables freshly vacuumed and analyzed.
Two queries: one with "set enable_seqscan = on", the other with "set
enable_seqscan = off". The first query lasts 59403 ms, the second query 31 ms
(the desc order variant has the same large difference: 122494 ms vs. 1297 ms).
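(The experiment itself, sketched; the actual query was elided above, so the
table and ordering here are a guess based on the rest of the thread. Note
that enable_seqscan = off does not forbid seq scans, it only adds a huge
cost penalty; a gap this large means the planner's cost estimates are badly
off, not that the setting is a sensible permanent fix.)

set enable_seqscan = on;
explain analyze select * from orders order by ordernummer;   -- the slow plan

set enable_seqscan = off;
explain analyze select * from orders order by ordernummer;   -- the fast plan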
"Joost Kraaijeveld" <[EMAIL PROTECTED]> writes:
> Two queries: one with "set enable_seqscan = on" , the other with "set
> enable_seqscan = off". The first query lasts 59403 ms, the second query 31 ms
> ( the desc order variant has the same large difference: 122494 ms vs. 1297
> ms). (for the que
Steven Rosenstein wrote:
>
> Hi Michael,
>
> Thank you for the link to the documentation page. I forgot to mention that
> we're still using version 7.3. When I checked the 7.3 documentation for
> DELETE, there was no mention of being able to use fields from different
> tables in a WHERE clause.
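(On 7.3 the portable workaround is a subquery in the WHERE clause; later
releases added a USING clause to DELETE. The table and column names below
are hypothetical:)

DELETE FROM detail_table
WHERE master_id IN (SELECT id FROM master_table WHERE obsolete);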
Does the planner also take into account that the Hash Join will need a
huge temporary space which will exist for the whole lifetime of the cursor
(which may be quite long if the client intends to fetch everything),
whereas the Merge Join should need very little space as it is sending the
r
>> The best solution is probably to put a LIMIT into the DECLARE CURSOR,
>> so that the planner can see how much you intend to fetch.
I assume that this limits the result set to the LIMIT. That is not what I was
hoping for. I was hoping for a way to scroll through the whole table of
orders.
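(A sketch of scrolling through everything without a LIMIT, reusing the
query from the start of the thread: declare the cursor once and pull the
rows in batches with FETCH.)

begin;
declare SQL_CUR01 cursor for
SELECT A.ordernummer, B.klantnummer FROM "orders" A LEFT OUTER JOIN "klt_alg" B
ON A.Klantnummer=B.Klantnummer;
FETCH FORWARD 1000 FROM SQL_CUR01;  -- repeat until no rows come back
CLOSE SQL_CUR01;
commit;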
> >> The best solution is probably to put a LIMIT into the DECLARE CURSOR,
> >> so that the planner can see how much you intend to fetch.
> I assume that this limits the result set to the LIMIT. That is not what I
> was hoping for. I was hoping for a way to scroll through the whole table
> of orders.
Hi all, we have a Sun E3500 running Solaris 9. It's got 6x336MHz CPUs and
10GB RAM.
I would like to know what /etc/system and postgresql.conf values are
recommended to deliver as much system resource as possible to Postgres. We
use this Sun box solely for single-user Postgres data warehousing
workloads.
Paul,
> I would like to know what /etc/system and postgresql.conf values are
> recommended to deliver as much system resource as possible to Postgres. We
> use this Sun box solely for single-user Postgres data warehousing
> workloads.
What's your disk system?
> shared_buffers = 50
This is h
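(For context: shared_buffers is counted in 8KB pages in this era, so 50
means roughly 400KB of shared buffers on a 10GB machine. A hedged starting
point for a single-user 8.0-era warehouse box follows; these values are
assumptions to be tuned against the disk system asked about above, not
recommendations taken from this thread.)

# postgresql.conf sketch (8.0-era units)
shared_buffers = 25000          # 8KB pages, ~200MB; period advice kept this modest
work_mem = 262144               # KB, 256MB per sort/hash; safe only for a single user
maintenance_work_mem = 524288   # KB, 512MB; speeds VACUUM and CREATE INDEX
effective_cache_size = 917504   # 8KB pages, ~7GB; tells the planner about the OS cache
checkpoint_segments = 32        # fewer, larger checkpoints for bulk loads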