Hi all,
I'm currently taking my first steps with database optimization and am wondering
what's happening here and if/how I can help PostgreSQL choose the better plan.
Thanks,
Fabian
>>>
psql (PostgreSQL) 7.2.2
perg_1097=# VACUUM ANALYZE ;
VACUUM
perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ
perg_1097-# from notiz_objekt a
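One way to compare the two plans directly (a sketch, not part of the original
post) is to disable sequential scans for the session and re-run the same
EXPLAIN ANALYZE; enable_seqscan only penalizes seq scans heavily rather than
forbidding them:

perg_1097=# SET enable_seqscan = off;   -- session-local diagnostic, not a production setting
SET
perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ
perg_1097-# from notiz_objekt a;

If the index plan reported here is faster than the default plan, the cost
parameters (rather than the index itself) are the thing to adjust.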
At 12:12 17.07.2003, you wrote:
On 17/07/2003 10:01 Fabian Kreitner wrote:
Hi Fabian,
When you are doing these kinds of tests, you need to be aware that the
kernel may have most of your data cached after the first query, and that this
may be why the second query appears to run faster.
I thought of
At 11:17 17.07.2003, Shridhar Daithankar wrote:
On 17 Jul 2003 at 11:01, Fabian Kreitner wrote:
> psql (PostgreSQL) 7.2.2
>
> perg_1097=# VACUUM ANALYZE ;
> VACUUM
> perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ
> perg_1097-# from notiz_objekt a
>
At 14:34 17.07.2003, you wrote:
On 17/07/2003 12:13 Fabian Kreitner wrote:
That is what I read too, and is why I'm confused that the index is indeed
executing faster. Can this be a problem with the hardware and/or
PostgreSQL installation?
It's more likely that the OS has most of the data c
At 20:12 17.07.2003, Tom Lane wrote:
Fabian Kreitner <[EMAIL PROTECTED]> writes:
> That is what I read too, and is why I'm confused that the index is indeed
> executing faster. Can this be a problem with the hardware and/or PostgreSQL
> installation?
I think the actual issue here
Hi all,
Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.
Anything I need to consider when raising it to such "high" values?
Thanks for the help,
Fabian
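For reference, the setting Fabian describes can be tried per session before
putting it in postgresql.conf; a minimal sketch (the value 0.042 is his, the
built-in default is 0.01):

perg_1097=# SET cpu_tuple_cost = 0.042;   -- raises the estimated CPU cost of each tuple scanned
SET
perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ
perg_1097-# from notiz_objekt a;

Since the parameter applies to all plans, raising it makes sequential scans
look more expensive everywhere, so other queries are worth re-checking after
such a change.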
Hi,
can anyone point me to information regarding this please?
Objective is to find entries that match one (or more) supplied strings in
two tables. The first has about 20,000 entries with 1 varchar field to
check, the other about 40,000 with 5 varchar fields to check. The currently
used sequen
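A sketch of the kind of query described, with made-up table and column names
since the post is cut off (ILIKE gives case-insensitive matching):

SELECT id, name
  FROM table_a                      -- hypothetical: ~20,000 rows, 1 varchar field
 WHERE name ILIKE '%searchterm%'
UNION ALL
SELECT id, f1
  FROM table_b                      -- hypothetical: ~40,000 rows, 5 varchar fields
 WHERE f1 ILIKE '%searchterm%' OR f2 ILIKE '%searchterm%'
    OR f3 ILIKE '%searchterm%' OR f4 ILIKE '%searchterm%'
    OR f5 ILIKE '%searchterm%';

A pattern with a leading % cannot use an ordinary btree index, so sequential
scans over both tables are expected for this form of query.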
Hi everyone,
I have an SQL query which on the first invocation completes in ~12 sec but
then drops to ~3 sec on the following runs. The 3 seconds would be acceptable,
but how can I make sure that the data is cached at all times? Is it simply
enough to set shared_buffers high enough to hold the entir
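A sketch of the usual approach (the parameter name is real, the table name is
hypothetical): shared_buffers helps, but pages can still be evicted from both
PostgreSQL's buffer cache and the OS cache, so one cheap trick is to re-scan
the table to pull it back into memory:

SHOW shared_buffers;              -- in 7.x this is a number of 8 kB buffers
SELECT count(*) FROM big_table;   -- hypothetical table; a full scan warms the cache

There is no way in 7.2 to pin a table in memory; a larger shared_buffers only
makes eviction less likely.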