('5502712','5802693','5801981') order by timestamp desc limit 1;
Regards
Gaetano Mendola
oriented.
That's a bit harsh. Your sense is fooling you.
Regards
Gaetano Mendola
> it seems like there are a number of options you could explore here.
The question here is not how to do it right, but how to make the optimizer smarter
than it is now, getting rid of work that is not needed.
Regards
Gaetano Mendola
--
cpp-today.blogspot.com
On Mon, Sep 13, 2010 at 6:48 PM, Tom Lane wrote:
> Gaetano Mendola writes:
>> Of course I'm not suggesting to take away the "sort by" and give the user
>> an unsorted result, I'm asking why the optimizer in cases like:
>
>> select unique(a) fr
On 09/13/2010 04:44 PM, Tom Lane wrote:
> Gaetano Mendola writes:
>> because of that sort in the view definition the first query above
>> takes not less than 3 seconds. I have solved this performance issue
>> removing the sort from the view definition and putting it in the
e optimizer get rid of that useless sort on those
kinds of queries?
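To make the case concrete, here is a minimal sketch of the pattern (table, view and column names are invented): the ORDER BY baked into the view buys nothing for the outer DISTINCT, yet it is still executed.
CREATE VIEW v_sorted AS
    SELECT a, b FROM t ORDER BY b DESC;    -- sort baked into the view definition
SELECT DISTINCT a FROM v_sorted;           -- the sort contributes nothing to this result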
Regards
Gaetano Mendola
I have completed the benchmarks I made with Postgres that I talked about
some weeks ago on the postgres performance mailing list (see the "shared_buffers" post).
At the following link you can find a doc that contains the generated graphs.
http://www.mediafire.com/?lk4woomsxlc
Regards
Gaetano Mendola
Gaetano Mendola wrote:
> Hi all,
> I started to do some performance tests (using pgbench) in order to
> estimate the DRBD impact on our servers, my plan was to perform some
> benchmarks without DRBD in order to compare the same benchmark with
> DRBD.
> I didn't perform yet
Greg Smith wrote:
> On Mon, 14 Apr 2008, Gaetano Mendola wrote:
>
>> I'm using postgres 8.2.3 on Red Hat compiled with GCC 3.4.6.
>
> 8.2.3 has a performance bug that impacts how accurate pgbench results
> are; you really should be using a later version.
Thank you,
aining better results. Is this something expected, or am I looking in the wrong direction?
I'm going to perform the same tests without using the -S option in pgbench, but
since it is a time-expensive operation I would like to hear your opinion first.
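(For reference, the select-only run I mean is roughly this invocation; the database name and the client/transaction counts are placeholders, not the ones I actually used:)
pgbench -S -c 10 -t 10000 benchdb   # -S uses the SELECT-only script; drop it for the default TPC-B-like mix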
Regards
Gaetano Mendola
Hi all,
if I specify shared_buffers = 26800 in 8.2.x, will this value be accepted as in
the 8.1.x series, i.e. 26800 * 8192 bytes = 209 MB, or as 26800 bytes
(since no memory unit is specified)?
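For clarity, the two spellings I mean, assuming 8.2 still reads a bare number as 8 kB buffers (which is my understanding of the docs):
shared_buffers = 26800      # bare number: 26800 buffers * 8 kB = ~209 MB, as in 8.1.x
shared_buffers = 209MB      # same amount using the new 8.2 unit syntax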
Regards
Gaetano Mendola
John Beaver wrote:
> - Trying the same constant a second time gave an instantaneous result,
> I'm guessing because of query/result caching.
AFAIK no query/result caching is in place in postgres; what you are experiencing
is OS disk/memory caching.
Regards
Gaetano Mendola
a_2_00_card (cost=0.00..73.24 rows=4388 width=0) (actual time=1.779..1.779 rows=7801 loops=1)
Index Cond: (t_oa_2_00_card.pvcp = l_pvcp.id)
Total runtime: 23.491 ms
(10 rows)
I had to lower random_page_cost to 2.5 in order to avoid the sequential scan
on the big table t_oa_2_00_ca
elease Notes:
>
> * Improve the optimizer's selectivity estimates for LIKE, ILIKE,
> and regular expression operations (Tom)
I will try the same select on an 8.2 (that one was an 8.1) and I'll let you
know.
Regards
Gaetano Mendola
Michael Fuhr wrote:
> On Fri, Mar 30, 2007 at 12:08:26PM +0200, Gaetano Mendola wrote:
>> Claus Guttesen wrote:
>>> Try changing random_page_cost from the default 4 to 2 in postgresql.conf:
>>>
>>> random_page_
Richard Huxton wrote:
> Gaetano Mendola wrote:
>>
>> Hi all, take a look at those plans:
>>
>>
>> test=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card
tive. This setting
> indicates for postgresql how fast your disks are, the lower the
> faster.
>
> Could this setting be changed to 2 as default rather than 4?
I have already tuned that number to 2.5; lowering it to 2 doesn't change
the plan.
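For what it's worth, this is how I try such values at session level before touching postgresql.conf (the WHERE literal is just a placeholder):
SET random_page_cost = 2.0;
EXPLAIN ANALYZE SELECT COUNT(id) FROM t_oa_2_00_card WHERE pvcp = 1;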
Regards
Gaetano Mendola
Index Cond: (t_oa_2_00_card.pvcp = "outer".id)
Total runtime: 55.454 ms
(10 rows)
Isn't it too much to choose a sequential scan because of 19 estimated rows, when with 4
estimated rows it does a correct index scan?
Regards
Gaetano Mendola
d I'd like to include postgresql or have a similar tool. Any
> of you is using anything like that? all kind of hints are welcome :-)
>
> Cheers!
We use Cricket + Nagios (the new Netsaint release).
Regards
Gaetano Mendola
Mark Lewis wrote:
> I imported my test dataset
> and was almost immediately able to track down the cause of my
> performance problem.
Why don't you tell us what the problem was :-) ?
Regards
Gaetano Mendola
login_login_key on user_login (cost=0.00..4.00 rows=1 width=16) (actual time=0.050..0.052 rows=1 loops=1)
Index Cond: ((login)::text = 'Zoneon'::text)
Total runtime: 4.627 ms
(3 rows)
BTW, is that cast normal?
Regards
Gaetano Mendola
t_group,
p.target_group AS target_group,
p.auto_listen AS auto_listen,
p.public_flag AS public_flag,
p.needed_version AS needed_version,
p.logic_version AS logic_version,
p.package_size
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>>Richard Huxton wrote:
>>
>>>If page number 8549 was the one being held, I don't think vacuum can
>>>truncate the file. The empty space can be re-used, but the rows can't be
>&g
Richard Huxton wrote:
> Gaetano Mendola wrote:
>
>>
>> # vacuum full verbose url;
>> INFO: vacuuming "public.url"
>> INFO: "url": found 74 removable, 21266 nonremovable row versions in
>> 8550
Michael Fuhr wrote:
> On Wed, Mar 09, 2005 at 02:02:13AM +0100, Gaetano Mendola wrote:
>
>
>>Basically I'm noticing that a simple vacuum full is not enough to
>>shrink the table completely:
>>
>># vacuum full verbose url;
>>INFO: vacuuming "p
have been
> faster with a seqscan.
field1 was a calculated field, and with the filter "='New'"
postgres was executing that function on more rows than without the filter.
Regards
Gaetano Mendola
ial move destinations.
CPU 0.17s/0.04u sec elapsed 0.49 sec.
[SNIPPED]
INFO: "url": moved 5666 row versions, truncated 8504 to 621 pages
Does anyone know why? I had the same behaviour with a 46000-row table occupying
46000 pages! It was reduced to 3000 pages only after 7 vacuum fulls.
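For what it's worth, a quick way to watch the table size between runs, plus CLUSTER as a one-pass rewrite alternative to repeated VACUUM FULL (the index name is a placeholder):
SELECT relpages, reltuples FROM pg_class WHERE relname = 'url';
CLUSTER url_pkey ON url;    -- rewrites the table in index order, reclaiming the space in one pass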
Regar
h
the second select in a loop that was returning the row if
field1 was equal to 'New'.
It's strange, but it happens.
Regards
Gaetano Mendola
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>>>Since your query is so simple, I'm guessing v_sc_user_request is a view.
>>>Can you provide the definition?
>
>
>>Of course:
>
ty_id_package_key on package_security ps (cost=0.00..3.84 rows=1 width=4) (actual time=0.036..0.042 rows=1 loops=1)
Index Cond: ("outer".id_package = ps.id_package)
Total runtime: 2.878 ms
(14 rows)
but with this last setting a very bad plan is chosen for the original query.
Regards
Gaetano Mendola
Richard Huxton wrote:
> Gaetano Mendola wrote:
>
>> running a 7.4.5 engine, I'm facing this bad plan:
>>
>> empdb=# explain analyze SELECT
>> name,url,descr,request_status,url_status,size_mb,estimated_s
Hash Cond: ("outer".id_program = "inner".id_program)
  -> Seq Scan on sequences (cost=0.00..512.82 rows=830 width=16) (actual time=16.858..595.262 rows=480 loops=1)
        Filter: (estimated_start IS NOT NULL)
  -> Hash (cost=20.48..20.48 rows=47 width=19) (actual time=21.093..21.093 rows=0 loops=1)
        -> Seq Scan on programs (cost=0.00..20.48 rows=47 width=19) (actual time=9.369..20.980 rows=48 loops=1)
              Filter: (id_program <> 0)
Total runtime: 1614.123 ms
(36 rows)
Regards
Gaetano Mendola
le;
> pg_dump 3monthsago_dynamically_named_table for archiving;
In my experience, copying/deleting 60+ million rows in a single transaction
is not feasible, at least on my 1 GB RAM, 2-way CPU box.
Regards
Gaetano Mendola
Markus Schaber wrote:
> Hi, Gaetano,
>
> Gaetano Mendola schrieb:
>
>
>>I have the same requirement too. Actually pg_autovacuum cannot be
>>instructed "per table", so sometimes the global settings are not good
Matthew T. O'Connor wrote:
> Christopher Browne wrote:
>
>> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>>
>>
>>> I do a graph about my disk usage and it's a ramp since one week,
>>> I'll continue to wait in order to see if it w
Christopher Browne wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>
>>Tom Lane wrote:
>>
>>>Gaetano Mendola <[EMAIL PROTECTED]> writes:
>>>
>>>
>>>>I'm using only pg_autovacuum. I expect that disk usage will re
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>>I'm using only pg_autovacuum. I expect that disk usage will reach
>>a steady state but is not. PG engine: 7.4.5
>
>
> One data point doesn't prove that you're not at a steady sta
gh:
max_fsm_pages | 200
max_fsm_relations | 1000
at least after a vacuum full I see that these numbers are overkill...
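For reference, they can be set in postgresql.conf and sized from VACUUM VERBOSE output (values below are placeholders; if I recall correctly, a database-wide VACUUM VERBOSE reports at the end how many pages and relations the free space map would actually need):
max_fsm_pages = 20000        # placeholder; set it above the "pages needed" reported by VACUUM VERBOSE
max_fsm_relations = 1000     # placeholder; one slot per table/index whose free space is tracked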
Regards
Gaetano Mendola
r that table
 relfilenode | relname  | relpages |  reltuples
-------------+----------+----------+-------------
       18376 | messages |    69472 | 1.60644e+06
how was it possible to accumulate 6000 wasted pages on that table?
Between these two calls:
[2005-02-22 05:25:03 CET] Performing: VACUUM
Christopher Browne wrote:
> After a long battle with technology, Gaetano Mendola <[EMAIL PROTECTED]>, an
> earthling, wrote:
>
>>JM wrote:
>>
>>>Hi ALL,
>>>
>>> I was wondering if there is a DB performance reduction if
>>>there
Tom Lane wrote:
> but this behavior isn't reproduced in the later message, so I wonder if
> it wasn't an artifact of something else taking a chunk of time.
I think it is due to the fact that the first queries were performed during peak hours.
Regards
Gaetano Mendola
In my experience not at all; you have to wonder whether some of those are "idle in
transaction", which are a real pain in the @#$.
Regards
Gaetano Mendola
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>>If you need other info in order to improve the planner,
>
>
> ... like, say, the PG version you are using, or the definitions of the
> views involved? It's difficult to say much of anything about
Hi all,
I'm stuck with a select that uses a hash join where it should not:
6 seconds vs 0.3 ms!!
If you need other info in order to improve the planner,
let me know.
Regards
Gaetano Mendola
empdb=# explain analyze SELECT id_sat_request
empdb-# FROM sat_request sr,
ing, unless you
> write it yourself. I think there's some work being done in this area,
> though.
Given my last attempts to perform a horizontal partition, I have to say
that postgres does not support it even if you try to write it yourself
(see my post "horizontal partition").
>>2) pgpool sends query only to the master if the query is other than
>> SELECT.
Don't you think that this is unsafe ?
SELECT foo(id), id
FROM bar;
where foo has side effects.
Is pgpool able to detect it and perform this select on the master?
Regar
rever.
7.3 suffers from bad performance if you use IN.
Transform the IN into an EXISTS construct.
If it is an option for you, upgrade your DB engine.
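A minimal sketch of the rewrite, with invented table and column names:
-- 7.3 plans this form badly:
SELECT o.* FROM orders o WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.active);
-- equivalent EXISTS form, which 7.3 handles much better:
SELECT o.* FROM orders o
 WHERE EXISTS (SELECT 1 FROM customers c WHERE c.id = o.customer_id AND c.active);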
Regards
Gaetano Mendola
Steven Rosenstein wrote:
DELETE FROM detail USING summary
 WHERE summary.id = detail.sum_id AND collect_date='2005-02-05';
DELETE FROM detail WHERE detail.sum_id in ( select id from summary )
AND collect_date='2005-02-05';
Regards
Gaetano Mendola
Index Cond: ((login)::text = 'kalman'::text)
Total runtime: 37122.069 ms
(10 rows)
and as you can see this path is not applicable either :-(
Any other suggestions?
Regards
Gaetano Mendola
. What Postgres version are you using? I thought this was fixed
in 7.4, but maybe not ...
Yes, I'm using 7.4.x, so it was not fixed...
Regards
Gaetano Mendola
user_logs_2004
Any reason you didn't use inheritance?
I did it that way just to avoid using a postgresql-specific feature.
I can give it a try and I'll let you know; however, the question remains:
why is the index usage lost if it is done that way?
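For reference, a minimal sketch of the inheritance layout being suggested (table and column names are invented, not the real schema):
CREATE TABLE user_logs (
    id_user    integer,
    login_time timestamp
);
CREATE TABLE user_logs_2004 () INHERITS (user_logs);
-- a query against user_logs automatically scans user_logs_2004 as well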
Regards
Gaetano Mendola
46..1474.709 rows=505199 loops=1)
  -> Hash (cost=4.00..4.00 rows=1 width=16) (actual time=0.083..0.083 rows=0 loops=1)
        -> Index Scan using user_login_login_key on user_login u (cost=0.00..4.00 rows=1 width=16) (actual time=0.064..0.066 rows=1 loops=1)
              Index Cond:
'll
continue to work consistently in production code.
For sure it will not break the goal: "check the existence".
Regards
Gaetano Mendola
Jim C. Nasby wrote:
> I'm wondering if there's any way I can tweak things so that the estimate
> for the query is more accurate (I have run analyze):
Can you post your configuration file? I'd like to see, for example, your
settings for random_page_cost and effective_cache_s
ans (and not so much),
so don't expect that doubling this parameter will push postgres to use more RAM.
Regards
Gaetano Mendola
ly out of
|>mind ?
|
|
| I don't see how that would help. The problem is not backends switching
| processors, it's the buffermgrlock needing to be swapped between processors.
This is not clear to me. What happens if, during a spinlock, a backend is
moved away from one processor to another
Tom Lane wrote:
Gaetano Mendola <[EMAIL PROTECTED]> writes:
I proposed weeks ago to see how the CSStorm is affected by sticking each
backend to one processor (where the process was born) using the
cpu-affinity capability (kernel 2.6); is this proposal completely
out of mind?
Th
t cure the CSStorm bug, they do lessen its effects in terms of
> real performance loss.
I proposed weeks ago to see how the CSStorm is affected by sticking each backend
to one processor (where the process was born) using the cpu-affinity capability
(kernel 2.6); is this proposal completely out of
Brock Henry wrote:
> Any comments/suggestions would be appreciated.
Tune also the disk I/O elevator.
look at this: http://www.varlena.com/varlena/GeneralBits/49.php
Regards
Gaetano Mendola
[EMAIL PROTECTED] wrote:
If the resource owner is always responsible for releasing locked
buffers, who releases the locks if the backend crashes?
The semaphore "undo" I hope.
Regards
Gaetano Mendola
s the dual Xeon, when
running on one and both cpus.
Here http://www6.tomshardware.com/cpu/20030422/ both were tested and there is
a database performance section; unfortunately they used MySQL.
Regards
Gaetano Mendola
Regards
Gaetano Mendola
e with this?
>
What do you need ?
Regards
Gaetano Mendola
gards
Gaetano Mendola
Neil Conway wrote:
On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:
Now I'm reading an article, written by the same author who inspired the magic "300"
in analyze.c, about "Self-tuning Histograms". If this is implemented, I understand
we can get rid of "vacuum a
to date the statistics.
Does anyone have plans to implement it?
After all, the idea is simple: during normal selects, compare the estimated rows with
the actually extracted rows, then use this "free" information to refine the histograms.
Regards
Gaetano Mendola
50%
| hits, which it otherwise might.
So, I didn't understand how the statistics histogram works.
I'm going to take a look at analyze.c
Regards
Gaetano Mendola
, you may benefit from using
| PREPAREd queries.
|
| -Mike
With his load it will not change anything.
Regards
Gaetano Mendola
trying to spawn 300 new processes a
|>second.
Not to mention that a squid proxy mounted in reverse proxy mode will
help a lot.
Regards
Gaetano Mendola
onnections means a very huge load; could you also provide the result of
"vmstat 5"? My webserver already thrashes with -c 120!
How many connections can your postgres manage?
You should consider using a connection pool with that amount of connections.
Regards
Gaetano Mendola
).
Trust me, PREPARE does not do miracles in scenarios like yours. If you use
postgres in a web service environment, what you can use is a connection pool
(look for pgpool, IIRC);
if you use a CMS, then try to enable the cache in order to avoid hitting the DB for
each request.
Regards
Gaetano Me
s pending ) or is 8.0 stuff ?
Regards
Gaetano Mendola
wever (consider multiple parameters
to PREPARE, for example).
Do you mean storing different plans for each different histogram segment?
Regards
Gaetano Mendola
Dennis Bjorklund wrote:
> On Wed, 22 Sep 2004, Gaetano Mendola wrote:
>
>
>> Limit (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0 loops=1)
>> Limit (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0 loops=1)
>
>
> These
Dennis Bjorklund wrote:
On Wed, 22 Sep 2004, Gaetano Mendola wrote:
Now my question is: why does 7.4 choose the hash join? :-(
It looks to me that the merge join is faster because there wasn't really
anything to merge, it resulted in 0 rows. Maybe the hash join that is
chosen in 7.4
=0.00..1.16 rows=16 width=32) (never executed)
Total runtime: 0.31 msec
(14 rows)
Disabling the hash join on 7.4 I got better performance than 7.3:
x=# set enable_hashjoin = off;
SET
x=# explain analyze select sp_get_ultimo_parere(id_pratica) from t_pratica;
for the inodes, mounting the partition with the noatime option
(this however has more impact on performance for read activities)
Regards
Gaetano Mendola
Tom Lane wrote:
| Gaetano Mendola <[EMAIL PROTECTED]> writes:
|
|>Now that the argument is already open, why does postgres choose
|>fdatasync on linux? I understand from other posts that
|>on this platform open_sync is b
parameter ?
Regards
Gaetano Mendola
These are my times:
kernel 2.4.9-e.24smp ( RAID SCSI ):
Simple write timing:
write        0.011544
Compare fsync times on write() and non-write() descriptor:
(If the times are similar, fsync() can sync data written
on a different
Mark Cotner wrote:
Requirements:
Merge table definition equivalent. We use these
extensively.
What do you mean by "merge table definition equivalent"?
Regards
Gaetano Mendola
Pierre-Frédéric Caillaud wrote:
Yes, you're right as usual.
As usual? Do you think your father can be wrong about you? :-)
Gaetano
ux.blogspot.com/
:-)
Regards
Gaetano Mendola
[EMAIL PROTECTED] wrote:
| Gaetano Mendola <[EMAIL PROTECTED]> wrote ..
|
|>Hi all,
|>do you know any clean workaround for ill-planned queries inside a stored
|>procedure?
| I use "EXECUTE" inside a stored procedure for just th
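A minimal sketch of that workaround, with invented names; EXECUTE builds the statement text at run time, so it is planned against the actual parameter value rather than once with a generic plan:
CREATE OR REPLACE FUNCTION count_by_status(text) RETURNS bigint AS '
DECLARE
    v_status ALIAS FOR $1;
    r record;
BEGIN
    -- the dynamic statement is re-planned at every call
    FOR r IN EXECUTE ''SELECT count(*) AS c FROM requests WHERE status = ''
                     || quote_literal(v_status) LOOP
        RETURN r.c;
    END LOOP;
    RETURN 0;
END;
' LANGUAGE plpgsql;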
to modify adjacent memory addresses instead of jumping.
Regards
Gaetano Mendola
plan this query at execution time ?
Regards
Gaetano Mendola
Bruce Momjian wrote:
| Gaetano Mendola wrote:
|
|>Tom Lane wrote:
|>
|> > Bruce Momjian <[EMAIL PROTECTED]> writes:
|> >
|> >>Agreed. What I am wondering is with our system where every update gets
|> >>a new r
5) are space available for the (0,1) updates.
This will help a clustered table (for example) to maintain its
own correct cluster order.
Regards
Gaetano Mendola
r selects and can be used for
indexes that you do not expect to grow or for mostly read-only indexes.
Regards
Gaetano Mendola
te ON.
Regards
Gaetano Mendola
Do you think that by adopting the "chip tuning" product
postgresql could increase performance as well?
Regards
Gaetano Mendola
the 2 different default stat
| targets are the same? Sorry if I missed the post indicating they were.
|
| If the plans are the same, it would be interesting to get a profile on
| the 2 different cases with that index in place across 100k iterations of
| the prepared query.
Do you have any advice on the
Tom Lane wrote:
| Gaetano Mendola <[EMAIL PROTECTED]> writes:
|
|>Using a prepared query:
|
|
|>Without index and default stat 10 :1.12 ms
ariadne=# explain analyze execute
Rod Taylor wrote:
On Fri, 2004-08-20 at 05:37, Gaetano Mendola wrote:
Christopher Kings-Lynne wrote:
|>>> Without index: 1.140 ms
|>>> With index: 1.400 ms
|>>> With default_statistic_targer = 200: 1.800 ms
|>>
|>
that value even
|> with 1000 times.
|
|
| Given the almost irrelevant difference in the speed of those queries, I'd
| say that with the stats so high, postgres simply takes longer to check
| the statistics to come to the same conclusion. ie. it has to loop over
| 200 rows instead of jus
Richard Huxton wrote:
Gaetano Mendola wrote:
Hi all,
I'm trying to optimize the following query:
http://rafb.net/paste/results/YdO9vM69.html
as you can see from the explain after defining the
index the performance is worst.
If I raise default_statistics_target to 200
then the performanc
With index: 1.400 ms
With default_statistics_target = 200: 1.800 ms
Thoughts, anyone?
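For reference, the statistics target can be tried at both levels like this (table and column names are placeholders):
SET default_statistics_target = 200;                               -- session-wide default
ALTER TABLE some_table ALTER COLUMN some_col SET STATISTICS 200;   -- per-column override
ANALYZE some_table;                                                -- rebuild the histograms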
Regards
Gaetano Mendola
the database
works much, much faster.
I suggest you investigate why noapic did the trick for you; do you have
a poorly supported device? In your place I'd also try removing the noapic
option and using acpi=noidle
Regards
Gaetano Mendola
---(end of broa
','201A') and effectif < 150
I think it is possible to optimize the performance of these queries
before changing the hardware (I know I will...) but I don't know how,
even after having read a lot of things about postgresql ...
Show us an EXPLAIN ANALYZE for those queries.
Regards
Gaetano Mendola