Mark Lewis wrote:
> I imported my test dataset
> and was almost immediately able to track down the cause of my
> performance problem.
Why don't you tell us what the problem was :-) ?
Regards
Gaetano Mendola
d I'd like to include postgresql or have a similar tool. Are any
> of you using anything like that? All kinds of hints are welcome :-)
>
> Cheers!
We use Cricket + Nagios ( the new Netsaint release ).
Regards
Gaetano Mendola
Index Cond: (t_oa_2_00_card.pvcp = "outer".id)
Total runtime: 55.454 ms
(10 rows)
Isn't it too much to choose a sequential scan because of 19 estimated rows, when with 4
estimated rows it correctly uses an index scan?
Regards
Gaetano Mendola
tive. This setting
> indicates for postgresql how fast your disks are, the lower the
> faster.
>
> Could this setting be changed to 2 as default rather than 4?
I have already tuned that number to 2.5; lowering it to 2 doesn't change
the plan.
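For anyone reproducing this, the experiment can be done per session without
touching the server; the table and column names here are only placeholders, not
the ones from this thread:

  SET random_page_cost = 2.5;   -- affects only the current session
  EXPLAIN ANALYZE
  SELECT count(c.id)
    FROM big_table c
    JOIN lookup l ON c.lookup_id = l.id
   WHERE l.name = 'some value';

Once the plan looks right, the value can be made permanent in postgresql.conf
( random_page_cost = 2.5 ) and the server reloaded.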
Regards
Gaetano Mendola
Richard Huxton wrote:
> Gaetano Mendola wrote:
>> Hi all, take a look at those plans:
>>
>>
>> test=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card
Michael Fuhr wrote:
> On Fri, Mar 30, 2007 at 12:08:26PM +0200, Gaetano Mendola wrote:
>> Claus Guttesen wrote:
>>> Try changing random_page_cost from the default 4 to 2 in postgresql.conf:
>>>
>>> random_page_
elease Notes:
>
> * Improve the optimizer's selectivity estimates for LIKE, ILIKE,
> and regular expression operations (Tom)
I will try the same select on an 8.2 ( that one was an 8.1 ) and I'll let you
know.
Regards
Gaetano Mendola
a_2_00_card (cost=0.00..73.24
rows=4388 width=0) (actual time=1.779..1.779 rows=7801 loops=1)
Index Cond: (t_oa_2_00_card.pvcp = l_pvcp.id)
Total runtime: 23.491 ms
(10 rows)
I had to lower random_page_cost to 2.5 in order to avoid the sequential scan
on the big table t_oa_2_00_ca
the optimizer get rid of that useless sort on those
kinds of queries?
Regards
Gaetano Mendola
On 09/13/2010 04:44 PM, Tom Lane wrote:
> Gaetano Mendola writes:
>> because of that sort in the view definition the first query above
>> takes not less than 3 seconds. I have solved this performance issue
>> by removing the sort from the view definition and putting it in the
&
On Mon, Sep 13, 2010 at 6:48 PM, Tom Lane wrote:
> Gaetano Mendola writes:
>> Of course I'm not suggesting to take away the "sort by" and give the user
>> an unsorted result; I'm asking why the optimizer, in cases like:
>
>> select unique(a) fr
> it seems like there are a number of options you could explore here.
The question here is not how to do it right, but how to make the optimizer smarter
than it is now, getting rid of work that is not needed.
Regards
Gaetano Mendola
--
cpp-today.blogspot.com
oriented.
That's a bit harsh. Your sense is fooling you.
Regards
Gaetano Mendola
make data redundancy
using RAID 4 or 5; this depends on whether you need read performance or
write performance. In the case of Red Sherif I guess those guys are
using RAID 50 ( 0 + 5 ) sets, so what you "waste" is a disk for each
set.
Regards
Gaetano Mendola
h that will not fail with
big tables like yours ?
If you can, post the autovacuum daemon log ( last lines ).
Regards
Gaetano Mendola
d the patch for the autovacuum but evidently I have to
make it more aggressive; I'm sorry that I cannot make it more aggressive
only for this table.
Thank you all.
Regards
Gaetano Mendola
Matthew T. O'Connor wrote:
| Gaetano Mendola wrote:
|
|> Tom Lane wrote:
|> | Given the nature of the data (login times), I'd imagine that the
|> problem
|> | is simply that he hasn't analyzed recently enough. A bump in s
00
http://www.dell.com/downloads/emea/products/pvaul/en/Dell_EMC_cx600_specs.pdf
and I'm forgetting that there is a disk behind it...
Regards
Gaetano Mendola
And it is also not system-wide but, let me say, "per backend"...
Regards
Gaetano Mendola
and also decrease the other cpu costs
Regards
Gaetano Mendola
BTW, I live in Paris too, if you need a hand...
ption that is (at least currently) untrue,
and that is that PostgreSQL has its own cache.
Are you sure of this? What is the meaning of the recently introduced ARC,
then?
Regards
Gaetano Mendola
Scott Marlowe wrote:
| On Mon, 2004-08-02 at 10:43, Gaetano Mendola wrote:
|
|>Scott Marlowe wrote:
|>
|>>On Mon, 2004-08-02 at 02:11, Ioannis Theoharis wrote:
|>>
|>>
|>>>Hi, i would like to answer if there is any way
mprovements.
As you wrote, no one has done benchmarks demonstrating by "brute force" that
ARC is better, but on paper it should be.
Regards
Gaetano Mendola
Matthew T. O'Connor wrote:
Lending, Rune wrote:
Hello all.
I am managing a large database with lots of transactions in different
tables.
The largest tables have around 5-6 million tuples and around
5-6 inserts and maybe 2 updates per day.
While the smallest tables have only a few tup
up 128 MB for shared_buffers ( maybe you need to instruct your
OS to allow that amount of shared memory usage ) and 24 MB for sort_mem.
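A minimal sketch of what that might look like on a 7.x-era server, where
shared_buffers is counted in 8 kB pages and sort_mem in kB; the shmmax value is
an assumption, sized a bit above the shared memory segment:

  # postgresql.conf
  shared_buffers = 16384        # 16384 x 8 kB pages = 128 MB
  sort_mem = 24576              # kB, roughly 24 MB per sort

  # Linux: allow a shared memory segment of that size (as root)
  # sysctl -w kernel.shmmax=150000000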
Regards
Gaetano Mendola
?
Maybe you are not using indexes somewhere, or maybe you are but the
planner is not using them... In short, we need more information
in order to help you.
Regards
Gaetano Mendola
involving the creation of temporary tables.
I have seen this behavior too, until I explained that this is valid SQL:
select T.* from ( select * from table t where a = 5 ) AS T join foo using ( bar );
Show us a typical function that uses temporary tables.
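A sketch of the contrast, with placeholder names ( t, foo, a, bar ):

  -- temp-table style, as often seen in ported functions:
  CREATE TEMP TABLE tmp_t AS SELECT * FROM t WHERE a = 5;
  SELECT tmp_t.* FROM tmp_t JOIN foo USING ( bar );
  DROP TABLE tmp_t;

  -- the same result as a single statement with a derived table:
  SELECT T.* FROM ( SELECT * FROM t WHERE a = 5 ) AS T JOIN foo USING ( bar );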
Regards
Gaetano Mendola
been running under Oracle for about fifteen years. Now I am trying to replace
Oracle with Postgres.
Show us the explain analyze of your queries.
Regards
Gaetano Mendola
<1s
> Q2    3s      8s       4s
> Q3    8s      1m20s    27s
> Q4    28s     17m20s   6m47s
Are you using the same disk for oracle and PG ?
Could you post your actual postgresql.conf ?
Try also to mount
Martin Foster wrote:
Gaetano Mendola wrote:
Martin Foster wrote:
I run a Perl/CGI driven website that makes extensive use of
PostgreSQL (7.4.3) for everything from user information to formatting
and display of specific sections of the site. The server itself, is
a dual processor AMD Opteron
value MaxClients in your apache configuration
Regards
Gaetano Mendola
at seems reasonable with the total amount of memory available.
Am I too optimistic?
Regards
Gaetano Mendola
Martin Foster wrote:
Gaetano Mendola wrote:
Let's start from your postgres configuration:
shared_buffers = 8192           <-- This is really too small for your configuration
sort_mem = 2048
wal_buffers = 128               <-- This is really too small for your configuration
effective_cache_size = 16000
e end of connection.
Regards
Gaetano Mendola
Scott Marlowe wrote:
On Fri, 2004-08-06 at 22:02, Martin Foster wrote:
Scott Marlowe wrote:
On Fri, 2004-08-06 at 17:24, Gaetano Mendola wrote:
Martin Foster wrote:
Gaetano Mendola wrote:
Let's start from your postgres configuration:
shared_buffers = 8192           <-- This is really too small
Tom Lane wrote:
Martin Foster <[EMAIL PROTECTED]> writes:
Gaetano Mendola wrote:
change these values to:
shared_buffers = 5
sort_mem = 16084
wal_buffers = 1500
This value of wal_buffers is simply ridiculous.
Instead, I think a default wal_buffers = 8 ( 64 KB ) is ridiculous.
There isn
pic sep mtrr pge mca cmov pat
pse36 mmx fxsr sse
runqueue : 1
bogomips : 1749.81
Regards
Gaetano Mendola
gnari wrote:
"G u i d o B a r o s i o" <[EMAIL PROTECTED]> wrote:
[speeding up 100 inserts every 5 minutes]
Tips!
*Delete indexes and recreate them after the insert.
sounds a bit extreme, for only 100 inserts
which fsync method are you using ?
change it and see what happens
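The setting in question is wal_sync_method; a quick way to check it and try the
alternatives (the list of valid values depends on the platform):

  SHOW wal_sync_method;
  -- then in postgresql.conf try one of:
  -- wal_sync_method = fdatasync   # or fsync, open_sync, open_datasync
  -- and reload the server before re-running the benchmark.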
','201A') and effectif < 150
I think it is possible to optimize the performance of these queries
before changing the hardware (I know I will...) but I don't know how,
even after having read a lot of things about postgresql ...
Show us an explain analyze of those queries.
Regards
Gaetano Mendola
the database
works much, much faster.
I suggest you investigate why noapic did the trick for you; do you have
a device that is not well supported? In your place I'd also try removing the noapic
option and using acpi=noidle
Regards
Gaetano Mendola
With index: 1.400 ms
With default_statistics_target = 200: 1.800 ms
thoughts, anyone?
Regards
Gaetano Mendola
Richard Huxton wrote:
Gaetano Mendola wrote:
Hi all,
I'm trying to optimize the following query:
http://rafb.net/paste/results/YdO9vM69.html
as you can see from the explain, after defining the
index the performance is worse.
If I raise the default_statistics_target to 200
then the performanc
that value even
|> with 1000 times.
|
|
| Given the almost irrelevant difference in the speed of those queries, I'd
| say that with the stats so high, postgres simply takes longer to check
| the statistics to come to the same conclusion. ie. it has to loop over
| 200 rows instead of jus
Rod Taylor wrote:
On Fri, 2004-08-20 at 05:37, Gaetano Mendola wrote:
Christopher Kings-Lynne wrote:
|>>> Without index: 1.140 ms
|>>> With index: 1.400 ms
|>>> With default_statistics_target = 200: 1.800 ms
|>>
|>
Tom Lane wrote:
| Gaetano Mendola <[EMAIL PROTECTED]> writes:
|
|>Using a prepared query:
|
|
|>Without index and default stat 10 :1.12 ms
ariadne=# explain analyze execute
the 2 different default stat
| targets are the same? Sorry if I missed the post indicating they were.
|
| If the plans are the same, it would be interesting to get a profile on
| the 2 different cases with that index in place across 100k iterations of
| the prepared query.
Do you have any advice on the
Do you think that by adopting the "chip tuning" product
postgresql could increase performance as well?
Regards
Gaetano Mendola
te ON.
Regards
Gaetano Mendola
r selects and can be used for
indexes that you do not expect to grow or for mostly read-only indexes.
Regards
Gaetano Mendola
5) are space available for the (0,1) updates.
This will help a clustered table ( for example ) to maintain its
own correct cluster order.
Regards
Gaetano Mendola
Bruce Momjian wrote:
| Gaetano Mendola wrote:
|
|>Tom Lane wrote:
|>
|> > Bruce Momjian <[EMAIL PROTECTED]> writes:
|> >
|> >>Agreed. What I am wondering is with our system where every update gets
|> >>a new r
plan this query at execution time ?
Regards
Gaetano Mendola
to modify adjacent
memory addresses instead of jumping.
Regards
Gaetano Mendola
[EMAIL PROTECTED] wrote:
| Gaetano Mendola <[EMAIL PROTECTED]> wrote ..
|
|>Hi all,
|>do you know any clean workaround for ill-planned queries inside a stored
|>procedure?
| I use "EXECUTE" inside a stored procedure for just th
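The quoted message is cut off here, but the usual pattern is to build the query
as a string so it is planned at call time, with the real parameter value visible
to the planner; a sketch with hypothetical names ( some_table, some_col ) in
7.x-style quoting:

  CREATE OR REPLACE FUNCTION count_matching(p_val integer) RETURNS bigint AS '
  DECLARE
      r RECORD;
      n bigint;
  BEGIN
      FOR r IN EXECUTE ''SELECT count(*) AS c FROM some_table ''
                       || ''WHERE some_col = '' || quote_literal(p_val::text) LOOP
          n := r.c;
      END LOOP;
      RETURN n;
  END;
  ' LANGUAGE plpgsql;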
ux.blogspot.com/
:-)
Regards
Gaetano Mendola
Pierre-Frédéric Caillaud wrote:
Yes, you're right as usual.
As usual? Do you think your father can be wrong about you? :-)
Gaetano
Mark Cotner wrote:
Requirements:
Merge table definition equivalent. We use these
extensively.
What do you mean by "merge table definition equivalent"?
Regards
Gaetano Mendola
parameter ?
Regards
Gaetano Mendola
These are my times:
kernel 2.4.9-e.24smp ( RAID SCSI ):
Simple write timing:
write        0.011544
Compare fsync times on write() and non-write() descriptor:
(If the times are similar, fsync() can sync data written
on a different
Tom Lane wrote:
| Gaetano Mendola <[EMAIL PROTECTED]> writes:
|
|>Now that the argument is already open, why does postgres choose
|>fdatasync on linux? I understand from other posts that
|>on this platform open_sync is b
for the inodes mounting the partition with the noatime option
( this however has more impact on performance for read activities )
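For reference, a sketch of how that is usually done on Linux; the device and
mount point are placeholders:

  # /etc/fstab
  /dev/sdb1   /var/lib/pgsql/data   ext3   defaults,noatime   1 2

  # or on a running system:
  mount -o remount,noatime /var/lib/pgsql/data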
Regards
Gaetano Mendola
=0.00..1.16 rows=16
width=32) (never executed)
~ Total runtime: 0.31 msec
(14 rows)
Disabling the hash join on 7.4, I got better performance than 7.3:
x=# set enable_hashjoin = off;
SET
x=# explain analyze select sp_get_ultimo_parere(id_pratica) from t_pratica;
~
Dennis Bjorklund wrote:
On Wed, 22 Sep 2004, Gaetano Mendola wrote:
Now my question is: why does 7.4 choose the hash join? :-(
It looks to me that the merge join is faster because there wasn't really
anything to merge, it resulted in 0 rows. Maybe the hash join that is
chosen in 7.4
Dennis Bjorklund wrote:
> On Wed, 22 Sep 2004, Gaetano Mendola wrote:
>
>
>> Limit (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0
loops=1)
>> Limit (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0
loops=1)
>
>
> These
wever (consider multiple parameters
to PREPARE, for example).
Do you mean storing different plans for each different histogram segment?
Regards
Gaetano Mendola
s pending ) or is 8.0 stuff ?
Regards
Gaetano Mendola
).
Trust me, PREPARE does not do miracles in scenarios like yours. If you use
postgres in a web service environment, what you can use is a connection pool
( look for pgpool, IIRC ); if you use a CMS, then try to enable its cache in
order to avoid hitting the DB for each request.
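For reference, the PREPARE/EXECUTE pattern under discussion looks like this
(hypothetical table and parameter); it only pays off when many statements are
executed over the same session, which is exactly what a pool provides:

  PREPARE get_item (int) AS
      SELECT * FROM items WHERE id = $1;
  EXECUTE get_item(42);
  DEALLOCATE get_item;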
Regards
Gaetano Me
onnections means a very huge load; could you also provide the result of
"vmstat 5"? My webserver already thrashes with -c 120!
How many connections can your postgres manage?
You have to consider using a connection pool with that amount of connections.
Regards
Gaetano Mendola
--
trying to spawn 300 new processes a
|>second.
Not to mention that a squid proxy mounted in reverse proxy mode will
help a lot.
Regards
Gaetano Mendola
, you may benefit from using
| PREPAREd queries.
|
| -Mike
With his load it will not change anything.
Regards
Gaetano Mendola
50%
| hits, which it otherwise might.
So, I didn't understand how the statistics histogram works.
I'm going to take a look at analyze.c
Regards
Gaetano Mendola
to date the statistics.
Does anyone have plans to implement it?
After all, the idea is simple: during normal selects, compare the estimated rows and
the actual extracted rows, then use this "free" information to refine the histograms.
Regards
Gaetano Mendola
Neil Conway wrote:
On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:
Now I'm reading an article, written by the same author that inspired the magic "300"
in analyze.c, about "Self-tuning Histograms". If this is implemented, I understand
we can get rid of "vacuum a
gards
Gaetano Mendola
e with this?
>
What do you need ?
Regards
Gaetano Mendola
Regards
Gaetano Mendola
s the dual Xeon, when
running on one and both cpus.
Here http://www6.tomshardware.com/cpu/20030422/ both were tested and there is
a database performance section; unfortunately they used MySQL.
Regards
Gaetano Mendola
[EMAIL PROTECTED] wrote:
If the resource owner is always responsible for releasing locked
buffers, who releases the locks if the backend crashes?
The semaphore "undo" I hope.
Regards
Gaetano Mendola
Brock Henry wrote:
> Any comments/suggestions would be appreciated.
Tune also the disk I/O elevator.
look at this: http://www.varlena.com/varlena/GeneralBits/49.php
Regards
Gaetano Mendola
t cure the CSStorm bug, they do lessen its effects in terms of
> real performance loss.
I proposed weeks ago to see how the CSStorm is affected by sticking each backend
to one processor ( where the process was born ) using the cpu-affinity capability
( kernel 2.6 ); is this proposal completely out of
Tom Lane wrote:
Gaetano Mendola <[EMAIL PROTECTED]> writes:
I proposed weeks ago to see how the CSStorm is affected by sticking each
backend to one processor ( where the process was born ) using the
cpu-affinity capability ( kernel 2.6 ), is this proposal completely
out of mind ?
Th
ly out of
|>mind ?
|
|
| I don't see how that would help. The problem is not backends switching
| processors, it's the buffermgrlock needing to be swapped between processors.
This is not clear to me. What happens if during a spinlock a backend is
moved away from one processor to another
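For reference, the kernel 2.6 cpu-affinity capability mentioned above can be
driven from the shell with taskset (util-linux / schedutils); the pid and data
directory below are placeholders:

  # pin an already-running backend to CPU 0
  taskset -cp 0 12345

  # or start the postmaster with a fixed affinity mask
  taskset -c 0 pg_ctl start -D /var/lib/pgsql/data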
ans ( and not so much ),
so don't hope that doubling this parameter will push postgres to use more RAM.
Regards
Gaetano Mendola
Jim C. Nasby wrote:
> I'm wondering if there's any way I can tweak things so that the estimate
> for the query is more accurate (I have run analyze):
Can you post your configuration file ? I'd like to see for example your
settings about: random_page_cost and effective_cache_s
'll
continue to work consistently in production code.
For sure it will not break the goal: "check the existence".
Regards
Gaetano Mendola
46..1474.709 rows=505199 loops=1)
-> Hash (cost=4.00..4.00 rows=1 width=16) (actual time=0.083..0.083 rows=0
loops=1)
-> Index Scan using user_login_login_key on user_login u
(cost=0.00..4.00 rows=1 width=16) (actual time=0.064..0.066 rows=1 loops=1)
Index Cond:
user_logs_2004
Any reason you didn't use inheritance?
I did it that way just to avoid using a postgresql-specific feature.
I can give it a try and I'll let you know; however, the question remains:
why is the index usage lost if it is used in that way?
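A sketch of the inheritance-based alternative; the parent table name and columns
are assumptions, only user_logs_2004 comes from the fragment above. Queries
against the parent automatically scan the children too:

  CREATE TABLE user_logs (
      login      text,
      login_time timestamp
  );
  CREATE TABLE user_logs_2004 () INHERITS (user_logs);

  SELECT * FROM user_logs WHERE login = 'some_user';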
Regards
Gaetano Mendola
--
. What Postgres version are you using? I thought this was fixed
in 7.4, but maybe not ...
Yes, I'm using 7.4.x, so it was not fixed...
Regards
Gaetano Mendola
Index Cond: ((login)::text = 'kalman'::text)
Total runtime: 37122.069 ms
(10 rows)
and as you can see this path is not applicable either :-(
Any other suggestions?
Regards
Gaetano Mendola
Steven Rosenstein wrote:
DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE
collect_date='2005-02-05';
DELETE FROM detail WHERE detail.sum_id in ( select id from summary )
AND collect_date='2005-02-05';
Regards
Gaetano Mendola
rever.
7.3 suffers from bad performance if you use IN.
Transform the IN into an EXISTS construct.
If it is an option for you, upgrade your DB engine.
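Based on the query shape quoted earlier in the thread, the rewrite would look
roughly like this:

  -- IN form (planned poorly by 7.3):
  DELETE FROM detail
   WHERE sum_id IN ( SELECT id FROM summary )
     AND collect_date = '2005-02-05';

  -- EXISTS form:
  DELETE FROM detail
   WHERE EXISTS ( SELECT 1 FROM summary s WHERE s.id = detail.sum_id )
     AND collect_date = '2005-02-05';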
Regards
Gaetano Mendola
>>2) pgpool sends query only to the master if the query is other than
>> SELECT.
Don't you think that this is unsafe ?
SELECT foo(id), id
FROM bar;
where foo has side effects.
Is pgpool able to detect this and perform such a select on the master?
Regar
ing, unless you
> write it yourself. I think there's some work being done in this area,
> though.
Given my last attempts to perform a horizontal partition, I have to say
that postgres does not support it even if you try to write it yourself
(see my post "horizontal partion" ).
Hi all,
I'm stuck with a select that uses a hash join where it should not:
6 seconds vs 0.3 ms!!
If you need other info in order to improve the planner,
let me know.
Regards
Gaetano Mendola
empdb=# explain analyze SELECT id_sat_request
empdb-#FROM sat_request sr,
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>>If you need other info in order to improve the planner,
>
>
> ... like, say, the PG version you are using, or the definitions of the
> views involved? It's difficult to say much of anything about
In my experience, not at all; you have to wonder whether some of those are "idle in
transaction",
which are really a pain in the @#$
Regards
Gaetano Mendola
Tom Lane wrote:
> but this behavior isn't reproduced in the later message, so I wonder if
> it wasn't an artifact of something else taking a chunk of time.
I think it is due to the fact that the first queries were performed during peak hours.
Regards
Gaetano Mendola
--
Christopher Browne wrote:
> After a long battle with technology, Gaetano Mendola <[EMAIL PROTECTED]>, an
> earthling, wrote:
>
>>JM wrote:
>>
>>>Hi ALL,
>>>
>>> I was wondering if there is a DB performance reduction if
>>>there
r that table
 relfilenode | relname  | relpages | reltuples
-------------+----------+----------+-------------
       18376 | messages |    69472 | 1.60644e+06
How was it possible to accumulate 6000 wasted pages on that table?
Between these two calls:
[2005-02-22 05:25:03 CET] Performing: VACUUM
gh:
max_fsm_pages | 200
max_fsm_relations | 1000
at least after a vacuum full I see that these numbers are overkill...
Regards
Gaetano Mendola