On 9/11/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
I'd suggest two things.
One: get a better ERP... :) or at least one you can inject some
intelligence into. Two: upgrade to PostgreSQL 8.1, or even 8.2, which
will be released moderately soon, and if you won't be going into
production dire…
Florian Weimer <[EMAIL PROTECTED]> writes:
> I've done that. Fortunately, ANALYZE time didn't increase by that
> much compared to the default (by just a factor of 10).
With really high stats targets you also have to keep an eye on planning
time. The extra data in the stats table can cause plan…
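A rough way to watch that planning-time cost, as a sketch: plain EXPLAIN
plans the query without executing it, so timing it in psql mostly measures
planning. The table and column names here are placeholders.

    \timing
    -- plans only, does not execute; elapsed time is dominated by planning
    EXPLAIN SELECT * FROM large_rel WHERE field_1 = 42;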
Brian,
On 9/11/06 8:50 AM, "Brian Wipf" <[EMAIL PROTECTED]> wrote:
> That's a great idea. One question though. If I put all 16 drives in a
> RAID 10 for the database, where should I put the logs? On that large
> RAID set? If I use a RAID controller with a battery-backed cache for
> the mirrored laptop drives…
On Mon, 2006-09-11 at 13:14, Piñeiro wrote:
> Hi,
>
> a week ago we migrated a Woody (PostgreSQL 7.2.1) server to Sarge
> (PostgreSQL 7.4.7). To migrate the database we used a dump, created
> with pg_dump with these options:
> pg_dump -U -c -F p -O -v -f
>
> We have a search that, using Woody, takes about 1-2 min…
On Mon, 2006-09-11 at 20:14 +0200, Piñeiro wrote:
> Hi,
>
> a week ago we migrated a Woody (PostgreSQL 7.2.1) server to Sarge
> (PostgreSQL 7.4.7). To migrate the database we used a dump, created
> with pg_dump with these options:
> pg_dump -U -c -F p -O -v -f
>
> We have a search that, using Woody, takes about 1…
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Piñeiro
> Subject: [PERFORM] Performance problem with Sarge compared with Woody
> a week ago we migrated a Woody (PostgreSQL 7.2.1) server to Sarge
> (PostgreSQL 7.4.7). To migrate the database we used a dump, u…
Hi,
a week ago we migrated a Woody (PostgreSQL 7.2.1) server to Sarge
(PostgreSQL 7.4.7). To migrate the database we used a dump, created with
pg_dump with these options:
pg_dump -U -c -F p -O -v -f
We have a search that, using Woody, takes about 1-2 minutes, but with
Sarge it has been executing for about 2 hours, and at l…
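One hedged suggestion not spelled out in the thread: a dump-and-restore
does not carry planner statistics over, so until ANALYZE runs on the new
server the 7.4 planner is working blind. A minimal first step after the
restore completes:

    -- run once in the restored database; collects planner statistics
    VACUUM ANALYZE;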
* Tom Lane:
> Yeah, n_distinct estimation from a sample is inherently hard :-(. Given
> that you have such a long tail on the distribution, it might be worth
> your while to crank the stats target for that column all the way to the
> maximum (1000).
I've done that. Fortunately, ANALYZE time did…
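For reference, the per-column knob Tom describes looks roughly like this:
a sketch using the thread's names (large_rel/field_1 are assumed from the
quoted plan), with 1000 being the maximum target in that era.

    -- raise the sample size for the skewed column, then re-collect stats
    ALTER TABLE large_rel ALTER COLUMN field_1 SET STATISTICS 1000;
    ANALYZE large_rel;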
On 8-Sep-06, at 2:44 AM, Luke Lonergan wrote:
One modification: we implemented two internal 60GB laptop hard drives
with an additional 3Ware 8006-2LP controller on each machine for the OS.
This frees up all 16 SATA II drives for data.
That's a great idea. One question though. If I put all…
Florian Weimer <[EMAIL PROTECTED]> writes:
>> Maybe larger stats targets would help.
> I've set default_statistics_target to 100 and rerun ANALYZE on that
> table. The estimate went down to 43108 (and the hash join is still
> the preferred plan). ANALYZE with default_statistics_target = 200
> (wh…
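The session-level variant Florian describes would look roughly like this
(table name assumed from the quoted plan):

    -- applies to columns that have no explicit per-column target
    SET default_statistics_target = 200;
    ANALYZE large_rel;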
* Tom Lane:
> Florian Weimer <[EMAIL PROTECTED]> writes:
>> -> Bitmap Index Scan on large_rel_1_field_1 (cost=0.00..2003.09
>>    rows=193739 width=0) (actual time=0.148..0.148 rows=12 loops=1)
>>    Index Cond: (n.field_1 = "outer".field_2)
>
> What you need to look into is w…
Florian Weimer <[EMAIL PROTECTED]> writes:
> -> Bitmap Index Scan on large_rel_1_field_1 (cost=0.00..2003.09
>    rows=193739 width=0) (actual time=0.148..0.148 rows=12 loops=1)
>    Index Cond: (n.field_1 = "outer".field_2)
What you need to look into is why that rowcount esti…
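One way to dig into an estimate that is off by this much (193739 estimated
vs. 12 actual rows) is to look at what ANALYZE actually recorded for the
join column; a sketch against the pg_stats view, with table and column
names taken from the quoted plan (adjust as needed):

    -- n_distinct and the most-common-values list drive the join estimate
    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'large_rel' AND attname = 'field_1';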
fardeen memon <[EMAIL PROTECTED]> writes:
> here is the output of the explain query after changing the tra_date
> column to timestamp.
If you want intelligent commentary, please (a) post EXPLAIN ANALYZE
output, not EXPLAIN output, and (b) don't mangle the indentation. This
is just about unreadable :…
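For anyone following along: EXPLAIN shows only the planner's estimates,
while EXPLAIN ANALYZE also executes the query and prints actual row counts
and timings next to them, which is what makes estimation errors visible.
A minimal example with placeholder names:

    -- executes the query; compare estimated rows with actual rows
    EXPLAIN ANALYZE
    SELECT * FROM some_table WHERE tra_date >= '2006-01-01';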
Hi,
for this simple join of two tables,
SELECT * FROM large_rel n, smaller_rel a
WHERE n.field_1 = a.field_2 AND a.key = '127.0.0.1';
PostgreSQL 8.1.4 chooses an extremely bad query plan:
Hash Join (cost=283.45..8269374.38 rows=14137 width=94)
  Hash Cond: ("outer".field_1 = "inner".field_…
Thanks for the reply. You are right: after I changed tra_date to
timestamp in the view, it considered the index and the performance did
increase a bit. But compared to the query without the joins it's still
much slower. Any idea why? Here is the output of the EXPLAIN query
after changi…
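The behaviour fardeen describes is consistent with a type mismatch
defeating the index before the column change; if the remaining slowness
is still index-related, one hedged illustration (table name hypothetical)
is to make sure the timestamp column is indexed and re-analyzed after the
change:

    -- hypothetical names: after tra_date becomes timestamp, an index on it
    -- can satisfy timestamp comparisons directly, with no cast in the way
    CREATE INDEX transactions_tra_date_idx ON transactions (tra_date);
    ANALYZE transactions;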