[PERFORM] PostgreSQL 7.4.3 Performance issues on OpenBSD

2005-02-28 Thread Glenn Kusardi
Hello,

I'm experiencing performance problems with 7.4.3 on OpenBSD 3.6, at
least I think so. It is running on a Xeon 3 GHz with 2 GB RAM.

I have a table with 22 columns, all integer, timestamp or varchar, and
10 indexes on integer, timestamp and varchar columns.

The table has 8,500 rows (but it is growing). I try to run an UPDATE on the
table that affects 7,000 rows. This update takes about 2-6 seconds.

Does it have to be that slow? The same query runs faster on MySQL and
Oracle databases on similar machines.

EXPLAIN ANALYZE UPDATE ... tells me:
QUERY PLAN:
Seq Scan on table (cost=0.00..286.57 rows=4804 width=146) (actual
time=405.206..554.433 rows=7072 loops=1)
Filter: (system_knoten_links > 3501)
Total runtime: 2928.500 ms

So that looks fine to me, except the runtime.

Without the indexes the query is fast, at 456 ms.
Disabling fsync to avoid some disk operations doesn't help either.
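
One workaround worth testing when a bulk UPDATE has to maintain that many
indexes is to drop the nonessential ones, run the UPDATE, and rebuild them
afterwards. A minimal sketch, with hypothetical table, column and index
names:

BEGIN;
-- Each updated row otherwise has to be written into all 10 indexes,
-- which is where most of the ~2.9 s total runtime likely goes.
DROP INDEX idx_knoten_erstellt;   -- repeat for the other indexes the
                                  -- UPDATE does not need
UPDATE knoten SET status = 1 WHERE system_knoten_links > 3501;
CREATE INDEX idx_knoten_erstellt ON knoten (erstellt);
COMMIT;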

Sincerely TIA,
Glenn






Re: [PERFORM] is pg_autovacuum so effective ?

2005-02-28 Thread Gaetano Mendola

Markus Schaber wrote:
> Hi, Gaetano,
> 
> Gaetano Mendola wrote:
> 
> 
>>I have the same requirement too. Actually pg_autovacuum cannot be
>>instructed "per table", so sometimes the global settings are not good
>>enough. I have a table of logs with 6 million rows (3 years of logs);
>>I insert ~6,000 rows per day into it. I'm running pg_autovacuum
>>set to ANALYZE or VACUUM a table once 10% of it has been touched.
>>With that setting pg_autovacuum will analyze this table every 3 months!!!
> 
> 
> If you have only inserts, and only so few on a large table, you do not
> need to vacuum often; when you do, it is not to reclaim space but only
> to prevent transaction ID wraparound (which pg_autovacuum ensures).
>
> And if the data distribution does not change, frequently calling ANALYZE
> does not help much either.

Yes, I'm aware of that; I need the ANALYZE, though, because I usually run
selects on that table covering the last 24 hours, so I have to analyze it
to collect statistics for that period.
Besides that, I tried to partition the table, using both techniques I
know of:

1) A view with UNION ALL over all the tables collecting these logs
2) Using inheritance

Both work in theory, but in practice they don't: the index scan is lost
as soon as you use the view/table inside other views or join against it
(a sketch of the two techniques follows below).
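
For illustration, a minimal sketch of the two techniques with hypothetical
table names (note that 7.4/8.0 have no constraint exclusion, so the planner
cannot skip child tables based on the WHERE clause):

-- (1) A view with UNION ALL over per-period log tables
CREATE TABLE logs_2004 (ts timestamp, msg text);
CREATE TABLE logs_2005 (ts timestamp, msg text);
CREATE VIEW all_logs AS
  SELECT * FROM logs_2004
  UNION ALL
  SELECT * FROM logs_2005;

-- (2) Inheritance: children are scanned implicitly via the parent
CREATE TABLE logs (ts timestamp, msg text);
CREATE TABLE logs_2005_02 () INHERITS (logs);
CREATE INDEX logs_2005_02_ts_idx ON logs_2005_02 (ts);

-- Direct selects can still use the children's indexes; nesting the
-- view/parent inside other views or joins is where the index scans
-- tend to get lost:
SELECT * FROM logs WHERE ts > now() - interval '24 hours';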

I've heard that the next version of pg_autovacuum can be instructed "per
table"; is that true?


Regards
Gaetano Mendola





[PERFORM] PG block sizes

2005-02-28 Thread Tom Arthurs
Hi, All
I'm trying to tune a software RAID 0 (striped) array on a Solaris 9 SPARC box.
Currently I'm using a RAID 1 (mirrored) array on two disks for the data area,
and I put in 4 new drives last night (all are FC-AL). The new array has
a width of 4, and I used the default interleave factor of 32k. I believe
a smaller interleave factor may get me better read performance (I'm seeing
about a 35% increase in bulk load performance, but 7-8x worse read
performance, between the two RAID setups).
Conventional wisdom is that using an interleave factor <= the database's
default block size gives the best read performance. I would like to try that
(though this testing is burning a lot of daylight, since I'll have to reload
the db every time I remake the RAID).
Question: what's the best block size to use for PostgreSQL on Solaris? (I'm
using 7.4.5.)
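
For reference, PostgreSQL's block size (BLCKSZ) defaults to 8 kB and is
fixed at compile time, so changing it on 7.4 means editing BLCKSZ and
rebuilding. A hedged check (this read-only parameter may not exist on a
release as old as 7.4):

SHOW block_size;   -- reports the compiled-in block size in bytes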


Re: [PERFORM] is pg_autovacuum so effective ?

2005-02-28 Thread Markus Schaber
Hi, Matthew,

Matthew T. O'Connor wrote:

> The version of pg_autovacuum that I submitted for 8.0 could be
> instructed "per table", but it didn't make the cut.  Aside from being
> moved out of contrib and integrated into the backend, per-table
> autovacuum settings are probably the next highest priority.

What was the reason for non-acceptance?

Is it available as a standalone project?


Markus
-- 
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:[EMAIL PROTECTED] | www.logi-track.com



Re: [PERFORM] is pg_autovacuum so effective ?

2005-02-28 Thread Matthew T. O'Connor
Gaetano Mendola wrote:
> Yes, I'm aware of that; I need the ANALYZE, though, because I usually run
> selects on that table covering the last 24 hours, so I have to analyze it
> to collect statistics for that period.
> Besides that, I tried to partition the table, using both techniques I
> know of:
> 1) A view with UNION ALL over all the tables collecting these logs
> 2) Using inheritance
> Both work in theory, but in practice they don't: the index scan is lost
> as soon as you use the view/table inside other views or join against it.
> I've heard that the next version of pg_autovacuum can be instructed "per
> table"; is that true?

The version of pg_autovacuum that I submitted for 8.0 could be
instructed "per table", but it didn't make the cut.  Aside from being
moved out of contrib and integrated into the backend, per-table
autovacuum settings are probably the next highest priority.



Re: [PERFORM] Peformance Tuning Opterons/ Hard Disk Layout

2005-02-28 Thread Markus Schaber
Hi, John,

John Allgood wrote:
> My question is: what is the best way to set up
> Postgres databases on different disks? I have set up multiple postmasters
> on this system as a test. The only problem was configuring each
> database (i.e. postgresql.conf, pg_hba.conf). Is there any way in
> Postgres to have everything in one cluster and have it separated onto
> multiple drives?

Using PostgreSQL 8.0, the newly introduced "tablespaces" solve all this:
http://www.postgresql.org/docs/8.0/interactive/manage-ag-tablespaces.html
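
A minimal sketch of 8.0 tablespaces, assuming a second disk mounted at a
hypothetical path that already exists and is owned by the postgres user:

CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';
-- Place a new table and its index on the second drive:
CREATE TABLE orders (id int, placed timestamp) TABLESPACE disk2;
CREATE INDEX orders_placed_idx ON orders (placed) TABLESPACE disk2;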

Using PostgreSQL 7.4, you can relatively easily create single databases on
different drives. However, separating out single tables or indices
involves some black symlink magic. See Google and
http://www.postgresql.org/docs/7.4/interactive/manage-ag-alternate-locs.html

HTH,
Markus
-- 
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:[EMAIL PROTECTED] | www.logi-track.com



Re: [PERFORM] Possible interesting extra information for explain

2005-02-28 Thread Simon Riggs
On Fri, 2005-02-25 at 08:49 -0500, Jeff wrote:
> Also, another thing I started working on back in the day and hope to
> finish when I get time (that is a funny idea) is having EXPLAIN ANALYZE
> report when a step required the use of temp files.

Sounds useful. Please work on it...

Best Regards, Simon Riggs




Re: [PERFORM] is pg_autovacuum so effective ?

2005-02-28 Thread Bruno Wolff III
On Mon, Feb 28, 2005 at 16:46:34 +0100,
  Markus Schaber <[EMAIL PROTECTED]> wrote:
> Hi, Matthew,
> 
> Matthew T. O'Connor wrote:
> 
> > The version of pg_autovacuum that I submitted for 8.0 could be
> > instructed "per table", but it didn't make the cut.  Aside from being
> > moved out of contrib and integrated into the backend, per-table
> > autovacuum settings are probably the next highest priority.
> 
> What was the reason for non-acceptance?

It wasn't reviewed until very close to the feature freeze, because the
people who could do the review were busy, and then there wasn't enough
time to iron things out before the freeze.



[PERFORM] wal_sync_methods

2005-02-28 Thread Stefan Hans
Hi *,

I am looking for the fastest wal_sync_method
(Postgres 8, Linux (Red Hat) 2.4.29, ext3, SCSI hardware RAID 5).

Any experiences and/or tips?
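
There is no single fastest setting: which methods are available, and which
one wins, depends on the platform, so the usual advice is to time each one
on your own hardware. A sketch of the procedure (assuming the standard
method names; availability varies by OS):

SHOW wal_sync_method;   -- the current setting (fdatasync is the usual
                        -- Linux default)
-- In postgresql.conf, try each of fdatasync, fsync, open_sync and
-- open_datasync (where supported), run "pg_ctl reload" after each
-- change, and re-run the same write-heavy benchmark between changes.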
Thanks in advance,
Stefan


Re: [PERFORM] High end server and storage for a PostgreSQL OLTP system

2005-02-28 Thread Cosimo Streppone
Cosimo Streppone wrote:
> Merlin Moncure wrote:
>> If everything is working the way it's supposed to, 8.0 should be faster
>> than 7.1 (like, twice as fast) for what you are probably trying to do.
>
> In the next days I will be testing the entire application with the
> same database, only changing the backend from 7.1 to 8.0, so this is
> a somewhat perfect condition for a "real-world" benchmark
> of Pg 8.0 vs 7.1.x performance.
The "next days" have come. I did a complete migration to Pg 8.0.1
from 7.1.3. It was a *huge* jump.
The application is exactly the same, and the database structure
is the same: I only dumped the entire 7.1.3 db, changed the backend
version, and restored the data into the 8.0.1 db.
The performance level of Pg 8 is at least *five* times higher
(faster!) than 7.1.3 in "query-intensive" transactions,
which is absolutely astounding.
In my experience, Pg 8 handles non-unique indexes with low cardinality,
built on numeric and integer types, far better; these are very common
in our application.
--
Cosimo


Re: [PERFORM] High end server and storage for a PostgreSQL OLTP system

2005-02-28 Thread Tom Lane
Cosimo Streppone <[EMAIL PROTECTED]> writes:
> The performance level of Pg 8 is at least *five* times higher
> (faster!) than 7.1.3 in "query-intensive" transactions,
> which is absolutely astounding.

Cool.

> In my experience, Pg 8 handles non-unique indexes with low cardinality,
> built on numeric and integer types, far better; these are very common
> in our application.

Yes, we've fixed a number of places where the btree code was inefficient
with large numbers of equal keys.  I'm not sure that explains a
5x speedup all by itself, though.

regards, tom lane



[PERFORM] Inheritence versus delete from

2005-02-28 Thread Sven Willenberger
Trying to determine the best overall approach for the following
scenario:

Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per month by year's end). Basically
there will end up being a lot of historical data with little value
beyond archival.

The question then arises as to the best approach, of which I have
enumerated three:

1) Just allow the records to accumulate and maintain constant vacuuming,
etc allowing for the fact that most queries will only be from a recent
subset of data and should be mostly cached.

2) Each month (see the runnable sketch after item 3):
SELECT * INTO 3monthsago_dynamically_named_table FROM bigtable WHERE
targetdate < $3monthsago;
DELETE FROM bigtable where targetdate < $3monthsago;
VACUUM ANALYZE bigtable;
pg_dump 3monthsago_dynamically_named_table for archiving;

3) Each month:
CREATE TABLE newmonth_dynamically_named_table () INHERITS
(mastertable);
modify the copy.sql script to copy newmonth_dynamically_named_table;
pg_dump 3monthsago_dynamically_named_table for archiving;
drop table 3monthsago_dynamically_named_table;
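
A runnable sketch of the monthly rotation in (2), with hypothetical literal
names standing in for the dynamically generated ones:

-- Roll rows older than three months out of the live table:
SELECT * INTO archive_2004_11 FROM bigtable
 WHERE targetdate < now() - interval '3 months';
DELETE FROM bigtable
 WHERE targetdate < now() - interval '3 months';
VACUUM ANALYZE bigtable;
-- Then archive and, once the dump is verified, drop:
--   pg_dump -t archive_2004_11 mydb > archive_2004_11.sql
--   DROP TABLE archive_2004_11;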

Any takes on which approach makes the most sense from a performance and/or
maintenance point of view, and are there other options I may have missed?

Sven Willenberger




Re: [PERFORM] Inheritence versus delete from

2005-02-28 Thread Tom Lane
Sven Willenberger <[EMAIL PROTECTED]> writes:
> 3) Each month:
> CREATE TABLE newmonth_dynamically_named_table () INHERITS
> (mastertable);
> modify the copy.sql script to copy newmonth_dynamically_named_table;
> pg_dump 3monthsago_dynamically_named_table for archiving;
> drop table 3monthsago_dynamically_named_table;

A number of people use the above approach.  It's got some limitations,
mainly that the planner isn't super bright about what you are doing
--- in particular, joins involving such a table may work slowly.

On the whole I'd probably go with the other approach (one big table).
A possible win is to use CLUSTER rather than VACUUM ANALYZE to recover
space after your big deletes; however this assumes that you can schedule
downtime to do the CLUSTERs in.
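
A sketch of that approach with hypothetical names (the 7.4/8.0 syntax is
CLUSTER indexname ON tablename; CLUSTER takes an exclusive lock and
rewrites the table, hence the need for downtime):

DELETE FROM bigtable WHERE targetdate < now() - interval '3 months';
CLUSTER bigtable_targetdate_idx ON bigtable;  -- rewrites the table,
                                              -- reclaiming dead space
ANALYZE bigtable;                             -- refresh planner statistics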

regards, tom lane



Re: [PERFORM] Inheritence versus delete from

2005-02-28 Thread John Arbash Meinel
Sven Willenberger wrote:

> Trying to determine the best overall approach for the following
> scenario:
> Each month our primary table accumulates some 30 million rows (which
> could very well hit 60+ million rows per month by year's end). Basically
> there will end up being a lot of historical data with little value
> beyond archival.

If this statement is true, then 2 seems the best plan.

> 2) Each month:
> SELECT * INTO 3monthsago_dynamically_named_table FROM bigtable WHERE
> targetdate < $3monthsago;
> DELETE FROM bigtable where targetdate < $3monthsago;
> VACUUM ANALYZE bigtable;
> pg_dump 3monthsago_dynamically_named_table for archiving;

It seems like this method would force the table to stay small and would
keep your queries fast.  But if you ever actually *need* the old data,
then you start having problems.
...
I think (3) would tend to force a whole bunch of joins (one for each
child table), rather than just one join against 3 months of data.

> Any takes on which approach makes the most sense from a performance and/or
> maintenance point of view, and are there other options I may have missed?
> Sven Willenberger

If you can get away with it, 2 is the best.

John
=:->




Re: [PERFORM] seq scan cache vs. index cache smackdown

2005-02-28 Thread PFC

>> (For those not knowing - it's ReadFile/WriteFile where you pass an array
>> of "this many bytes to this address" as parameters)
>
> Isn't that like the BSD writev()/readv() that Linux supports also?  Is
> that something we should be using on Unix if it is supported by the OS?

Nope, readv()/writev() read/write sequentially in the file, to/from a
list of buffers in memory.  The Windows calls read/write at random file
offsets, to/from a list of buffers.

