I am running the precompiled binary of PostgreSQL 7.1.2 on a Red Hat 7.1 system (a dual
Celeron machine with 256 MB RAM, kernels 2.4.4 and 2.4.5).
(Installing the new 7.1.3 doesn't seem to solve the problem.)
I am connecting to the DB with a Perl program (using Perl 5.6.0 with DBD-Pg-1.01 and
It runs many transactions; a transaction is committed at least after every 5 inserts.
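The commit-every-few-inserts pattern described above can be sketched as follows. This is a hedged illustration only: it uses Python's sqlite3 in place of the original Perl/DBD-Pg connection to PostgreSQL (no server is assumed here), and the table layout and sample data are hypothetical.

```python
# Hedged sketch: commit a transaction at least after every 5 INSERTs.
# sqlite3 stands in for the actual DBD::Pg/PostgreSQL connection;
# table and data are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_b (pc_id INTEGER, val TEXT)")

BATCH_SIZE = 5
rows = [(i, "xxx") for i in range(23)]  # hypothetical sample data

for start in range(0, len(rows), BATCH_SIZE):
    batch = rows[start:start + BATCH_SIZE]
    # each executemany + commit is one short transaction
    conn.executemany("INSERT INTO tbl_b VALUES (?, ?)", batch)
    conn.commit()

count = conn.execute("SELECT count(*) FROM tbl_b").fetchone()[0]
print(count)  # 23
```

Keeping transactions short like this bounds how much work each commit has to flush, which matters when many such transactions run back to back.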
The filesystem doesn't get slow (reading a big file still works at >20 MB/s).
14839 postgres 20 0 19948  19M 18980 R 98.5 7.7 477:24 postmaster
14819 postgres  8 0  1856 1856  1700 S  0.0
> ...by dropping the index, using several COPY commands at the same time loading
> different parts of the data and then creating the index again.
> At the time of the inserts no other processes than the COPYs were connected to the
> database.
>
> /Jonas Lindholm
>
>
> Andrea
38:23 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > I am aware of the performance drawbacks because of indices and
> > triggers. In fact I have a trigger and an index on the most populated
> > table. It is not possible
On Wed, 22 Aug 2001 19:19:42 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > I took option 1 and managed to create a profile of a slow and a fast run:
>
> It's difficult to compare these profiles, because they seem t
sample script that I use to generate this bug:
>
> begin transaction;
> insert into tbl_b values (1, 'xxx');
> delete from tbl_b where pc_id=1;
> ERROR: triggered data change violation on relation "tbl_b"
>
> How to solve this probl
eive the information gained from "vacuum analyze".
Greetings
Andreas
On Mon, 03 Sep 2001 12:26:39 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > To make it more comparable I have made two additional runs, a slow and
> > a
This is the last part of a "vacuum verbose analyze;":
NOTICE: --Relation pg_toast_17058--
NOTICE: Pages 2: Changed 0, reaped 0, Empty 0, New 0; Tup 9: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 113, MaxLen 2034; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u
The query optimizer uses the index only if enough data is present in the table.
If only a few rows are available, a Seq Scan performs better and is therefore used.
Furthermore, one of the problems (which is hopefully solved in version 7.2) is that the
query optimizer used for primary keys/foreign keys
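The stats-driven choice between a Seq Scan and an Index Scan can be sketched with a small example. This is a hedged stand-in using Python's sqlite3 rather than a real PostgreSQL server (none is assumed here); sqlite's planner behaviour is only analogous, and the table and index names are made up.

```python
# Hedged analogy: a planner picks an index scan only when its statistics
# make that look cheaper than scanning the whole table. sqlite3 stands
# in for PostgreSQL; table/index names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (pc_id INTEGER, val TEXT)")
conn.execute("CREATE INDEX t_pc_id ON t (pc_id)")
conn.executemany("INSERT INTO t VALUES (?, 'xxx')",
                 [(i,) for i in range(1000)])
conn.execute("ANALYZE")  # refresh planner statistics, like VACUUM ANALYZE

# With 1000 rows and fresh stats, an equality lookup uses the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT val FROM t WHERE pc_id = 42"
).fetchall()
print(plan[0][-1])  # e.g. "SEARCH t USING INDEX t_pc_id (pc_id=?)"
```

The point of the analogy: it is the collected statistics, not the mere existence of the index, that steer the plan, which is why stale statistics lead to Seq Scans on tables that have since grown.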
slower in Test B.
For both tests (steps 2-4) I use one connection to the database.
If I quit the connection after step 3 and establish a new connection for step 4, it
takes 39 seconds in either case.
-> Using one connection, the optimizer for pk/fk checking is not updated by a "vacuum
analyze".
Why don't you skip the automatic index creation for primary keys and let the user
decide to create an index that should be used in any case, regardless of what the
query planner recommends?
On Fri, 05 Oct 2001 15:15:06 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Andreas Wern
g decisions.
Then I have to execute a "vacuum analyze" or reconnect in the case of foreign key checking.
I would like to tune PostgreSQL to use an index in any case if one is available.
On Fri, 05 Oct 2001 18:01:08 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Andreas Wernitznig <[