Re: [BUGS] memory leak while using vacuum

2001-08-22 Thread Tom Lane

Achim Krümmel <[EMAIL PROTECTED]> writes:
> I suppose this problem is easy to reproduce.

No, it isn't.  At the very least, we need the full schema for the
table causing this.

regards, tom lane




[BUGS] memory leak while using vacuum

2001-08-22 Thread Achim Krümmel

I am running PostgreSQL 7.1.3 on an Intel Pentium III.
My operating system is SuSE Linux 7.1 with kernel 2.4.4 (ELF).
I compiled PostgreSQL with gcc 2.95.2.
 
My problem:
when using "vacuum analyze" on very large tables (I have one of
about 30GB) the memory usage increases continuously until no memory is
left and the kernel kills the process.

I suppose this problem is easy to reproduce. Just create a table like
this:

   Table "wa_cube"
 Attribute | Type  | Modifier
---+---+--
 dim1  | numeric(38,0) |
 dim2  | numeric(38,0) |
 dim3  | numeric(38,0) |
 dim4  | numeric(38,0) |
 dim5  | numeric(38,0) |
 lvkumsatz | numeric(14,3) |
 vkumsatz  | numeric(14,3) |
 ekumsatz  | numeric(14,3) |
 menge | numeric(14,3) |
Index: wa_cube_idx

load a large amount of data into it, then create an index like this:

Index "wa_cube_idx"
 Attribute | Type
---+---
 dim1  | numeric(38,0)
 dim2  | numeric(38,0)
 dim3  | numeric(38,0)
 dim4  | numeric(38,0)
btree

and run: vacuum analyze
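
For reference, here is a minimal SQL sketch of the reproduction steps
above, assuming the schema shown in this report (how the ~30GB of data
gets loaded is left as a placeholder):

-- Table from the report: five numeric dimension columns, four value columns.
CREATE TABLE wa_cube (
    dim1      numeric(38,0),
    dim2      numeric(38,0),
    dim3      numeric(38,0),
    dim4      numeric(38,0),
    dim5      numeric(38,0),
    lvkumsatz numeric(14,3),
    vkumsatz  numeric(14,3),
    ekumsatz  numeric(14,3),
    menge     numeric(14,3)
);

-- Composite btree index on the first four dimensions, as described above.
CREATE INDEX wa_cube_idx ON wa_cube (dim1, dim2, dim3, dim4);

-- Load a very large volume of rows here (about 30GB in my case), then:
VACUUM ANALYZE wa_cube;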


greetings,
   Achim Kruemmel




Re: [BUGS] Re: low performance

2001-08-22 Thread Andreas Wernitznig

I took option 1 and managed to create a profile of a slow and a fast run:

The most frequently called functions of the FAST run:

  %   cumulative   self              self     total
 time   seconds   seconds     calls  Ts/call  Ts/call  name
 0.00      0.00      0.00  15725437     0.00     0.00  AllocSetAlloc
 0.00      0.00      0.00  15380742     0.00     0.00  MemoryContextAlloc
 0.00      0.00      0.00  11296700     0.00     0.00  ExecEvalExpr
 0.00      0.00      0.00   8276639     0.00     0.00  newNode
 0.00      0.00      0.00   5430717     0.00     0.00  MemoryContextSwitchTo
 0.00      0.00      0.00   4492641     0.00     0.00  LockBuffer
 0.00      0.00      0.00   4425642     0.00     0.00  AllocSetFree
 0.00      0.00      0.00   4356571     0.00     0.00  pfree
 0.00      0.00      0.00   3873174     0.00     0.00  pq_getbyte
 0.00      0.00      0.00   3799725     0.00     0.00  appendStringInfoChar

The most frequently called functions of the SLOW run:

  %   cumulative   self              self     total
 time   seconds   seconds     calls  Ts/call  Ts/call  name
 0.00      0.00      0.00  27832819     0.00     0.00  ExecEvalExpr
 0.00      0.00      0.00  19040887     0.00     0.00  AllocSetAlloc
 0.00      0.00      0.00  18976313     0.00     0.00  MemoryContextAlloc
 0.00      0.00      0.00  18722462     0.00     0.00  LockBuffer
 0.00      0.00      0.00  18684832     0.00     0.00  MemoryContextSwitchTo
 0.00      0.00      0.00  18442039     0.00     0.00  pg_detoast_datum
 0.00      0.00      0.00  16947638     0.00     0.00  AllocSetFree
 0.00      0.00      0.00  16934648     0.00     0.00  pfree
 0.00      0.00      0.00   9716164     0.00     0.00  SpinAcquire
 0.00      0.00      0.00   9716164     0.00     0.00  SpinRelease

Since these files are too big for a posting, I have put the complete profiles at:
ftp://ftp.insilico.com/out.fast.gz
ftp://ftp.insilico.com/out.slow.gz

I don't know why the time column and the number of seconds are zero in all cases.
I am using the Red Hat 7.1 binutils (binutils-2.10.91.0.2-3).

On Tue, 21 Aug 2001 17:38:23 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > I am aware of the performance drawbacks because of indices and
> > triggers. In fact I have a trigger and an index on the most populated
> > table.  It is not possible in my case to remove the primary keys
> > during insert, because the database structure and foreign keys
> > validate my data during import.
> 
> Foreign keys eh?
> 
> > The problem is, that sometimes the performance is good, and sometimes
> > the database is awfully slow.  If it is slow, postgres is eating up
> > all CPU time and it takes at least 150 times longer to insert the
> > data.  I don't know why and what to do against that.
> 
> We found some foreign-key-related performance problems not long ago,
> and it could be that you've hit another one.  However, there's not
> enough info here to figure it out.  I can offer you two alternatives:
> 
> 1. Compile up the backend with profiling enabled (if you're using gcc
> then "make PROFILE=-pg clean all" in src/backend should do the trick).
> Collect profiles for both a "normal" and a "slow" run and send them in.
> 
> 2. Develop a self-contained example that exhibits the problem, and send
> it along for someone else to profile.
> 
>   regards, tom lane





Re: [BUGS] Re: low performance

2001-08-22 Thread Tom Lane

Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> I took option 1 and managed to create a profile of a slow and a fast run:

It's difficult to compare these profiles, because they seem to be taken
over very different numbers of queries --- did you let the "fast" run
process more queries than the "slow" one?

However, I think what is happening is that some queries are being done
as indexscans in the fast case and seqscans in the slow case.  The
ratio of ExecIndexScan calls to ExecSeqScan calls is vastly different
in the two profiles.

It looks like you are doing vacuums fairly frequently, so I speculate
that the statistics gathered by vacuum are changing just enough to alter
the planner's choice between indexscan and seqscan for some
often-executed query type.  Evidently the planner is guessing the costs
to be nearly the same, so a small change in stats might tip the choice
--- but in reality the costs are quite a bit different, thus you observe
fast and slow behavior.

The next step should be to get EXPLAIN results for the queries used
by your application in both fast and slow states.  This will help us
narrow down where the planner's misprediction is occurring.
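
For instance (a sketch only --- the query and names here are
hypothetical, not taken from your application): run EXPLAIN on a
representative often-executed query in both the fast and slow states,
and toggle enable_seqscan to see the plan the planner rejected and its
estimated cost:

-- Hypothetical often-executed query; substitute one your import runs.
EXPLAIN SELECT * FROM parent_table WHERE pk_col = 42;

-- Discourage seqscans so the planner shows the indexscan alternative
-- and its cost estimate for the same query.
SET enable_seqscan TO off;
EXPLAIN SELECT * FROM parent_table WHERE pk_col = 42;
SET enable_seqscan TO on;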

regards, tom lane




[BUGS] timestamps cannot be created without time zones

2001-08-22 Thread pgsql-bugs

Rainer Mager ([EMAIL PROTECTED]) reports a bug with a severity of 2.
The lower the number, the more severe it is.

Short Description
timestamps cannot be created without time zones

Long Description
Creating a table with a TIMESTAMP column seems to ignore the WITHOUT TIME
ZONE option. That is, with or without the option, the column still has a
time zone.

Sample Code
vgkk=> create table a ( t1 timestamp, t2 timestamp with time zone, t3 timestamp without time zone );
CREATE
vgkk=> \d a
            Table "a"
 Attribute |           Type           | Modifier
-----------+--------------------------+----------
 t1        | timestamp with time zone |
 t2        | timestamp with time zone |
 t3        | timestamp with time zone |


No file was uploaded with this report
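
For anyone reproducing this, a hedged catalog-level check (standard 7.x
system catalogs; per the report, all three rows should come back showing
the same underlying type):

-- List the type actually recorded for each column of table "a".
SELECT a.attname, t.typname
FROM pg_attribute a, pg_class c, pg_type t
WHERE c.relname = 'a'
  AND a.attrelid = c.oid
  AND a.atttypid = t.oid
  AND a.attnum > 0;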

