On Wed, 22 Aug 2001 19:19:42 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > I took option 1 and managed to create a profile of a slow and a fast run:
> 
> It's difficult to compare these profiles, because they seem to be taken
> over very different numbers of queries --- did you let the "fast" run
> process more queries than the "slow" one?

Both runs were running for about 5 minutes. The fast one processed about 10 times 
more data than the slow one, and over longer runs this ratio gets even worse. The 
databases contain about 130000 rows before the start of a run.

To make them more comparable I made two additional runs, a slow and a fast one with 
exactly the same number of inserts (about 20500), and put the profiles on our ftp server:

ftp://ftp.insilico.com/def.fast.gz
ftp://ftp.insilico.com/def.slow.gz

The fast run took about 5 minutes, the slow one took 59 minutes.

*SLOW* Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  Ts/call  Ts/call  name    
100.00      0.01     0.01                             reaper
  0.00      0.01     0.00 555836881     0.00     0.00  ExecEvalExpr
  0.00      0.01     0.00 372917548     0.00     0.00  LockBuffer
  0.00      0.01     0.00 369206213     0.00     0.00  MemoryContextSwitchTo
  0.00      0.01     0.00 366016306     0.00     0.00  pg_detoast_datum
  0.00      0.01     0.00 342868028     0.00     0.00  AllocSetAlloc
  0.00      0.01     0.00 342579484     0.00     0.00  MemoryContextAlloc
  0.00      0.01     0.00 333623357     0.00     0.00  AllocSetFree
  0.00      0.01     0.00 333565521     0.00     0.00  pfree
  0.00      0.01     0.00 189738579     0.00     0.00  SpinAcquire
  0.00      0.01     0.00 189738579     0.00     0.00  SpinRelease


*FAST* Flat profile:
Each sample counts as 0.01 seconds.
 no time accumulated

  %   cumulative   self              self     total           
 time   seconds   seconds    calls  Ts/call  Ts/call  name    
  0.00      0.00     0.00 13439626     0.00     0.00  AllocSetAlloc
  0.00      0.00     0.00 13151082     0.00     0.00  MemoryContextAlloc
  0.00      0.00     0.00  8194227     0.00     0.00  ExecEvalExpr
  0.00      0.00     0.00  6962789     0.00     0.00  newNode
  0.00      0.00     0.00  4072271     0.00     0.00  MemoryContextSwitchTo
  0.00      0.00     0.00  3931730     0.00     0.00  AllocSetFree
  0.00      0.00     0.00  3873894     0.00     0.00  pfree
  0.00      0.00     0.00  3389344     0.00     0.00  LockBuffer
  0.00      0.00     0.00  3253378     0.00     0.00  pq_getbyte
  0.00      0.00     0.00  3191526     0.00     0.00  appendStringInfoChar


What does the first row of the slow profile (reaper) mean?

> However, I think what is happening is that some queries are being done
> as indexscans in the fast case and seqscans in the slow case.  The
> ratio of ExecIndexScan calls to ExecSeqScan calls is vastly different
> in the two profiles.

Do the new profiles prove that assumption?
 
> It looks like you are doing vacuums fairly frequently, so I speculate
> that the statistics gathered by vacuum are changing just enough to alter
> the planner's choice between indexscan and seqscan for some
> often-executed query type.  Evidently the planner is guessing the costs
> to be nearly the same, so a small change in stats might tip the choice
> --- but in reality the costs are quite a bit different, thus you observe
> fast and slow behavior.

No "vacuum" was executed in any of the profiled runs.

Another question:
While a run is executing, does a vacuum issued from another session influence the 
planner of the running session? (I didn't do that for the profiled samples.)

I am hardly executing any queries during the runs (only inserts); the only SELECTs are 
on two tables with 600 and 200 rows, respectively.

If I run "vacuum" and "vacuum analyze" on a database with some data inside, then 
delete the data and start the run, the run is FAST.
If I run "vacuum" and "vacuum analyze" on an empty database, the following run is a 
SLOW one.
It seems the planner chooses a "Seq Scan" if vacuum analyze was executed on an empty 
database and an "Index Scan" if it was executed on a full database (as expected).
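
To illustrate the effect, here is a minimal sketch (table name, columns, and the 
generate_series fill are hypothetical, not taken from my application):

```sql
-- ANALYZE while the table is empty: the planner records roughly zero
-- rows and tends to prefer a sequential scan for the pk lookup.
CREATE TABLE item (id integer PRIMARY KEY, payload text);
VACUUM ANALYZE item;
EXPLAIN SELECT * FROM item WHERE id = 42;
-- likely plan: Seq Scan on item

-- Load representative data first, then ANALYZE: the statistics now
-- describe a large table, and the index becomes the cheaper choice.
INSERT INTO item SELECT g, 'x' FROM generate_series(1, 130000) AS g;
VACUUM ANALYZE item;
EXPLAIN SELECT * FROM item WHERE id = 42;
-- likely plan: Index Scan using item_pkey on item
```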

> The next step should be to get EXPLAIN results for the queries used
> by your application in both fast and slow states.  This will help us
> narrow down where the planner's misprediction is occurring.

I built some "explain select ..." statements on the most populated table into my parser 
script (a select of that kind is, I assume, what the pk trigger performs) and got a 
confirmation of the assumption above (Seq Scan vs. Index Scan).

If, during a slow run, I start "psql <dbname> <username>" and execute "vacuum" and 
"vacuum analyze", the output of "explain select ..." changes from "Seq Scan" to 
"Index Scan", but the run is still slow. It seems a "Seq Scan" is still being used (at 
least by the pk trigger), although "explain select ..." reports otherwise.
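
For reference, the check from the second session looked roughly like this (hypothetical 
table name "item"; note that EXPLAIN in a fresh session shows the plan the planner 
would pick now, not necessarily the plan an already-running backend keeps using):

```sql
-- In a separate psql session while the slow run is in progress:
VACUUM;
VACUUM ANALYZE;
EXPLAIN SELECT * FROM item WHERE id = 42;
-- Now reports an Index Scan, yet the slow run does not speed up,
-- so a Seq Scan seems to remain in use, at least by the pk trigger.
```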

Greetings 
Andreas

>                       regards, tom lane

