[PERFORM] Out of Memory Problem.

2006-07-12 Thread nicky

Hello Everyone,

I'm trying to understand what causes my 'out of memory' error. I do not 
have enough experience with such logs to understand what is wrong or how 
to fix it, so I hope someone can point me in the right direction.


The 'rpt.rpt_verrichting' table contains about 8.5 million records and 
the 'rpt.rpt_dbc_traject' table contains 700k records.

It's part of a nightly process, so there is only 1 user active.

The server PostgreSQL 8.1.4 is running on has 4GB RAM and runs FreeBSD 
6.1-STABLE.


postgresql.conf
shared_buffers = 8192
work_mem = 524288
maintenance_work_mem = 524288
effective_cache_size = 104858


Resource limits (current):
 cputime      infinity secs
 filesize     infinity kB
 datasize      1048576 kB  <-- could this be a problem?
 stacksize      131072 kB  <-- could this be a problem?
 coredumpsize infinity kB
 memoryuse    infinity kB
 memorylocked infinity kB
 maxprocesses     5547
 openfiles       11095
 sbsize       infinity bytes
 vmemoryuse   infinity kB


Thanks in advance.

_*The Query that is causing the out of memory error.*_
LOG:  statement: insert into rpt.rpt_verrichting_dbc
    (
       verrichting_id
     , verrichting_secid
     , dbcnr
     , vc_dbcnr
    )
    select
       t1.verrichting_id
     , t1.verrichting_secid
     , t1.dbcnr
     , max(t1.vc_dbcnr) as vc_dbcnr
    from
       rpt.rpt_verrichting t1
     , rpt.rpt_dbc_traject t00
    where
       t1.vc_patientnr = t00.vc_patientnr
    and
       t1.vc_agb_specialisme_nr_toek = t00.agb_specialisme_nr
    and
       t1.verrichtingsdatum between t00.begindat_dbc
           and COALESCE(t00.einddat_dbc, t00.begindat_dbc + interval '365 days')
    group by
       t1.verrichting_id
     , t1.verrichting_secid
     , t1.dbcnr
    ;

_*An EXPLAIN for the query:*_
Subquery Scan "*SELECT*"  (cost=1837154.04..1839811.72 rows=106307 width=74)
  ->  HashAggregate  (cost=1837154.04..1838482.88 rows=106307 width=56)
        ->  Merge Join  (cost=1668759.55..1836090.97 rows=106307 width=56)
              Merge Cond: ((("outer".vc_patientnr)::text = "inner"."?column8?") AND ("outer".agb_specialisme_nr = "inner".vc_agb_specialisme_nr_toek))
              Join Filter: (("inner".verrichtingsdatum >= "outer".begindat_dbc) AND ("inner".verrichtingsdatum <= COALESCE("outer".einddat_dbc, ("outer".begindat_dbc + '365 days'::interval))))
              ->  Index Scan using rpt_dbc_traject_idx1 on rpt_dbc_traject t00  (cost=0.00..84556.01 rows=578274 width=37)
              ->  Sort  (cost=1668759.55..1689806.46 rows=8418765 width=79)
                    Sort Key: (t1.vc_patientnr)::text, t1.vc_agb_specialisme_nr_toek
                    ->  Seq Scan on rpt_verrichting t1  (cost=0.00..302720.65 rows=8418765 width=79)
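One workaround worth trying here - a sketch, not a confirmed fix - is to shrink work_mem for this statement and/or stop the planner from choosing the HashAggregate, since a 512 MB work_mem hash aggregate plus sort leaves little headroom under a 1 GB datasize rlimit. The values below are illustrative session-local settings (8.1 syntax), not tuned numbers:

    -- Session-local sketch: values are illustrative, not tuned.
    SET work_mem = 65536;          -- 64 MB (in kB) instead of the global 512 MB
    SET enable_hashagg = off;      -- fall back to a sort-based GroupAggregate
    -- ... run the INSERT ... SELECT from above ...
    RESET enable_hashagg;
    RESET work_mem;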


_*Out of memory log.*_
TopMemoryContext: 16384 total in 2 blocks; 3824 free (4 chunks); 12560 used
Type information cache: 8192 total in 1 blocks; 1864 free (0 chunks); 6328 used
Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
TopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336 used
MessageContext: 122880 total in 4 blocks; 64568 free (4 chunks); 58312 used
smgr relation table: 8192 total in 1 blocks; 2872 free (0 chunks); 5320 used
Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 896 free (0 chunks); 128 used
ExecutorState: 8192 total in 1 blocks; 5304 free (1 chunks); 2888 used
ExecutorState: 562316108 total in 94 blocks; 528452720 free (2593154 chunks); 33863388 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
AggContext: 399499264 total in 58 blocks; 5928 free (110 chunks); 399493336 used
TupleHashTable: 109109272 total in 23 blocks; 2468576 free (70 chunks); 106640696 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used
CacheMemoryContext: 516096 total in 6 blocks; 83448 free (0 chunks); 432648 used
rpt_dbc_traject_idx1: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
rpt_dbc_traject_pk: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
rpt_verrichting_idx2: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
rpt_verrichting_idx1: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 328 free

[PERFORM] Performance Problem between Ora 10g and Psql

2006-07-12 Thread Thomas Radnetter



Hello!

We're facing a performance problem here in an Oracle 10g -> PostgreSQL environment. The Oracle DB accesses the PostgreSQL DB via UnixODBC + psqlODBC.

There are some 100 records on the Oracle DB to be updated with data obtained from a view of the PostgreSQL DB. The fetched data amounts to a few kB, yet the update takes some 5 - 7 hours. During this time the PostgreSQL machine is running at approx. 100% CPU usage. If a select for the same data is issued from the Oracle DB, the statement executes in half a second.

Is this the correct place to raise this problem?

How can I trace down the cause of this performance problem?

Thanks in advance!

Regards,
Thomas Radnetter

p.s. Mr. Ludek Finstrle, can you help again?

-- 
"Feel free" – 10 GB Mailbox, 100 FreeSMS/Monat ...
Jetzt GMX TopMail testen: http://www.gmx.net/de/go/topmail




Re: [PERFORM] Performance Problem between Ora 10g and Psql

2006-07-12 Thread Guillaume Smet

Thomas,

On 7/12/06, Thomas Radnetter <[EMAIL PROTECTED]> wrote:

Is this the correct place to issue this problem?


It is if your issue is due to a PostgreSQL performance problem.


How can I trace down the cause for this performance problem?


The first thing to do is to determine if it is a problem due to the
Oracle -> ODBC -> PostgreSQL thing or if it is a problem with the
query. My advice is to set log_min_duration_statement to 0 in your
postgresql.conf (and set the logger so that you can see the log output
somewhere). Then you'll see if your query is slow.
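A minimal postgresql.conf fragment along those lines might look like this (parameter names as in 8.1; the values are illustrative, not from the thread):

    log_min_duration_statement = 0   # log every statement with its duration
    redirect_stderr = on             # collect server stderr into log files
    log_directory = 'pg_log'         # relative to the data directory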

If your query is slow, post the output of an explain analyze on the
list with all the relevant information (structure of the concerned
tables, indexes, size...).

If not, it's probably more an ODBC problem.

Regards,

--
Guillaume Smet
Open Wide

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] Performance Problem between Ora 10g and Psql

2006-07-12 Thread Joshua D. Drake
On Wednesday 12 July 2006 01:33, Thomas Radnetter wrote:
> Hello!
>
> We're facing a performance problem here in a Oracle 10g -> PostgreSQL
> environment. The Oracle DB accesses the PostgreSQL DB via UnixODBC +
> psqlODBC.
>
> There are some 100 records on the Oracle DB to be updated with data
> obtained from a view of the PostgreSQL DB. The fetched data accumulates to
> a few kB's and the update takes some 5 - 7 hours. During this time the
> PostgreSQL machine is running at approx. 100% CPU usage.
> If a select for the same data is issued from the Oracle DB the statement is
> executed in half a second.
>
> Is this the correct place to issue this problem?

Sure, but you haven't provided a ton of information that would be needed to 
help.

If you execute the same query that is being executed via ODBC, via psql is the 
performance problem still there?

If so, you probably have a postgresql issue, otherwise look at Oracle or ODBC.

If it is a PostgreSQL issue:
 Do you have indexes applied?
 What is the explain plan?
 When was the last time you analyzed?
 What about vacuum?

etc. etc.
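As a rough sketch of those checks in psql (the table name and query are placeholders, not from the thread):

    -- Placeholders throughout: substitute the real table and the slow query.
    \d my_table                  -- psql: list the table's columns and indexes
    EXPLAIN ANALYZE SELECT 1;    -- run against the real query: actual plan and timings
    VACUUM ANALYZE my_table;     -- reclaim dead rows and refresh planner statistics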

Sincerely,

Joshua D. Drake


>
> How can I trace down the cause for this performance problem?
>
> Thanx in advance!
>
> Regards,
> Thomas Radnetter
>
> p.s. Mr. Ludek Finstrle, can you help again?
>
> --
>
>  "Feel free" – 10 GB Mailbox, 100 FreeSMS/Monat ...
>  Jetzt GMX TopMail testen: http://www.gmx.net/de/go/topmail

-- 
   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [PERFORM] High CPU Usage - PostgreSQL 7.3

2006-07-12 Thread Jeff Frost
Please Cc: the list when replying to things like this so everyone can see (and 
likely help).


I'm not sure what your response is actually regarding.  Could you give some 
more detail?


On Wed, 12 Jul 2006, Rizal wrote:


So, must I upgrade the PostgreSQL 803 which I have to a new version?

- Original Message -
From: "Jeff Frost" <[EMAIL PROTECTED]>
To: "Neil Hepworth" <[EMAIL PROTECTED]>
Cc: 
Sent: Wednesday, July 12, 2006 10:27 AM
Subject: Re: [PERFORM] High CPU Usage - PostgreSQL 7.3



On Wed, 12 Jul 2006, Neil Hepworth wrote:


I am using version PostgreSQL 7.3.10 (RPM:
postgresql73-rhel21-7.3.10-2).  Unfortunately vacuumdb -a -v does not
give the FSM info at the end (need a newer version of postgres for
that).  Running the same queries on 8.1 reduces the time taken to
about 16 minutes, though I didn't run the test on the same hardware or
OS as I want to keep my test server as close to production as
possible, so I ran the 8.1 server on my Windows laptop (2GHz Centrino
Duo with 2GB of RAM, yes the laptop is brand new :).


Well, looks like you're at least fairly up to date, but there is a fix in
7.3.11 that you might want to get by upgrading to 7.3.15:

  * Fix race condition in transaction log management
There was a narrow window in which an I/O operation could be
initiated for the wrong page, leading to an Assert failure or data
corruption.

It also appears that you can run autovacuum with 7.3 (I thought maybe it
only went back as far as 7.4).

So, is the 16 minutes on your laptop with 8.1 for Windows vs 1hr on the
server for the whole set of loops?  If so, 4x isn't a bad improvement. :-)
So, assuming you dumped/loaded the same DB onto your laptop's postgresql
server, what does the vacuumdb -a -v say on the laptop?  Perhaps we can
use it to see if your fsm settings are ok.

BTW, did you see Scott's posting here:

http://archives.postgresql.org/pgsql-performance/2006-07/msg00091.php

Since we didn't hear from you for a while, I thought perhaps Scott had hit
on the fix.  Have you tried that yet?  It certainly would help the planner
out.


You might also want to turn on autovacuum and see if that helps.

What's your disk subsystem like?  In fact, what's the entire DB server
hardware like?



I run through a loop, executing the following or similar queries 8
times (well, actually 12, but the last 4 don't do anything) - Jeff, I've
attached the complete outputs as files.

A debug output is further below (the numbers after each method call name,
above each SQL statement, are the times to run that statement in
milliseconds; the times on the "" lines are cumulative).  So the total for
one loop is 515 seconds; multiply by 8 and that gets me to over an
hour.  It is actually the deletes that take the most time: 179 seconds
and 185 seconds each loop.



CREATE TABLE fttemp670743219 AS SELECT * FROM ftone LIMIT 0
INSERT INTO fttemp670743219 ( epId, start, direction, classid,
consolidation, cnt )  SELECT epId, TO_TIMESTAMP(start, 'YYYY-MM-DD
HH24:00:00.0')::timestamp AS start, direction, classid, 60 AS
consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND
start < TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000', 'YYYY-MM-DD
HH24:00:00.0')::timestamp GROUP BY epId, direction,
TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp, classid
DELETE FROM ONLY ftone WHERE ftone.epId = fttemp670743219.epId AND
ftone.direction = fttemp670743219.direction AND ftone.start =
fttemp670743219.start AND ftone.consolidation =
fttemp670743219.consolidation AND ftone.classid =
fttemp670743219.classid
INSERT INTO ftone ( epId, start, consolidation, direction, classid,
cnt ) SELECT epId, start, consolidation, direction, classid, cnt FROM
fttemp670743219
DROP TABLE fttemp670743219
DELETE FROM ftone WHERE consolidation = 0 AND start <
TO_TIMESTAMP((TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000',
'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),
'YYYY-MM-DD 00:00:00.0')::timestamp



ftone: 0:
createConsolidatedInTemporary: 188:
CREATE TABLE fttemp678233382 AS SELECT * FROM ftone LIMIT 0
createConsolidatedInTemporary: 76783:
INSERT INTO fttemp678233382 ( epPairdefnid, start, direction, classid,
consolidation, cnt )  SELECT epPairdefnid, TO_TIMESTAMP(start,
'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, direction, classid, 60
AS consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0
AND start < TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',
'YYYY-MM-DD HH24:00:00.0')::timestamp GROUP BY epPairdefnid,
direction, TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp,
classid
replaceConsolidatedInMainTable: 179178:
DELETE FROM ONLY ftone WHERE ftone.epPairdefnid =
fttemp678233382.epPairdefnid AND ftone.direction =
fttemp678233382.direction AND ftone.start = fttemp678233382.start AND
ftone.consolidation = fttemp678233382.consolid

Re: [PERFORM] Kill a session

2006-07-12 Thread Craig A. James

Magnus Hagander wrote:
This raises the question: Why doesn't Postgres have a "kill 
session" command that works?  Oracle has it, and it's 
invaluable; there is no substitute.  Various writers to these 
PG lists have raised the question repeatedly.  Is it just a 
matter that nobody has had the time to do it (which I 
respect!), or is there a reason why the Postgres team decided 
a "kill session" is a bad idea?


I believe the function to kill a backend is actually in the codebase;
it's just commented out because it's considered dangerous. There are
some possible issues (see -hackers archives) about sending SIGTERM
without actually shutting down the whole cluster.

Doing the client-side function to call is the easy part.

In many cases you just need to cancel a query, in which case you can use
pg_cancel_backend() for example. If you need to actually kill it, your
only supported way is to restart postgresql. 


In other words, are you confirming that there is no way to kill a query from 
another process, other than shutting down the database?  My understanding of 
the documentation tells me I can't use cancel, because the process doing the 
killing isn't the original process.

But in spite of earlier postings in these forums saying that killing the 
backend was the way to go, this doesn't really work.  First, even though 
the "postgres" backend job is properly killed, a "postmaster" job keeps 
running at 99% CPU, which is pretty useless.  Killing the client's backend 
didn't kill the process actually doing the work!


Then you killed the wrong backend...
No queries run in postmaster. They all run in postgres backends. The
postmaster does very little actual work, other than keeping track of
everybody else.


It turns out I was confused by this: ps(1) reports a process called "postgres", while 
top(1) reports a process called "postmaster", and they both have the same pid.  I guess 
postmaster replaces its own name in the process table when it's executing a query, and it's not 
really the postmaster even though top(1) calls it postmaster.

So "kill -15 " is NOT killing the process -- to kill the process, I have 
to use signal 9.  But if I do that, ALL queries in progress are aborted.  I might as well shut 
down and restart the database, which is an unacceptable solution for a web site.

I'm back to my original question: How do you kill a runaway query without 
bringing down the whole database?  Is there really no answer to this?

Thanks,
Craig

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[PERFORM] Commit slower on faster PC

2006-07-12 Thread Koth, Christian (DWBI)

Hi,

please help me with the following problem:

I have noticed a strange performance behaviour using a commit statement on two 
different machines. On one of the machines the commit is many times faster than 
on the other machine which has faster hardware. Server and client are running 
always on the same machine.

Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as 
well)

PC1:

Pentium 4 (2.8 GHz)
1GB RAM
IDE-HDD (approx. 50 MB/s rw), fs: ext3
Mandrake Linux: Kernel 2.4.22


PC2:

Pentium 4 (3.0 GHz)
2GB RAM
SCSI-HDD (approx. 65 MB/s rw), fs: ext3
Mandrake Linux: Kernel 2.4.32


Both installations of the database have the same configuration, different from 
default are only the following settings on both machines:

shared_buffers = 2
listen_addresses = '*'
max_stack_depth = 4096


pgbench gives me the following results:
PC1:

transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 1
number of transactions per client: 10
number of transactions actually processed: 10/10
tps = 269.905533 (including connections establishing)
tps = 293.625393 (excluding connections establishing)

PC2:

transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 1
number of transactions per client: 10
number of transactions actually processed: 10/10
tps = 46.061935 (including connections establishing)
tps = 46.519634 (excluding connections establishing)


My own performance test sql script which inserts and (auto)commits some data 
into a simple table produces the following log output in the server log:

PC1:

LOG:  duration: 1.441 ms  statement: INSERT INTO performance_test VALUES 
(500938362, 'Xawhefjmd');
STATEMENT:  INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');

PC2:

LOG:  duration: 29.979 ms  statement: INSERT INTO performance_test VALUES 
(500938362, 'Xawhefjmd');
STATEMENT:  INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');


I captured an 'strace' on both machines, which is interesting:

Opening the socket:
---
PC1: socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 10 <0.21>
PC2: socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 8 <0.15>

PC1: bind(10, {sa_family=AF_INET, sin_port=htons(0), 
sin_addr=inet_addr("0.0.0.0")}, 16) = 0 <0.07>
PC2: bind (8, {sin_family=AF_INET, sin_port=htons(0), 
sin_addr=inet_addr("0.0.0.0")}}, 16) = 0 <0.07>

PC1: getsockname(10, {sa_family=AF_INET, sin_port=htons(32820), 
sin_addr=inet_addr("0.0.0.0")}, [16]) = 0 <0.05>
PC2: getsockname( 8, {sin_family=AF_INET, sin_port=htons(36219), 
sin_addr=inet_addr("0.0.0.0")}}, [16]) = 0 <0.05>

PC1: connect(10, {sa_family=AF_INET, sin_port=htons(5432), 
sin_addr=inet_addr("127.0.0.1")}, 16) = 0 <0.000440>
PC2: connect( 8, {sin_family=AF_INET, sin_port=htons(5432), 
sin_addr=inet_addr("127.0.0.1")}}, 16) = 0 <0.000394>

PC1: setsockopt(10, SOL_TCP, TCP_NODELAY, [1], 4) = 0 <0.06>
PC2: setsockopt (8, SOL_TCP, TCP_NODELAY, [1], 4) = 0 <0.04>


Inserting and committing the data: 
-
PC1:

send(10, "B\\0<\0INSERT INTO performance_test VAL"..., 175, 0) = 175 
<0.15>
recv(10, "2\0\17INSERT 0 1\0Z\0\0\0\5T", 8192, 0) = 53 <0.07>
send(10, "B\0\0\0\17\0S_2\0\0\0\0\0\0\0E\0\0\0\t\0\0\0\0\1S\0\0\0\4", 31, 0) = 
31 <0.11>
recv(10, "2\0\0\0\4C\0\0\0\vCOMMIT\0Z\0\0\0\5I", 8192, 0) = 23 <0.000211>

PC2:

send(8, "B\\0<\0INSERT INTO performance_test VAL"..., 175, 0) = 175 
<0.14>
recv(8, "2\0\17INSERT 0 1\0Z\0\0\0\5T", 8192, 0) = 53 <0.05>
send(8, "B\0\0\0\17\0S_2\0\0\0\0\0\0\0E\0\0\0\t\0\0\0\0\1S\0\0\0\4", 31, 0) = 
31 <0.09>
recv(8, "2\0\0\0\4C\0\0\0\vCOMMIT\0Z\0\0\0\5I", 8192, 0) = 23 <0.0253>

Every command is a bit faster on PC2 except the last one, which is many times 
slower.
Any help or hint on where to look would be highly appreciated, because I'm 
running out of ideas ;-).


regards,
Christian


**
The information contained in, or attached to, this e-mail, may contain 
confidential information and is intended solely for the use of the individual 
or entity to whom they are addressed and may be subject to legal privilege.  If 
you have received this e-mail in error you should notify the sender immediately 
by reply e-mail, delete the message from your system and notify your system 
manager.  Please do not copy it for any purpose, or disclose its contents to 
any other person.  The views or opinions presented in this e-mail are solely 
those of the author and do not necessarily represent those of the company.  The 
recipient should check this e-mail and any attachments for the presence of 
viruses.  The company accepts no liability for any damage caused, directly or 
indirectly, by any virus transmitted in this email.
**

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Kill a session

2006-07-12 Thread Steinar H. Gunderson
On Wed, Jul 12, 2006 at 08:43:18AM -0700, Craig A. James wrote:
>> Then you killed the wrong backend...
>> No queries run in postmaster. They all run in postgres backends. The
>> postmaster does very little actual work, other than keeping track of
>> everybody else.
> 
> It turns out I was confused by this: ps(1) reports a process called 
> "postgres", but top(1) reports a process called "postmaster", but they both 
> have the same pid.  I guess postmaster replaces its own name in the process 
> table when it's executing a query, and it's not really the postmaster even 
> though top(1) calls it postmaster.
> 
> So "kill -15 " is NOT killing the process -- to kill the process, I 
> have to use signal 9.  But if I do that, ALL queries in progress are 
> aborted.  I might as well shut down and restart the database, which is an 
> unacceptable solution for a web site.

I don't follow your logic here. If you do "kill -15 " of the postmaster
doing the work, the query should be aborted without taking down the entire
cluster. I don't see why you'd need -9 (which is a really bad idea anyhow)...

> I'm back to my original question: How do you kill a runaway query without 
> bringing down the whole database?  Is there really no answer to this?

Kill it with -15. If you're worried about your CGI scripts, use sudo or some
sort of client/server solution.

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [PERFORM] Commit slower on faster PC

2006-07-12 Thread Mark Lewis
The IDE drive is almost certainly lying about flushing data to the disk.
Lower-end consumer drives often do.

What this means is that commits will be a whole lot faster, but the
database loses its ACID guarantees, because a power failure at the wrong
moment could corrupt the whole database.

If you don't care about your data and want the SCSI drive to perform
fast just like the IDE drive, you can set fsync = off in your
configuration file.
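For completeness, that is a single line in postgresql.conf - and, as he says, it trades away the ACID guarantees (a sketch of the setting, not a recommendation):

    fsync = off    # commits stop waiting for WAL to reach disk; a power
                   # failure at the wrong moment can corrupt the cluster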

-- Mark

On Wed, 2006-07-12 at 10:16 -0600, Koth, Christian (DWBI) wrote:
> Hi,
> 
> please help me with the following problem:
> 
> I have noticed a strange performance behaviour using a commit statement on 
> two different machines. On one of the machines the commit is many times 
> faster than on the other machine which has faster hardware. Server and client 
> are running always on the same machine.
> 
> Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as 
> well)
> [remainder of the original message quoted above trimmed]

Re: [PERFORM] Commit slower on faster PC

2006-07-12 Thread D'Arcy J.M. Cain
On Wed, 12 Jul 2006 10:16:40 -0600
"Koth, Christian (DWBI)" <[EMAIL PROTECTED]> wrote:
> I have noticed a strange performance behaviour using a commit statement on 
> two different machines. On one of the machines the commit is many times 
> faster than on the other machine which has faster hardware. Server and client 
> are running always on the same machine.
> 
> Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as 
> well)
> [machine specs and configuration quoted above trimmed]
> 
> pgbench gives me the following results:
> PC1:
> 
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 1
> number of transactions per client: 10
> number of transactions actually processed: 10/10
> tps = 269.905533 (including connections establishing)
> tps = 293.625393 (excluding connections establishing)
> 
> PC2:
> 
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 1
> number of transactions per client: 10
> number of transactions actually processed: 10/10
> tps = 46.061935 (including connections establishing)
> tps = 46.519634 (excluding connections establishing)

I'm not sure 10 transactions is enough of a test.  You could just be
seeing the result of your IDE drive lying to you about actually writing
your data.  There may be other considerations, but I would start by
checking with 10,000 or 100,000 transactions to overcome the drive's
buffering.

-- 
D'Arcy J.M. Cain  |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Kill a session

2006-07-12 Thread Magnus Hagander
> > I believe the function to kill a backend is actually in the codebase;
> > it's just commented out because it's considered dangerous. There are
> > some possible issues (see -hackers archives) about sending SIGTERM
> > without actually shutting down the whole cluster.
> > 
> > Doing the client-side function to call is the easy part.
> > 
> > In many cases you just need to cancel a query, in which case you can
> > use pg_cancel_backend() for example. If you need to actually kill it,
> > your only supported way is to restart postgresql.
> 
> In other words, are you confirming that there is no way to 
> kill a query from another process, other than shutting down 
> the database?  My understanding of the documentation tells me 
> I can't use cancel, because the process doing the killing 
> isn't the original process.

You can't kill another backend, no.
You can *cancel* a query on it and return it to idle state. See
http://www.postgresql.org/docs/8.1/interactive/functions-admin.html,
pg_cancel_backend().


> So "kill -15 " is NOT killing the process -- to kill the 
> process, I have to use signal 9.  But if I do that, ALL 
> queries in progress are aborted.  I might as well shut down 
> and restart the database, which is an unacceptable solution 
> for a web site.
> 
> I'm back to my original question: How do you kill a runaway 
> query without bringing down the whole database?  Is there 
> really no answer to this?

Runaway queries can be killed with pg_cancel_backend(), or from the
commandline using kill -INT . The backend will still be around, but
it will have cancelled the query.
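
A sketch of that procedure from another superuser session (column names follow 8.1's pg_stat_activity; the pid value below is made up):

    -- Find the backend running the runaway query (8.1: the column is procpid;
    -- current_query is only populated when stats_command_string is on).
    SELECT procpid, current_query FROM pg_stat_activity;

    -- Cancel the current query on that backend; the connection itself survives.
    SELECT pg_cancel_backend(12345);   -- 12345 is a hypothetical pid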

//Magnus



[PERFORM] size of pg_dump files containing bytea values

2006-07-12 Thread Steve McWilliams
I notice that non-printable bytes in bytea values are spit out by pg_dump
as escaped octal sequences even when the "-Fc" option is present,
specifying use of the custom binary output format rather than plain text
format.  This typically bloats the size of bytea values in the dump file by
a factor of 3+.  When you have a lot of large bytea values in your db this
can add up very quickly.

Shouldn't the custom format be smart and just write the raw bytes to the
output file rather than trying to make them ascii readable?

Thanks.

Steve McWilliams
Software Engineer
Emprisa Networks
703-691-0433x21
[EMAIL PROTECTED]







Re: [PERFORM] Commit slower on faster PC

2006-07-12 Thread Joshua D. Drake
On Wednesday 12 July 2006 09:16, Koth, Christian (DWBI) wrote:
> Hi,
>
> please help me with the following problem:
>
> I have noticed a strange performance behaviour using a commit statement on
> two different machines. On one of the machines the commit is many times
> faster than on the other machine which has faster hardware. Server and
> client are running always on the same machine.
>
> Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as
> well)

Heh, I bet you are being bitten by the cache on the IDE drive. What happens if
you turn fsync off?
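The diagnostic Joshua suggests is just a config toggle -- for testing only, since running with fsync off risks data loss on a crash:

```
# postgresql.conf -- diagnostic only, never for data you care about
fsync = off
```

Restart the server (or reload, depending on version), rerun pgbench, and if the slow machine's tps jumps to roughly match the fast one, the slow machine's drive was honestly flushing every commit to the platters while the fast machine's IDE drive was acknowledging writes straight from its onboard cache.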

Sincerely,

Joshua D. Drake
-- 
   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





[PERFORM] [Fwd: Delivery Status Notification (Failure)]

2006-07-12 Thread Craig A. James

I can't find an address to complain about the mailing list itself, so apologies 
but I'm posting directly to this list.  Every time I post to this group, I get 
returned mails about OTHER subscribers' invalid accounts, like the one below.  
What's up?  This seems to be a new phenomenon.  Should the [EMAIL PROTECTED] be 
getting these and discarding them?

Thanks,
Craig


 Original Message 
Subject: Delivery Status Notification (Failure)
Date: Wed, 12 Jul 2006 13:15:16 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]

This is an automatically generated Delivery Status Notification.

Delivery to the following recipients failed.

  [EMAIL PROTECTED]







Re: [PERFORM] Kill a session

2006-07-12 Thread Stefan Kaltenbrunner
Craig A. James wrote:
> Magnus Hagander wrote:
>>> This raises the question: Why doesn't Postgres have a "kill session"
>>> command that works?  Oracle has it, and it's invaluable; there is no
>>> substitute.  Various writers to these PG lists have raised the
>>> question repeatedly.  Is it just a matter that nobody has had the
>>> time to do it (which I respect!), or is there a reason why the
>>> Postgres team decided a "kill session" is a bad idea?
>>
>> I beleive the function to kill a backend is actually in the codebase,
>> it's just commented out because it's considered dangerous. There are
>> some possible issues (see -hackers archives) about sending SIGTERM
>> without actually shutting down the whole cluster.
>>
>> Doing the client-side function to call is the easy part.
>>
>> In many cases you just need to cancel a query, in which case you can use
>> pg_cancel_backend() for example. If you need to actually kill it, your
>> only supported way is to restart postgresql.
> 
> In other words, are you confirming that there is no way to kill a query
> from another process, other than shutting down the database?  My
> understanding of the documentation tells me I can't use cancel, because
> the process doing the killing isn't the original process.
> 
>>> But in spite earlier posting in these forums that say the killing the
>>> backend was the way to go, this doesn't really work.  First, even
>>> though the "postgres" backend job is properly killed, a "postmaster"
>>> job keeps running at 99% CPU, which is pretty useless.  Killing the
>>> client's backend didn't kill the process actually doing the work!
>>
>> Then you killed the wrong backend...
>> No queries run in postmaster. They all run in postgres backends. The
>> postmaster does very little actual work, other than keeping track of
>> everybody else.
> 
> It turns out I was confused by this: ps(1) reports a process called
> "postgres", but top(1) reports a process called "postmaster", but they
> both have the same pid.  I guess postmaster replaces its own name in the
> process table when it's executing a query, and it's not really the
> postmaster even though top(1) calls it postmaster.
> 
> So "kill -15 <pid>" is NOT killing the process -- to kill the process, I
> have to use signal 9.  But if I do that, ALL queries in progress are
> aborted.  I might as well shut down and restart the database, which is
> an unacceptable solution for a web site.
> 
> I'm back to my original question: How do you kill a runaway query
> without bringing down the whole database?  Is there really no answer to
> this?

are you maybe looking for pg_cancel_backend() ?

http://www.postgresql.org/docs/current/interactive/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE

Stefan



Re: [PERFORM] [Fwd: Delivery Status Notification (Failure)]

2006-07-12 Thread Michael Glaesemann


On Jul 12, 2006, at 11:39 , Craig A. James wrote:

I can't find an address to complain about the mailing list itself,  
so apologies but I'm posting directly to this list.  Every time I  
post to this group, I get returned mails about OTHER subscribers'  
invalid accounts, like the one below.


Is this when you're replying to a post or creating a new post? If the  
former, and you're using reply-to-all, you'll be sending one message  
to the list and another directly to the poster of the message you're  
responding to. The directly sent message is outside of the list  
entirely, so any returned mail is also outside of the list. I've seen  
this happen occasionally myself. Could this be what you're seeing?  
AFAICT, such messages sent to the list *do* get filtered out.


Michael Glaesemann
grzm seespotcode net






Re: [PERFORM] Commit slower on faster PC

2006-07-12 Thread Bruno Wolff III
On Wed, Jul 12, 2006 at 10:16:40 -0600,
  "Koth, Christian (DWBI)" <[EMAIL PROTECTED]> wrote:
> 
> I have noticed a strange performance behaviour using a commit statement on 
> two different machines. On one of the machines the commit is many times 
> faster than on the other machine which has faster hardware. Server and client 
> are running always on the same machine.
> 
> Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as 
> well)
> 
> PC1:
> 
> IDE-HDD (approx. 50 MB/s rw), fs: ext3
> 
> PC2:
> 
> SCSI-HDD (approx. 65 MB/s rw), fs: ext3
> 
> Both installations of the database have the same configuration, different 
> from default are only the following settings on both machines:
> 
> pgbench gives me the following results:
> PC1:
> 
> tps = 293.625393 (excluding connections establishing)
> 
> PC2:
> 
> tps = 46.519634 (excluding connections establishing)

Have you checked to see if the ide drive is lying about having written the
data to the platters?
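One way to check Bruno's theory from the Linux side is to inspect and disable the drive's write cache; this is a sketch, and the device name /dev/hda is an assumption for an IDE disk:

```shell
# Show whether the IDE drive's onboard write cache is enabled
hdparm -I /dev/hda | grep -i 'write cache'

# Disable the write cache so fsync'd commits actually reach the platters
# (requires root; expect tps to drop to honest, SCSI-like numbers)
hdparm -W0 /dev/hda
```

If pgbench tps falls dramatically after `-W0`, the earlier fast numbers were the cache lying about durability.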



Re: [PERFORM] High CPU Usage - PostgreSQL 7.3

2006-07-12 Thread Jeff Frost

On Wed, 12 Jul 2006, Neil Hepworth wrote:


Yes, it was the same DB, so 8.1 gives roughly a four-fold improvement
(assuming hardware and OS differences aren't that significant - I'd expect
the Linux version to be faster if anything); which certainly ain't bad! :)

Good idea for the vacuumdb -a -v on the laptop; I re-imported the database
and then ran it, output below:

INFO:  free space map contains 949 pages in 537 relations
DETAIL:  A total of 9024 page slots are in use (including overhead).
9024 page slots are required to track all free space.
Current limits are:  2 page slots, 1000 relations, using 186 KB.
VACUUM


Well, this looks like it's probably on track already even though it'll change 
as there are updates/deletes, but I suspect you're more or less ok with the 
FSM settings you have.




I am about to start testing Scott's suggestion now (thanks Scott - wasn't
ignoring you, just didn't have time yesterday), and I'll get back with the
results.

Before I posted the problem to this list I was focusing more on the settings
in postgresql.conf than optimising the query as I thought this might be a
general problem, for all my DB updates/queries, with the way the planner was
optimising queries; maybe assuming CPU cost was too cheap?  Do you think I
was off track in my initial thinking?  Optimising these queries is
certainly beneficial but I don't want postgres to hog the CPU for any
extended period (other apps also run on the server), so I was wondering if
the general config settings could be tuned to always prevent this
(regardless of how poorly written my queries are :)?



I guess you could nice the postmaster on startup, or renice it after startup,
but I'm not aware of any conf settings that would tune postgres to avoid
using the CPU.
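The renice approach Jeff mentions might look like this sketch (the nice value and process name are illustrative; child backends inherit the postmaster's nice level, so renicing at startup also covers future connections):

```shell
# Lower the scheduling priority of all running postgres backends
renice +10 -p $(pgrep -d' ' postgres)

# Alternatively, start the whole cluster at a lower priority so every
# backend inherits it (path and user are assumptions)
nice -n 10 su - postgres -c 'pg_ctl start -D /var/lib/pgsql/data'
```

Note this only changes CPU scheduling fairness; a runaway query will still use all otherwise-idle CPU.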



Neil


On 12/07/06, Jeff Frost <[EMAIL PROTECTED]> wrote:


On Wed, 12 Jul 2006, Neil Hepworth wrote:

> I am using version PostgreSQL 7.3.10 (RPM:
> postgresql73-rhel21-7.3.10-2).  Unfortunately vacuumdb -a -v does not
> give the FSM info at the end (need a newer version of postgres for
> that).  Running the same queries on 8.1 reduces the time taken to
> about 16 minutes, though I didn't run the test on the same hardware or
> OS as I want to keep my test server as close to production as
> possible, so I ran the 8.1 server on my Windows laptop (2GHz Centrino
> Duo with 2GB of RAM, yes the laptop is brand new :).

Well, looks like you're at least fairly up to date, but there is a fix in
7.3.11 that you might want to get by upgrading to 7.3.15:

 * Fix race condition in transaction log management
   There was a narrow window in which an I/O operation could be
   initiated for the wrong page, leading to an Assert failure or data
   corruption.

It also appears that you can run autovacuum with 7.3 (I thought maybe it
only
went back as far as 7.4).

So, is the 16 minutes on your laptop with 8.1 for windows vs 1hr on the
server
for the whole set of loops?  If so, 4x isn't a bad improvement. :-)  So,
assuming you dumped/loaded the same DB onto your laptop's postgresql
server,
what does the vacuumdb -a -v say on the laptop?  Perhaps we can use it to
see
if your fsm settings are ok.

BTW, did you see Scott's posting here:

http://archives.postgresql.org/pgsql-performance/2006-07/msg00091.php

Since we didn't hear from you for a while, I thought perhaps Scott had hit
on
the fix.  Have you tried that yet?  It certainly would help the planner
out.

You might also want to turn on autovacuum and see if that helps.

What's your disk subsystem like?  In fact, what's the entire DB server
hardware like?

>
> I run through a loop, executing the following or similar queries 8
> times (well actually 12 but the last 4 don't do anything) - Jeff I've
> attached complete outputs as files.
>
> A debug output further below (numbers after each method call name,
> above each SQL statement, are times to run that statement in
> milliseconds, the times on the lines "" are cumulative).  So total for
> one loop is 515 seconds, multiple by 8 and that gets me to over an
> hour); it is actually the deletes that take the most time; 179 seconds
> and 185 seconds each loop.
>
> 
>
> CREATE TABLE fttemp670743219 AS SELECT * FROM ftone LIMIT 0
> INSERT INTO fttemp670743219 ( epId, start, direction, classid,
> consolidation, cnt )  SELECT epId, TO_TIMESTAMP(start, 'YYYY-MM-DD
> HH24:00:00.0')::timestamp AS start, direction, classid, 60 AS
> consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND
> start < TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000', 'YYYY-MM-DD
> HH24:00:00.0')::timestamp GROUP BY epId, direction,
> TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp, classid
> DELETE FROM ONLY ftone WHERE ftone.epId = fttemp670743219.epId AND
> ftone.direction = fttemp670743219.direction AND ftone.start =
> fttemp670743219.start AND ftone.consolidation =
> fttemp670743219.consolidation AND ft

[PERFORM] hyper slow after upgrade to 8.1.4

2006-07-12 Thread Medora Schauer


I have just upgraded from 7.3.4 to 8.1.4 and now *all* db access calls are extremely slow.  I
didn’t need to preserve any old data so at this point all my tables are
empty.  Just connecting to a db takes several seconds.


When I was accidentally linking my app with the 7.3.4 libs
but running the 8.1.4 postmaster everything was fine.


I know I’m not giving much to go on but I’m
stumped.  Can anyone suggest how I might track down the cause of this problem?


Medora Schauer


Re: [PERFORM] [Fwd: Delivery Status Notification (Failure)]

2006-07-12 Thread Richard Broersma Jr
> I can't find an address to complain about the mailing list itself, so
> apologies but I'm posting directly to this list.  Every time I post to
> this group, I get returned mails about OTHER subscribers' invalid
> accounts, like the one below.  What's up?  This seems to be a new
> phenomenon.  Should the [EMAIL PROTECTED] be getting these and
> discarding them?
> 
> Thanks,
> Craig

Does the message come from postgresql.org, or is the bounced email coming
from these specific users when you include them in reply-all?

Regards,

Richard Broersma jr.





Re: [PERFORM] hyper slow after upgrade to 8.1.4

2006-07-12 Thread Joshua D. Drake
On Wednesday 12 July 2006 13:41, Medora Schauer wrote:
> I have just upgraded from 7.3.4 to 8.1.4 and now *all* db access calls
> are extremely slow.  I didn't need to preserve any old data so at this
> point all my tables are empty.  Just connecting to a db takes several
> seconds.
>
>
>
> When I was accidentally linking my app with the 7.3.4 libs but running
> the 8.1.4 postmaster everything was fine.
>
>
>
> I know I'm not giving much to go on but I'm stumped.  Can anyone suggest
> how I might track down the cause of this problem?

analyze?

Sincerely,

Joshua D. Drake


>
>
>
> Medora Schauer

-- 
   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [PERFORM] [Fwd: Delivery Status Notification (Failure)]

2006-07-12 Thread Craig A. James

I wrote:
I can't find an address to complain about the mailing list itself, so 
apologies but I'm posting directly to this list.  Every time I post to 
this group, I get returned mails about OTHER subscribers' invalid 
accounts, like the one below.


Michael Glaesemann replied:
Is this when you're replying to a post or creating a new post? If the 
former, and you're using reply-to-all, you'll be sending one message to 
the list and another directly to the poster of the message you're 
responding to. 


And Richard Broersma Jr replied:

Does the message come from postgresql.org, or is the bounced email coming
from these specific users when you include them in reply-all?


Thanks to both for your answers.  But no -- it's for new posts.  In fact, when writing the email 
that started this thread, it was only to pgsql-performance@postgresql.org (I double-checked by 
using emacs on my Thunderbird "Sent" folder), yet I still got another 
"undeliverable" reply along with your message:


   This is an automatically generated Delivery Status Notification.
   Delivery to the following recipients failed.
  [EMAIL PROTECTED]



Craig




Re: [PERFORM] [Fwd: Delivery Status Notification (Failure)]

2006-07-12 Thread Richard Broersma Jr
> >This is an automatically generated Delivery Status Notification.
> >Delivery to the following recipients failed.
> >   [EMAIL PROTECTED]

Yes, I got the same thing that you did here, only I got it when I replied-all
to your email.  Are you sure this individual wasn't listed in any of your CC
or BCC addresses?

This is an automatically generated Delivery Status Notification.
Delivery to the following recipients failed.
   [EMAIL PROTECTED]

Reporting-MTA: dns;enpocket-exch.usaemail.enpocket.com
Received-From-MTA: dns;middx.enpocketbureau.com
Arrival-Date: Wed, 12 Jul 2006 18:10:58 -0400

Final-Recipient: rfc822;[EMAIL PROTECTED]
Action: failed
Status: 5.1.1





Re: [PERFORM] hyper slow after upgrade to 8.1.4

2006-07-12 Thread Bruno Wolff III
On Wed, Jul 12, 2006 at 15:41:14 -0500,
  Medora Schauer <[EMAIL PROTECTED]> wrote:
> I have just upgraded from 7.3.4 to 8.1.4 and now *all* db access calls
> are extremely slow.  I didn't need to preserve any old data so at this
> point all my tables are empty.  Just connecting to a db takes several
> seconds.
> 
> I know I'm not giving much to go on but I'm stumped.  Can anyone suggest
> how I might track down the cause of this problem?

That connections are slow makes me think DNS is worth looking at. It might
be that reverse lookups are timing out.
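A quick way to test Bruno's DNS theory from the client side; this is a sketch, and the IP and database name are placeholders:

```shell
# Time a reverse lookup of the client/server address; a multi-second stall
# here matches the slow-connect symptom (IP is a placeholder)
time host 192.0.2.10

# Compare connecting over TCP vs. the local Unix socket; only the TCP path
# can trigger ident/DNS work on the server side
time psql -h 127.0.0.1 -d mydb -c 'SELECT 1'
time psql -d mydb -c 'SELECT 1'
```

If the Unix-socket connect is instant and the TCP connect stalls, look at name resolution (and settings such as log_hostname, which forces a reverse lookup per connection).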



Re: [PERFORM] size of pg_dump files containing bytea values

2006-07-12 Thread Tom Lane
"Steve McWilliams" <[EMAIL PROTECTED]> writes:
> I notice that non-printables in bytea values are being spit out by pg_dump
> using escaped octet sequences even when the "-Fc" option is present
> specifying use of the custom binary output format rather than plain text
> format.  This bloats the size of bytea values in the dump file by a factor
> of 3+ typically.

No, because the subsequent compression step should buy back most of
that.
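Tom's point can be checked directly: the custom format compresses its data by default, so the octal escaping mostly disappears on disk. A sketch, with the database name as a placeholder:

```shell
# Plain-text dump: bytea comes out as escaped octal, roughly 3x raw size
pg_dump -Fp mydb > mydb.sql

# Custom format: compressed by default, which squeezes the repetitive
# \\nnn escape sequences back down
pg_dump -Fc mydb > mydb.dump

# Compare the resulting file sizes
ls -lh mydb.sql mydb.dump
```

For mostly-incompressible binary payloads the -Fc file should land much closer to the raw data size than the plain-text dump does.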

regards, tom lane



Re: [PERFORM] [Fwd: Delivery Status Notification (Failure)]

2006-07-12 Thread Tom Lane
"Craig A. James" <[EMAIL PROTECTED]> writes:
> Thanks to both for your answers.  But no -- It's for new posts.  In fact, 
> when writing the email that started this thread, it was only to 
> pgsql-performance@postgresql.org (I double-checked by using emacs on my 
> Thunderbird "Sent" folder), yet I still got another "undeliverable" reply 
> along with your message:

>> This is an automatically generated Delivery Status Notification.
>> Delivery to the following recipients failed.
>> [EMAIL PROTECTED]

This means that usaemail.enpocket.com has seriously misconfigured mail
software --- it's bouncing undeliverable messages to the From: address
rather than to the envelope sender (which will be pgsql-performance-owner
for a message coming through the pgsql-performance list).  This is
generally considered sufficiently unfriendly behavior that proof of it
is grounds for instant ejection from a mailing list, because the From:
address is someone who has no control over where the mailing list tries
to deliver to.  Certainly it's grounds for ejection from any PG list.
Send the bounce message with full headers to the list admin (Marc
Fournier, scrappy at postgresql.org) and [EMAIL PROTECTED]
will soon be an ex-subscriber.

regards, tom lane
