On 13/07/2010 6:26 PM, Andras Fabian wrote:
> Wait, now, here I see some correlation! Yes, it seems to be the memory! When I start my COPY-to-STDOUT
> experiment I had some 2000 MByte free (well, the server has 24 GByte ... maybe other PostgreSQL processes
> used up the rest). Then I could monitor via "ll -h" how the file grew nicely (obviously no
> congestion), and in parallel see via "free -m" how the "free" memory went down. Then it
> reached a level below 192 MByte, and congestion began. Now it is going back and forth around 118-122-130 ...
> Obviously the STDOUT thing ran out of some memory resource.
> Now I "only" need to find out what is running out, and why, and how I can prevent that.

> Could there be some extremely big STDOUT buffering in play ????

Remember, "STDOUT" is misleading. The data is sent down the network socket between the postgres backend and the client connected to that backend. There is no actual stdio involved at all.

Imagine that the backend's stdout is redirected down the network socket to the client, so when it sends to "stdout" it's just going to the client. Any buffering you are interested in is in the unix or tcp/ip socket (depending on how you're connecting), in the client, and in the client's output to file/disk/whatever.
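A minimal sketch of that backpressure, not PostgreSQL-specific: a pair of connected local sockets shows how little the kernel actually buffers. Once the reading side stops draining the socket, the sender can only queue a modest amount before it would block, which is exactly what the backend experiences when a COPY client stalls.

```python
import socket

# Connected socket pair: "sender" plays the backend, "receiver" the client.
sender, receiver = socket.socketpair()
sender.setblocking(False)  # make send() raise instead of blocking forever

sent = 0
try:
    while True:
        # The receiver never reads, so this fills the kernel socket buffers.
        sent += sender.send(b"x" * 4096)
except BlockingIOError:
    pass  # buffers full: a blocking sender would now stall here

# Typically only a few hundred kilobytes fit before the sender stalls.
print(f"buffered before stalling: {sent} bytes")

sender.close()
receiver.close()
```

The exact number depends on the kernel's socket buffer sizes, but it is nowhere near hundreds of megabytes, so the slowdown you see is almost certainly the client (or its disk), not some giant "STDOUT buffer" in the server.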

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
