Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-28 Thread Tom Lane
Stephen Frost writes: > * Tom Lane (t...@sss.pgh.pa.us) wrote: >> I thought it might be about that simple once you went at it the right >> way ;-). However, I'd suggest checking ferror(pset.queryFout) as well >> as the fflush result. > Sure, I can add the ferror() check. Patch attached. This s
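
A minimal sketch of the check being discussed, assuming the names mentioned above (pset.queryFout, the FETCH_COUNT fetch loop); the surrounding structure is simplified for illustration and is not the actual psql source:

    /* Inside the per-chunk FETCH_COUNT loop: after writing a chunk of
     * rows, flush the output stream and stop fetching if either the
     * flush fails or the stream's error flag is set (pager exited,
     * disk full, ...).  No query cancel is sent; the loop just ends. */
    for (;;)
    {
        /* ... fetch and print the next FETCH_COUNT rows ... */

        if (fflush(pset.queryFout) != 0 || ferror(pset.queryFout))
        {
            OK = false;         /* report the write failure to the caller */
            break;
        }

        /* ... exit normally once the last chunk has been fetched ... */
    }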

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-17 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote: > Stephen Frost writes: > > Attached is a patch that just checks the result from the existing > > fflush() inside the FETCH_COUNT loop and drops out of that loop if we > > get an error from it. > > I thought it might be about that simple once you went at it

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-17 Thread Tom Lane
Stephen Frost writes: > * Tom Lane (t...@sss.pgh.pa.us) wrote: >> If you're combining this with the FETCH_COUNT logic then it seems like >> it'd be sufficient to check ferror(fout) once per fetch chunk, and just >> fall out of that loop then. I don't want psql issuing query cancels >> on its own

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-17 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote: > If you're combining this with the FETCH_COUNT logic then it seems like > it'd be sufficient to check ferror(fout) once per fetch chunk, and just > fall out of that loop then. I don't want psql issuing query cancels > on its own authority, either. Attached

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-16 Thread Tom Lane
Stephen Frost writes: > * Tom Lane (t...@sss.pgh.pa.us) wrote: >> A saner >> approach, which would also help for other corner cases such as >> out-of-disk-space, would be to check for write failures on the output >> file and abandon the query if any occur. > I had considered this, but I'm not sur

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-16 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote: > A saner > approach, which would also help for other corner cases such as > out-of-disk-space, would be to check for write failures on the output > file and abandon the query if any occur. I had considered this, but I'm not sure we really need to catch *ever

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-15 Thread Bruce Momjian
Tom Lane wrote: > Robert Haas writes: > > On Sat, May 15, 2010 at 7:46 PM, David Fetter wrote: > >> Wouldn't this count as a bug fix? > > > Possibly, but changes to signal handlers are pretty global and can > > sometimes have surprising side effects. I'm all in favor of someone > > reviewing th

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-15 Thread Tom Lane
Robert Haas writes: > On Sat, May 15, 2010 at 7:46 PM, David Fetter wrote: >> Wouldn't this count as a bug fix? > Possibly, but changes to signal handlers are pretty global and can > sometimes have surprising side effects. I'm all in favor of someone > reviewing the patch - any volunteers? One
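
For context on why handler changes draw this caution: a signal disposition is process-wide state, so installing a SIGCHLD handler in psql affects anything else in the process that forks children (the pager, \! commands, backtick expansion). A minimal flag-setting handler, shown purely as an illustration and not taken from the submitted patch, might look like:

    #include <signal.h>

    /* Illustrative sketch only.  A handler should do almost nothing:
     * record that a child exited and return, letting the main loop act
     * on the flag.  Even this much is global -- it replaces whatever
     * SIGCHLD disposition the process had before. */
    static volatile sig_atomic_t child_exited = 0;

    static void
    sigchld_handler(int signo)
    {
        (void) signo;
        child_exited = 1;
    }

    static void
    install_sigchld_handler(void)
    {
        struct sigaction sa;

        sa.sa_handler = sigchld_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;   /* keep interrupted syscalls restarting */
        sigaction(SIGCHLD, &sa, NULL);
    }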

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-15 Thread Robert Haas
On Sat, May 15, 2010 at 7:46 PM, David Fetter wrote: >> >   Anyway, this makes FETCH_COUNT a lot more useful, and, in my view, the >> >   current behaviour of completely ignoring $PAGER exiting is a bug. >> >> Please add this to the next commit-fest: >> >>       https://commitfest.postgresql.org/ac

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-15 Thread David Fetter
On Fri, May 14, 2010 at 04:24:43PM -0400, Bruce Momjian wrote: > Stephen Frost wrote: > -- Start of PGP signed section. > > Greetings, > > > > Toying around with FETCH_COUNT today, I discovered that it didn't do > > the #1 thing I really wanted to use it for- query large tables without > > h

Re: [HACKERS] [PATCH] Add SIGCHLD catch to psql

2010-05-14 Thread Bruce Momjian
Stephen Frost wrote: -- Start of PGP signed section. > Greetings, > > Toying around with FETCH_COUNT today, I discovered that it didn't do > the #1 thing I really wanted to use it for- query large tables without > having to worry about LIMIT to see the first couple hundred records. > The r
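
For readers unfamiliar with the feature under discussion: with FETCH_COUNT set, psql runs the query through a cursor and fetches and prints a chunk of rows at a time instead of buffering the whole result. The sketch below shows that shape directly against libpq; it is a simplification for illustration (error checking trimmed, only the first column printed, a hypothetical cursor name _cur and helper run_chunked), not psql's actual ExecQueryUsingCursor code:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Simplified FETCH_COUNT-style loop: declare a cursor, then fetch
     * and print fetch_count rows per round trip. */
    static int
    run_chunked(PGconn *conn, const char *query, int fetch_count, FILE *out)
    {
        PGresult *res;
        char      declare[1024];
        char      fetch_cmd[64];
        int       ok = 1;

        res = PQexec(conn, "BEGIN");
        PQclear(res);

        /* Real code must interpolate the query text safely. */
        snprintf(declare, sizeof(declare),
                 "DECLARE _cur NO SCROLL CURSOR FOR %s", query);
        res = PQexec(conn, declare);
        PQclear(res);

        snprintf(fetch_cmd, sizeof(fetch_cmd),
                 "FETCH %d FROM _cur", fetch_count);

        for (;;)
        {
            res = PQexec(conn, fetch_cmd);
            int ntuples = PQntuples(res);

            for (int r = 0; r < ntuples; r++)
                fprintf(out, "%s\n", PQgetvalue(res, r, 0));
            PQclear(res);

            /* The point of the thread: a failed flush here (pager gone,
             * disk full) should end the loop rather than be ignored. */
            if (fflush(out) != 0 || ferror(out))
            {
                ok = 0;
                break;
            }

            if (ntuples < fetch_count)
                break;          /* last (possibly empty) chunk */
        }

        res = PQexec(conn, "CLOSE _cur");
        PQclear(res);
        res = PQexec(conn, "COMMIT");
        PQclear(res);
        return ok;
    }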