Thinking about this some more, why don't we just elog(FATAL) in
internal_flush() if the write fails, instead of setting the flag and
waiting for the next CHECK_FOR_INTERRUPTS()? That sounds scary at first,
but why not?
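
For reference, the flag-based approach amounts to roughly this (a sketch
only, as I read the patch; the ClientConnectionLost / ProcessInterrupts()
details might not match the final code exactly):

	/* in internal_flush(), when the write fails: */
	PqSendPointer = 0;			/* discard any unsent data */
	ClientConnectionLost = 1;	/* remember that the connection is gone */
	InterruptPending = 1;		/* so the next CHECK_FOR_INTERRUPTS() notices */
	return EOF;

	/* and later, in ProcessInterrupts(): */
	if (ClientConnectionLost)
	{
		whereToSendOutput = DestNone;	/* the client can't hear us anymore */
		ereport(FATAL,
				(errcode(ERRCODE_CONNECTION_FAILURE),
				 errmsg("connection to client lost")));
	}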
There's this comment in internal_flush():
	/*
	 * Careful: an ereport() that tries to write to the client would
	 * cause recursion to here, leading to stack overflow and core
	 * dump!  This message must go *only* to the postmaster log.
That's understandable.
	 * If a client disconnects while we're in the midst of output, we
	 * might write quite a bit of data before we get to a safe query
	 * abort point.  So, suppress duplicate log messages.
But what about this? Tracing back the callers, I don't see any that
would be upset if we just threw an error there. One scary aspect is if
you're within a critical section, but I don't think we currently send
any messages while in a critical section. And we could refrain from
throwing the error if we're in a critical section, to be safe (see the
sketch after the quoted code below).
	 */
	if (errno != last_reported_send_errno)
	{
		last_reported_send_errno = errno;
		ereport(COMMERROR,
				(errcode_for_socket_access(),
				 errmsg("could not send data to client: %m")));
	}
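
To make that concrete, here's roughly what I have in mind. This is only a
sketch, modeled on the existing internal_flush() in pqcomm.c; the
CritSectionCount check and the FATAL branch are the new parts, and pointing
whereToSendOutput at DestNone first is one way to make sure the FATAL
message goes only to the postmaster log instead of recursing into the send
path:

static int
internal_flush(void)
{
	static int	last_reported_send_errno = 0;

	char	   *bufptr = PqSendBuffer;
	char	   *bufend = PqSendBuffer + PqSendPointer;

	while (bufptr < bufend)
	{
		int			r = secure_write(MyProcPort, bufptr, bufend - bufptr);

		if (r <= 0)
		{
			if (errno == EINTR)
				continue;		/* Ok if we were interrupted */

			/* log each distinct errno once, and only to the postmaster log */
			if (errno != last_reported_send_errno)
			{
				last_reported_send_errno = errno;
				ereport(COMMERROR,
						(errcode_for_socket_access(),
						 errmsg("could not send data to client: %m")));
			}

			/*
			 * New part: if it's safe to throw, terminate the backend right
			 * here instead of limping along to the next
			 * CHECK_FOR_INTERRUPTS().
			 */
			if (CritSectionCount == 0)
			{
				whereToSendOutput = DestNone;
				ereport(FATAL,
						(errcode(ERRCODE_CONNECTION_FAILURE),
						 errmsg("connection to client lost")));
			}

			/* in a critical section, fall back to the old behaviour */
			PqSendPointer = 0;
			return EOF;
		}

		bufptr += r;
	}

	PqSendPointer = 0;
	return 0;
}
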
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com