On Fri, Jan 14, 2022 at 7:30 PM Thomas Munro wrote:
> On Fri, Jan 14, 2022 at 4:35 PM Andres Freund wrote:
> > The more I think about it, the less I see why we *ever* need to re-arm the
> > latch in pq_check_connection() in this approach. pq_check_connection() is only
> > used from Proc
On Fri, Jan 14, 2022 at 4:35 PM Andres Freund wrote:
> The more I think about it, the less I see why we *ever* need to re-arm the
> latch in pq_check_connection() in this approach. pq_check_connection() is only
> used from ProcessInterrupts(), and there are plenty of things inside
> ProcessInterru
Hi,
On 2022-01-11 22:59:13 +1300, Thomas Munro wrote:
> I considered another idea we discussed: if we see a latch event, clear
> it and try again so that other events can be revealed (rinse and
> repeat), but remember if that happens and set the latch at the end. I
> think that still requires PG_
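Sketched in code, the idea in that quoted paragraph looks roughly like the following. The helper name socket_is_closed() is made up here, and WL_SOCKET_CLOSED is the new event the patch set proposes, so treat this as an illustration rather than the committed code.

#include "postgres.h"
#include "miscadmin.h"
#include "storage/latch.h"

/* Illustrative only: poll a wait event set for a closed-socket event,
 * temporarily clearing a pending latch so lower-priority events can be
 * reported, and re-setting the latch before returning. */
static bool
socket_is_closed(WaitEventSet *set)
{
    WaitEvent   event;
    bool        latch_was_set = false;
    bool        closed = false;

    for (;;)
    {
        /* Zero timeout: just poll, never block. */
        if (WaitEventSetWait(set, 0, &event, 1, 0) == 0)
            break;              /* nothing pending */

        if (event.events & WL_LATCH_SET)
        {
            /* Clear the latch so other events can be revealed, but remember. */
            latch_was_set = true;
            ResetLatch(MyLatch);
            continue;
        }

        if (event.events & WL_SOCKET_CLOSED)
            closed = true;
        break;
    }

    if (latch_was_set)
        SetLatch(MyLatch);      /* don't lose the wakeup */

    return closed;
}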
On Tue, Dec 14, 2021 at 11:50 PM Thomas Munro wrote:
> On Tue, Dec 14, 2021 at 11:18 AM Thomas Munro wrote:
> > Well, I was trying to avoid bikeshedding an API change just for a
> > hypothetical problem we could solve when the time comes (say, after
> > fixing the more egregious problems with App
On Tue, Dec 14, 2021 at 11:18 AM Thomas Munro wrote:
> On Tue, Dec 14, 2021 at 7:53 AM Andres Freund wrote:
> > On 2021-12-13 17:51:00 +1300, Thomas Munro wrote:
> > > I tried that. It seems OK, and gets rid of the PG_FINALLY(), which is
> > > nice. Latches still have higher priority, and still
On Tue, Dec 14, 2021 at 7:53 AM Andres Freund wrote:
> On 2021-12-13 17:51:00 +1300, Thomas Munro wrote:
> > I tried that. It seems OK, and gets rid of the PG_FINALLY(), which is
> > nice. Latches still have higher priority, and still have the fast
> > return if already set and you asked for onl
Hi,
On 2021-12-13 17:51:00 +1300, Thomas Munro wrote:
> On Sat, Dec 11, 2021 at 7:09 PM Thomas Munro wrote:
> > On Sat, Dec 11, 2021 at 6:11 PM Andres Freund wrote:
> > > Yuck. Is there really no better way to deal with this? What kind of
> > > errors is this trying to handle transparentl
On Mon, Dec 13, 2021 at 5:51 PM Thomas Munro wrote:
> [...] Everywhere else calls
> with nevents == 1, so that's hypothetical.
Erm, I forgot about ExecAppendAsyncEventWait(), so I'd have to update
the commit message on that point, but it's hard to worry too much
about that case -- it's creating a
On Sat, Dec 11, 2021 at 7:09 PM Thomas Munro wrote:
> On Sat, Dec 11, 2021 at 6:11 PM Andres Freund wrote:
> > Yuck. Is there really no better way to deal with this? What kind of errors
> > is this trying to handle transparently? Afaics this still changes when we'd
> > e.g. detect postmaster
On Sat, Dec 11, 2021 at 6:11 PM Andres Freund wrote:
> Yuck. Is there really no better way to deal with this? What kind of errors is
> this trying to handle transparently? Afaics this still changes when we'd
> e.g. detect postmaster death.
The problem is that WaitEventSetWait() only reports the l
Hi,
On 2021-12-11 17:41:34 +1300, Thomas Munro wrote:
> --- a/src/backend/libpq/pqcomm.c
> +++ b/src/backend/libpq/pqcomm.c
> @@ -1959,22 +1959,36 @@ pq_settcpusertimeout(int timeout, Port *port)
> bool
> pq_check_connection(void)
> {
> -#if defined(POLLRDHUP)
> - /*
> - * POLLRDHUP is
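For context, the hunk quoted above removes the Linux-specific poll()/POLLRDHUP block; the replacement being discussed routes the check through the WaitEventSet machinery instead. A rough sketch of that shape, assuming the new WL_SOCKET_CLOSED event and the backend's existing FeBeWaitSet (an illustration, not necessarily the exact committed function):

#include "postgres.h"
#include "libpq/libpq.h"
#include "storage/latch.h"

/* Sketch: return false if the kernel reports that the client socket is closed. */
bool
pq_check_connection(void)
{
    WaitEvent   events[3];
    int         rc;

    /* Watch only for the closed-socket condition on the client socket. */
    ModifyWaitEvent(FeBeWaitSet, FeBeWaitSetSocketPos, WL_SOCKET_CLOSED, NULL);

    /* Zero timeout: poll without blocking. */
    rc = WaitEventSetWait(FeBeWaitSet, 0, events, lengthof(events), 0);
    for (int i = 0; i < rc; ++i)
    {
        if (events[i].events & WL_SOCKET_CLOSED)
            return false;
    }

    return true;
}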
On Tue, Oct 12, 2021 at 3:10 AM Maksim Milyutin wrote:
> Good work. I have tested your patch on Linux and FreeBSD on three basic
> cases: client killing, cable breakdown (via manipulations with firewall)
> and silently closing the client connection before completion of a previously
> started query in asyn
On 12.06.2021 07:24, Thomas Munro wrote:
On Fri, Apr 30, 2021 at 2:23 PM Thomas Munro wrote:
Here's something I wanted to park here to look into for the next
cycle: it turns out that kqueue's EV_EOF flag also has the right
semantics for this. That leads to the idea of exposing the event via
On Thu, Oct 7, 2021 at 8:43 PM Thomas Munro wrote:
> On Sat, Jun 12, 2021 at 8:31 PM Zhihong Yu wrote:
> > +#ifdef POLLRDHUP
> > + if ((cur_event->events & WL_SOCKET_CLOSED) &&
> > + (cur_pollfd->revents & (POLLRDHUP | errflags)))
> >
> > It seems the last condition above
On Sat, Jun 12, 2021 at 8:31 PM Zhihong Yu wrote:
> +#ifdef POLLRDHUP
> + if ((cur_event->events & WL_SOCKET_CLOSED) &&
> + (cur_pollfd->revents & (POLLRDHUP | errflags)))
>
> It seems the last condition above should be written as:
>
> ((cur_pollfd->revents & POLLRDHUP) | (
On Fri, Jun 11, 2021 at 9:24 PM Thomas Munro wrote:
> On Fri, Apr 30, 2021 at 2:23 PM Thomas Munro
> wrote:
> > Here's something I wanted to park here to look into for the next
> > cycle: it turns out that kqueue's EV_EOF flag also has the right
> > semantics for this. That leads to the idea o
On Fri, Apr 30, 2021 at 2:23 PM Thomas Munro wrote:
> Here's something I wanted to park here to look into for the next
> cycle: it turns out that kqueue's EV_EOF flag also has the right
> semantics for this. That leads to the idea of exposing the event via
> the WaitEventSet API, and would the b
On Sat, Apr 3, 2021 at 9:27 AM Thomas Munro wrote:
> Pushed! Thanks to all who contributed.
Here's something I wanted to park here to look into for the next
cycle: it turns out that kqueue's EV_EOF flag also has the right
semantics for this. That leads to the idea of exposing the event via
the
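As a point of reference for the EV_EOF remark: with an EVFILT_READ filter registered for a socket, kevent() sets EV_EOF in the returned flags once the peer has closed its end. A standalone sketch, not PostgreSQL code (the helper name is invented for illustration):

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdbool.h>

/* Illustrative helper: ask kqueue (BSD/macOS) whether the peer has closed
 * the socket, without blocking. */
static bool
socket_eof_via_kqueue(int kq, int sock)
{
    struct kevent change;
    struct kevent result;
    struct timespec zero = {0, 0};
    int n;

    EV_SET(&change, sock, EVFILT_READ, EV_ADD, 0, 0, NULL);
    n = kevent(kq, &change, 1, &result, 1, &zero);  /* zero timeout: poll */

    return n > 0 && (result.flags & EV_EOF) != 0;
}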
On Fri, Apr 2, 2021 at 6:18 PM tsunakawa.ta...@fujitsu.com
wrote:
> The patch looks committable to me.
I checked for performance impact compared to master with pgbench -S,
and didn't see any problem. I thought more about how to write a
decent race-free test but struggled with the lack of a good
From: Thomas Munro
> > Following PostmasterIsAlive(), maybe it's better to name it as
> pq_connection_is_alive(), pq_client_is_alive(), or pq_frontend_is_alive() like
> the pqcomm.c's head comment uses the word frontend?
>
> I think it's OK, because it matches the name of the GUC. I'm more concer
On Fri, Apr 2, 2021 at 1:36 AM Bharath Rupireddy
wrote:
> Here's a minor comment: it would be good if we have an extra line
> after variable assignments, before and after function calls/if
> clauses, something like
Done in v11. Thanks.
On Thu, Apr 1, 2021 at 10:16 PM tsunakawa.ta...@fujitsu.com
wrote:
> From: Thomas Munro
> > I changed my mind. Let's commit the pleasingly simple Linux-only feature
> > for
> > now, and extend it to send some kind of no-op message in a later release.
> > So this is the version I'd like to go
On Thu, Apr 1, 2021 at 11:29 AM Thomas Munro wrote:
>
> On Tue, Mar 30, 2021 at 10:00 AM Thomas Munro wrote:
> > If we want to ship this in v14 we have to make a decision ASAP:
> >
> > 1. Ship the POLLHUP patch (like v9) that only works reliably on
> > Linux. Maybe disable the feature completel
From: Thomas Munro
> I changed my mind. Let's commit the pleasingly simple Linux-only feature for
> now, and extend it to send some kind of no-op message in a later release.
> So this is the version I'd like to go with.
> Objections?
+1, as some of our users experienced the problem that the s
On Tue, Mar 30, 2021 at 10:00 AM Thomas Munro wrote:
> If we want to ship this in v14 we have to make a decision ASAP:
>
> 1. Ship the POLLHUP patch (like v9) that only works reliably on
> Linux. Maybe disable the feature completely on other OSes?
> 2. Ship the patch that tries to read (like v7
On Tue, Mar 30, 2021 at 6:25 AM Maksim Milyutin wrote:
>
> Hi Thomas! Thanks for working on this patch.
>
> I have attached a new version with some typo corrections to the doc entry,
> removal of redundant `include` entries and trailing whitespace. Also I
> added in the doc the case when a single query tr
Hi Thomas! Thanks for working on this patch.
I have attached a new version with some typo corrections to the doc entry,
removal of redundant `include` entries and trailing whitespace. Also I
added in the doc the case when a single query transaction with a disconnected
client might eventually be committed
Hi,
On 2021-03-24 16:08:13 +1300, Thomas Munro wrote:
> ... Andres just asked me the same question, when we were discussing
> the pq_peekmessage() patch (v7). I had remembered that POLLHUP didn't
> work for this type of thing, from some earlier attempt at something
> similar, and indeed on my fir
Going back a couple of years to something Konstantin said:
On Sat, Aug 3, 2019 at 4:40 AM Konstantin Knizhnik
wrote:
> But I wonder why we can not just perform a poll with the POLLOUT flag and zero
> timeout.
> If the OS detected a closed connection, it should return POLLHUP, should it not?
> I am not sure i
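Konstantin's suggestion, spelled out as a standalone sketch with plain POSIX poll(); the helper name is invented for illustration:

#include <poll.h>
#include <stdbool.h>

/* Illustrative helper: zero-timeout poll() to see whether the kernel has
 * already noticed that the connection is gone. POLLHUP/POLLERR/POLLNVAL are
 * reported regardless of the requested event mask. */
static bool
connection_looks_dead(int sock)
{
    struct pollfd pfd;

    pfd.fd = sock;
    pfd.events = POLLOUT;
    pfd.revents = 0;

    if (poll(&pfd, 1, 0) <= 0)  /* 0 = nothing to report, <0 = poll() failed */
        return false;

    return (pfd.revents & (POLLHUP | POLLERR | POLLNVAL)) != 0;
}

As discussed elsewhere in the thread, this only notices a disconnect the local TCP stack has already seen (a FIN or RST); a pulled cable is only detected once keepalives give up.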
Hi,
In the description:
Provide a new optional GUC that can be used to check whether the client
connection has gone away periodically while running very long queries.
I think moving 'periodically' to the vicinity of 'to check' would make the
sentence more readable.
+the operating system,
On Tue, Mar 23, 2021 at 11:47 PM Thomas Munro wrote:
> That leaves the thorny problem Tom mentioned at the top of this
> thread[1]: this socket-level approach can be fooled by an 'X' sitting
> in the socket buffer, left by a client that did PQsendQuery() and then
> PQfinish(). Or perhaps even by SSL m
On Mon, Mar 22, 2021 at 3:29 PM Thomas Munro wrote:
> 2. The tests need tightening up. The thing with the "sleep 3" will
> not survive contact with the build farm, and I'm not sure if the SSL
> test is as short as it could be.
I don't think the TAP test can be done in the way Sergey had it,
bec
On Sat, Mar 6, 2021 at 5:50 PM Zhihong Yu wrote:
> + if (client_connection_check_interval > 0)
> + enable_timeout_after(CLIENT_CONNECTION_CHECK_TIMEOUT,
>
> + /* Start timeout for checking if the client has gone away if necessary. */
> + if (client_connection_check_interval)
T
For v4-0002-some-fixups.patch :
+ if (client_connection_check_interval > 0)
+ enable_timeout_after(CLIENT_CONNECTION_CHECK_TIMEOUT,
+ /* Start timeout for checking if the client has gone away if necessary. */
+ if (client_connection_check_interval)
It would be better if the s
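The review point appears to be about using the same test at both sites; roughly, using the timeout identifiers visible in the quoted hunk (a sketch, not the exact final code):

/* Start the periodic client-liveness check only when the GUC is enabled;
 * the same "> 0" test is used wherever the interval is consulted. */
if (client_connection_check_interval > 0)
    enable_timeout_after(CLIENT_CONNECTION_CHECK_TIMEOUT,
                         client_connection_check_interval);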
On Mon, Mar 1, 2021 at 6:18 PM Thomas Munro wrote:
> I've done a quick rebase of the patch and added it to the
> commitfest. No other changes. Several things were mentioned earlier
> that still need to be tidied up.
Rebased again due to bitrot. This time I did some actual work:
1. I did
On Sat, Aug 3, 2019 at 4:40 AM Konstantin Knizhnik
wrote:
> On 18.07.2019 6:19, Tatsuo Ishii wrote:
> > So the performance is about 5% down with the feature enabled in this
> > case. For me, 5% down is not subtle. Probably we should warn about this in
> > the doc.
>
> I also see some performance degrad
On 18.07.2019 6:19, Tatsuo Ishii wrote:
I noticed that there is some confusion in the doc and code regarding what the
new configuration parameter means. According to the doc:
+Default value is zero - it disables connection
+checks, so the backend will detect client disconn
On Thu, Jul 18, 2019 at 5:04 PM Tom Lane wrote:
> Tatsuo Ishii writes:
> >> Yeah, the timer logic is wrong. I didn't have time to look into it
> >> but with truss/strace for some reason I see 3 setitimer() syscalls for
> >> every query, but I think this doesn't even need to set the timer for
> >
Tatsuo Ishii writes:
>> Yeah, the timer logic is wrong. I didn't have time to look into it
>> but with truss/strace for some reason I see 3 setitimer() syscalls for
>> every query, but I think this doesn't even need to set the timer for
>> every query.
> Hum. I see 2 setitimer() calls, instead of 3.
s
> Yeah, the timer logic is wrong. I didn't have time to look into it
> but with truss/strace for some reason I see 3 setitimer() syscalls for
> every query, but I think this doesn't even need to set the timer for
> every query.
Hum. I see 2 setitimer() calls, instead of 3.
Best regards,
--
Tatsuo Ishii
On Thu, Jul 18, 2019 at 3:19 PM Tatsuo Ishii wrote:
> So the performance is about 5% down with the feature enabled in this
> case. For me, 5% down is not subtle. Probably we should warn about this in
> the doc.
Yeah, the timer logic is wrong. I didn't have time to look into it
but with truss/strace f
> Yeah.
>
> +1 for this patch, with a few adjustments including making the test
> use pg_sleep() as mentioned. It does something useful, namely
> cancelling very long running queries sooner if the client has gone
> away instead of discovering that potentially much later when sending a
> response.
On Sat, Jul 6, 2019 at 12:27 AM Stas Kelvich wrote:
> Well, indeed in case of a cable disconnect the only way to detect it with the
> proposed approach is to have TCP keepalive. However, if the disconnection
> happens due to client application shutdown, then the client OS should itself
> properly close that connectio
> On 5 Jul 2019, at 11:46, Thomas Munro wrote:
>
> On Fri, Jul 5, 2019 at 6:28 PM Tatsuo Ishii wrote:
>>> The purpose of this patch is to stop the execution of continuous
>>> requests in case of a disconnection from the client.
>>
>> Pgpool-II already does this by sending a parameter status
On Fri, Jul 5, 2019 at 6:28 PM Tatsuo Ishii wrote:
> > The purpose of this patch is to stop the execution of continuous
> > requests in case of a disconnection from the client.
>
> Pgpool-II already does this by sending a parameter status message to
> the client. It is expected that clients are al
On Fri, Jul 5, 2019 at 6:42 PM Tatsuo Ishii wrote:
> > This seems like a reasonable idea to me. There is no point in running
> > a monster 24 hour OLAP query if your client has gone away. It's using
> > MSG_PEEK which is POSIX, and I can't immediately think of any reason
> > why it's not safe to
> This seems like a reasonable idea to me. There is no point in running
> a monster 24 hour OLAP query if your client has gone away. It's using
> MSG_PEEK which is POSIX, and I can't immediately think of any reason
> why it's not safe to try to peek at a byte in that socket at any time.
I am not
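A bare-bones illustration of the MSG_PEEK technique under discussion; the helper name and the MSG_DONTWAIT flag (a widespread extension used here just to avoid blocking) are choices made for this sketch:

#include <sys/socket.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative helper: peek at one byte without consuming it.
 * recv() returning 0 means the peer performed an orderly shutdown. */
static bool
client_has_gone_away(int sock)
{
    char    buf;
    ssize_t r;

    r = recv(sock, &buf, 1, MSG_PEEK | MSG_DONTWAIT);
    if (r == 0)
        return true;            /* connection closed by the client */
    if (r < 0 && errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR)
        return true;            /* hard socket error: treat as gone */
    return false;               /* data pending, or nothing to report yet */
}

Note the limitation raised elsewhere in the thread: if the client already sent something (for example a Terminate message) before going away, the peek sees pending data rather than EOF.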
> The purpose of this patch is to stop the execution of continuous
> requests in case of a disconnection from the client.
Pgpool-II already does this by sending a parameter status message to
the client. It is expected that clients are always prepared to receive
the parameter status message. This w
On Fri, Jul 5, 2019 at 5:36 PM Thomas Munro wrote:
> On Sat, Feb 9, 2019 at 6:16 AM wrote:
> > The purpose of this patch is to stop the execution of continuous
> > requests in case of a disconnection from the client. In most cases, the
> > client must wait for a response from the server before se
On Sat, Feb 9, 2019 at 6:16 AM wrote:
> The purpose of this patch is to stop the execution of continuous
> requests in case of a disconnection from the client. In most cases, the
> client must wait for a response from the server before sending new data
> - which means there should not remain unrea
The purpose of this patch is to stop the execution of continuous
requests in case of a disconnection from the client. In most cases, the
client must wait for a response from the server before sending new data
- which means there should not remain unread data on the socket and we
will be able to
Hi,
On 2019-01-13 18:05:39 -0500, Tom Lane wrote:
> s.cherkas...@postgrespro.ru writes:
> > This patch adds verification of the connection with the client during
> > the execution of the SQL query. The feature is enabled using the GUC
> > variable ‘client_connection_check_interval’. The default che
s.cherkas...@postgrespro.ru writes:
> This patch adds verification of the connection with the client during
> the execution of the SQL query. The feature is enabled using the GUC
> variable ‘client_connection_check_interval’. The default check interval
> is 1 second. If you set the value of ‘client
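For completeness, using the proposed GUC is ordinary PostgreSQL configuration; the one-second value below just mirrors the default mentioned above and is otherwise arbitrary:

# postgresql.conf: check the client connection once per second while a query runs
client_connection_check_interval = 1s

-- or per session:
SET client_connection_check_interval = '1s';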