On Tue, Dec 10, 2019 at 9:27 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> Amit Kapila <amit.kapil...@gmail.com> writes:
> > On Sun, Dec 8, 2019 at 10:27 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
> >> Doing it like this seems attractive to me because it gets rid of two
> >> different failure modes: inability to create a new thread and inability
> >> to create a new pipe handle.  Now on the other hand, it means that
> >> inability to complete the read/write transaction with a client right
> >> away will delay processing of other signals.  But we know that the
> >> client is engaged in a CallNamedPipe operation, so how realistic is
> >> that concern?
>
> > Right, the client is engaged in a CallNamedPipe operation, but the
> > current mechanism can allow multiple such clients and that might lead
> > to faster processing of signals.
>
> It would only matter if multiple processes signal the same backend at the
> same time, which seems to me to be probably a very minority use-case.
> For the normal case of one signal arriving at a time, what I'm suggesting
> ought to be noticeably faster because of fewer kernel calls.  Surely
> creating a new pipe instance and a new thread are not free.
>
> In any case, the main thing I'm on about here is getting rid of the
> failure modes.  The existing code does have a rather lame/buggy
> workaround for the cant-create-new-pipe case.  A possible answer for
> cant-create-new-thread might be to go ahead and service the current
> request locally in the long-lived signal thread.  But that seems like
> it's piling useless (and hard to test) complexity on top of useless
> complexity.
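For illustration, the pattern Tom proposes (one long-lived thread that services each client request in place, instead of spawning a new thread and pipe instance per request) can be sketched portably. This is only a sketch of the idea, not PostgreSQL's code: a localhost TCP socket stands in for the Windows named pipe, a one-byte payload stands in for the signal number, and all names here are invented for the example.

```python
import socket
import threading

received = []

def signal_server(listener, n_requests):
    # One long-lived "signal thread": accept a client, read its request,
    # send the reply, and only then move on to the next client.  No
    # per-connection thread or extra pipe instance is ever created.
    for _ in range(n_requests):
        conn, _addr = listener.accept()
        with conn:
            signum = conn.recv(1)      # read the "signal number"
            received.append(signum[0])
            conn.sendall(signum)       # echo it back as the ack

def send_signal(port, signum):
    # Client side, loosely analogous to CallNamedPipe: connect, write
    # the request, and block until the server's reply arrives.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(bytes([signum]))
        return s.recv(1)[0]

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

t = threading.Thread(target=signal_server, args=(listener, 2))
t.start()
acks = [send_signal(port, 15), send_signal(port, 2)]
t.join()
listener.close()
print(acks)
```

Because each sender blocks only for the duration of its own short read/write exchange, serialising the requests in one thread costs little in the common one-signal-at-a-time case, which is the trade-off argued above.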
I am convinced by your points, so +1 for your proposed patch.  I reviewed
it yesterday and it appears fine to me.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com