On 11/01/2018 08:03 PM, Andres Freund wrote:
> On 2018-11-01 19:57:17 +0100, Tomas Vondra wrote:
>>>> I think that very much depends on how expensive the tasks handled by the
>>>> threads are. It may still be cheaper than a reasonable IPC, and if you
>>>> don't create/destroy threads, that also saves quite a bit of time.
>>>
>>> I'm not following. How can you have a pool *and* threads? Those seem to
>>> be contradictory in PG's architecture? You need full blown IPC with your
>>> proposal afaict?
>>>
>>
>> My suggestion was to create a bgworker, which would then internally
>> allocate and manage a pool of threads. It could then open some sort of
>> IPC (say, as dumb as unix socket). The backends could could then send
>> requests to it, and it would respond to them. Not sure why/how would
>> this contradict PG's architecture?
>
> Because you said "faster than reasonable IPC" - which to me implies that
> you don't do full blown IPC. Which using threads in a bgworker is very
> strongly implying. What you're proposing strongly implies multiple
> context switches just to process a few results. Even before, but
> especially after, spectre that's an expensive proposition.
>
Gah! I meant to write "faster with reasonable IPC" - i.e. faster/cheaper
than a solution that would create threads ad-hoc. My assumption is that
the tasks are fairly large and may take quite a bit of time to process
(say, a couple of seconds?), in which case the extra context switches are
not a major issue. But maybe I'm wrong.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
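
[For illustration, a minimal standalone sketch of the kind of setup being
discussed. It deliberately does not use the actual bgworker API; the socket
path, pool size and request format are made up. It only shows the general
shape: one process pre-creates a pool of threads once and then serves
backend requests arriving over a unix socket, so no threads are ever
created or destroyed per request.]

    /*
     * Hypothetical sketch only - not actual PostgreSQL bgworker code.
     * A single process owns a pool of worker threads and answers requests
     * arriving on a unix socket.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    #define POOL_SIZE   4
    #define SOCKET_PATH "/tmp/pool_worker.sock"    /* made-up path */

    /* Each worker thread blocks in accept() and handles one request at a time. */
    static void *
    worker_main(void *arg)
    {
        int     listen_fd = *(int *) arg;

        for (;;)
        {
            int     client_fd = accept(listen_fd, NULL, NULL);
            char    request[1024];
            ssize_t len;

            if (client_fd < 0)
                continue;

            /* read the request sent by a backend ... */
            len = read(client_fd, request, sizeof(request) - 1);
            if (len > 0)
            {
                request[len] = '\0';

                /*
                 * ... do the actual (expensive, multi-second) work here and
                 * send the result back.  The echo below is just a placeholder.
                 */
                write(client_fd, request, len);
            }

            close(client_fd);
        }

        return NULL;
    }

    int
    main(void)
    {
        int                 listen_fd;
        struct sockaddr_un  addr;
        pthread_t           threads[POOL_SIZE];

        /* create the unix socket the backends will connect to */
        listen_fd = socket(AF_UNIX, SOCK_STREAM, 0);
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);
        unlink(SOCKET_PATH);

        if (bind(listen_fd, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
            listen(listen_fd, 64) < 0)
        {
            perror("bind/listen");
            return EXIT_FAILURE;
        }

        /* spin up the thread pool once, instead of creating threads ad-hoc */
        for (int i = 0; i < POOL_SIZE; i++)
            pthread_create(&threads[i], NULL, worker_main, &listen_fd);

        for (int i = 0; i < POOL_SIZE; i++)
            pthread_join(threads[i], NULL);

        return EXIT_SUCCESS;
    }

[A backend would simply connect() to SOCKET_PATH, write its request, and
read the response - the cost per request is the IPC round trip plus the
context switches, which is what the thread-creation overhead is being
traded against.]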