On Thu, Dec 26, 2019 at 10:26 AM Julien Rouhaud <rjuju...@gmail.com> wrote:
> On Thu, Dec 26, 2019 at 10:20 AM Guillaume Lelarge
> <guilla...@lelarge.info> wrote:
> >
> > On Thu, Dec 26, 2019 at 9:49 AM Julien Rouhaud <rjuju...@gmail.com> wrote:
> >>
> >> On Thu, Dec 26, 2019 at 9:08 AM Guillaume Lelarge
> >> <guilla...@lelarge.info> wrote:
> >> >
> >> > On Wed, Dec 25, 2019 at 7:30 PM Julien Rouhaud <rjuju...@gmail.com> wrote:
> >> >>
> >> >> On Wed, Dec 25, 2019 at 7:03 PM Julien Rouhaud <rjuju...@gmail.com> wrote:
> >> >> >
> >> >> > Guillaume (in Cc) recently pointed out [1] that it's currently not
> >> >> > possible to retrieve the list of parallel workers for a given backend
> >> >> > at the SQL level.  His use case was to develop a function in plpgsql
> >> >> > to sample a given query's wait events, but it's not hard to imagine
> >> >> > other useful use cases for this information, for instance doing some
> >> >> > analysis on the average number of workers per parallel query, or the
> >> >> > ratio of parallel queries.  IIUC parallel queries are for now the only
> >> >> > user of lock groups, so this should work just fine.
> >> >> >
> >> >> > I'm attaching a trivial patch to expose the group leader pid, if any,
> >> >> > in pg_stat_activity.  Quick example of usage:
> >> >> >
> >> >> > =# SELECT query, leader_pid,
> >> >> >       array_agg(pid) filter(WHERE leader_pid != pid) AS members
> >> >> >    FROM pg_stat_activity
> >> >> >    WHERE leader_pid IS NOT NULL
> >> >> >    GROUP BY query, leader_pid;
> >> >> >        query       | leader_pid |    members
> >> >> > -------------------+------------+---------------
> >> >> >  select * from t1; |      28701 | {28728,28732}
> >> >> > (1 row)
> >> >> >
> >> >> > [1] https://twitter.com/g_lelarge/status/1209486212190343168
> >> >>
> >> >> And I just realized that I forgot to update rule.out, sorry about
> >> >> that.  v2 attached.
> >> >
> >> > So I tried your patch this morning, and it works really well.
> >> >
> >> > On a SELECT count(*), I got this:
> >> >
> >> > SELECT leader_pid, pid, wait_event_type, wait_event, state, backend_type
> >> > FROM pg_stat_activity WHERE pid=111439 OR leader_pid=111439;
> >> >
> >> > ┌────────────┬────────┬─────────────────┬──────────────┬────────┬─────────────────┐
> >> > │ leader_pid │  pid   │ wait_event_type │  wait_event  │ state  │  backend_type   │
> >> > ├────────────┼────────┼─────────────────┼──────────────┼────────┼─────────────────┤
> >> > │     111439 │ 111439 │ LWLock          │ WALWriteLock │ active │ client backend  │
> >> > │     111439 │ 116887 │ LWLock          │ WALWriteLock │ active │ parallel worker │
> >> > │     111439 │ 116888 │ IO              │ WALSync      │ active │ parallel worker │
> >> > └────────────┴────────┴─────────────────┴──────────────┴────────┴─────────────────┘
> >> > (3 rows)
> >> >
> >> > and this from a CREATE INDEX:
> >> >
> >> > ┌────────────┬────────┬─────────────────┬────────────┬────────┬─────────────────┐
> >> > │ leader_pid │  pid   │ wait_event_type │ wait_event │ state  │  backend_type   │
> >> > ├────────────┼────────┼─────────────────┼────────────┼────────┼─────────────────┤
> >> > │     111439 │ 111439 │                 │            │ active │ client backend  │
> >> > │     111439 │ 118775 │                 │            │ active │ parallel worker │
> >> > └────────────┴────────┴─────────────────┴────────────┴────────┴─────────────────┘
> >> > (2 rows)
> >> >
> >> > Anyway, it applies cleanly, it compiles, and it works.  Documentation is
> >> > available.  So it looks to me like it's good to go :)
> >>
> >> Thanks for the review, Guillaume.  Double-checking the doc, I see that
> >> I made a copy/pasto mistake in the new field name.
> >> Attached v3 should be all good.
> >
> > I feel bad that I missed this.  But, yeah, it's much better with the right
> > column name.
> >
> > For me, it looks ready for committer.  Should I set it this way in the
> > Commit Fest app?
>
> If you don't see any other issue with the patch, I'd say yes.  A
> committer can still put it back to waiting on author if needed.

That's also what I thought, but as I was the only one commenting on this...
Anyway, done.


--
Guillaume.
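
As an illustrative sketch only (not part of the patch or the thread above),
the wait event sampling use case mentioned at the start of the thread could
look roughly like the plpgsql function below, assuming the patched
pg_stat_activity exposes the new leader_pid column.  The function name
sample_wait_events and its parameters are made up for this example; it simply
polls pg_stat_activity for a given leader pid and its parallel workers, and
counts how often each wait event shows up.

-- Sketch only: build a small wait event profile for one parallel query,
-- assuming pg_stat_activity has the leader_pid column from the patch.
CREATE OR REPLACE FUNCTION sample_wait_events(p_leader int,
                                              p_samples int DEFAULT 100,
                                              p_interval float DEFAULT 0.01)
RETURNS TABLE (wait_event text, occurrences bigint)
LANGUAGE plpgsql AS $$
DECLARE
    samples text[] := '{}';
BEGIN
    FOR i IN 1 .. p_samples LOOP
        -- One sample: the leader plus every backend reporting it as leader.
        -- A NULL wait event (backend running on CPU) is labelled 'CPU'.
        samples := samples || ARRAY(
            SELECT coalesce(a.wait_event_type || '/' || a.wait_event, 'CPU')
            FROM pg_stat_activity a
            WHERE a.pid = p_leader OR a.leader_pid = p_leader);
        PERFORM pg_sleep(p_interval);
    END LOOP;

    -- Aggregate all samples into a per-wait-event count.
    RETURN QUERY
        SELECT s, count(*)
        FROM unnest(samples) AS s
        GROUP BY s
        ORDER BY count(*) DESC;
END;
$$;

-- Example call, while a parallel query is running with leader pid 111439:
-- SELECT * FROM sample_wait_events(111439, 200, 0.01);

The same leader_pid column would also cover the other use case Julien
mentions (average number of workers per parallel query, or ratio of parallel
queries), with a plain aggregate over pg_stat_activity much like the
example query quoted above.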