> > >> Upstream should do what's best for upstream, not for Intel's "unique"
> > >> management.
> > >>
> > >> Not sure how from Emma explaining how Rb tags were used by Intel
> > >> management it came the conclusion that it were used in that way only
> > >> by
> > >> Intel management. Spoiler: it is not.
> > >
> > > Sorry, I'll make that point more emphatic.
> > >
> > > Upstream must do what's best for upstream without zero regard for the
> > > whims of management. Doubly so for bad management.
> >
> > If the r-b process ever had any notice from any company's management, I
> > haven't seen it. (Actually, I think most management would rather have
> > the short sighted view of skipping code review to more quickly merge
> > patches.) In terms of who to "track down", that is also a tenuous
> > connection.
>
> All of the above is true but also totally irrelevant to the actual discussion.
>
> When R-b as a metric came up at the time of the first switch, I wrote
> a really trivial Python script which used the GitLab API to scrape MR
> discussions and pull 'Reviewed-by: ...' comments out and print a
> leaderboard for number of reviewed MRs over the past calendar month.
> Adapting that to look at approvals rather than comments would cut it
> down to about 10 LoC.
>
> Whether it's Reviewed-by in the trailer or an approval, both are
> explicitly designed to be machine readable, which means it's trivial
> to turn it into a metric if you want to. Whether or not that's a good
> idea is the problem of whoever wields it.
Fair enough. I don't have strong views on the tags themselves. If upstream has good reasons to use them, let's use them; if upstream has good reasons to omit them, let's omit them. What I oppose is management metrics playing a role in upstream's discussion. That goes for any company's management; I do not mean to single out Intel here. But given the aforementioned script, even the managers will be happy either way. My apologies for prolonging what in retrospect is a silly discussion.
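
For reference, a minimal sketch of the kind of leaderboard script described
above (this assumes the python-gitlab package; the instance URL, project
path, and 30-day window are placeholders, not details of the original
script):

    #!/usr/bin/env python3
    # Tally 'Reviewed-by: ...' comments on recently merged MRs and print a
    # leaderboard. Instance, project, and time window are illustrative only.
    import datetime
    import re
    from collections import Counter

    import gitlab  # pip install python-gitlab

    gl = gitlab.Gitlab("https://gitlab.freedesktop.org")
    project = gl.projects.get("mesa/mesa")

    since = (datetime.datetime.now(datetime.timezone.utc)
             - datetime.timedelta(days=30)).isoformat()

    reviewers = Counter()
    for mr in project.mergerequests.list(state="merged",
                                         updated_after=since,
                                         get_all=True):
        # Count each reviewer at most once per MR, however many comments
        # they left on it.
        seen = set()
        for note in mr.notes.list(get_all=True):
            for m in re.finditer(r"Reviewed-by:\s*(.+)", note.body):
                seen.add(m.group(1).strip())
        reviewers.update(seen)

    for name, count in reviewers.most_common():
        print(f"{count:4d}  {name}")

Counting approvals instead, as suggested above, would replace the per-MR
notes loop with a single lookup of each MR's approvals (something like
mr.approvals.get()), which is where the "about 10 LoC" estimate comes from.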