On Fri, 14 Feb 2025 at 02:00, Kevin Fenzi <ke...@scrye.com> wrote:
> On Thu, Feb 13, 2025 at 12:15:16PM -0500, Dusty Mabe wrote:
> >
> > On 2/13/25 11:42 AM, Kevin Fenzi wrote:
> > > I agree with downthread folks that that seems like way too high a
> > > failure rate to enable gating on. However, a few questions if I can:
> > >
> > > Is this reporting to bodhi for all these components?
> > > Off hand checking a few I don't see any results from this?
> >
> > Hmm. Anytime a test gets run it should report results to bodhi. One thing
> > that might be tripping us up is these metrics are based on the activity in
> > our matrix channel https://matrix.to/#/#jenkins-coreos:fedoraproject.org
> > and IIUC we might have an issue where the reported RPM in the matrix message
> > that gets sent out *may* not be the actual RPM we're tracking (i.e. if an
> > update has 5 packages in it the reported failure may be against an RPM we're
> > actually not tracking, but the overall test and reported failure to bodhi
> > are valid). This is probably how emacs got in the list of failures.
>
> Ah. Makes sense.
>
> > > Wouldn't a good first step be to enable (non gating) these to show up
> > > there so maintainers/you can be more aware and help reduce problems?
> >
> > Is there an example where you think the test should have reported results
> > to bodhi, but didn't? It's probably what I mentioned above.
>
> I looked again and I missed that these were just mixed in with the rest.
> Somehow I was expecting another section. I assume they are the
> coreos.cosa.build-and-test and the like tests?
>
> But if you look at say:
> https://bodhi.fedoraproject.org/updates/FEDORA-2025-9f28dbb79a
> I don't actually see any coreos* results there?

nbdkit is not a package that we trigger the cosa tests against; see the full list at https://github.com/coreos/coreos-ci/blob/main/bodhi-testing.yaml
It is, I believe, similar to what Dusty described earlier, where an update contains more than one package. I could make the script I use to collect the data smarter, so that it only looks at the packages we trigger tests on.

> > > Also, I see in the table a few packages have really high failure rate.
> > > (nbdkit, makedumpfile, etc). Perhaps fixing these would lower the entire
> > > failure rate a good deal?
> >
> > Indeed, we're improving all the time!
>
> :)
>
> kevin
> --
> _______________________________________________
> devel mailing list -- devel@lists.fedoraproject.org
> To unsubscribe send an email to devel-le...@lists.fedoraproject.org
> Fedora Code of Conduct:
> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:
> https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org
> Do not reply to spam, report it:
> https://pagure.io/fedora-infrastructure/new_issue
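Roughly, the "smarter" filtering step could look like the minimal sketch below. Everything here is illustrative: the `tracked` set stands in for the real trigger list maintained in coreos-ci's bodhi-testing.yaml (which the script would load instead of hard-coding), and the package names in it are assumptions, not the actual list contents.

```python
# Hypothetical sketch: given the packages in a Bodhi update, keep only
# the ones we actually trigger cosa tests on, so failure stats aren't
# attributed to untracked packages (the "emacs" situation above).

# Stand-in for the trigger list in bodhi-testing.yaml; names are illustrative.
tracked = {"makedumpfile", "nbdkit", "kernel", "podman"}

def packages_we_track(update_packages, tracked=tracked):
    """Return only the packages from an update that are on the trigger list."""
    return sorted(p for p in update_packages if p in tracked)

# An update carrying five packages, only two of which we trigger on:
print(packages_we_track(["emacs", "nbdkit", "libfoo", "makedumpfile", "bar"]))
# → ['makedumpfile', 'nbdkit']
```

The stats collector would then count a matrix-reported failure against an update only when this filter returns a non-empty list, and attribute it to the tracked package(s) rather than whichever RPM happened to appear in the matrix message.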