On Tue, Sep 26, 2023 at 04:53:05PM -0700, Steve Langasek wrote:
> On Mon, Sep 25, 2023 at 03:22:59PM -0700, Bryce Harrington wrote:
> > Moreover, there are other use cases beyond test failure fixing.
> > Consider MREs and SRUs, where you prepare a package in a PPA, and run
> > autopkgtest as part of the criteria for having the package be accepted.
>
> For the record, I don't believe the SRU team has ever asked for pre-upload
> autopkgtests as a condition of an MRE.
That's not correct; there's been at least one recent MRE I'm aware of that
did this:

  https://bugs.launchpad.net/ubuntu/+source/openldap/+bug/2027079

It'd be reasonable to expect more along these lines. But MREs are just one
of several use cases I outlined. PPA+autopkgtest is a handy service and
flexible for a number of different workflows.

> Given the frequency with which autopkgtest infrastructure gets overloaded,
> I generally take the view that ppa autopkgtest runs should be kept to a
> minimum because the results don't transfer to the main archive and all
> have to be run again, and it's the second run that actually matters for
> proposed-migration.

Well, that's moving the goalposts on this argument - the original concern
was regarding test log retention for already-run PPA tests, not whether
autopkgtesting against PPAs is useful at all or just contributes to
overloading the infrastructure.

That's an entirely different question, but I'd push back on that too.
Tests run against PPAs are processed at a lower default priority than
those for the primary archive. So, assuming you're queuing a test run that
is destined to fail, doing so in a PPA actually *helps* the infrastructure
load-balance when it is in an overloaded state. Not to mention that
uploading a broken test for one package risks causing any of its
dependencies to also run into trouble, which can cause a cascade of people
retriggering things to try to figure out what introduced the problem.

Ultimately, our goal here is to ensure the highest quality of Ubuntu
possible. Obviously none of us wish to logjam Britney by pushing it beyond
its capabilities. But if that is indeed a risk, wouldn't it be better to
strengthen Britney rather than weaken our testing processes?

Anyway, this is way more verbose than I intended. I of course understand
there are trade-offs and that tech can have weird and unexpected
limitations. My original question was just why 8 weeks was felt preferable
to a larger number. If there's a strong reason for that, we'll just have
to live with it, but to me 26 weeks would seem like it'd be long enough to
avoid most of the (admittedly outlier) issues I could imagine.

Bryce