On 2016-03-13 22:51 -0700, Hal Murray wrote:
<snip>
> The first is bleeding edge. Developers can grab the latest from git. In
> this context, testers count as developers even if they don't write any
> code. Support consists of going forward. Old releases are never fixed.
> They are supported only in that you can get them from git to back out of
> recent changes if they break something in your environment and/or you are
> bisecting a bug.
I have a slew of new Builders to put on our BuildBot instance. One of them
will create a snapshot of the currently built branch *if* all the Builders
pass. This means the resulting snapshot has been built and has passed the
testsuite. I have other run-check Builders coming online soon that actually
run some of the tools and ntpd to see if they at least run. (More
longer-term tests are coming, too.) This would cover a lot of the 'bleeding
edge' users. I would encourage them to use these snapshots instead of raw
git, because it gives us a baseline for what should work. (A rough sketch of
the snapshot gating is in the P.S. below.)

> The second target is distros. Within a distro, there are probably several
> sub-targets. Many distros have 3 "supported" releases. I'll call them
> testing, stable, and old.
>
> The stable release is the one most users are expected to run. The old
> release is the previous stable. It stays around to give users plenty of
> time to upgrade to the new stable. The testing area is for testing new
> releases from upstream and whatever local changes they make.
>
> Many distros have releases every 6 months to a year.
> Ubuntu LTS supports selected releases for 5 years.
>   https://wiki.ubuntu.com/LTS
> RHEL goes out to 10 years.
>   https://access.redhat.com/support/policy/updates/errata/
>
> In an ideal world, we would support all the releases that our users are
> using. In that context, support means security fixes and major bug fixes
> but not feature updates. (The idea with not adding features is to reduce
> the risk of breaking something.) If we do things right, after we release a
> security fix, our users can just grab the fixed version, test it, and
> release.
> If distros are helping us test our code by running our 2-week releases in
> their testing area, we need to coordinate things when they make a release.
> We need to do a release when they start their pre-release testing and they
> need to use that release and stop grabbing our 2-week releases.

This can get complicated really fast. I would encourage distros to do
rolling testing, but we cannot pin our releases to their schedule in any
way. As long as we have our test results online and they can pinpoint the
'best' version to run, they can go back a release or two to find one they
are comfortable with. This also encourages us to release more frequently. I
don't believe we have any reason to have 10 releases in a month, but we
should always have at least two. This avoids stacking up changes and lets
things get tested in the test system.

> With some coordination, we could reduce the total workload by getting
> several distros to use the same release(s). If that happens, it makes
> sense for us to support the old releases since the work of fixing a
> security bug only needs to be done once.

We could also solve this by having a 'recommended' version on our website
and pointing to the test results. It would be good to keep distros at least
30 days behind our development cycle. I expect the full run of run-time
testing to take at least one week per built release. I have a bunch of RPIs
here that I will run ntpd on to make sure it keeps time properly (a sketch
of that check is also in the P.S.); once that test passes, we can call that
a 'stable' version of a release and recommend it as the next version to try.

I prefer more general approaches that will suit everyone: distros,
individual users, and large users.

Amar.
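
P.S. For anyone curious, here is roughly what the snapshot gating could look
like in a BuildBot master.cfg. This is only a sketch, not our actual
configuration: it assumes the newer buildbot "plugins" API, and the builder,
worker, and repository names are placeholders.

# master.cfg fragment -- a sketch only, not the real BuildBot configuration.
# Assumes the buildbot "plugins" API; builder/worker names and the
# repository URL are placeholders.
from buildbot.plugins import schedulers, steps, util

c = BuildmasterConfig = {}        # workers, www, db, etc. omitted from the sketch

REPO = 'https://gitlab.com/NTPsec/ntpsec.git'   # placeholder URL

# Ordinary build-and-test factory: checkout, build, run the testsuite.
build = util.BuildFactory()
build.addStep(steps.Git(repourl=REPO, mode='incremental'))
build.addStep(steps.ShellCommand(
    name='build and check',
    command=['./waf', 'configure', 'build', 'check']))  # or however the testsuite is run

# Snapshot factory: just tars up the tree it was handed.
snapshot = util.BuildFactory()
snapshot.addStep(steps.Git(repourl=REPO, mode='full', method='fresh'))
snapshot.addStep(steps.ShellCommand(
    name='make snapshot',
    command=['sh', '-c',
             'git archive --format=tar.gz '
             '-o /srv/snapshots/ntpsec-$(git rev-parse --short HEAD).tar.gz HEAD']))

# Fires the build/test builders on every push to master.
on_commit = schedulers.SingleBranchScheduler(
    name='on-commit',
    change_filter=util.ChangeFilter(branch='master'),
    builderNames=['build-linux', 'build-freebsd'])

# The Dependent scheduler fires only after ALL builds from the upstream
# scheduler succeed -- that is the "snapshot only if everything passes" gate.
snapshot_when_green = schedulers.Dependent(
    name='snapshot-when-green',
    upstream=on_commit,
    builderNames=['make-snapshot'])

c['schedulers'] = [on_commit, snapshot_when_green]
c['builders'] = [
    util.BuilderConfig(name='build-linux',
                       workernames=['linux-worker'], factory=build),
    util.BuilderConfig(name='build-freebsd',
                       workernames=['freebsd-worker'], factory=build),
    util.BuilderConfig(name='make-snapshot',
                       workernames=['linux-worker'], factory=snapshot),
]

And here is the sort of "does it keep time" check I have in mind for the RPi
boxes. Again, just a sketch: the offset threshold, interval, and sample count
are made up for illustration, and it assumes ntpq's readvar output reports an
offset= system variable in milliseconds.

#!/usr/bin/env python
# Sketch of a run-time "keeps time" check: sample the system offset via ntpq
# and fail if it ever drifts past a bound. Numbers below are illustrative.
import re
import subprocess
import sys
import time

MAX_OFFSET_MS = 50.0   # arbitrary pass/fail bound for the sketch
CHECK_INTERVAL = 300   # seconds between samples
SAMPLES = 12           # one hour of samples

def current_offset_ms():
    # 'rv 0 offset' asks ntpq for the system offset variable (milliseconds).
    out = subprocess.check_output(['ntpq', '-c', 'rv 0 offset'],
                                  universal_newlines=True)
    m = re.search(r'offset=(-?[\d.]+)', out)
    if m is None:
        raise RuntimeError('could not parse ntpq output: %r' % out)
    return float(m.group(1))

def main():
    for _ in range(SAMPLES):
        off = current_offset_ms()
        print('offset %.3f ms' % off)
        if abs(off) > MAX_OFFSET_MS:
            sys.exit('FAIL: offset %.3f ms exceeds %.1f ms' % (off, MAX_OFFSET_MS))
        time.sleep(CHECK_INTERVAL)
    print('PASS: ntpd held offset within %.1f ms' % MAX_OFFSET_MS)

if __name__ == '__main__':
    main()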