Michael Palimaka posted on Mon, 23 Nov 2015 02:54:58 +1100 as excerpted:

> On 22/11/15 05:51, Andrew Savchenko wrote:
>> Hi,
>> 
>> On Wed, 18 Nov 2015 07:01:21 -0500 Rich Freeman wrote:
>>> On Wed, Nov 18, 2015 at 6:12 AM, Alexander Berntsen
>>> <berna...@gentoo.org> wrote:
>>>> When I do QA in projects I'm involved with (at least outside of
>>>> Gentoo), we don't do it live on end-user systems. I'll leave the
>>>> details as an exercise for the Gentoo developer.
>>>>
>>>>
>>> People who run ~arch are not really end-users - they're contributors
>>> who have volunteered to test packages.
>> 
>> I strongly disagree with you. We do not use stable even on
>> enterprise-grade production systems and HPC setups. Stable is just
>> too freaking old to be usable for our purposes, not to mention that
>> it lacks many packages entirely. We tried stable several times; it
>> just freaks out admins (including myself) too badly, or results in a
>> horrible mess of stable and unstable which is less stable than pure
>> unstable setups. I do not use stable on workstations or personal
>> setups either.
>> 
>> Nevertheless, I consider stable useful, as the stabilization process
>> gives packages more testing (and some fixes are forward-ported to
>> unstable versions). Of course I understand that there are people
>> using it, and I try to support stable packages as well, but these
>> versions are mostly a burden and I can't really understand stable
>> users.
> 
> Is the state of stable really that bad? I see this sentiment a lot.
> 
> I run mostly-stable systems and rarely have an issue with old/missing
> packages (but I'm involved in the maintenance of many of the packages I
> use so I try to keep on top of stable requests).
> 
> Are there particular areas that lag especially badly, or is it just a
> general thing?

My own biggest concern about Gentoo stable would be the timeliness of 
security updates, particularly if you're waiting for GLSAs to apply 
them, as GLSAs normally don't come out until all affected archs have 
stabilized the fix, and that's often *much* longer than I'd be 
comfortable running world-known-vulnerable versions.
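
For those who do want to track GLSAs directly, gentoolkit's glsa-check 
can list and test them against the installed system; a minimal sketch, 
assuming gentoolkit is installed:

  # list GLSAs that affect installed packages
  glsa-check --list affected

  # test all known GLSAs against the installed system
  glsa-check --test all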

If you're not on a lagging arch, sync and update every couple of weeks 
to once a month at an absolute minimum, and consistently use --deep on 
updates so you always get the available stable updates, then stable 
shouldn't be /that/ bad, security-wise: you won't be waiting for the 
GLSAs that only appear after the lagging archs have stabilized, but 
will instead pick up the packages, including --deep dependencies, as 
they stabilize (a sketch of such a routine follows).
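
A minimal version of that routine, using standard portage commands 
(the exact option set is a matter of taste):

  # refresh the tree, then update everything including deep deps
  emerge --sync
  emerge --ask --update --deep --newuse @world

  # optionally, clean out packages nothing depends on anymore
  emerge --ask --depclean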

Tho obviously ~arch with --deep updates is still likely to get you 
those security updates faster... but hopefully stable --deep updates 
will be fast /enough/.
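
For reference, running ~arch is just a keyword setting: system-wide in 
make.conf, or per-package in package.accept_keywords (shown here for 
amd64; substitute your own arch, and note the atom below is just an 
example):

  # /etc/portage/make.conf -- whole system on ~arch
  ACCEPT_KEYWORDS="~amd64"

  # /etc/portage/package.accept_keywords -- one package only
  app-editors/vim ~amd64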


My #2 concern with stable wouldn't be so much the median or even mean 
age of packages, but the effectively age-unlimited "long tail".  I'm 
not sure what the worst cases are, age-wise, but I know of a number of 
bad, arguably "severely bad", examples among system-critical packages.

How long did baselayout-2 and openrc take to stabilize?  IIRC it was at 
least two years, long after they were effectively stable in ~arch, with 
the holdup being primarily the lack of documentation necessary for 
stable users, both for initial installation (handbook updates) and for 
upgraders (upgrade documentation).

Similarly, it took stable portage a /very/ long time to get proper sets 
support, primarily due to political issues, I believe.
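
For anyone who hasn't used them: once portage has sets support, a 
custom set is just a file of atoms under /etc/portage/sets/, usable 
anywhere @world or @system would be.  A sketch, where the set name and 
atoms are made-up examples:

  # /etc/portage/sets/mymedia -- one atom per line
  media-video/mpv
  media-sound/audacious

  # then the set works anywhere @world would:
  emerge --ask --update @mymedia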

And of course both glibc and gcc, particularly gcc, tend to take ages 
to make it even to unmasked ~arch, let alone stable, because for better 
or worse, the policy is basically that they can't be unmasked until all 
packages have a patched version that works with them at the target 
unmask level (~arch or stable).  So gcc in particular takes /ages/ to 
reach even ~arch: while most packages that normal users run will at 
least have bugs filed with patches available, it takes months for those 
patches to be worked into actual in-tree ~arch packages, so that gcc 
can build them all and be unmasked to the same ~arch.  Back when amd64 
was newer and gcc updates generally brought much more noticeable 
performance boosts, I'd routinely unmask gcc myself and go fetch those 
patches from bugzilla when necessary, so I _know_.  I don't do it so 
much these days, tho, both because I have less time available and 
because, amd64 now being a mature gcc arch, updates no longer bring the 
marked performance increases they used to.  It's less of a big deal 
now, so I often wait at least until there are noises on -dev about 
unmasking gcc to ~arch before unmasking it and doing the rebuilds here. 
(The mechanics of that unmask-and-patch routine are sketched below.)
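
The mechanics are ordinary portage configuration plus user patches; a 
minimal sketch, where the gcc slot and patch path are hypothetical 
examples:

  # /etc/portage/package.unmask -- lift the hard mask
  sys-devel/gcc:5

  # /etc/portage/package.accept_keywords -- accept the ~arch keyword
  sys-devel/gcc ~amd64

  # patches fetched from bugzilla go under /etc/portage/patches/, e.g.
  #   /etc/portage/patches/app-foo/bar-1.0/fix-gcc5-build.patch
  # (applied automatically by ebuilds that support user patches)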

Of course, it's that same process all over again before ~arch gcc and 
glibc are stabilized, so that puts them _seriously_ behind for stable, 
even more so than for ~arch, which is bad enough.  But I know why the 
policy is what it is and I don't disagree with it, even if it /does/ 
mean that Gentoo, which at the user level arguably depends far more on 
gcc than normal binary distros do, actually ends up way behind them in 
terms of deployment, even to ~arch.


Those are my own two big reasons for preferring ~arch.  Security is the 
big one, but provided users follow appropriate update procedures, it's 
at least manageable on stable.  The unlimited long tail on 
stabilization age is in some ways even more worrying, tho, because 
while security is at least a limited and managed problem, as a user you 
really /don't/ have any limit on how far back into upstream's ancient 
history the stable versions of the packages you're running may reach.  
And unless you actually check all your installed packages against their 
upstreams, or at least compare them against the Gentoo ~arch versions, 
you really /don't/ know which stable packages are furthest behind, and 
thus which of the packages you're running are effectively out of 
upstream's support range, and by how far.  (A rough way to check is 
sketched below.)
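
One rough way to eyeball that gap is to compare each installed version 
against what ~arch would offer.  A sketch, assuming portage-utils 
(qlist) is installed and that portageq honors ACCEPT_KEYWORDS from the 
environment, which I believe it does (adjust ~amd64 to your arch):

  #!/bin/sh
  # for each installed package, show it if ~arch has something newer
  for atom in $(qlist -I); do
      installed=$(qlist -Iv "$atom" | tail -n 1)
      testing=$(ACCEPT_KEYWORDS="~amd64" portageq best_visible / "$atom")
      [ -n "$testing" ] && [ "$installed" != "$testing" ] && \
          printf '%s -> %s\n' "$installed" "$testing"
  done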

At least with an enterprise distro like Red Hat, yes, the packages are 
going to be out of date, but you know you still have /some/ sort of 
decent support available, because that's what the enterprise distros 
are in the business of actually /providing/; it's their primary feature 
and reason to exist.  On Gentoo, not so much.  Not because maintainers 
won't do their honest best to support you on stable (they generally 
do), but because that's simply not Gentoo's primary product or reason 
for existence.  On Gentoo, the primary product is generally considered 
to be end-user customizability; otherwise, why not just be a binary 
distro and avoid all the hassle of end-user building in the first 
place?

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

