I don't think we need 100% consensus to proceed on anything, and if
I've learned anything from 20 years in this community, it's that
forcing that issue does the community a huge disservice as well as
turning off the code submitters. See my thread on the missed
opportunities in threads, or if you want I can paint the picture of
what caused SMP to lag half a decade behind Linux as well.
I would say that if someone submitted a patch for /dev/givemeroot,
sure, that would be righteously shot down. But forcing the whole,
entire, "right" solution the first time around is remarkably blocking
and unfair to the community and the submitters as well.
Why is this even happening in email? If folks want "the right
solution", why aren't they submitting patches or pull requests to the
pkg repo (or wherever this is stored)? It may seem counter-intuitive,
but that really is how this should work. Specifically: if you like
where an idea is going, don't block the code; submit improvements on
top of it. Stone-soup it, if you will.
-Alfred
On 4/19/16 9:28 AM, Nathan Whitehorn wrote:
Well, this discussion has gone pretty far off of the rails. I am of
course happy to make a patch that cuts this down to 10 packages, but
that's not something that should be committed without agreement --
which we obviously don't have. It would have been good to have had
meaningful discussion of this before.
There are basically three workable options:
1. Have fewer packages. This is easy to implement and preserves the
integrity of the base system (as well as unified versioning, so that a
system at some particular patch level will have the same global
state). I have not seen any meaningful downside suggested for this so
far except marginally higher load on update servers.
2. Have 755 packages. This makes it harder to version the system and
makes the user interface significantly worse (my opinion, but shared
by others). This is the easiest to implement since it is already
implemented.
3. Have ~10 meta packages that just depend on sets of the 755 packages
and hide the internal details. This gives the user experience of (1)
with the implementation of (2), and is marginally more complex than
either.
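To make (3) a bit more concrete: a meta-package is really just a pkg
manifest that installs no files of its own and pulls its members in as
dependencies. A rough sketch of what one could look like as a UCL
+MANIFEST (the package names, versions, and field values below are
purely illustrative, not the actual names used by the base packaging
work):

  name: "FreeBSD-runtime-set"
  version: "11.0.s20160419"
  origin: "base/runtime-set"
  comment: "Meta-package: the base runtime as one installable unit"
  desc: "Installs no files; exists only to depend on the runtime set."
  maintainer: "re@FreeBSD.org"
  www: "https://www.FreeBSD.org"
  arch: "freebsd:11:x86:64"
  prefix: "/"
  deps: {
      "FreeBSD-clibs":   { origin: "base/clibs",   version: "11.0.s20160419" },
      "FreeBSD-rc":      { origin: "base/rc",      version: "11.0.s20160419" },
      "FreeBSD-libexec": { origin: "base/libexec", version: "11.0.s20160419" },
  }

If every dependency is pinned to the same version string, that would
presumably also give back most of the unified versioning from (1).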
Other things (the overlapping packages idea, for instance) are way too
complex and will just lead to breakage. Can anyone provide an argument
against (1) or, alternatively, for (2)? (2) seems to add a lot of
complexity for no clear gain and I remain pretty confused about why it
was chosen.
-Nathan
On 04/18/16 20:17, Alfred Perlstein wrote:
Maybe what the "too many packages" folks need to do is write some
code to hide the fact that there are so many packages.
:)
I think the rule of two feet should be applied here.
On one hand we have people who have worked quite hard to bring us
something we can easily work with, and on the other hand some folks
who want something they consider even better. Personally I can't see
how making the system less granular is better, since making it MORE
granular is actually the harder work.
Can someone in the "too many packages" camp explain to me how having
too fine a granularity stops you from making macro packages that
contain other packages?
Honestly, I can't see how the granularity hurts at all: if someone
wanted the system to be less granular, all they would have to do is
make some meta-packages, as sketched below.
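For example (a rough sketch only: the names, paths, and manifest are
hypothetical, and this assumes nothing beyond the stock pkg(8)
tooling), a macro package is just a deps-only manifest turned into a
package and dropped into a repo:

  # Build a package that contains no files, only dependencies on the
  # fine-grained packages, from a hand-written +MANIFEST.
  pkg create -M ./FreeBSD-base-set/+MANIFEST -o /tmp/metarepo

  # Generate the repository catalogue so pkg can resolve it.
  pkg repo /tmp/metarepo

  # Point a repo config (e.g. /usr/local/etc/pkg/repos/meta.conf) at
  # file:///tmp/metarepo, and from then on one command pulls in the
  # whole set:
  pkg install FreeBSD-base-set

Nobody who wants the fine-grained packages loses anything; the macro
package just sits on top of them.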
-Alfred
On 4/18/16 7:23 PM, Lyndon Nerenberg wrote:
On 2016-04-18 7:01 PM, Roger Marquis wrote:
> Can you explain what would be accomplished by testing all or even a
> fraction of the possible permutations of base package combinations?
> We don't do that for ports.
The ports tree isn't a mandatory part of the system. And by
definition it could not be tested that way, since it offers so many
alternative implementations of specific functionality.
> Other operating systems don't do that for
> their base packages.
I'm pretty sure Solaris had some fairly hard-core regression tests
to ensure basic system functionality wouldn't be compromised by
'oddball' selections of packages offered up at install time.
> Honestly, some of us are wondering what exactly is
> behind some of these concerns regarding base packages.
The concern comes from all of us UNIX dinosaurs who predate the
fine-grained packaging era, back when things just worked, and who now
rip out our (little remaining) hair over unsolvable package dependency
loops on the Linux machines we are forced to administer in order to
pay rent. As a sysadmin, I derive a negative benefit from this
optimization.
I guess what I'm really asking is: where is the peer-reviewed research
that shows this actually improves things for the not-1% of FreeBSD
users?
--lyndon
P.S. Don't turn this into a pissing match. I really want to know how
this is of net benefit to everyone, but I don't want hyperbole. I
have looked through a lot of bibliographies and papers (USENIX, ACM,
and the like) for a justification for this, and I can't find one. It
would really help (me, at least) if someone could take a moment to
point me at demonstrable evidence of the benefits of this model.
_______________________________________________
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"