On Mon, Sep 12, 2011 at 2:37 PM, Canek Peláez Valdés <can...@gmail.com> wrote:
> On Mon, Sep 12, 2011 at 1:39 PM, Michael Mol <mike...@gmail.com> wrote:
>> The first step in a clean solution, IMO, is to revert that change. The
>> second step is to fix the 'silent failure' problem for packages which
>> depend on /usr before /usr is available.
>
> Not fixable, in reality. The flexibility of udev comes partly from the
> fact that userspace can (and actually does) run arbitrary scripts and
> binaries from udev rules. You can "fix" the ones that require binaries
> in /usr *NOW*, but not forever, unless you forbid the use of arbitrary
> binaries from udev rules.
>
> And then you lose the flexibility.
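
To make the mechanism concrete, this is the kind of rule in question (a hypothetical example; the path and tool name are illustrative, not taken from any real rules file):

```
# /etc/udev/rules.d/99-example.rules -- hypothetical illustration
# RUN+= can invoke any binary, including one under /usr:
SUBSYSTEM=="sound", ACTION=="add", RUN+="/usr/bin/example-restore-tool"
```

If /usr is not mounted when that event fires, the RUN command fails, typically silently.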

Here's the chief problem with that argument...it's not just limited to
/usr. If you're going to say that a script can do whatever it wants,
and udev will make declarative statements about supported and
unsupported filesystem layouts to allow that to work, then you *must*
require that the entire filesystem be on the same partition as /, or
that everyone use an initramfs.

Because you can't know that a script won't depend on something under
/var. Or /opt. Or /home.  And if /home is excluded from this
must-be-available set, what makes it more special than /usr? If it's
OK to say "no script must access files under /home", then why isn't it
OK to say "no script must access files under /usr"?

You're imposing a rule either way. If a script author must be aware of
rules, why can't one of them be: "Know when a file may or may not be
available, and do not access files which are not yet available. If you
can't know, document the requirement so that a package maintainer or
sysadmin can ensure your constraints are met."
That seems pretty simple to me. It's the *job* of package maintainers
to understand how software interacts with a distro's infrastructure.
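
If the rule is "don't touch what isn't mounted yet," a udev-run script can even check for its dependency and degrade gracefully. A minimal sketch (the tool name and path are hypothetical, for illustration only):

```shell
#!/bin/sh
# Hypothetical udev-run helper: if the dependency under /usr is not yet
# available, log and exit successfully instead of failing silently.
TOOL=/usr/bin/example-tool   # hypothetical path, for illustration only
if [ ! -x "$TOOL" ]; then
    logger -t example-rule "$TOOL not available yet; skipping"
    exit 0
fi
exec "$TOOL" "$@"
```

Whether that burden belongs on rule authors is, of course, exactly what's under debate.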

>
> The explanation from
> http://www.freedesktop.org/wiki/Software/systemd/separate-usr-is-broken
> seems more than reasonable to me: /lib and /bin and /sbin were the way
> old-Unix solved the problem of needing some binaries before /usr was
> mounted.

I read that page. I understand the problem. I'm not convinced.

>
> Linux has a much better, more flexible, and automated (dracut) way of
> doing this, by using an initramfs. With an initramfs you can have the
> smallest / in the world, and mount everything else afterwards. The
> initramfs memory is freed after the pivot_root happens, so who cares
> how big it is?

People you (and the Fedora dev) don't care about, clearly. You ask
"who cares" rhetorically, when people on this very list have said,
"hey, I care!" There's a disconnect there I don't quite grasp.

A change like this is going to make life more difficult for embedded
hardware manufacturers, too. They'll have to have more memory
available for a larger initrd if they want to do something as simple
as a print server or a plug-and-play NAS node. And then there was that
mention of MIPS, earlier, that highlighted architectural physical
constraints that this would break. That's not exactly a trivial
problem.

> And yeah, that's not how classical Unix does things. Who cares? Linux
> does it so much better.

I really don't see where you're getting "better". You're saying, "Hey!
This is more flexible than any other solution." What you're not saying
(or noticing) is that you're kicking the can farther down the road;
the same fundamental complexities will pop up later unless you use the
initrd. With the initrd, you're turning every disk-installed system
into something equivalent to a live CD, with the capability of updating
the live CD as you go along. If that's actually the desire, there
would be far better options than initrd.

-- 
:wq
