On Tue, Feb 04, 2020 at 07:38:34AM +1030, Brett Lymn wrote:

> and mounted. So, so what if you get / first and then have to wait for
> the rest of the fsck's to happen vs a fsck of a single large file
> system? At the end of the day it will take about the same amount of
> time to get the machine to a usable state.
That. To me, the argument that we should keep /usr split from /
"because it takes too long to fsck huge filesystems" makes sense only
if one has put totally inappropriate stuff in /usr.

I can see entirely reasonable arguments for splitting out /home and
/var and /tmp in the default partitioning. But spraying the system's
executables and libraries out across two filesystems, so half of them
are in / and half of them are in /usr, "to make fsck faster"? To me it
just stinks of "that's how it was in my Golden Youth and I want it
that way FOREVER!".

Moving part of the system to /usr was a *necessary evil* when it was
done. There is no real rhyme or reason to what's in /bin vs /usr/bin,
even less to /sbin vs /usr/sbin, except "huh, I need _this_ and I'm
willing to make / a little bigger to hold it".

But why shouldn't / just be big enough to hold all of /usr? Because of
header files and static libraries and other toolchain components? I'm
going to submit that if you have a machine suitable for development,
then you have a machine where the time to fsck a few directories of
libraries and header files *in the rare instances when you're booting
single-user with a r/w /* will not in fact be large.

Remember, we already split out the "heavy hitters" (/home, /var) to
paths (and thus filesystems, if you like) of their own. I really can't
see why we shouldn't fix the mess of every directory in / being
mirrored over into /usr, which was acknowledged as an unfortunate
compromise when it was made.
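For concreteness, the layout being argued for would look something
like this in /etc/fstab (device names and sizes here are purely
hypothetical, just to illustrate the shape of it): a single / big
enough to hold everything including /usr, with only the heavy hitters
on filesystems of their own:

  # Hypothetical example layout: one big / (including /usr),
  # with only /home, /var, and /tmp split out.
  /dev/wd0a   /      ffs    rw              1 1
  /dev/wd0e   /var   ffs    rw              1 2
  /dev/wd0f   /home  ffs    rw              1 2
  tmpfs       /tmp   tmpfs  rw              0 0

With that shape, a single-user boot still only has to fsck the one /
filesystem before the system is usable, and the big, churny
filesystems (/var, /home) get checked in parallel afterwards.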