On Wednesday, 25 November 2020 13:31:27 GMT Rich Freeman wrote:
> On Wed, Nov 25, 2020 at 5:54 AM Peter Humphrey <pe...@prh.myzen.co.uk> wrote:
> > > > Can you imagine an fstab with 22 partitions specified with UUIDs?
> > > 
> > > Can you imagine an fstab with 22 partitions?
> > 
> > The NVMe drive, the main one, has 18;
> 
> So, if all the partitions are on one drive and that is the only drive
> you have, there aren't many issues with using raw kernel device names
> to identify them.  It isn't like a partition is just going to
> disappear.
> 
> Once you have multiple disks, then UUIDs or labels become more
> important, especially with a large number.  If you had a dozen disks
> with dozens of partitions and tried to use kernel device names, then
> anytime a device failed or was enumerated differently you'd have stuff
> mounted all over the place.

Oh yes, of course, I can see that. I'm only saying that a simple system 
needs only a simple setup.
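(For anyone following along, switching a one-drive fstab over to UUIDs is 
mechanical: blkid prints the filesystem UUID for each partition, and the fstab 
entry changes only in its first field. A sketch, with the device name and 
UUIDs invented for illustration:

```shell
# 'blkid /dev/nvme0n1p3' prints something like:
#   /dev/nvme0n1p3: UUID="0d2a3f1c-1111-2222-3333-444455556666" TYPE="ext4"

# fstab entry by raw kernel device name -- fine while this is the only drive:
/dev/nvme0n1p3                              /    ext4   defaults,noatime   0 1

# The same entry pinned by UUID -- survives any re-enumeration:
UUID=0d2a3f1c-1111-2222-3333-444455556666   /    ext4   defaults,noatime   0 1
```

Either line mounts the same partition; the UUID form just doesn't care what 
the kernel decides to call the drive on a given boot.)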

> That said, something like lvm is a good solution in almost all cases
> (or something semi-equivalent like zfs/btrfs/etc which have similar
> functionality built-in).  If I had that many partitions I'd hate to
> deal with wanting to resize one, and with lvm that is pretty trivial.
> You don't need to use UUIDs with lvm - they're basically equivalent to
> labels.

My old system had two 1TB SSDs, and I used lvm on them. It added a lot of extra 
complication, so I didn't take that approach on this box. (I still have to 
have mdadm and friends installed, though; the package granularity isn't fine 
enough to split them out. USE in make.conf includes "-dmraid -device-mapper 
-lvm", but that's ineffective.)

> Now, one area I would use UUIDs is with mdadm if you're not putting
> lvm on top.  I've seen mdadm arrays get renumbered and that is a mess
> if you're directly mounting them without labels or UUIDs.

<Shudder.>
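(For the record, the usual defence against renumbered arrays is to pin them in 
mdadm.conf by array UUID rather than trusting the /dev/mdN number. A sketch, 
with made-up UUIDs; 'mdadm --detail --scan' will print the real ARRAY lines:

```shell
# /etc/mdadm.conf -- key each array on its UUID, not on /dev/mdN numbering.
# The UUIDs below are invented for illustration.
ARRAY /dev/md/root  metadata=1.2  UUID=6f1c0d2a:33334444:55556666:77778888

# fstab can then mount by filesystem UUID (or label) instead of /dev/md0:
# UUID=aaaabbbb-cccc-dddd-eeee-ffff00001111  /home  ext4  defaults  0 2
```

With that in place the array keeps its name even if the kernel assembles it 
in a different order.)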

-- 
Regards,
Peter.



