On Sun, May 20, 2018 at 12:04:13AM +1000, Andrew Greig wrote:
> So the time has come when I have backed up all my data, cleaned out the
> /home directory, and in the morning I should expect that all of my data in
> Dropbox has finished synching. I have downloaded and tested the Ubuntu 18.04
> LTS and burned it to DVD.
>
> In the morning I will install the 2 new 2TB HDDs, and load the DVD to
> launch myself into unfamiliar territory, so when I get to the partition
> stage of the process I will have 1 x 1TB HDD for the system and /home and
> the 2 x 2TB drives for the RAID.

Is there any reason why you want your OS on a single separate drive with no
RAID?

If I were you, I'd either get rid of the 1TB drive (or use it as extra storage
space for unimportant files) or replace it with a third 2TB drive for a
three-way mirror - or perhaps RAID-5 (mdadm) or RAID-Z1 (zfs) if storage
capacity is more important than speed.

Both RAID-5 and RAID-Z1 will give you n-1 storage capacity, where n = number
of drives (so, for a 3-drive array made up of 2TB drives, you'd have the
capacity of 2 drives, for a total of 4 TB usable space).
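
For example, something like this (a rough sketch - the device names and the
pool name "tank" are placeholders for whatever your system ends up with):

  # mdadm RAID-5 across one big partition on each of the three drives:
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # or the ZFS equivalent, RAID-Z1, using whole disks:
  zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd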

Alternatively, get a second 1TB drive, configure the two 1 TB drives as
the boot drive as described below, and use the two 2TB drives as bulk data
storage.  One big partition for RAID-1, or for ZFS tell it to use the entire
disks (ZFS will partition them correctly for its needs).
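
e.g. something like this (again, device names are examples only):

  # mdadm RAID-1 over one big partition on each 2TB drive:
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

  # or hand ZFS the whole disks and let it do its own partitioning:
  zpool create tank mirror /dev/sdc /dev/sdd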


One thing I very strongly recommend is that you get some practice with mdadm
or LVM or ZFS before you do anything to your system.  If you have KVM or
Virtualbox installed, this is easy.  If not, install & configure libvirt + KVM
and it will be easy.  BTW, virt-manager is a nice GUI front-end for KVM.

Just create a VM with multiple virtual disks (5 or 10 GB each is plenty), and
practice installing Ubuntu onto them in various ways until you know how it
works and - most importantly - know what to expect when you install on the
real hardware.  Wipe the VMs and keep practicing until it's routine.  Try
different variations on the setup so you understand what options are available
to you and how they might affect the final system.
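
If you go the libvirt + KVM route, creating a throwaway practice VM with
several small disks is a one-liner (the ISO path and the sizes here are just
examples):

  virt-install --name raidtest --memory 2048 --vcpus 2 \
      --disk size=10 --disk size=10 --disk size=10 \
      --cdrom ~/isos/ubuntu-18.04-desktop-amd64.iso --os-variant ubuntu18.04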


> What should I do about partitioning? Should I use hard partitions say, 20Gb
> for the /root, should I use a /boot partition or just use the MBR?  Is mdadm
> a part of this process or does it get involved later ? ZFS for the pair of
> drives?

A /boot filesystem isn't really necessary these days, but I like to have one.
It gives me a standard, common filesystem type (ext4) to put ISOs on (e.g. a
rescue disk or gparted or clonezilla), which can then be booted directly from
grub with memdisk.
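
A grub menuentry for that looks roughly like this (e.g. in
/etc/grub.d/40_custom; the paths are examples and are relative to the /boot
filesystem, and the memdisk binary comes with the syslinux package):

  menuentry "Clonezilla (ISO via memdisk)" {
      linux16  /memdisk iso raw
      initrd16 /clonezilla-live.iso
  }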



My usual partitioning setup for GPT partition tables is something like this:

NOTE: unless your motherboard is really ancient and doesn't support it, use
GPT partition tables rather than MSDOS tables.  With GPT, you don't have to
mess around with primary vs extended / logical partitions because GPT isn't
limited to just 4 primary partitions - it can have as many partitions as you
want without any fuss.  Any motherboard with a UEFI BIOS will support GPT.  If
your motherboard is less than about 10 years old, it will be UEFI.



 Number  Start (sector)  End (sector)  Size        Code  Name
    1              34          2047    1007.0 KiB  EF02  BIOS boot partition
    2            2048       1050623    512.0 MiB   EF00  EFI System Partition
    3         1050624       5244927    2.0 GiB     FD00  Linux filesystem /boot RAID-1
    4         5244928      13633535    4.0 GiB     8200  Linux swap
    5    Remainder of disk (or most of it - see below for L2ARC & ZIL)
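
If you'd rather script that than click through gdisk, sgdisk can create the
whole layout non-interactively, e.g. (/dev/sdX is a placeholder):

  sgdisk -a1 -n1:34:2047 -t1:EF02 /dev/sdX  # BIOS boot partition in the MBR gap
  sgdisk -n2:0:+512M -t2:EF00 /dev/sdX      # EFI System Partition
  sgdisk -n3:0:+2G   -t3:FD00 /dev/sdX      # /boot, mdadm RAID-1 member
  sgdisk -n4:0:+4G   -t4:8200 /dev/sdX      # swap
  sgdisk -n5:0:0     -t5:BF00 /dev/sdX      # rest of disk (BF00 for ZFS, FD00 for mdadm)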

Partition 1 (part type EF02) is for grub to install its 2nd-stage boot loader
into (the 512 bytes available in the MBR sector isn't enough, so grub puts its
first stage in the MBR and its second stage in the rest of the first track).
This is in sectors that you wouldn't ordinarily use (34-2047, sometimes called
the "MBR gap") because you want your partitions aligned for both 512 byte & 4K
sectors (so your first partition will start at 2048, evenly divisible by both
512 and 4096).  See https://en.wikipedia.org/wiki/BIOS_boot_partition


Partition 2 (part type EF00) is for the EFI System Partition (ESP).  This
will usually be mounted as /EFI or /boot/EFI.  I'd recommend having this even
if you're not currently using UEFI to boot (i.e. you're booting via legacy
BIOS) because you may decide to switch later, or you may want to swap your
motherboard to something that only supports UEFI - legacy BIOS will eventually
vanish, probably in the next few years.  It doesn't need to be large, 512MB to
1GB is plenty.  See https://en.wikipedia.org/wiki/EFI_system_partition


Setting up the first two partitions like this results in a system that can be
booted with either legacy BIOS or UEFI.
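
The setup for that dual capability is roughly (assuming the ESP is partition
2 and gets mounted on /boot/efi):

  mkfs.vfat -F32 /dev/sdX2                 # format the ESP
  grub-install --target=i386-pc /dev/sdX   # legacy BIOS - uses partition 1
  grub-install --target=x86_64-efi --efi-directory=/boot/efi  # UEFI - uses the ESP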


Partition 3 (part type FD00) is for mdadm RAID-1 /boot, formatted as ext4.
2GB is enough for all of grub's loadable modules and several kernel images and
initrds.  It's also enough for a few bootable ISO images (typically 100-600MB
each in size).
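
Creating it looks something like this (two-drive example; --metadata=1.0 puts
the RAID superblock at the end of the partition, which keeps life simple for
boot loaders):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
      /dev/sda3 /dev/sdb3
  mkfs.ext4 /dev/md0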

  BTW, I find the Clonezilla ISO to make an excellent rescue system, even if
  you're not using it to make clones/backups of machines, and it's easy to
  make a custom version with the latest kernel and ZFS kernel modules & zfs
  utils.  It has all the file system and partitioning tools you could ever
  need to repair/rescue a system with dead or dying drives.  The gparted ISO
  is another nice alternative, especially if you prefer a GUI interface for
  partitioning.

I like to have /boot and /EFI much larger than actually needed. Disk space is
cheap and huge these days, so a few hundred MB or a couple of GB are trivial
compared to the total disk size of 1 or 2 or more TB.  OTOH, discovering that
you don't have enough space in /boot for another kernel image or a bootable
ISO can be a major PITA... so waste a trivial amount of space to make things
easier for yourself in future.



Partition 4 (part type 8200) is for swap. Make it as large as you need.  If
you plan to use suspend/resume, it has to be at least as large as your total
RAM size.  Ideally, you should be aiming to have enough RAM that the system
rarely needs to swap - if you find you're swapping a lot, upgrade the RAM, or
start closing some tabs/windows in chromium.

Note that this partition layout will be cloned to the other disks in the array
so you'll have multiples of whatever size you choose here.
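
Setting swap up is just mkswap/swapon on each disk's swap partition - give
them equal priority and the kernel will stripe across them:

  mkswap /dev/sda4 && swapon -p 1 /dev/sda4
  mkswap /dev/sdb4 && swapon -p 1 /dev/sdb4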



Partition 5 is for your OS + data.  You'll use this with either mdadm or LVM
or ZFS.  Or btrfs if you're a gambler.

The partition type should be FD00 for RAID or BF00 for ZFS partitions.

If you're using mdadm + ext4 or XFS, you may want separate partitions for /
and /home (and maybe /var or others).  Most people these days just have one
big partition for everything.

If you're using LVM (with or without mdadm), you can create logical volumes as
needed for /home, /var, etc.
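
e.g. a minimal sketch (the VG name and LV sizes are examples only):

  pvcreate /dev/md1        # or the raw partition if you're not using mdadm
  vgcreate vg0 /dev/md1
  lvcreate -L 30G  -n root vg0
  lvcreate -L 200G -n home vg0
  lvcreate -L 20G  -n var  vg0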

The problem with creating separate partitions or logical volumes is that it's
easy to guess wrong about how much space you'll need on / or /home or /var or
whatever - so, e.g., you can run out of space on / while still having hundreds
of GB free on /home.

IMO, this defeats the main purpose of having separate partitions (avoiding the
risk of running out of space on / or /var and crashing the system), so it
isn't worth doing.

For ZFS, create datasets as required for /home, /var, /var/log, etc.  Unlike
partitioning, all these datasets share the same pool of available space so
(unless you set a reservation or quota) there's no risk of running out of
space on one dataset while having plenty of free space elsewhere.
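
e.g. (the pool name "tank" and the quota are examples):

  zfs create tank/home
  zfs create tank/var
  zfs create -o quota=20G tank/var/log   # optional: cap how much space logs can eat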



I usually have two more partitions at the end of the disk if I'm building a
system with SSDs for the OS and mechanical drives using ZFS for bulk data.
These two aren't needed except in that particular case.  If using them:

  These will be partitions 6 (L2ARC) & 7 (SLOG) unless you have extra /home or
  whatever partitions.

  Partition 6 is for the L2ARC (Layer 2 ARC): a read cache for the ZFS
  bulk data drives - even a moderate amount like 16GB helps.  Or 32GB is
  another nice round number.  Note that this is NOT persistent (i.e. it will
  get wiped on every reboot) so if you reboot often, you won't see much
  benefit from a large L2ARC because it will never get filled up.

  Partition 7 is for a SLOG device for the ZIL (ZFS Intent Log).  This is
  useful when your system does synchronous writes (typically databases or
  mail or other things where the software writing the files wants the kernel
  to guarantee that the data is actually flushed to disk - not sitting in a
  buffer - before returning "OK, done!".  BTW, rsyslog etc. are often
  configured to write some log files synchronously).  This does not need to
  be large: 1 or 2GB is more than enough, and you'll probably never use more
  than a small fraction of that.

  BTW, a common misconception about ZFS and ZIL and SLOG is that you need
  a SLOG in order to have a ZIL.  This is incorrect - ZFS **always** has
  a ZIL; without a SLOG it just lives on the pool's main disks.  The SLOG is
  a Separate LOG device that moves the ZIL onto a dedicated (ideally faster)
  device, so synchronous writes don't have to wait on the data disks.
  Another misconception is that the ZIL speeds up all random writes - nope,
  it speeds up synchronous writes (but it's common for databases to use
  synchronous random writes, so it does speed them up :)

  Think of the ZIL as being kind of roughly similar to a journal for ext4 or
  XFS, and the SLOG as being roughly similar to a separate journalling device
  for those file systems.

  Anyway, when setting this up with ZFS, it should be created as a mirrored
  SLOG device so that if the SLOG drive dies and the system crashes before
  everything in it has been flushed to the zpool's actual drives, there's
  a second copy of it.  In other words, there's a small chance of disk
  corruption if the SLOG isn't mirrored.  This risk is no worse than you'd
  have if you didn't have a SLOG partition (i.e. ZIL on the pool's own
  disks), but mirroring it is a good idea.
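
Adding both to an existing pool looks like this (pool name and devices are
examples, partition numbers as per the layout above):

  zpool add tank cache /dev/sda6 /dev/sdb6       # L2ARC - cache devs are striped, never mirrored
  zpool add tank log mirror /dev/sda7 /dev/sdb7  # mirrored SLOG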


This partitioning layout is then copied/cloned to all drives in the array.
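
sgdisk makes the cloning easy (copy sda's partition table onto sdb, then
randomise the GUIDs so the two disks don't clash):

  sgdisk -R /dev/sdb /dev/sda   # replicate /dev/sda's table onto /dev/sdb
  sgdisk -G /dev/sdb            # give the copy new random GUIDs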

To minimise the risk of a drive dying and making your system unbootable:

 * partition 3 (/boot) should be set up as a RAID-1 mirror using mdadm, and
   formatted with ext4.

 * grub should be installed onto all drives in the array.  This won't (can't,
   AFAIK) be RAID - grub has to be installed individually into the MBR &
   partition 1 of **each** drive with 'grub-install /dev/sdXXXX', as in the
   example below.  If a drive dies, just tell your BIOS to boot from one of
   the remaining ones.  Or set it up from the start to try each drive in turn.
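
e.g. (device list is an example):

  for d in /dev/sda /dev/sdb; do grub-install "$d"; done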

The swap partition does not need to be RAID.


> I am hoping that this will go very smoothly and quickly, leaving me the rest
> of the day for populating the RAID disks with data.

BTW, if the data you uploaded to Dropbox is still on your current disk, you
can set up the mdadm RAID or ZFS in degraded mode, install the OS onto the new
drives, rsync your data from the old drive (much faster than downloading from
Dropbox), and then (when all your data is restored to your RAID/ZFS system)
re-partition the old drive and add it to the RAID / zpool.

This is a very useful time-saver when converting an existing system to RAID or
ZFS.
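
A sketch of the mdadm version (the "missing" keyword creates the array in
degraded mode; device names are examples):

  # create a 3-device RAID-5 with one slot deliberately left empty:
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing

  # later, once the old drive has been emptied and re-partitioned:
  mdadm --add /dev/md0 /dev/sda1

(The ZFS equivalent for a mirror is "zpool attach tank /dev/sdb /dev/sda".)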

> One more thing, I have found OpenSuse a really poor distro for video, in my

IMO it's a really poor distro for anything.  I'll happily use most other
distros (with a strong preference for debian/ubuntu/etc) but really loathe
SuSE.  To me, it's a "nuke on sight and replace with something decent" distro.

> early days (Mandrake and Mandriva) really spoiled me because the Power
> Pack release ensured that everything was working. But Linux has really
> come a long way in AV and for Suse to have issues with Codecs is just a
> little tired. Do I have to jump through hoops in Ubuntu to watch an MP4 or a
> Matroska video?

Playing videos and music should Just Work on any modern distro.  It hasn't
been a problem for years.

craig

--
craig sanders <[email protected]>