>Can you share a bit more details about how you have yours setup?

Sure!

Partitions:

root@eu1 ~  lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda             8:0    0   9.1T  0 disk
├─sda1          8:1    0   9.1T  0 part
└─sda9          8:9    0     8M  0 part
sdb             8:16   0   9.1T  0 disk
├─sdb1          8:17   0   9.1T  0 part
└─sdb9          8:25   0     8M  0 part
nvme1n1       259:0    0 953.9G  0 disk
├─nvme1n1p1   259:2    0     5G  0 part
├─nvme1n1p2   259:3    0   820G  0 part
└─nvme1n1p3   259:4    0 128.9G  0 part
  └─md1         9:1    0 128.9G  0 raid1
    └─md1swap 253:0    0 128.9G  0 crypt [SWAP]
nvme0n1       259:1    0 953.9G  0 disk
├─nvme0n1p1   259:5    0     5G  0 part
├─nvme0n1p2   259:6    0   820G  0 part
└─nvme0n1p3   259:7    0 128.9G  0 part
  └─md1         9:1    0 128.9G  0 raid1
    └─md1swap 253:0    0 128.9G  0 crypt [SWAP]

(The NVMe partitions 1 and 2 are set to type bf (Solaris) using fdisk,
and the partition 3s are set to type fd (Linux raid autodetect).)


MD config was set up with these commands:

DISK1=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914
DISK2=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933
mdadm --create /dev/md1 --level 1 --raid-disks 2 --metadata 1.0 \
  ${DISK1}-part3 ${DISK2}-part3


Then the crypto device was configured:

root@eu1 ~  cat /etc/crypttab
# <name>  <device>     <password>     <options>
md1swap     /dev/md1    /dev/urandom   swap,cipher=aes-xts-plain64,size=256


And fstab:

root@eu1 ~  cat /etc/fstab
# <filesystem>    <dir>  <type>  <options>  <dump>  <pass>
/dev/mapper/md1swap  none   swap    defaults   0       0


Then I rebooted to activate those last two.


FWIW, here is how I did the install, according to my notes, in case
there is something in there (or missing from there) that makes the
difference:

1. It's a Hetzner server, so I did the install from a PXE-booted Debian
Buster rescue console - basically the same as installing from a live CD
command line rather than using the live CD installer

2. set disk variables for use by later commands
DISK1=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914
DISK2=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933

3. reset and partition disks

    parted --script $DISK1 mklabel msdos mkpart primary 1 5GiB \
      mkpart primary 5GiB 825GiB mkpart primary 825GiB 100% set 1 boot on

    parted --script $DISK2 mklabel msdos mkpart primary 1 5GiB \
      mkpart primary 5GiB 825GiB mkpart primary 825GiB 100% set 1 boot on

    fdisk $DISK1
      t    (change a partition's type)
      1    (partition 1)
      bf   (Solaris)
      t
      2
      bf
      t
      3
      fd   (Linux raid autodetect)
      w    (write changes and quit)

    fdisk $DISK2
      t
      1
      bf
      t
      2
      bf
      t
      3
      fd
      w
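
    (If you'd rather script the type changes than drive fdisk
interactively, sfdisk can do the same thing non-interactively - a sketch
using the same $DISK variables; I used interactive fdisk, so treat this
as an untested equivalent:)

      sfdisk --part-type $DISK1 1 bf    # Solaris
      sfdisk --part-type $DISK1 2 bf    # Solaris
      sfdisk --part-type $DISK1 3 fd    # Linux raid autodetect
      sfdisk --part-type $DISK2 1 bf
      sfdisk --part-type $DISK2 2 bf
      sfdisk --part-type $DISK2 3 fd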

    fdisk -l
      check:
        Disklabel type: msdos
        2048-sector alignment
        bootable flag set on partition 1
        sizes are correct
        types = Solaris, Solaris, Linux raid autodetect


4. create ZFS pools and datasets

  root pool
    zpool create -R /mnt -O mountpoint=none -f -o ashift=13 ssd \
      mirror ${DISK1}-part2 ${DISK2}-part2

  root dataset
    zfs create -o acltype=posixacl -o compression=lz4 -o normalization=formD \
      -o relatime=on -o dnodesize=auto -o xattr=sa -o mountpoint=/ ssd/root

  boot pool (created with -d and an explicit feature list, so it only
enables features GRUB can read)
    zpool create -o ashift=13 -d \
      -o feature@async_destroy=enabled \
      -o feature@bookmarks=enabled \
      -o feature@embedded_data=enabled \
      -o feature@empty_bpobj=enabled \
      -o feature@enabled_txg=enabled \
      -o feature@extensible_dataset=enabled \
      -o feature@filesystem_limits=enabled \
      -o feature@hole_birth=enabled \
      -o feature@large_blocks=enabled \
      -o feature@lz4_compress=enabled \
      -o feature@spacemap_histogram=enabled \
      -o feature@userobj_accounting=enabled \
      -O acltype=posixacl -O canmount=off -O compression=lz4 \
      -O devices=off -O mountpoint=none \
      -O normalization=formD -O relatime=on -O xattr=sa \
      -R /mnt -f bpool mirror ${DISK1}-part1 ${DISK2}-part1

  boot dataset
    zfs create -o mountpoint=/boot bpool/boot

  (order here was important - when I created the boot dataset first, I
couldn't mount the root dataset and had to unmount and remount them in
the correct order, roughly as in the sketch below)
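
  (A sketch of the recovery, in case you hit the same ordering problem -
unmount everything, then remount parent before child:)

    zfs unmount -a        # unmount all mounted zfs datasets
    zfs mount ssd/root    # mount / (under the -R /mnt altroot) first
    zfs mount bpool/boot  # then /boot inside it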


5. Install the minimal system

  cd ~

  wget http://archive.ubuntu.com/ubuntu/pool/main/d/debootstrap/debootstrap_1.0.118ubuntu1_all.deb

  dpkg --install debootstrap_1.0.118ubuntu1_all.deb

  debootstrap --arch=amd64 focal /mnt http://archive.ubuntu.com/ubuntu/
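
  (A quick sanity check that the bootstrap landed where it should, before
chrooting - not in my original notes, just cheap insurance:)

    cat /mnt/etc/os-release
      should name focal / 20.04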


6. System Configuration

  Bind the virtual filesystems from the rescue environment to the new system 
and chroot into it
    mount --rbind /dev  /mnt/dev
    mount --rbind /proc /mnt/proc
    mount --rbind /sys  /mnt/sys
    chroot /mnt /usr/bin/env DISK1=$DISK1 DISK2=$DISK2 bash --login

  Configure a basic system environment

    hostname
      echo eu1.kapitalyou.com > /etc/hostname
      echo "127.0.0.1 eu1.kapitalyou.com" >> /etc/hosts

    network
      get rescue shell interface name and convert it to persistent naming
        ip addr show
        udevadm test-builtin net_id /sys/class/net/eth0 2> /dev/null
        vi /etc/netplan/01-netcfg.yaml
          network:
            version: 2
            ethernets:
              enp35s0:
                dhcp4: true
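        (you can syntax-check that YAML from inside the chroot before the
first boot - assuming netplan.io made it into the debootstrap base, which
it should have on focal:)
          netplan generate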

    package sources
      vi /etc/apt/sources.list
        deb http://archive.ubuntu.com/ubuntu focal main universe
        deb-src http://archive.ubuntu.com/ubuntu focal main universe
        deb http://security.ubuntu.com/ubuntu focal-security main universe
        deb-src http://security.ubuntu.com/ubuntu focal-security main universe
        deb http://archive.ubuntu.com/ubuntu focal-updates main universe
        deb-src http://archive.ubuntu.com/ubuntu focal-updates main universe

    apt update

    apt upgrade

    dpkg-reconfigure locales

    dpkg-reconfigure tzdata

    apt install --yes openssh-server

    vi /etc/ssh/sshd_config
      PermitRootLogin prohibit-password

    cd ~

    mkdir .ssh

    chmod 700 .ssh

    cd .ssh

    vi authorized_keys
      paste in my public keys

    chmod 600 authorized_keys

    passwd

    set a root password so ssh keys will work (ssh refused my key logins
until root had a password - apparently a password-less root account
counts as locked - annoying, but there it is)

    useradd -d /home/<username> -s /bin/bash -m -G sudo <username>

    passwd <username>


7. GRUB Installation

  Install GRUB
    apt install --yes zfsutils-linux
    apt install --yes zfs-initramfs
    apt install --yes grub-common

  install initramfs tools
    apt install --yes initramfs-tools

  install linux kernel
    apt-get -V install linux-generic linux-image-generic linux-headers-generic
      ********* don't install GRUB onto any disks when prompted by the
installer at this point! *********

  Verify that the boot filesystem is recognized by grub
    grub-probe /boot
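      (expect it to print exactly "zfs"; anything else, or an error, means
grub can't read the pool and it's not safe to continue)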

  Update the boot configuration
    Refresh the initrd files (here, or later once update-grub has run???)
      update-initramfs -u -k all -v
      update-grub

  Check that update-grub was able to create a suitable menu entry for
ubuntu on zfs root:
    less /boot/grub/grub.cfg
      look for something like this, that has both vmlinuz and initrd lines:
        menuentry 'Ubuntu 20.04 LTS' --class ubuntu --class gnu-linux --class gnu --class os ${menuentry_id_option} 'gnulinux-ssd/root-5.4.0-26-generic' {
                recordfail
                load_video
                gfxmode ${linux_gfx_mode}
                insmod gzio
                if [ "${grub_platform}" = xen ]; then insmod xzio; insmod lzopio; fi
                insmod part_msdos
                insmod zfs
                if [ x$feature_platform_search_hint = xy ]; then
                  search --no-floppy --fs-uuid --set=root 828c009811716301
                else
                  search --no-floppy --fs-uuid --set=root 828c009811716301
                fi
                linux   /boot@/vmlinuz-5.4.0-26-generic root=ZFS=ssd/root ro
                initrd  /boot@/initrd.img-5.4.0-26-generic
        }

    (if update-grub has failed, as it did when I tried these instructions
on Ubuntu 19.10, then those lines are missing and grub won't boot the
system; it's pointless to continue without resolving this first)


  Make debugging GRUB easier
    vi /etc/default/grub
      Comment out: GRUB_TIMEOUT_STYLE=hidden
      Set: GRUB_TIMEOUT=5
      Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
      Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
      Uncomment: GRUB_TERMINAL=console
      Save and quit.

  Install the boot loader
    grub-install $DISK1
    grub-install $DISK2

  disable the vga16fb module for vKVM (this is Hetzner-specific, for
compatibility with their virtual KVM console)
    echo "blacklist vga16fb" >> /etc/modprobe.d/blacklist-framebuffer.conf

8. exit chroot
  Exit from the chroot environment back to the rescue (or live CD) environment
    exit
  Run this command in the rescue environment to lazily unmount all
non-zfs filesystems under /mnt, deepest first:
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
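
  (Newer util-linux also has a recursive unmount that should be roughly
equivalent - untested here; the lazy/force flags above are more forgiving
with busy zfs mounts:)
    umount -R /mnt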

9. EXPORT THE ZFS POOL BEFORE REBOOTING - GRUB WON'T LOAD IT IF IT'S MARKED AS 
"IN USE" BY THE RESCUE/LIVE CD ENVIRONMENT!!!!
  
  zpool export -f ssd


10. reboot and check that the root-on-zfs environment is working


11. set up encrypted swap on MD raid 1 
  
  mdadm --create /dev/md1 --level 1 --raid-disks 2 --metadata 1.0 \
    ${DISK1}-part3 ${DISK2}-part3

  vi /etc/crypttab
    # <name>  <device>     <password>     <options>
    md1swap     /dev/md1    /dev/urandom   swap,cipher=aes-xts-plain64,size=256

  vi /etc/fstab
    # <filesystem>    <dir>  <type>  <options>  <dump>  <pass>
    /dev/mapper/md1swap  none   swap    defaults   0       0
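
  (To test without a full reboot, something like this should work -
cryptdisks_start ships with Ubuntu's cryptsetup packaging; I just
rebooted, so treat it as an assumption:)

    cryptdisks_start md1swap   # set up the dm-crypt mapping from crypttab
    swapon -a                  # enable everything marked swap in fstab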

12. reboot and check you have encrypted swap and nothing else broke

  check swap is in use
    top
      look for how much swap is available
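
  (a couple of less ambiguous checks than top, both standard tools:)
    swapon --show    # should list /dev/mapper/md1swap
    free -h          # the Swap: line should show roughly 129G total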

  lsblk
      look for swap on MD RAID 1:
        nvme1n1       259:0    0 953.9G  0 disk
        ├─nvme1n1p1   259:2    0     5G  0 part
        ├─nvme1n1p2   259:3    0   820G  0 part
        └─nvme1n1p3   259:4    0 128.9G  0 part
          └─md1         9:1    0 128.9G  0 raid1
            └─md1swap 253:0    0 128.9G  0 crypt [SWAP]
        nvme0n1       259:1    0 953.9G  0 disk
        ├─nvme0n1p1   259:5    0     5G  0 part
        ├─nvme0n1p2   259:6    0   820G  0 part
        └─nvme0n1p3   259:7    0 128.9G  0 part
          └─md1         9:1    0 128.9G  0 raid1
            └─md1swap 253:0    0 128.9G  0 crypt [SWAP]

  zfs list
    look for your boot and root pools and their filesystems being mounted
      NAME         USED  AVAIL  REFER  MOUNTPOINT
      bpool       98.9M  4.26G   192K  none
      bpool/boot  95.2M  4.26G  94.7M  /boot
      ssd         30.3G   760G   192K  none
      ssd/root    2.80G   760G  1.94G  /

  check that /boot is not about to become unreadable to grub (just for
the sake of paranoia)
    grub-probe /boot

  check that /boot still contains the proper files, and not the weird single 
grub directory you end up with if grub can't mount the boot zfs pool before 
unloading initramfs:
    ls /boot
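
  check that the RAID1 array itself is healthy (not in my original notes,
but cheap insurance)
    cat /proc/mdstat
      look for: md1 : active raid1 ... [2/2] [UU]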
