On Tue, May 22, 2018 at 11:11:02PM +1000, Andrew Greig wrote:
> Then, with 2 x 2TB and 2 x 1TB I will connect all drives and load the DVD
> to perform a server install

if your 2 TB array is already set up as you want it, you don't have to
re-build it.  Just remove it from the system, build it with just the 2 x 1TB
drives and then add the 2TB array when it's working properly.

In fact, you could build the system with just the single new drive by creating
the raid in degraded mode - i.e. tell mdadm or LVM or ZFS or whatever to
create a mirror array but that one of the drives is missing.
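With mdadm, for instance, the 'missing' keyword does exactly that (the
device name /dev/sdb1 here is just a placeholder for whatever your new
drive's partition actually is):

```shell
# create a two-device RAID-1 with one member deliberately absent:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
cat /proc/mdstat          # array shows up as active but degraded
```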

When that's done, copy anything you need from the old 1TB drive to the new
degraded array, then repartition the old drive to match the new one and add
it (or partitions from it) as the missing device(s) for the degraded array.
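A sketch of that step, assuming the new drive is /dev/sdb and the old one is
/dev/sdc (placeholder names -- double-check yours before running anything):

```shell
# clone the new drive's partition table onto the old drive:
sfdisk -d /dev/sdb | sfdisk /dev/sdc
# add the old drive's partition as the missing mirror member:
mdadm --add /dev/md0 /dev/sdc1
cat /proc/mdstat          # watch the resync progress
```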

That way, if anything goes wrong during the rebuild, you still have your
current setup to go back to... at least until you repartition the old drive.



BTW, this is a generally useful thing to know how to do: use half a raid (i.e.
a "degraded" raid) to store your data while you're setting up its replacement.

e.g. if your 2 TB array is currently mdadm RAID-1 and you want to convert it
to ZFS, you could tell mdadm to fail one of the drives so it's a degraded
RAID-1. Then use 'zpool create data2 /dev/disk/by-id/XXXXXXX' to create a ZFS
pool using the artificially "failed" drive.  Then rsync /data to /data2 with
something like:

    rsync -avxHAXS -h -h --progress --stats --delete /data/ /data2/

(Note the trailing slashes: they make rsync copy the contents of /data into
/data2 rather than creating /data2/data.  Everything from the first '-h' to
'--stats' is optional and just gives prettier output to watch while it's
running.  Or pipe into 'tee' to log the output to a file AND watch it at the
same time.)
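For reference, the degrade-and-migrate steps above might look something like
this (the md device, partition, and by-id names are examples only, not your
actual hardware):

```shell
# deliberately fail and remove one member of the RAID-1:
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
# build a single-disk ZFS pool on the freed drive ('-f' overrides the
# "already in use" warning discussed in the NOTE below):
zpool create -f data2 /dev/disk/by-id/ata-EXAMPLE-SERIAL
# copy, logging to a file while watching the progress:
rsync -avxHAXS -h -h --progress --stats --delete /data/ /data2/ | tee /root/rsync.log
```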

NOTE: Be very careful when typing in drive device names here.  It is very easy
to type, e.g., /dev/sdb when you meant /dev/sdc. or vice-versa. Double-check
every potentially destructive command before you hit enter on it.  Use a
notepad to write down which drives are being used for what purpose, and
compare each command to your notes.  ZFS' zpool command will probably warn you
that the drive you want to use is already partitioned (maybe even warn you
that it's part of a RAID array) and refuse to do anything unless you force it
with the '-f' option anyway.

(Also note that while a RAID array or ZFS pool is in "degraded" mode there
is either no redundancy or reduced redundancy.  In the case of a RAID-1 or
mirror, it's NO redundancy.  So if a meteor strikes your house while you're
in the middle of doing this, you'll probably lose your data :-)


When that's finished, you can run the rsync again if anything has changed on
/data while the first rsync was running (repeat the rsyncs as often as you
need or want, but remember to use rsync's "--delete" option).

Finally, umount /data, use 'mdadm --stop' to disable that raid device and
'zpool attach' to attach the ex-raid drive to the zpool.  You'll probably
also want to rename "data2" to "data":

    zpool export data2
    zpool import -d /dev/disk/by-id/ data2 data
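
Put together, the hand-over might look something like this (md device and
by-id names are placeholders):

```shell
umount /data
mdadm --stop /dev/md0                # release the old raid device
mdadm --zero-superblock /dev/sdc1    # wipe the md metadata off the ex-raid partition
# attach the ex-raid drive, turning the single-disk pool into a mirror:
zpool attach data2 /dev/disk/by-id/ata-EXAMPLE-SERIAL /dev/disk/by-id/ata-OTHER-SERIAL
zpool status data2                   # watch the resilver
```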


It's also possible to do the reverse, convert from ZFS to mdadm. But I can't
think of any reason why anyone would want to. ditto for mdadm to LVM. or
to/from btrfs.

> (existing RAM=Kingston KVR 4GB SINGLE DDR3 1333) I doubt that this is ECC
> RAM I have 4 slots in my MB I have two filled I would like to have 16Gb RAM
> so maybe Caribbean Gardens next Sunday

Russell's probably got some DDR3 1333 in the LUV hardware library to give away.

Dunno if he'll have ECC or not, but 8 or 16 GB of non-ECC is probably better
than 4GB of ECC.

> So this is where the road gets muddy: If LVM is installed first followed by
> an MDADM set up, is that LVM on RAID  or RAID on LVM (and which is best?)
> Or since it has more options for the server install, do I partition the 1TB
> drives and format with ZFS and use the ZFS RAID ?

Personally, I'd use ZFS (using the setup notes described in my first reply in
this thread).

There isn't any benefit to using LVM or mdadm over using ZFS.  Quite the
contrary, ZFS offers many benefits that just aren't possible with LVM or
mdadm.  The price is that it's slightly harder to set up (but maybe not harder
at all since you're using Ubuntu 18.04).

here's a step by step guide for 18.04:

https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS

This includes notes on optionally setting up full-disk LUKS encryption.
Ignore the encryption stuff if you don't need it.



For lvm and mdadm, last I checked the best advice was to use LVM on top of
mdadm.   Just make one big RAID-1 of the entire disk with mdadm and then
use LVM to create logical volumes for /, /home, and any other "partitions"
you might want.

> How do I restrict the system install to the first 100Gb on each disk for /
> and the other 900Gb /home  for general duty data (music and some
> instructional videos)

On ZFS, by setting a quota of 900GB on the /home dataset.  Maybe a reservation
on the rootfs too, so that / has a guaranteed minimum of 100GB.  If you later
find you need more or less space reserved, it's one simple command to change
either a quota or a reservation.
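
e.g. with dataset names borrowed from the Ubuntu root-on-ZFS guide linked
above (adjust to whatever your pool is actually called):

```shell
zfs set quota=900G rpool/home                 # cap /home at 900GB
zfs set reservation=100G rpool/ROOT/ubuntu    # guarantee / at least 100GB
# changing your mind later is just as easy:
zfs set quota=1200G rpool/home
```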


On LVM, by creating two LVs: 100GB for / and 900GB for /home.  Choose LV sizes
carefully because it's a PITA to change them later.  Also, filesystem choice
matters - ext4 can be grown or shrunk, but XFS can only be grown.
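
A sketch of the mdadm-plus-LVM setup described earlier (device names are
placeholders; adjust sizes to taste):

```shell
# one big RAID-1 of the whole disk, then LVM on top:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md0                # make the array one LVM physical volume
vgcreate vg0 /dev/md0
lvcreate -L 100G -n root vg0     # for /
lvcreate -L 900G -n home vg0     # for /home
mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home
```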

> Format the drives with ZFS and select the level of Software installation and
> kick it into gear?

craig

--
craig sanders <[email protected]>
_______________________________________________
luv-main mailing list
[email protected]
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main