On 01/01/2015 04:24 PM, Kevin O'Gorman wrote:
> On Thu, Jan 1, 2015 at 12:51 PM, David Christensen <
> dpchr...@holgerdanske.com> wrote:
>> On 12/31/2014 01:57 PM, Kevin O'Gorman wrote:
>>> I've just gotten 4 4TB drives to replace my 4 2TB drives. I want to
>>> have one normal 4TB drive and one logical 12TB drive, so I will make
>>> three physical drives into one group, one logical volume, and one
>>> partition to hold the big filesystem. My system actually resides on a
>>> fifth drive: an SSD. I am not interested in RAID, and I'm not sure
>>> striping would even help. I just have gigantic files I need to create
>>> and process once in a while, so it's really temporary space.
>> mdadm RAID 0 should be the simplest approach for the 12 TB.
> Thanks for this advice. I'm looking into it. My first impression is
> that it's not so simple.
> - I installed mdadm, but I'm a bit puzzled that it dragged in postfix
> with it. I set that up as local-only, because I don't do mail on this
> system.
> - The first document I looked at,
> https://raid.wiki.kernel.org/index.php/RAID_setup, talked about
> problems putting a RAID in /etc/fstab, and while it mentioned that
> persistent superblocks solve this problem, it did not say much about
> how to set that up, such as what the fstab entry would look like.
> My mobo has 5 SATA 3 interfaces. They become /dev/sda ... /dev/sde
> more or less at random, perhaps depending on how quickly the drives
> spin up. I had to put GRUB on all drives. I really need stability in
> whatever I set up.
> Any thoughts?
Please "reply to list".
One of my complaints about Debian's installer is that it installs
recommended packages, recursively, by default. When I wanted to build a
kernel with realtime patches, this meant ~700 MB of stuff I didn't want.
I now use 'apt-get' with the '--no-install-recommends' option to avoid
this problem.
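For example (a sketch; the apt.conf snippet path is my own choice, and making the setting permanent is optional):

```shell
# Install mdadm without pulling in recommended packages such as postfix.
# Note: --no-install-recommends skips Recommends only, never Depends.
apt-get --no-install-recommends install mdadm

# Optionally make this the default for all future apt-get runs
# (the file name 99no-recommends is arbitrary):
echo 'APT::Install-Recommends "false";' \
    > /etc/apt/apt.conf.d/99no-recommends
```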
Understand that with FOSS there are learning curves no matter which
approach you pick. That's part of the price you pay for "free". I've
never used mdadm. I learned enough LVM a few years ago to set up swap
and root on encrypted partitions and to set up JBOD data drives.
Administration of LVM file systems is basically the same as for ext
file systems. But the extra layer between the file system and the hardware
becomes a hassle when I want to boot the installer media into rescue
mode and mess with raw file systems on system drives (for backup and
restore). More recently, I learned enough zfs-fuse for single and
mirrored data drives. I later migrated to ZOL for performance.
Administration of ZFS file systems requires a lot more knowledge and
planning. And since ZOL is not integrated into Debian, I had to invent
scripts to get things mounted at boot and cleanly unmounted at shutdown.
Perhaps someone with mdadm experience can offer some pointers for
learning enough mdadm to set up RAID 0.
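Not having used mdadm myself, the following is only a sketch pieced together from the man pages; the device names, mount point, and array name are assumptions, so check them against your own hardware before running anything:

```shell
# Create a 3-drive RAID 0 array. With current metadata (1.2) the
# superblocks are persistent by default, so the array can be
# auto-assembled at boot. Device names here are examples only.
mdadm --create /dev/md0 --level=0 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd

# Make a filesystem and record the array in mdadm.conf so that the
# /dev/md0 name stays stable across boots.
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# The fstab entry can then reference the filesystem UUID (as reported
# by 'blkid /dev/md0'); the UUID and mount point below are placeholders:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /big  ext4  defaults  0  2
```

That should also answer the quoted question about persistent superblocks and what the fstab entry would look like, assuming I have read the man pages correctly.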
As for the problem of which device is which /dev/* entry, the
work-around is to identify them by UUID or some other means. Take a
look at 'blkid' and the /dev/disk/by-* directories.
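For instance, these read-only commands show the stable names that exist regardless of which drive happens to become /dev/sda on a given boot:

```shell
# Report filesystem UUIDs and labels for all block devices.
blkid

# Stable, enumeration-independent paths to the same drives:
ls -l /dev/disk/by-id/    # named by drive model and serial number
ls -l /dev/disk/by-uuid/  # named by filesystem UUID
```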
David
--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/54a5ec04.6050...@holgerdanske.com