If you're not fully sure of any of those parameters, I'd suggest
doing some testing.  E.g. if your target storage is quite large, maybe
do only 2% of that, or possibly less, in total (approximate) size, but
using the presumed (or to-be-tested) stripe/cluster/etc. sizes.  Then
test the performance.  May also want to use, e.g., the sync option on
mounted filesystems or the like, to mostly cancel out any host caching
performance advantages, and get (at least closer to) the actual drive
(+RAID, etc.) performance, and see how it would mostly behave on cache
misses (especially for writes, which will be your lowest performance
for RAID6/RAID60).
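E.g., a minimal sketch of such a scaled-down test - chunk size, device
names, and mount point here are placeholders to swap for your presumed
geometry (loop devices as stand-ins; see the losetup bit further
below):

  # RAID0 over two small test devices; chunk = presumed RAID6
  # full-stripe width (256 KiB here, purely illustrative)
  mdadm --create /dev/md100 --level=0 --raid-devices=2 \
    --chunk=256 /dev/loop0 /dev/loop1
  mkfs.ext4 /dev/md100
  mkdir -p /mnt/mdtest
  # sync mount, to mostly take host write caching out of the picture
  mount -o sync /dev/md100 /mnt/mdtest
  # crude sequential write timing at the presumed full-stripe size
  dd if=/dev/zero of=/mnt/mdtest/t bs=256K count=4096 oflag=direct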
Can also inspect data on the underlying devices (at least as far down
as one can go) with, e.g., od ... can even put marker patterns in the
data to more easily identify exactly what data is landing where on the
back-end storage.
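E.g. (again a sketch - marker string, devices, and OFFSET are
placeholders):

  # write a distinctive marker into a test file
  printf 'ODMARK0001' > /mnt/mdtest/marker; sync
  # find its byte offset on each underlying device
  grep -abo ODMARK0001 /dev/loop0 /dev/loop1
  # then eyeball the surrounding bytes with od at that offset
  od -A d -c -j OFFSET -N 64 /dev/loop0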
And as far as the md metadata, I believe the default (1.2) puts it
near the start (4 KiB in), rather than at the end (which is where 1.0
puts it).  In any case, I'd probably be inclined to go with the
default - likely less confusing for anyone (e.g. even future you) to
figure out exactly how it's laid out, if/when that becomes a question.
Likewise, can test that too, e.g. scaled down, and examine the
resultant data and where it lands.
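E.g., on a scaled-down test array (a sketch; array/member names are
from the earlier example):

  # look for the Version and Super Offset / Data Offset fields
  mdadm --examine /dev/loop0
  # and/or check the assembled array's metadata version
  mdadm --detail /dev/md100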
And can use partitioning or losetup, etc. to limit the size of the
target to less than the full physical capacity available, e.g. for
testing.
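E.g. (sketch; paths, sizes, and /dev/sdX are placeholders) - this is
how the small test devices used above could be set up:

  # small backing files standing in for the big LUNs
  truncate -s 2G /var/tmp/t0.img /var/tmp/t1.img
  losetup /dev/loop0 /var/tmp/t0.img
  losetup /dev/loop1 /var/tmp/t1.img
  # or carve a size-limited slice off a real device
  losetup --offset 0 --sizelimit 2G /dev/loop2 /dev/sdX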

And ... though not quite what you asked, device mapper, dmsetup, etc.
can be used to construct somewhat arbitrary layouts ... but that might
be even more confusing for anyone looking at it later.  Sometimes,
however, that can be quite useful for special
circumstances/requirements.  Speaking of which, not too long ago I did
that for demonstration purposes, to help someone out with a data
migration issue.  They essentially wanted to go from a quite large
hardware RAID-5 to md raid5 - quite similar set of drives for each
(new ones for the md set).  Conceptually, I basically thought: layer
RAID-1 atop those, sync, then break the mirrors.  That's not quite so
easy, as most any RAID-1 (especially software) would typically want to
write its metadata on the same devices - very undesirable in that
case.  So, I did it with device mapper using dmsetup, essentially
mirroring the underlying devices while storing the metadata external
to them.  Anyway, quite a bit more detail on that example run here:
https://lists.balug.org/mailman3/hyperkitty/list/balug-t...@lists.balug.org/message/CGZUVCF5WFM5I6GPKK5NW5DDK4OCMERK/
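In very rough outline, that approach looks something like this (a
sketch with placeholder device names, not the exact commands from that
message - the dm "mirror" target with an on-disk log on a third
device; check the device-mapper docs for the exact table syntax):

  # mirror OLD onto NEW, with the dirty-region log kept on LOGDEV,
  # so no metadata gets written to either data device
  SECTORS=$(blockdev --getsz /dev/OLD)
  dmsetup create migrate --table \
    "0 $SECTORS mirror disk 2 /dev/LOGDEV 1024 2 /dev/OLD 0 /dev/NEW 0"
  # watch sync progress; once in sync, tear down the mapping
  dmsetup status migrate
  dmsetup remove migrate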

On Sun, Dec 1, 2024 at 4:36 AM Greg <p...@sojka.co> wrote:
>
> Hi there,
>
> I'm setting up MD-RAID0 on top of HW-RAID6 devices (long story). I
> would like to confirm the following:
>
> 1. The RAID0 chunk size should be the stripe width of the
> underlying RAID6 volumes.
>
> 2. The RAID0 metadata should be at the end of the device (metadata ver.
> 1.0).
>
> 3. The stride and stripe-width of the ext4 fs should be set to the ones
> used when creating RAID6 volumes.
>
> Thanks in advance for any help
> Greg
