On 15.09.2020 at 10:22, Linda A. Walsh wrote:
On 2020/09/10 07:40, Miloslav Hůla wrote:
I cannot verify it, but I think that even a JBOD is presented as a virtual device. If you create a JBOD from 3 different disks, the low-level parameters may differ.
----
    JBOD allows each disk to be seen by the OS, as is.  You wouldn't
create a JBOD disk from 3 different disks -- JBOD would give you 3 separate
JBOD disks for the 3 separate disks.

Yes. If I create 3 JBOD configurations from 3 100GB disks, I get 3 100GB devices in the OS. If I create 1 JBOD configuration from 3 100GB disks, I get 1 300GB device in the OS.
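
For illustration, a spanned JBOD simply concatenates the members'
address spaces (no striping, unlike RAID-0). A minimal Python sketch
of that mapping, with sizes assumed for the example:

    # Locate a logical offset on a 300GB spanned JBOD device.
    disks_gb = [100, 100, 100]          # illustrative member sizes

    def jbod_locate(offset_gb, disks):
        """Map a logical offset to (member disk index, offset on disk)."""
        for i, size in enumerate(disks):
            if offset_gb < size:
                return i, offset_gb
            offset_gb -= size
        raise ValueError("offset beyond end of device")

    print(jbod_locate(150, disks_gb))   # -> (1, 50): 2nd disk, 50GB in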

    So for your 16 disks, you are using 1 long RAID0?  You realize
if 1 disk goes out, the entire array needs to be reconstructed.  Also,
all of your spindles can be tied up by long reads/writes -- optimal speed
would come from a read 16 stripes wide, spread over the 16 disks.

No. I have 16 RAID-0 configurations from 16 disks. As I wrote, a few years ago there was no other way to present the 16 disks to the OS as 16 separate devices.

    What would be better, IMO, is going with a RAID-10 like your subject
says, using 8 pairs of mirrors and striping those.  Set your stripe unit
to 64K to allow the disks to operate independently.  You don't want
a long 16-disk stripe, as that's far from optimal for your mailbox load.
What you want is the ability to have multiple I/O ops going at the same
time -- independently.  I think as it stands now, you are far more likely
to get contention as different mailboxes are accessed, with contention
happening within the span, vs. letting each 2-disk mirror potentially do
a different task -- which would likely have the effect of raising your
I/O ops/s.
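
A minimal Python sketch of the striped-mirror mapping described above
(8 mirror pairs, 64K stripe unit); the layout details are illustrative,
real controllers/md may lay data out differently:

    STRIPE_UNIT = 64 * 1024      # 64K, so small I/Os stay on one pair
    PAIRS = 8                    # 16 disks as 8 two-disk mirrors

    def raid10_locate(byte_offset):
        """Map a logical byte offset to (mirror pair, offset on that pair)."""
        stripe_no = byte_offset // STRIPE_UNIT
        pair = stripe_no % PAIRS             # round-robin across pairs
        row = stripe_no // PAIRS             # stripe row on that pair
        return pair, row * STRIPE_UNIT + byte_offset % STRIPE_UNIT

    # Two mailbox accesses 64K apart land on different pairs and can
    # proceed independently -- the contention argument made above.
    print(raid10_locate(0))          # -> (0, 0)
    print(raid10_locate(64 * 1024))  # -> (1, 0)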

The reason not to create RAID-10 in the controller was that btrfs scrub detects a slowly degrading disk much sooner than the controller does (verified many times). And if I create RAID-10 in the controller, btrfs scrub still detects it early, but I'm not able to recognize which disk is affected.
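
For illustration, a minimal Python sketch of pinpointing the degrading
disk from per-device error counters, assuming the usual "btrfs device
stats" line format ("[/dev/sdX].counter_name  N"); the mountpoint is
just an example. This only works when btrfs sees each disk as its own
device, which is the point above:

    import re, subprocess

    def failing_devices(mountpoint):
        """Return {device: {counter: count}} for all non-zero counters."""
        out = subprocess.run(["btrfs", "device", "stats", mountpoint],
                             capture_output=True, text=True,
                             check=True).stdout
        errors = {}
        for line in out.splitlines():
            m = re.match(r"\[([^\]]+)\]\.(\w+)\s+(\d+)", line)
            if m and int(m.group(3)) > 0:
                errors.setdefault(m.group(1), {})[m.group(2)] = int(m.group(3))
        return errors   # e.g. {"/dev/sdg": {"corruption_errs": 12}}

    print(failing_devices("/data"))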

    Running raid10 on top of raid0 seems really wasteful

I'm not doing that.
