On Fri, Jul 30, 2021 at 11:50 PM Wols Lists <antli...@youngman.org.uk> wrote:
>
> btw, you're scrubbing over USB? Are you running a raid over USB? Bad
> things are likely to happen ...

So, USB hosts vary in quality I'm sure, but I've been running USB3
drives on lizardfs for a while now with zero issues.

At first I was shucking them and using LSI HBAs.  That was a pain for
a bunch of reasons, and I ran into issues, probably due to the HBAs
being old or to cheap cables (and new SAS hardware carries a hefty
price tag).

Then I decided to just try running a drive on USB3 and it worked fine.
This isn't for heavy use, but it basically performs identically to
SATA.  I did the math, and for spinning disks you can fit about two
drives per USB3 host controller before the data rate starts to become
a concern.  This is for a distributed filesystem, I'm just using
gigabit ethernet, and the cluster is needed more for capacity than
IOPS, so USB3 isn't the bottleneck anyway.
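
The back-of-the-envelope math, for anyone curious (rough numbers,
assuming a single 5 Gbps USB 3.0 host controller and typical 7200 rpm
drives):

  USB 3.0:           5 Gbit/s, call it ~450-500 MB/s usable after overhead
  7200 rpm spinner:  ~200-250 MB/s peak sequential
  => about two disks per USB3 host controller before the bus is the limit
  gigabit ethernet:  ~110-115 MB/s, so the network saturates long
                     before either of the above does in my setup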

I have yet to see a USB drive have any sort of issue or drop a
connection.  And they're running on cheap Pi4s for the most part
(which have two USB3 hosts).  If for some reason a drive or host did
drop, the filesystem is redundant at the host level, and it also
gracefully recovers data if a host shows back up, but I have yet to
see that even happen due to a USB issue.  I had far more issues when
I was trying to use LSI HBAs on RockPro64 SBCs (which have a PCIe
slot - I had to use a powered riser as well).
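
For what it's worth, the host-level redundancy is just lizardfs doing
its normal thing: copies of a chunk never land on the same
chunkserver, and you can pin copies to specific hosts with labels if
you want.  Roughly like this (the label names and paths below are made
up, and the exact config syntax can vary by version):

  # on each chunkserver, in mfschunkserver.cfg:
  LABEL = pi4a

  # on the master, in mfsgoals.cfg - a goal named "two_hosts" that
  # puts one copy on each labeled box:
  10 two_hosts : pi4a pi4b

  # apply it to the tree:
  lizardfs setgoal -r two_hosts /mnt/lizardfs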

Now, if you want to pull closer to the max bandwidth out of all your
disks at once, and you have more than a few disks on 10GbE or faster,
then USB3 could be a bottleneck unless you have a lot of host
controllers (though even then, adding USB3 host controllers to the
motherboard might not be any harder than adding SATA controllers).
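
Rough numbers again, with the same caveats as above:

  10GbE:                 ~1.2 GB/s line rate
  one 5 Gbps USB3 host:  ~450-500 MB/s usable
  => you'd want something like three USB3 host controllers' worth of
     disks per box just to keep a 10GbE link fed at full tilt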

-- 
Rich
