On Tue, Feb 25, 2025 at 12:26 PM Dale <rdalek1...@gmail.com> wrote:
>
> I'm pretty sure you mentioned this once before in one of my older
> threads. I can't find it tho. I use PCIe x1 cards to connect my SATA
> drives for my video collection and such. You mentioned once what the
> bandwidth was for that setup and how many drives it would take to pretty
> much max it out. Right now, I have one card for two sets of LVs. One
> LV has four drives and the other has three. What would be the limiting
> factor on that, the drives, the PCIe bus or something else?
It depends on the PCIe revision, and of course on whether the controller can actually max it out. A PCIe v3 x1 link can carry about 0.985 GB/s total. That's roughly 5 HDDs running sequential transfers, and again assumes the controller can actually handle all that data. Each PCIe generation doubles the per-lane transfer rate, and each step back halves it. The link runs at the highest PCIe version supported by both the motherboard+CPU and the adapter card.

If you're talking about HDDs, then in practice the HDDs are probably still the bottleneck. If these were SATA SSDs, then odds are the single PCIe lane is limiting things, because I doubt this is an all-v5 setup.

https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions

The big advantage of NVMe isn't so much the bandwidth as the IOPS, though both benefit. NVMe drives run at full PCIe x4 interface speed per drive, but of course you need 4 lanes per drive for that, which is hard to obtain on consumer motherboards at any scale.

--
Rich
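
For anyone who wants to redo the arithmetic, here is a rough back-of-the-envelope sketch in Python. The per-drive throughput figures are assumptions rather than anything from the thread: roughly 200 MB/s sequential for a modern HDD and roughly 550 MB/s for a SATA SSD; plug in your own drives' numbers.

# Rough math for how many drives saturate one PCIe x1 lane.
# Assumed figures (not measured): ~200 MB/s sequential per HDD,
# ~550 MB/s per SATA SSD. Lane rates are per the PCI Express spec.

PCIE_X1_GB_S = {3: 0.985, 4: 1.969, 5: 3.938}  # usable GB/s per lane, by revision

def drives_to_saturate(pcie_rev, per_drive_mb_s):
    """Number of drives at full sequential speed that fill one x1 lane."""
    lane_gb_s = PCIE_X1_GB_S[pcie_rev]
    return lane_gb_s * 1000 / per_drive_mb_s

print(drives_to_saturate(3, 200))   # ~4.9 HDDs on a v3 x1 link
print(drives_to_saturate(3, 550))   # ~1.8 SATA SSDs, so two SSDs already hit the lane
print(drives_to_saturate(4, 200))   # ~9.8 HDDs if both ends negotiate v4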