On Tue, Jun 04, 2024 at 05:49:31AM -0400, Rich Freeman wrote:
> On Tue, Jun 4, 2024 at 2:44 AM Dale <rdalek1...@gmail.com> wrote:
> >
> > I did some more digging. It seems that all the LSI SAS cards I found
> > need a PCIe x8 slot. The only slot available is the one intended for
> > video.
>
> The board you linked has 2 4x slots that are physically 16x, so the
> card should work fine in those, just at 4x speed.
I can never remember the available throughput for each generation, so I
think about my own board: it has a 2.0×2 NVMe slot that gives me 1 GB/s
of theoretical bandwidth. A 3.0×4 link has twice the lanes and twice the
bandwidth per lane, which yields 4 GB/s of gross throughput. If you
attach spinning rust to that, you’d need around 15 to 20 HDDs to
saturate the link, so I wouldn’t worry too much about underperformance.

> > I'd rather not
> > use it on the new build because I've thought about having another
> > monitor added for desktop use so I would need three ports at least.

DisplayPort supports daisy-chaining. So if you do get another monitor
some day, look for one that has this feature and you can drive two
monitors with one port on the PC.

> > The little SATA controllers I currently use tend to only need PCIe x1.
> > That is slower but at least it works.

PCIe 3.0×1 is still fast enough for four HDDs at full speed. You may see
saturation at the outermost tracks, but how often does that happen
anyway? I can think of only two workloads that produce enough I/O:

- copying from one internal RAID to another (you use LVM – does that
  support striping to distribute I/O?)
- a RAID scrub

Everything else involves two disks at most, when you copy from one to
another. Getting data into the system is limited by the network, which
is far slower than PCIe. And a full SMART self-test does not use the
data bus at all.

But what I also just remembered: only the ×16 GPU slot and the primary
M.2 slot (which is often one generation faster than the other M.2
slots) are connected to the CPU via dedicated links. All other PCIe
slots sit behind the chipset, which in turn is connected to the CPU via
a PCIe 4.0×4 link. This is probably the technical reason why there are
so few boards with slots wider than ×4 – there is just no way to make
use of them, because they must all go through that ×4 bottleneck to the
CPU.
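The bandwidth arithmetic above can be sketched in a few lines of
Python. The per-lane figures are the usual approximate usable rates
after encoding overhead; the 250 MB/s per-disk sequential rate is my
own optimistic assumption for a fast modern HDD:

```python
# Approximate usable PCIe bandwidth per lane, in GB/s, after encoding
# overhead (8b/10b for gen 1/2, 128b/130b from gen 3 onwards).
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen, lanes):
    """Gross throughput of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

def hdds_to_saturate(gen, lanes, hdd_mbps=250):
    """How many spinning disks (assumed ~250 MB/s sequential each,
    an outer-track best case) it takes to fill the link."""
    return link_bandwidth(gen, lanes) * 1000 / hdd_mbps

print(link_bandwidth(2, 2))           # my 2.0x2 NVMe slot: 1.0 GB/s
print(link_bandwidth(3, 4))           # a 3.0x4 slot: ~3.9 GB/s
print(round(hdds_to_saturate(3, 4)))  # ~16 disks to saturate 3.0x4
```

The same helper shows why a 3.0×1 SATA controller is fine for four
drives: hdds_to_saturate(3, 1) comes out just under four disks, and
only at the outermost tracks.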
┌───┐ 5.0×4 ┌───┐ 4.0×4 ┌─────────┐   ┌───┐
│M.2┝=======┥CPU┝━━━━━━━┥ Chipset ┝━━━┥M.2│
└───┘       └─┰─┘       └─┰─────┰─┘   └───┘
      5.0×16  ┃           ┃     ┃
            ┌─┸─┐    ┌────┸─┐ ┌─┸────┐
            │GPU│    │PCIe 1│ │PCIe 2│
            └───┘    └──────┘ └──────┘

Here are block diagrams of AM5 B- and X-chipsets and a more verbose
explanation:
https://www.anandtech.com/show/17585/amd-zen-4-ryzen-9-7950x-and-ryzen-5-7600x-review-retaking-the-high-end/4

Theoretically, the PCIe controller in the CPU can split the ×16 GPU
link into 2×8 and other subdivisions, but that would cripple the GPU,
which is the normal use case for such mobos, so the feature is very
seldom found. If I look at all available AM5 mobos that have at least
two ×8 slots, there are just seven out of 126:
https://skinflint.co.uk/?cat=mbam5&xf=19227_2

You can also use the filter to look for boards with 3 ×4 slots.

-- 
Grüße | Greetings | Salut | Qapla’

Please do not share anything from, with or about me on any social
network.

“If I could explain it to the average person, I wouldn't have been
worth the Nobel Prize.” – Richard Feynman