On Tue, 2016-10-04 at 12:52 -0600, Alex Williamson wrote:
> > It's all just idle number games, but what I was thinking of was the
> > difference between plugging a bunch of root-port+upstream+downstreamxN
> > combos directly into pcie-root (flat), vs. plugging the first into
> > pcie-root, and then subsequent ones into e.g. the last downstream port
> > of the previous set. Take the simplest case of needing 63 hotpluggable
> > slots. In the "flat" case, you have:
> >
> >   2 x pcie-root-port
> >   2 x pcie-switch-upstream-port
> >   63 x pcie-switch-downstream-port
> >
> > In the "nested" or "chained" case you have:
> >
> >   1 x pcie-root-port
> >   1 x pcie-switch-upstream-port
> >   32 x pcie-switch-downstream-port
> >   1 x pcie-switch-upstream-port
> >   32 x pcie-switch-downstream-port
>
> You're not thinking in enough dimensions. A single root port can host
> multiple sub-hierarchies on its own. We can have a multi-function
> upstream switch, so you can have 8 upstream ports (00.{0-7}). If we
> implemented ARI on the upstream ports, we could have 256 upstream ports
> attached to a single root port, but of course then we've run out of
> bus numbers before we've even gotten to actual device buses.
>
> Another option: look at the downstream ports. Why do they each need to
> be in separate slots? We have the address space of an entire bus to
> work with, so we can also create multi-function downstream ports, which
> gives us 256 downstream ports per upstream port. Oops, we just ran out
> of bus numbers again, but at least actual devices can be attached.
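For reference, here is a quick sketch of the bus-number arithmetic behind the two layouts quoted above. It assumes the usual accounting (each root port, upstream port, and downstream port puts exactly one new bus number behind it, and pcie.0 itself is bus 0); the function name is mine, not anything from QEMU or libvirt:

```python
# Rough bus-number accounting for the "flat" vs. "nested" topologies.
# Assumption: each root/upstream/downstream port consumes one bus
# number for its secondary bus; pcie.0 is bus 0.

def buses_used(root_ports, upstream_ports, downstream_ports):
    """Total bus numbers consumed, including the root bus (pcie.0)."""
    return 1 + root_ports + upstream_ports + downstream_ports

# Flat: 2 root ports, 2 upstream ports, 63 downstream ports
flat = buses_used(root_ports=2, upstream_ports=2, downstream_ports=63)

# Nested: 1 root port, 2 upstream ports, 64 downstream ports
# (one downstream port is occupied by the second switch, leaving 63)
nested = buses_used(root_ports=1, upstream_ports=2, downstream_ports=64)

print(flat, nested)  # 68 68
```

Under that accounting, both layouts burn the same 68 bus numbers for 63 hotpluggable slots, which is why the choice really is "just idle number games" as far as bus consumption goes.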
What's the advantage in using ARI to stuff more than eight of anything
that's not Endpoint Devices into a single slot? I mean, if we just fill
up all 32 slots of a PCIe Root Bus with 8 PCIe Root Ports each, we
already end up with 256 hotpluggable slots[1]. Why would it be
preferable to use ARI, or even PCIe Switches, instead?

[1] The last slot will have to be limited to 7 PCIe Root Ports if we
don't want to run out of bus numbers

-- 
Andrea Bolognani / Red Hat / Virtualization
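The arithmetic behind the claim and the footnote, spelled out (constant names are mine, just the standard PCI limits):

```python
# Filling every slot of the root bus with multi-function Root Ports.

SLOTS_PER_BUS = 32        # device numbers 0-31 on a PCI(e) bus
FUNCS_PER_SLOT = 8        # functions 0-7 without ARI
TOTAL_BUS_NUMBERS = 256   # bus numbers 0-255 in one PCI domain

root_ports = SLOTS_PER_BUS * FUNCS_PER_SLOT   # 256 root ports

# Each root port needs one bus number for its secondary bus, and the
# root bus (pcie.0) itself takes bus 0, so one root port doesn't fit;
# hence the last slot is limited to 7 ports, giving 255 in practice.
usable_ports = TOTAL_BUS_NUMBERS - 1

print(root_ports, usable_ports)  # 256 255
```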