Details attached... I was trying to pass through multiple GPUs into a single VM; the host and guest topologies look like the outputs below.
Without the help of PCIe switches, data transfer between GPUs is extremely slow, because it has to fall back to memcpy through the CPU. So my question is: can I pass through all the switches and the PCIe root as well, in order to make the NVIDIA P2P feature work?

Host:

[root@localhost ~]# nvidia-smi topo -m
        GPU0   GPU1   GPU2   GPU3   GPU4   GPU5   GPU6   GPU7   CPU Affinity
GPU0     X     PIX    PHB    PHB    SOC    SOC    SOC    SOC    0-9,20-29
GPU1    PIX     X     PHB    PHB    SOC    SOC    SOC    SOC    0-9,20-29
GPU2    PHB    PHB     X     PIX    SOC    SOC    SOC    SOC    0-9,20-29
GPU3    PHB    PHB    PIX     X     SOC    SOC    SOC    SOC    0-9,20-29
GPU4    SOC    SOC    SOC    SOC     X     PIX    PHB    PHB    10-19,30-39
GPU5    SOC    SOC    SOC    SOC    PIX     X     PHB    PHB    10-19,30-39
GPU6    SOC    SOC    SOC    SOC    PHB    PHB     X     PIX    10-19,30-39
GPU7    SOC    SOC    SOC    SOC    PHB    PHB    PIX     X     10-19,30-39

VM:

[root@titan-xp-chenbo-2 ~]# nvidia-smi topo -m
        GPU0   GPU1   GPU2   GPU3   GPU4   GPU5   GPU6   GPU7   CPU Affinity
GPU0     X     PHB    PHB    PHB    PHB    PHB    PHB    PHB    0-15
GPU1    PHB     X     PHB    PHB    PHB    PHB    PHB    PHB    0-15
GPU2    PHB    PHB     X     PHB    PHB    PHB    PHB    PHB    0-15
GPU3    PHB    PHB    PHB     X     PHB    PHB    PHB    PHB    0-15
GPU4    PHB    PHB    PHB    PHB     X     PHB    PHB    PHB    0-15
GPU5    PHB    PHB    PHB    PHB    PHB     X     PHB    PHB    0-15
GPU6    PHB    PHB    PHB    PHB    PHB    PHB     X     PHB    0-15
GPU7    PHB    PHB    PHB    PHB    PHB    PHB    PHB     X     0-15

Legend:

  X    = Self
  SOC  = Connection traversing PCIe as well as the SMP link between CPU sockets (e.g. QPI)
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe switches (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing a single PCIe switch
  NV#  = Connection traversing a bonded set of # NVLinks
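For context, here is a sketch of what I have in mind: QEMU (with a q35 machine type) can emulate a PCIe switch via its ioh3420 root port plus x3130-upstream / xio3130-downstream devices, so a pair of passed-through GPUs could be placed under one emulated switch in the guest. The host BDFs (04:00.0, 05:00.0), the device IDs, and the memory size below are placeholders for illustration, and the rest of the VM configuration (disk, network, etc.) is omitted:

# Minimal sketch: one emulated PCIe switch with two GPUs behind it.
qemu-system-x86_64 -enable-kvm -machine q35 -m 16G \
    -device ioh3420,id=root_port1,bus=pcie.0,chassis=1,slot=1 \
    -device x3130-upstream,id=upstream1,bus=root_port1 \
    -device xio3130-downstream,id=downstream1,bus=upstream1,chassis=2,slot=0 \
    -device xio3130-downstream,id=downstream2,bus=upstream1,chassis=2,slot=1 \
    -device vfio-pci,host=04:00.0,bus=downstream1 \
    -device vfio-pci,host=05:00.0,bus=downstream2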
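And to check whether P2P actually gets enabled inside the guest, I would run the p2pBandwidthLatencyTest sample that ships with the CUDA toolkit, which reports for each GPU pair whether peer access is enabled. The samples path below assumes a default CUDA install and may differ on other setups:

# Inside the VM; path assumes a default CUDA toolkit install.
cd /usr/local/cuda/samples/1_Utilities/p2pBandwidthLatencyTest
make
./p2pBandwidthLatencyTest    # prints per-pair bandwidth/latency and peer-access status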