Hello,

Some high-level questions on the X310 architecture and implementation related to GigE/PCIe support.
The X310 allows host connectivity over 10 GigE or PCIe. Is it implied that if the Ethernet interfaces are being used to connect to the host, PCIe is not being used? In the FPGA, the Ethernet logic is conditionally compiled, while the PCIe logic is always included (which is perfectly understandable).

Assuming there are separate drivers for Ethernet and PCIe, how is the selection of interface (PCIe vs. GigE) incorporated into the software? More specifically, where in the UHD code is it decided whether the low-level FPGA accesses (e.g. peek32, poke32) go over PCIe or over Ethernet? For example, if a piece of UHD code reads the FPGA version register, does it use the PCIe driver (if I am connected only via PCIe) or the Ethernet driver (if I am connected only via GigE)?

I realize this inquiry is more along theoretical lines and might not be very clear, but if anyone can shed light on it, that would be very helpful. Thanks.
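To make the question concrete, here is a minimal C++ sketch of the kind of abstraction I imagine: a single register-access interface with one implementation per physical transport, selected once when the device is opened. All class names, the "addr=" check, and the register address below are hypothetical illustrations, not actual UHD code:

    // Illustrative sketch only: none of these names are real UHD APIs.
    #include <cstdint>
    #include <iostream>
    #include <memory>
    #include <string>

    // Abstract register interface, analogous in spirit to the
    // peek32/poke32 calls mentioned above.
    struct register_iface {
        virtual ~register_iface() = default;
        virtual uint32_t peek32(uint32_t addr) = 0;
        virtual void poke32(uint32_t addr, uint32_t data) = 0;
    };

    // One concrete transport per physical link.
    struct eth_register_iface : register_iface {
        uint32_t peek32(uint32_t addr) override {
            // Would send a control packet over UDP and wait for the reply.
            std::cout << "ETH read  @0x" << std::hex << addr << "\n";
            return 0;
        }
        void poke32(uint32_t addr, uint32_t data) override {
            std::cout << "ETH write @0x" << std::hex << addr << "\n";
        }
    };

    struct pcie_register_iface : register_iface {
        uint32_t peek32(uint32_t addr) override {
            // Would go through the kernel PCIe driver instead.
            std::cout << "PCIe read  @0x" << std::hex << addr << "\n";
            return 0;
        }
        void poke32(uint32_t addr, uint32_t data) override {
            std::cout << "PCIe write @0x" << std::hex << addr << "\n";
        }
    };

    // Hypothetical factory: the transport is chosen once, at device-creation
    // time, based on how the device was addressed (say, an IP address vs.
    // a PCIe resource identifier).
    std::unique_ptr<register_iface> make_iface(const std::string& device_args) {
        if (device_args.rfind("addr=", 0) == 0) {
            return std::make_unique<eth_register_iface>();
        }
        return std::make_unique<pcie_register_iface>();
    }

    int main() {
        auto iface = make_iface("addr=192.168.40.2");
        // Higher-level code is transport-agnostic: this "read the version
        // register" call looks the same whichever link is underneath.
        uint32_t version = iface->peek32(0x0); // hypothetical register address
        (void)version;
    }

Is this roughly the right mental model, i.e. the register reads/writes are virtual calls dispatched to whichever transport was bound at discovery time, so the rest of UHD never needs to know which driver it is talking to?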