On Mon, Jul 27, 2020 at 06:48:12PM -0700, Jonathan Lemon wrote:
> While the current GPU utilized is nvidia, there's nothing in the rest of
> the patches specific to Nvidia - an Intel or AMD GPU interface could be
> equally workable.

I think that is very misleading. It looks like this patch, and all the
ugly MM stuff, is done the way it is *specifically* to match the clunky
nv_p2p interface that only the NVIDIA driver exposes.

Any approach done in tree, where we can actually modify the GPU driver,
would do sane things like have the GPU driver itself create the
MEMORY_DEVICE_PCI_P2PDMA pages, use the P2P DMA API framework, use
dmabuf for the cross-driver attachment, etc, etc (a sketch of that
pattern is at the end of this mail). Of course none of this is possible
with a proprietary driver. I could give you that detailed feedback, but
it would be completely wasted since you can't actually use or implement
it.

That is why the prohibition on building APIs to support out-of-tree
modules exists: we can't do a good job if our hands are tied by being
unable to change something. This is really a textbook example of why
that is the correct philosophy.

If you are serious about advancing this, then the initial patches in a
long road must be focused on building up the core kernel infrastructure
for P2P DMA to a point where netdev could consume it. There have been a
lot of different ideas thrown around over the years on how to do this.

As you've seen, posting patches so tightly coupled to the NVIDIA GPU
implementation just makes people angry, so I advise against doing it
any further.

> I think this is a better patch than all the various implementations of
> the protocol stack in the form of RDMA, driver code and device firmware.

Oh? You mean "better" in the sense that a header-split offload in the
NIC is better liked than a full protocol stack running in the NIC?
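To be concrete about what the sane in-tree pattern looks like, here is
a rough, completely untested sketch. gpu_publish_p2p(), nic_import_p2p()
and GPU_MEM_BAR are made-up placeholders, but pci_p2pdma_add_resource(),
dma_buf_attach() and dma_buf_map_attachment() are the actual in-kernel
APIs I mean:

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

#define GPU_MEM_BAR	0	/* placeholder: whichever BAR holds GPU memory */

/* Exporter side: the GPU driver itself publishes its BAR as P2P memory,
 * which is what creates the MEMORY_DEVICE_PCI_P2PDMA struct pages.
 */
static int gpu_publish_p2p(struct pci_dev *gpu_pdev)
{
	return pci_p2pdma_add_resource(gpu_pdev, GPU_MEM_BAR,
			pci_resource_len(gpu_pdev, GPU_MEM_BAR), 0);
}

/* Importer side: the NIC driver reaches that memory through the generic
 * dmabuf attach/map path - no driver-private nv_p2p style calls anywhere
 * in the chain.
 */
static struct sg_table *nic_import_p2p(struct dma_buf *dbuf,
				       struct device *nic_dev)
{
	struct dma_buf_attachment *attach = dma_buf_attach(dbuf, nic_dev);

	if (IS_ERR(attach))
		return ERR_CAST(attach);

	return dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
}

Jason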