Dan, isn't this issue similar to the direct I/O case? Can you please take a look at the following article: http://lwn.net/Articles/322795/
Regarding the performance improvement from NET_DMA: I don't have concrete numbers, but it should be around 15-20%. My system is I/O coherent.

saeed

On Wed, Jan 15, 2014 at 11:33 PM, Dan Williams <dan.j.willi...@intel.com> wrote:
> On Wed, Jan 15, 2014 at 1:31 PM, Dan Williams <dan.j.willi...@intel.com> wrote:
>> On Wed, Jan 15, 2014 at 1:20 PM, saeed bishara <saeed.bish...@gmail.com> wrote:
>>> Hi Dan,
>>>
>>> I'm using net_dma on my system and I see a meaningful performance
>>> boost when running an iperf receive test.
>>>
>>> As far as I know, net_dma is used by many embedded systems out
>>> there, so removing it might affect their performance.
>>> Can you please elaborate on the exact scenario that causes the memory
>>> corruption?
>>>
>>> Is the scenario mentioned here caused by a "real life" application, or
>>> is it more of a theoretical issue found through manual testing? I was
>>> trying to find the thread describing the failing scenario and couldn't
>>> find it; any pointer would be appreciated.
>>
>> Did you see the referenced commit?
>>
>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=77873803363c
>>
>> This is a real issue in that any app that forks() while receiving data
>> can cause the dma data to be lost. The problem is that the copy
>> operation falls back to the cpu at many locations. Any one of those
>> instances could touch a mapped page and trigger a copy-on-write event,
>> and the dma then completes to the wrong location.
>>
> Btw, do you have benchmark data showing that NET_DMA is beneficial on
> these platforms? I would have expected worse performance on platforms
> without i/o-coherent caches.