If you're doing this for school, I say go for it. If you're doing this for 
performance, I'm not sure how useful the effort will be. We should be able to 
handle hundreds of thousands of MSI-X interrupts per second, as they're 
messages on the PCIe bus rather than a separate asynchronous signal.

I can't speak to the virtualization piece of this, however, and I'm hoping 
someone else has some insight there.

Todd Fujinaka
Software Application Engineer
Networking Division (ND)
Intel Corporation
[email protected]
(503) 712-4565

-----Original Message-----
From: William Tu [mailto:[email protected]] 
Sent: Monday, February 10, 2014 12:35 AM
To: [email protected]
Subject: [E1000-devel] A fully polling mode ixgbevf driver

Hi,

I'm a student at Stony Brook University. I'm thinking about modifying the 
ixgbevf driver so that it can work in a fully polling mode. That is, when the 
network packet receive rate is higher than a threshold, the ixgbevf driver 
could disable interrupts for a period of time.

A few observations motivate the idea:
1. Even with Linux's NAPI, in my iperf experiments at 8-9 Gb/s the interrupt 
rate is still very high, around 80k interrupts per second.
2. Under KVM, every interrupt delivered by ixgbevf triggers at least two VM 
exits, which incurs high overhead under I/O-intensive workloads.

Given these two facts, I plan to modify the ixgbevf driver to support:
1. A packet-receive threshold. When the receive rate is higher than this 
threshold, the driver switches to polling mode.
2. A configurable polling rate. While the driver is in polling mode, the 
polling rate determines how frequently the Linux network stack retrieves 
packets.

Is this worth doing, and does the idea make sense? Alternatively, how could I 
leverage existing code or kernel features to support this?

Thank you and any comments are appreciated!
Regards,
William

_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
