I reloaded ixgbe with MQ=0,0 and RSS=1,1.
Still no luck with speed.
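For context on why the window matters on this path: with ~27 ms RTT, the bandwidth-delay product dwarfs the 64 KB default window that iperf reports (iperf only prints the initial size; Linux autotuning can grow it up to the tcp_rmem/tcp_wmem caps). A rough check, just back-of-the-envelope arithmetic:

```shell
# Bandwidth-delay product for this path: 10 Gbit/s at ~27 ms RTT.
# A single TCP stream needs roughly rate * RTT in flight to fill the pipe.
rtt_ms=27
rate_mbit=10000
bdp_bytes=$(( rate_mbit * 1000 * rtt_ms / 8 ))
echo "BDP at ${rate_mbit} Mbit/s, ${rtt_ms} ms RTT: ${bdp_bytes} bytes"
# -> 33750000 bytes, about 32 MB

# Worth comparing the autotuning caps on both hosts (read-only):
#   sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max net.core.wmem_max
```

If either end caps the window well below the BDP, any loss-recovery difference between the two NICs will be amplified.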

[  3] local xxx.xxx.185.135 port 5001 connected with yy.yy.74.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec   151 MBytes  63.1 Mbits/sec

[  3] local xxx.xxx.185.133 port 5001 connected with yy.yy.74.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec   979 MBytes   411 Mbits/sec

It seems we need to try rate limiting on ixgbe? Please suggest what I
should do. Since this is a WAN there will of course be some loss
somewhere, but it looks like igb recovers the TCP window faster than
ixgbe; could that be the issue?
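To test Alex's rate-limiting idea without any hardware features, a token-bucket qdisc on the ixgbe port should do. This is just a sketch (needs root); the interface name eth1 is assumed and the 1 Gbit/s rate is picked arbitrarily to stay near what the 82574 achieves:

```shell
# Hypothetical sketch: soft-limit the ixgbe port with tc-tbf to see whether
# the 82599 overrunning the WAN path is what causes the drops.
IFACE=eth1
RATE=1gbit

# tbf needs burst >= rate/HZ; with HZ=1000 at 1 Gbit/s that is
# 10^9 / 8 / 1000 = 125000 bytes, so 256k leaves headroom.
tc qdisc add dev "$IFACE" root tbf rate "$RATE" burst 256k latency 50ms

# Verify it took effect:
tc qdisc show dev "$IFACE"

# Remove it again when done testing:
# tc qdisc del dev "$IFACE" root
```

If throughput improves under the limit, that points at drops from the 82599 bursting faster than the path can absorb, rather than at the driver itself.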


2013/8/13 Alexander Duyck <[email protected]>:
> One other thing that separates the 82574 and the 82599 is that 82599 is
> a multiqueue interface.  Try loading the driver with RSS=1,1 to see if
> this issue might somehow be related to multiqueue.
>
> Other than that the only other thing I can think of would be to start
> rate limiting the ixgbe port itself as the only other possibility I can
> think of that might be hurting the performance is the fact that the
> 82599 can produce traffic so much faster than the 82574 and this may be
> leading to packets being dropped somewhere.
>
> Thanks,
>
> Alex
>
> On 08/12/2013 02:50 PM, Alexey Stoyanov wrote:
>> One important thing I didn't mention from the start: this is the real
>> internet, so it is a WAN, not a LAN.
>> I see an average of 27 ms latency between the hosts.
>>
>> --- yy.yy.74.11 ping statistics ---
>> 10 packets transmitted, 10 received, 0% packet loss, time 9012ms
>> rtt min/avg/max/mdev = 27.203/27.444/27.791/0.230 ms
>>
>>
>> I've reloaded ixgbe with InterruptThrottleRate=1,1 (default)
>> [31906.918256] ixgbe: Interrupt Mode set to 2
>> [31906.918259] ixgbe: Multiple Queue Support Enabled
>> [31906.918264] ixgbe: Direct Cache Access (DCA) set to 0
>> [31906.918271] ixgbe: 0000:03:00.0: ixgbe_check_options: DCA is disabled
>> [31906.918274] ixgbe: Receive-Side Scaling (RSS) set to 8
>> [31906.918276] ixgbe: Virtual Machine Device Queues (VMDQ) set to 0
>> [31906.918278] ixgbe: I/O Virtualization (IOV) set to 0
>> [31906.918279] ixgbe: L2 Loopback Enable set to 0
>> [31906.918281] ixgbe: 0000:03:00.0: ixgbe_check_options: dynamic
>> interrupt throttling enabled
>> [31906.918283] ixgbe: Low Latency Interrupt TCP Port set to 5001
>> [31906.918285] ixgbe: Low Latency Interrupt on Packet Size set to 1500
>> [31906.918287] ixgbe: Low Latency Interrupt on TCP Push flag Enabled
>> [31906.918289] ixgbe: 0000:03:00.0: ixgbe_check_options: FCoE Offload
>> feature enabled
>> [31907.083032] ixgbe 0000:03:00.0: irq 77 for MSI/MSI-X
>> [31907.083045] ixgbe 0000:03:00.0: irq 78 for MSI/MSI-X
>> [31907.083055] ixgbe 0000:03:00.0: irq 79 for MSI/MSI-X
>> [31907.083065] ixgbe 0000:03:00.0: irq 80 for MSI/MSI-X
>> [31907.083074] ixgbe 0000:03:00.0: irq 81 for MSI/MSI-X
>> [31907.085174] ixgbe 0000:03:00.0: (PCI Express:5.0GT/s:Width x8)
>> 90:e2:ba:40:89:24
>> [31907.085262] ixgbe 0000:03:00.0 eth1: MAC: 2, PHY: 15, SFP+: 5, PBA
>> No: E68793-006
>> [31907.085266] ixgbe 0000:03:00.0 eth1: Enabled Features: RxQ: 8 TxQ: 8
>> [31907.085301] ixgbe 0000:03:00.0 eth1: Intel(R) 10 Gigabit Network 
>> Connection
>> [31907.085531] ixgbe: Interrupt Mode set to 2
>> [31907.085534] ixgbe: Multiple Queue Support Enabled
>> [31907.085536] ixgbe: Direct Cache Access (DCA) set to 0
>> [31907.085538] ixgbe: 0000:03:00.1: ixgbe_check_options: DCA is disabled
>> [31907.085540] ixgbe: Receive-Side Scaling (RSS) set to 8
>> [31907.085542] ixgbe: Virtual Machine Device Queues (VMDQ) set to 0
>> [31907.085544] ixgbe: I/O Virtualization (IOV) set to 0
>> [31907.085545] ixgbe: L2 Loopback Enable set to 0
>> [31907.085547] ixgbe: 0000:03:00.1: ixgbe_check_options: dynamic
>> interrupt throttling enabled
>> [31907.085549] ixgbe: Low Latency Interrupt TCP Port set to 5001
>> [31907.085550] ixgbe: Low Latency Interrupt on Packet Size set to 1500
>> [31907.085552] ixgbe: Low Latency Interrupt on TCP Push flag Enabled
>> [31907.085554] ixgbe: 0000:03:00.1: ixgbe_check_options: FCoE Offload
>> feature enabled
>> [31907.325658] ixgbe 0000:03:00.0 eth1: detected SFP+: 5
>> [31907.784996] ixgbe 0000:03:00.0 eth1: NIC Link is Up 10 Gbps, Flow
>> Control: RX/TX
>> [31908.228991] ixgbe 0000:03:00.1: irq 82 for MSI/MSI-X
>> [31908.229002] ixgbe 0000:03:00.1: irq 83 for MSI/MSI-X
>> [31908.229007] ixgbe 0000:03:00.1: irq 84 for MSI/MSI-X
>> [31908.229012] ixgbe 0000:03:00.1: irq 85 for MSI/MSI-X
>> [31908.229016] ixgbe 0000:03:00.1: irq 86 for MSI/MSI-X
>> [31908.230732] ixgbe 0000:03:00.1: (PCI Express:5.0GT/s:Width x8)
>> 90:e2:ba:40:89:25
>> [31908.230818] ixgbe 0000:03:00.1 eth3: MAC: 2, PHY: 1, PBA No: E68793-006
>> [31908.230820] ixgbe 0000:03:00.1 eth3: Enabled Features: RxQ: 8 TxQ: 8
>> [31908.230893] ixgbe 0000:03:00.1 eth3: Intel(R) 10 Gigabit Network 
>> Connection
>> [31913.504204] ixgbe 0000:03:00.0 eth1: detected SFP+: 5
>> [31913.755868] ixgbe 0000:03:00.0 eth1: NIC Link is Up 10 Gbps, Flow
>> Control: None
>>
>> The last line is from when I disabled flow control.
>> I disabled it on both the switch and the card (it had actually been
>> disabled for a long time; I only turned it back on today to test,
>> before I run out of ideas on how to resolve this issue).
>>
>> After that I ran the tests again and got the same (very close) result:
>>
>> Optical 82599
>> ------------------------------------------------------------
>> Client connecting to yy.yy.74.11, TCP port 5001
>> Binding to local address xxx.xxx.185.135
>> TCP window size: 64.0 KByte (default)
>> ------------------------------------------------------------
>> [  3] local xxx.xxx.185.135 port 5001 connected with yy.yy.74.11 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-20.0 sec   207 MBytes  86.7 Mbits/sec
>>
>> Copper 82574
>> ------------------------------------------------------------
>> Client connecting to yy.yy.74.11, TCP port 5001
>> Binding to local address xxx.xxx.185.133
>> TCP window size: 64.0 KByte (default)
>> ------------------------------------------------------------
>> [  3] local xxx.xxx.185.133 port 5001 connected with yy.yy.74.11 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-20.1 sec   645 MBytes   270 Mbits/sec
>>
>> Again, the 82574 is at least 3 times faster.
>>
>> This is the ethtool -S eth1 report again, after reloading the driver.
>> NIC statistics:
>>      rx_packets: 3474041
>>      tx_packets: 6528442
>>      rx_bytes: 450307293
>>      tx_bytes: 9525085597
>>      rx_errors: 0
>>      tx_errors: 0
>>      rx_dropped: 0
>>      tx_dropped: 0
>>      multicast: 22
>>      collisions: 0
>>      rx_over_errors: 0
>>      rx_crc_errors: 0
>>      rx_frame_errors: 0
>>      rx_fifo_errors: 0
>>      rx_missed_errors: 0
>>      tx_aborted_errors: 0
>>      tx_carrier_errors: 0
>>      tx_fifo_errors: 0
>>      tx_heartbeat_errors: 0
>>      rx_pkts_nic: 3472437
>>      tx_pkts_nic: 6525653
>>      rx_bytes_nic: 463886577
>>      tx_bytes_nic: 9547445047
>>      lsc_int: 3
>>      tx_busy: 0
>>      non_eop_descs: 0
>>      broadcast: 730
>>      rx_no_buffer_count: 0
>>      tx_timeout_count: 0
>>      tx_restart_queue: 0
>>      rx_long_length_errors: 0
>>      rx_short_length_errors: 0
>>      tx_flow_control_xon: 0
>>      rx_flow_control_xon: 0
>>      tx_flow_control_xoff: 0
>>      rx_flow_control_xoff: 0
>>      rx_csum_offload_errors: 0
>>      alloc_rx_page_failed: 0
>>      alloc_rx_buff_failed: 0
>>      rx_no_dma_resources: 0
>>      hw_rsc_aggregated: 0
>>      hw_rsc_flushed: 0
>>      fdir_match: 0
>>      fdir_miss: 0
>>      fdir_overflow: 0
>>      fcoe_bad_fccrc: 0
>>      fcoe_last_errors: 0
>>      rx_fcoe_dropped: 0
>>      rx_fcoe_packets: 0
>>      rx_fcoe_dwords: 0
>>      fcoe_noddp: 0
>>      fcoe_noddp_ext_buff: 0
>>      tx_fcoe_packets: 0
>>      tx_fcoe_dwords: 0
>>      os2bmc_rx_by_bmc: 0
>>      os2bmc_tx_by_bmc: 0
>>      os2bmc_tx_by_host: 0
>>      os2bmc_rx_by_host: 0
>>      tx_queue_0_packets: 752433
>>      tx_queue_0_bytes: 1106159204
>>      tx_queue_1_packets: 774829
>>      tx_queue_1_bytes: 1122892047
>>      tx_queue_2_packets: 837308
>>      tx_queue_2_bytes: 1224541683
>>      tx_queue_3_packets: 743260
>>      tx_queue_3_bytes: 1071268064
>>      tx_queue_4_packets: 775167
>>      tx_queue_4_bytes: 1137021910
>>      tx_queue_5_packets: 866713
>>      tx_queue_5_bytes: 1271941588
>>      tx_queue_6_packets: 1041911
>>      tx_queue_6_bytes: 1537638579
>>      tx_queue_7_packets: 736821
>>      tx_queue_7_bytes: 1053622522
>>      tx_queue_8_packets: 0
>>      tx_queue_8_bytes: 0
>>      tx_queue_9_packets: 0
>>      tx_queue_9_bytes: 0
>>      tx_queue_10_packets: 0
>>      tx_queue_10_bytes: 0
>>      tx_queue_11_packets: 0
>>      tx_queue_11_bytes: 0
>>      tx_queue_12_packets: 0
>>      tx_queue_12_bytes: 0
>>      tx_queue_13_packets: 0
>>      tx_queue_13_bytes: 0
>>      tx_queue_14_packets: 0
>>      tx_queue_14_bytes: 0
>>      tx_queue_15_packets: 0
>>      tx_queue_15_bytes: 0
>>      tx_queue_16_packets: 0
>>      tx_queue_16_bytes: 0
>>      tx_queue_17_packets: 0
>>      tx_queue_17_bytes: 0
>>      tx_queue_18_packets: 0
>>      tx_queue_18_bytes: 0
>>      tx_queue_19_packets: 0
>>      tx_queue_19_bytes: 0
>>      tx_queue_20_packets: 0
>>      tx_queue_20_bytes: 0
>>      tx_queue_21_packets: 0
>>      tx_queue_21_bytes: 0
>>      tx_queue_22_packets: 0
>>      tx_queue_22_bytes: 0
>>      tx_queue_23_packets: 0
>>      tx_queue_23_bytes: 0
>>      tx_queue_24_packets: 0
>>      tx_queue_24_bytes: 0
>>      tx_queue_25_packets: 0
>>      tx_queue_25_bytes: 0
>>      tx_queue_26_packets: 0
>>      tx_queue_26_bytes: 0
>>      tx_queue_27_packets: 0
>>      tx_queue_27_bytes: 0
>>      tx_queue_28_packets: 0
>>      tx_queue_28_bytes: 0
>>      tx_queue_29_packets: 0
>>      tx_queue_29_bytes: 0
>>      tx_queue_30_packets: 0
>>      tx_queue_30_bytes: 0
>>      tx_queue_31_packets: 0
>>      tx_queue_31_bytes: 0
>>      tx_queue_32_packets: 0
>>      tx_queue_32_bytes: 0
>>      tx_queue_33_packets: 0
>>      tx_queue_33_bytes: 0
>>      tx_queue_34_packets: 0
>>      tx_queue_34_bytes: 0
>>      tx_queue_35_packets: 0
>>      tx_queue_35_bytes: 0
>>      tx_queue_36_packets: 0
>>      tx_queue_36_bytes: 0
>>      tx_queue_37_packets: 0
>>      tx_queue_37_bytes: 0
>>      tx_queue_38_packets: 0
>>      tx_queue_38_bytes: 0
>>      tx_queue_39_packets: 0
>>      tx_queue_39_bytes: 0
>>      tx_queue_40_packets: 0
>>      tx_queue_40_bytes: 0
>>      tx_queue_41_packets: 0
>>      tx_queue_41_bytes: 0
>>      tx_queue_42_packets: 0
>>      tx_queue_42_bytes: 0
>>      tx_queue_43_packets: 0
>>      tx_queue_43_bytes: 0
>>      tx_queue_44_packets: 0
>>      tx_queue_44_bytes: 0
>>      tx_queue_45_packets: 0
>>      tx_queue_45_bytes: 0
>>      tx_queue_46_packets: 0
>>      tx_queue_46_bytes: 0
>>      tx_queue_47_packets: 0
>>      tx_queue_47_bytes: 0
>>      tx_queue_48_packets: 0
>>      tx_queue_48_bytes: 0
>>      tx_queue_49_packets: 0
>>      tx_queue_49_bytes: 0
>>      tx_queue_50_packets: 0
>>      tx_queue_50_bytes: 0
>>      tx_queue_51_packets: 0
>>      tx_queue_51_bytes: 0
>>      tx_queue_52_packets: 0
>>      tx_queue_52_bytes: 0
>>      tx_queue_53_packets: 0
>>      tx_queue_53_bytes: 0
>>      tx_queue_54_packets: 0
>>      tx_queue_54_bytes: 0
>>      tx_queue_55_packets: 0
>>      tx_queue_55_bytes: 0
>>      tx_queue_56_packets: 0
>>      tx_queue_56_bytes: 0
>>      tx_queue_57_packets: 0
>>      tx_queue_57_bytes: 0
>>      tx_queue_58_packets: 0
>>      tx_queue_58_bytes: 0
>>      tx_queue_59_packets: 0
>>      tx_queue_59_bytes: 0
>>      tx_queue_60_packets: 0
>>      tx_queue_60_bytes: 0
>>      tx_queue_61_packets: 0
>>      tx_queue_61_bytes: 0
>>      tx_queue_62_packets: 0
>>      tx_queue_62_bytes: 0
>>      tx_queue_63_packets: 0
>>      tx_queue_63_bytes: 0
>>      tx_queue_64_packets: 0
>>      tx_queue_64_bytes: 0
>>      tx_queue_65_packets: 0
>>      tx_queue_65_bytes: 0
>>      tx_queue_66_packets: 0
>>      tx_queue_66_bytes: 0
>>      tx_queue_67_packets: 0
>>      tx_queue_67_bytes: 0
>>      tx_queue_68_packets: 0
>>      tx_queue_68_bytes: 0
>>      tx_queue_69_packets: 0
>>      tx_queue_69_bytes: 0
>>      tx_queue_70_packets: 0
>>      tx_queue_70_bytes: 0
>>      rx_queue_0_packets: 463967
>>      rx_queue_0_bytes: 33270955
>>      rx_queue_1_packets: 346811
>>      rx_queue_1_bytes: 70602531
>>      rx_queue_2_packets: 445265
>>      rx_queue_2_bytes: 61583232
>>      rx_queue_3_packets: 426811
>>      rx_queue_3_bytes: 33946327
>>      rx_queue_4_packets: 473199
>>      rx_queue_4_bytes: 35975233
>>      rx_queue_5_packets: 316566
>>      rx_queue_5_bytes: 32463326
>>      rx_queue_6_packets: 529871
>>      rx_queue_6_bytes: 140641116
>>      rx_queue_7_packets: 471551
>>      rx_queue_7_bytes: 41824573
>>      rx_queue_8_packets: 0
>>      rx_queue_8_bytes: 0
>>      rx_queue_9_packets: 0
>>      rx_queue_9_bytes: 0
>>      rx_queue_10_packets: 0
>>      rx_queue_10_bytes: 0
>>      rx_queue_11_packets: 0
>>      rx_queue_11_bytes: 0
>>      rx_queue_12_packets: 0
>>      rx_queue_12_bytes: 0
>>      rx_queue_13_packets: 0
>>      rx_queue_13_bytes: 0
>>      rx_queue_14_packets: 0
>>      rx_queue_14_bytes: 0
>>      rx_queue_15_packets: 0
>>      rx_queue_15_bytes: 0
>>      rx_queue_16_packets: 0
>>      rx_queue_16_bytes: 0
>>      rx_queue_17_packets: 0
>>      rx_queue_17_bytes: 0
>>      rx_queue_18_packets: 0
>>      rx_queue_18_bytes: 0
>>      rx_queue_19_packets: 0
>>      rx_queue_19_bytes: 0
>>      rx_queue_20_packets: 0
>>      rx_queue_20_bytes: 0
>>      rx_queue_21_packets: 0
>>      rx_queue_21_bytes: 0
>>      rx_queue_22_packets: 0
>>      rx_queue_22_bytes: 0
>>      rx_queue_23_packets: 0
>>      rx_queue_23_bytes: 0
>>      rx_queue_24_packets: 0
>>      rx_queue_24_bytes: 0
>>      rx_queue_25_packets: 0
>>      rx_queue_25_bytes: 0
>>      rx_queue_26_packets: 0
>>      rx_queue_26_bytes: 0
>>      rx_queue_27_packets: 0
>>      rx_queue_27_bytes: 0
>>      rx_queue_28_packets: 0
>>      rx_queue_28_bytes: 0
>>      rx_queue_29_packets: 0
>>      rx_queue_29_bytes: 0
>>      rx_queue_30_packets: 0
>>      rx_queue_30_bytes: 0
>>      rx_queue_31_packets: 0
>>      rx_queue_31_bytes: 0
>>      rx_queue_32_packets: 0
>>      rx_queue_32_bytes: 0
>>      rx_queue_33_packets: 0
>>      rx_queue_33_bytes: 0
>>      rx_queue_34_packets: 0
>>      rx_queue_34_bytes: 0
>>      rx_queue_35_packets: 0
>>      rx_queue_35_bytes: 0
>>      rx_queue_36_packets: 0
>>      rx_queue_36_bytes: 0
>>      rx_queue_37_packets: 0
>>      rx_queue_37_bytes: 0
>>      rx_queue_38_packets: 0
>>      rx_queue_38_bytes: 0
>>      rx_queue_39_packets: 0
>>      rx_queue_39_bytes: 0
>>      rx_queue_40_packets: 0
>>      rx_queue_40_bytes: 0
>>      rx_queue_41_packets: 0
>>      rx_queue_41_bytes: 0
>>      rx_queue_42_packets: 0
>>      rx_queue_42_bytes: 0
>>      rx_queue_43_packets: 0
>>      rx_queue_43_bytes: 0
>>      rx_queue_44_packets: 0
>>      rx_queue_44_bytes: 0
>>      rx_queue_45_packets: 0
>>      rx_queue_45_bytes: 0
>>      rx_queue_46_packets: 0
>>      rx_queue_46_bytes: 0
>>      rx_queue_47_packets: 0
>>      rx_queue_47_bytes: 0
>>      rx_queue_48_packets: 0
>>      rx_queue_48_bytes: 0
>>      rx_queue_49_packets: 0
>>      rx_queue_49_bytes: 0
>>      rx_queue_50_packets: 0
>>      rx_queue_50_bytes: 0
>>      rx_queue_51_packets: 0
>>      rx_queue_51_bytes: 0
>>      rx_queue_52_packets: 0
>>      rx_queue_52_bytes: 0
>>      rx_queue_53_packets: 0
>>      rx_queue_53_bytes: 0
>>      rx_queue_54_packets: 0
>>      rx_queue_54_bytes: 0
>>      rx_queue_55_packets: 0
>>      rx_queue_55_bytes: 0
>>      rx_queue_56_packets: 0
>>      rx_queue_56_bytes: 0
>>      rx_queue_57_packets: 0
>>      rx_queue_57_bytes: 0
>>      rx_queue_58_packets: 0
>>      rx_queue_58_bytes: 0
>>      rx_queue_59_packets: 0
>>      rx_queue_59_bytes: 0
>>      rx_queue_60_packets: 0
>>      rx_queue_60_bytes: 0
>>      rx_queue_61_packets: 0
>>      rx_queue_61_bytes: 0
>>      rx_queue_62_packets: 0
>>      rx_queue_62_bytes: 0
>>      rx_queue_63_packets: 0
>>      rx_queue_63_bytes: 0
>>      rx_queue_64_packets: 0
>>      rx_queue_64_bytes: 0
>>      rx_queue_65_packets: 0
>>      rx_queue_65_bytes: 0
>>      rx_queue_66_packets: 0
>>      rx_queue_66_bytes: 0
>>      rx_queue_67_packets: 0
>>      rx_queue_67_bytes: 0
>>      rx_queue_68_packets: 0
>>      rx_queue_68_bytes: 0
>>      rx_queue_69_packets: 0
>>      rx_queue_69_bytes: 0
>>      rx_queue_70_packets: 0
>>      rx_queue_70_bytes: 0
>>      tx_pb_0_pxon: 0
>>      tx_pb_0_pxoff: 0
>>      tx_pb_1_pxon: 0
>>      tx_pb_1_pxoff: 0
>>      tx_pb_2_pxon: 0
>>      tx_pb_2_pxoff: 0
>>      tx_pb_3_pxon: 0
>>      tx_pb_3_pxoff: 0
>>      tx_pb_4_pxon: 0
>>      tx_pb_4_pxoff: 0
>>      tx_pb_5_pxon: 0
>>      tx_pb_5_pxoff: 0
>>      tx_pb_6_pxon: 0
>>      tx_pb_6_pxoff: 0
>>      tx_pb_7_pxon: 0
>>      tx_pb_7_pxoff: 0
>>      rx_pb_0_pxon: 0
>>      rx_pb_0_pxoff: 0
>>      rx_pb_1_pxon: 0
>>      rx_pb_1_pxoff: 0
>>      rx_pb_2_pxon: 0
>>      rx_pb_2_pxoff: 0
>>      rx_pb_3_pxon: 0
>>      rx_pb_3_pxoff: 0
>>      rx_pb_4_pxon: 0
>>      rx_pb_4_pxoff: 0
>>      rx_pb_5_pxon: 0
>>      rx_pb_5_pxoff: 0
>>      rx_pb_6_pxon: 0
>>      rx_pb_6_pxoff: 0
>>      rx_pb_7_pxon: 0
>>      rx_pb_7_pxoff: 0
>>
>>
>> 2013/8/13 Alexander Duyck <[email protected]>:
>>> Based on the info you provided I would say one possible red flag would
>>> be the flow control bits in the statistics.  Specifically:
>>>>      tx_flow_control_xon: 0
>>>>      rx_flow_control_xon: 164
>>>>      tx_flow_control_xoff: 0
>>>>      rx_flow_control_xoff: 164
>>>>      rx_csum_offload_errors: 1
>>> The fact that you are getting rx_flow_control messages would indicate
>>> that the 10Gb port is being stopped by the link partner.  One thing you
>>> could try in order to test this further is to disable flow control on
>>> the 82599 port.  To do that you can run the following where ethX is the
>>> name of the ixgbe interface you are currently using:
>>>   ethtool -A ethX tx off rx off autoneg off
>>>
>>> Also you may want to set the 82599 and 82574 to the same interrupt
>>> rate.  Currently it looks like the 82599 is being limited to 4000
>>> interrupts per second while the 82574 is being allowed up to 20,000.  If
>>> you let the 82599 use its default throttle rate setting that should be
>>> comparable to the dynamic throttling provided by the e1000 driver and
>>> should improve TCP performance by reducing latency.
>>
>>
>



-- 
Support Team
www.seedbox.org.ua

ICQ: 235-615-397
Skype: seedboxorgua
QQ: 1794064147

_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
