Makes perfect sense - but with Open vSwitch 1.7.1 I'm seeing stuff like:

2013-04-06T20:03:55Z|6338190|timeval|WARN|82 ms poll interval (24 ms user, 52 
ms system) is over 52 times the weighted mean interval 2 ms (2423146772 samples)
2013-04-07T03:48:04Z|6378252|timeval|WARN|context switches: 0 voluntary, 3 
involuntary
2013-04-08T08:08:30Z|6589218|poll_loop|WARN|wakeup due to [POLLIN] on fd 18 
(unknown anon_inode:[eventpoll]) at ../lib/dpif-linux.c:1183 (93% CPU usage)

CPU usage above 90% on a 2.4GHz E5645 with ~300MB/s and ~300K packets/s 
(noticeable SSH latency, though sub-second).
CPU usage above 60% with ~120MB/s and ~100K packets/s.
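
As a rough sanity check (commands from memory - treat this as a sketch), this 
should show whether packets are being handled in the kernel flow table or 
punted up to ovs-vswitchd in userspace, which would explain the CPU usage and 
latency:

  ovs-dpctl show                   # lookups: hit / missed / lost per datapath
  ovs-dpctl dump-flows | wc -l     # flows currently in the kernel flow table
  ovs-appctl coverage/show         # which parts of ovs-vswitchd are busy
  top -p "$(pidof ovs-vswitchd)"   # confirm ovs-vswitchd is the one spinning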

I know the storage traffic will use jumbo frames and so may be easier for 
Open vSwitch to handle than normal traffic - but I'd like to keep the storage 
latency as low as possible.
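
For the jumbo frames I'm assuming it's just a matter of raising the MTU on the 
physical NICs and on the OVS internal port that carries the storage IP - 
interface names here are only examples:

  ip link set dev eth4 mtu 9000
  ip link set dev eth5 mtu 9000
  ip link set dev storage0 mtu 9000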

Any hints on what to do here? Does something look wrong, or is this expected?
Can anybody recommend running storage with or without Open vSwitch?

Thanks

Regards
Kristoffer

On 05/04/2013, at 19.00.43, Peter Phaal <peter.ph...@inmon.com> wrote:

> I can't speak to the performance impact of running storage traffic over OVS. 
> We have storage running over OVS in our small XenServer pool and haven't 
> seen any ill effects, but that isn't much of a test. However, the sFlow 
> instrumentation in OVS gives useful visibility into storage activity, for 
> example, looking at AoE traffic:
> 
> http://blog.sflow.com/2011/03/aoe.html
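> 
> For reference, enabling sFlow on a bridge looks roughly like this (collector 
> address, sampling and polling rates are just examples):
> 
>   ovs-vsctl -- --id=@s create sflow agent=eth0 target="10.0.0.1:6343" \
>       sampling=1000 polling=20 -- set bridge br0 sflow=@s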
> 
> Regards,
> Peter
> 
> On Apr 5, 2013, at 1:32 AM, Kristoffer Egefelt <kristof...@itoc.dk> wrote:
> 
>> Thanks for the input - I may not have explained myself properly though - I'm 
>> not considering PCI passthrough.
>> 
>> What I'd like to confirm is whether there are any problems, latency- or 
>> performance-wise, with running storage traffic over OVS - in this case using 
>> SR-IOV VFs inside dom0 to separate the traffic.
>> 
>> What I think I need is two separate networks on a Xen host - one for Open 
>> vSwitch and one for NFS storage traffic.
>> The problem is that I would then need to create two LACP bonds on the network 
>> cards - which may not be possible.
>> 
>> The easy way would be to have just one LACP bond, connect OVS to it, and 
>> create an OVS interface for the storage and others for the VMs - but I'm 
>> unsure whether that would hurt performance.
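>> 
>> Roughly what I have in mind - an untested sketch, with bridge, bond and port 
>> names only as examples:
>> 
>>   ovs-vsctl add-br br0
>>   ovs-vsctl add-bond br0 bond0 eth4 eth5 lacp=active bond_mode=balance-tcp
>>   ovs-vsctl add-port br0 storage0 -- set interface storage0 type=internal
>>   ip link set storage0 up
>>   ip addr add 192.168.10.11/24 dev storage0   # example storage address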
>> 
>> Do you guys run NFS / iSCSI / AoE traffic over OVS, or on separate interfaces 
>> (physical, or with SR-IOV (Intel) or nPAR (Broadcom))?
>> 
>> Thanks ;-)
>> 
>> Regards
>> Kristoffer
>> 
>> 
>> On 05/04/2013, at 07.41.22, Ramana Reddy <gtvrre...@gmail.com> wrote:
>> 
>>> You can use 10Gb NIC cards, but when you assign a VNIC to each VM, the 
>>> traffic goes directly through the VNIC and not through OVS. The observed 
>>> behaviour is that if the traffic goes via OVS you may not see 10Gb speeds, 
>>> and it varies. What is the need to send traffic via OVS? Is there any 
>>> decision-making happening on the OVS side?
>>> 
>>> 
>>> 
>>> On Thu, Apr 4, 2013 at 6:39 PM, Kristoffer Egefelt <kristof...@itoc.dk> 
>>> wrote:
>>> Hi,
>>> 
>>> I want to use 10Gig Intel x520 NICs - should I:
>>> 
>>> - Run storage (iSCSI/NFS) over OVS?
>>> - Create VFs and run storage and OVS on separate interfaces?
>>> - Buy more physical NICs even if I don't need the bandwidth?
>>> 
>>> Any experiences with SR-IOV, storage latency or other issues that would 
>>> suggest one option over the others?
>>> 
>>> Thanks
>>> 
>>> Regards
>>> Kristoffer
>> 
> 

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
