Hi,

We are using Open vSwitch to connect virtual machines running on KVM
hypervisors (Fedora 13, openvswitch 1.0.1).
Each hypervisor runs its own ovs-vswitchd daemon to connect the
different VMs on that host. The VMs use virtio as their network driver.

Communication between VMs on different hypervisors passes through an
ovs-vswitchd daemon on what we call a "communication server".
Communication between VMs and the outside world also passes through
this "communication server". The vswitchd instances on the hypervisors
are connected to the vswitchd on the "communication server" via GRE
tunnels.
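
For reference, the tunnel ports on each hypervisor bridge are created
roughly like this (the bridge name and remote IP below are only
placeholders, not our real values):

    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
        options:remote_ip=192.0.2.10

with a matching GRE port on the "communication server" side pointing
back at each hypervisor.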

Each ovs-vswitchd has its ovsdb configuration daemon running on this
"communication server".

That's how our platform was designed.

We have 12 hypervisors, with about 100 VMs running at the moment. The
"communication server" hosts the 12 ovsdb daemons plus the ovsdb for
the central ovs-vswitchd. All these servers are linked through a
10 Gbps fiber switch.

Everything works fine and connectivity is OK: VMs on the same VLAN can
ping each other even when they are on different hypervisors, and they
can also ping physical servers on the same VLAN outside, etc. That was
our goal, so we are quite happy. But from time to time we get very,
very slow traffic and we can't find where the problem is. File
transfers that usually take 2 or 3 minutes can take more than 60
minutes to complete... and after a few minutes everything seems to get
back to normal by itself (we don't do anything to fix it).

Can you give us some tips or pointers on how to diagnose Open vSwitch?

If the "communication server" is heavy loaded (it's not the case on
our platform), can it slow down communication even between VMs on the
same hypervisor? (due to slow response from ovsdb?)
Max traffic bandwith betweens VMs is about 20MBps, that is seems ok to you?

We are just beginning to dig into this problem, so any help would be
greatly appreciated ;) Thanks.



Best regards,
-- 
Edouard Bourguignon
