on fd 8 (x.x.x.x:43173<->y.y.y.y:) at ../lib/stream-fd-unix.c:124 (0% CPU usage)
ovs-ofctl: talking to tcp:y.y.y.y: (End of file)
Thanks,
On Thu, Jan 22, 2015 at 11:47 AM, Luiz Henrique Ozaki wrote:
> Hi all,
>
> I have some instances running on OVS with the XenServer ... the assertion.
XenServer 6.2
It seems that this only happens when ovs-vswitchd is under heavy load and I
run ovs-ofctl add-flow.
Does anyone know what could be triggering this?
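A minimal Python sketch of retrying the add-flow when vswitchd drops the
connection ("End of file", e.g. because it restarted after the assertion);
the bridge name, flow, and timeout values below are illustrative assumptions,
not from this thread.

import subprocess
import time

def add_flow_with_retry(bridge, flow, attempts=5, delay=2.0):
    for _ in range(attempts):
        # --timeout bounds how long ovs-ofctl waits for vswitchd to answer.
        result = subprocess.run(
            ["ovs-ofctl", "--timeout=10", "add-flow", bridge, flow])
        if result.returncode == 0:
            return True
        time.sleep(delay)  # back off while vswitchd recovers
    return False

add_flow_with_retry("xenbr1", "priority=100,ip,actions=normal")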
Regards,
--
[]'s
Luiz Henrique Ozaki
___
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
> > Flood the packet (along the associated broadcast tree)
> > and learn its MAC (associate with the ingress port)
> >
> > This is done for all packets, not just broadcast.
>
> ...except that broadcast packets skip the "lookup destination" step.
is broadcasting data (in this case, which I think shouldn't be happening) or
just sending to the port that has the MAC.
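To make the flood-and-learn behavior discussed above concrete, a small Python
sketch of a learning switch; the table and function names are illustrative,
not OVS internals.

BROADCAST = "ff:ff:ff:ff:ff:ff"
mac_table = {}  # source MAC -> ingress port it was last seen on

def handle_packet(in_port, src_mac, dst_mac, flood, send):
    # Learn: associate the source MAC with the ingress port
    # (done for all packets, not just broadcast).
    mac_table[src_mac] = in_port
    # Broadcasts skip the destination lookup and are always flooded
    # along the broadcast tree; unknown unicast is flooded too.
    if dst_mac == BROADCAST or dst_mac not in mac_table:
        flood(in_port)            # every port except in_port
    else:
        send(mac_table[dst_mac])  # straight to the learned port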
--
[]'s
Luiz Henrique Ozaki
I'm going to try OpenFlow rules then...
I'm not an expert in networking, so if this doesn't make sense, please tell
me, guys...
On Thu, Sep 2, 2010 at 5:48 PM, Jesse Gross wrote:
> On Thu, Sep 2, 2010 at 9:01 AM, Luiz Henrique Ozaki wrote:
> > Yeah, I don't know the difficulty...
On Thu, Sep 2, 2010 at 12:50 PM, Ben Pfaff wrote:
> On Thu, Sep 2, 2010 at 8:19 AM, Luiz Henrique Ozaki wrote:
> > Unwanted packets received by the VMs don't seem like a good idea...
>
> If you have a controller, it can prevent this from happening. We have
> thought about adding...
> (Hosts
> or VMs sending packets to that MAC will realize that they haven't had
> any responses in a while, time out that MAC, and fall back to ARPing for
> it, which effectively rate-limits the traffic.)
>
> Does that sound reasonable?
>
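A small Python sketch of the MAC-aging idea described above; the table layout
and the 300-second timeout are assumptions for illustration, not quoted OVS
behavior. Once an entry ages out, senders fall back to ARPing, which
effectively rate-limits traffic to a vanished MAC.

import time

IDLE_TIMEOUT = 300.0   # seconds; an assumption, not a quoted OVS value
mac_table = {}         # MAC -> (port, last_seen)

def learn(mac, port):
    mac_table[mac] = (port, time.time())

def lookup(mac):
    entry = mac_table.get(mac)
    if entry is None:
        return None
    port, last_seen = entry
    if time.time() - last_seen > IDLE_TIMEOUT:
        del mac_table[mac]   # aged out; the sender must re-ARP
        return None
    return port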
--
[]'s
Luiz Henrique Ozaki
Now, I'm just checking whether this is a problem we should be worried about
or can forget about...
I'll go check with the network team here to dig around the physical switch.
Any more info, debugging, etc. is welcome.
Best regards,
--
[]'s
Luiz Henrique Ozaki
I think that this is the correct fix.
>
> Thanks,
>
> Ben.
>
> On Wed, Aug 25, 2010 at 08:31:22PM -0300, Luiz Henrique Ozaki wrote:
> > Perfect!!
> >
> > Commenting out the bond_wait(br) call solved the high CPU usage.
> >
> > If you need more debugging and testing...
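To illustrate why a wait call can pin the CPU, a toy Python sketch of a poll
loop in which a *_wait() hook requests an immediate wakeup on every pass; the
class and function names are illustrative, not the actual ovs-vswitchd code.

import select
import time

class Poller:
    def __init__(self):
        self.immediate_wake = False
    def block(self):
        # Zero timeout if a wakeup was requested, else wait for events
        # (here just a 1 s timeout for the demo).
        timeout = 0 if self.immediate_wake else 1.0
        self.immediate_wake = False
        select.select([], [], [], timeout)

def bond_wait_buggy(poller):
    poller.immediate_wake = True  # "something to do" on every iteration

poller = Poller()
start, spins = time.time(), 0
while time.time() - start < 1.0:
    bond_wait_buggy(poller)  # comment this out and the loop blocks
    poller.block()
    spins += 1
print(spins, "iterations in 1 s")  # a huge number means a busy loop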
> ...and see if
> it makes a difference.
>
> Thanks,
>
> Ben.
>
> On Wed, Aug 25, 2010 at 06:37:09PM -0300, Luiz Henrique Ozaki wrote:
> > # ovs-appctl bond/show bond0
> > updelay: 200 ms
> > downdelay: 0 ms
> > next rebalance: 8481 ms
> > slave eth3: enabled
Changed the 5.6 init to load /proc/net just to check if this
was the issue, but the CPU usage is still high, with or without
/proc/net/bonding.
Changed back to the original init again.
On Wed, Aug 25, 2010 at 6:04 PM, Ben Pfaff wrote:
> On Wed, Aug 25, 2010 at 05:59:35PM -0300, Luiz Henrique Ozaki wrote:
On Wed, Aug 25, 2010 at ..:03:16PM -0700, Ben Pfaff wrote:
> > On Wed, Aug 25, 2010 at 02:53:05PM -0300, Luiz Henrique Ozaki wrote:
> > > I'm trying to create a bond interface with Open vSwitch in a XenServer 5.6
> > > and I'm getting high CPU load for ovs-vswitchd, but it's working...
system@dp5:
flows: cur:0, soft-max:1024, hard-max:1048576
ports: cur:2, max:1024
groups: max:16
lookups: frags:0, hit:0, missed:0, lost:0
queues: max-miss:100, max-action:100
port 0: xenbr1 (internal)
port 1: eth1