Hi Han,
Thanks for your input - much appreciated.
I will try and reproduce their results.
Regards
Kristoffer
On 31/07/2014, at 11.23.07, Han Zhou wrote:
> Hi Kristoffer,
>
> Sorry for late response.
>
> On Tue, Jul 29, 2014 at 4:30 PM, Kristoffer Egefelt
> wrote:
I actually asked them:
http://lists.opencontrail.org/pipermail/users_lists.opencontrail.org/2014-July/000338.html
As I understood it, they are able to offload from VM to VM, without
segmentation - maybe I’m wrong, and what actually is happening is that their
vrouter module, on the receiving sid
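For reference, a quick way to see whether segmentation/receive offloads are
actually in play is something like the following sketch (eth0 and vif1.0 are
just placeholders for the physical NIC and a VM vif):

# list the TSO/GSO/GRO offload state on the physical NIC and a vif
ethtool -k eth0 | egrep 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'
ethtool -k vif1.0 | egrep 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'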
Hi Han,
>> GRE: 9000 bytes
>> NIC: 9000 bytes
>>
> Where did you get NIC: 9000 bytes? What's the MTU of your physical interface?
tcpdump on the physical interface shows that GRE packets are segmented to ~9000
bytes if the VM’s vif has MTU 9000. The MTU on the physical NIC is 9192.
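For completeness, this is roughly how I check the MTUs and watch the
encapsulated traffic on the wire (eth0 stands in for the 10G interface):

# MTU of the physical NIC
ip link show eth0 | grep mtu
# capture a few GRE frames (IP protocol 47) to inspect their size
tcpdump -n -i eth0 -c 20 'ip proto 47'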
> Otherwise it is normal [...]
me help from the NIC with offloading.
>
> On Fri, Jul 25, 2014 at 4:05 AM, Kristoffer Egefelt
> wrote:
>> I just confirmed that without the GRE tunnel, it works.
>> So it seems the problem is in the GRE handling in or after openvswitch.
>>
>> I’m not sure if th
As I understand it, OVS 2.1 uses the kernel’s GRE
implementation?
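For context, the tunnel itself is set up the usual way - a minimal sketch,
with the bridge name and remote IP as placeholders:

ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.2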
On 25/07/2014, at 11.15.30, Kristoffer Egefelt wrote:
> Hi Flavio,
>
>> On Thu, Jul 24, 2014 at 01:47:32PM +0200, Kristoffer Egefelt wrote:
>> [...]
>>> - Packets egressing openvswitch over a gre tunn
Hi Flavio,
> On Thu, Jul 24, 2014 at 01:47:32PM +0200, Kristoffer Egefelt wrote:
> [...]
>> - Packets egressing openvswitch over a gre tunnel are segmented to
>> 1500 bytes. If I configure mtu 9000 in the VM, the packets are
>> segmented to 9000 bytes on the gre tunnel
Hi,
I’m looking into how this is possible:
http://opencontrail.org/evaluating-opencontrail-virtual-router-performance/
achieving 10G line rate between two VMs on different hypervisors over a tunnel,
using MTU 1500 in the VMs.
I see (at least) two possible issues with my setup:
- Packets egressi
Hi,
When a gratuitous arp is seen on a bridge with action=normal, the FDB is
updated -
but with an explicit output list (action=...,LOCAL), or even the same ports that
action=normal would flood to, the FDB is not updated.
This all happens on a tunnel bridge, where the broadcast rule cannot have
action=normal, as it then r
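To illustrate, the broadcast rule on the tunnel bridge can be written either
way (bridge name and port numbers are placeholders); only the normal variant
makes OVS learn from the gratuitous ARP:

# variant 1: learning/FDB update happens
ovs-ofctl add-flow br-tun "priority=10,dl_dst=ff:ff:ff:ff:ff:ff,actions=normal"
# variant 2: same flooding behaviour, but no FDB update
ovs-ofctl add-flow br-tun "priority=10,dl_dst=ff:ff:ff:ff:ff:ff,actions=output:1,output:2,LOCAL"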
I see, thanks, got it working using patches and a single tunnel bridge, which
are then connected to all customer bridges, like this:
ovs-vsctl del-br br1
ovs-vsctl add-br br1 -- set bridge br1 other_config:hwaddr=f6:8d:91:7b:99:01
other_config:disable-in-band=true
ovs-vsctl add-port br1 gre1 --
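The rest follows the same pattern - roughly, each customer bridge is hooked to
br1 with a patch-port pair (all names below are placeholders):

ovs-vsctl add-port br1 patch-cust1 -- set interface patch-cust1 type=patch options:peer=patch-tun1
ovs-vsctl add-port br-cust1 patch-tun1 -- set interface patch-tun1 type=patch options:peer=patch-cust1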
Hi,
I’m getting "Tunneling decided against output" with more than one bridge using
flow based tunneling, with openvswitch master and kernel 3.11 (and 3.12).
I might have misunderstood how flow based tunneling should work. I’m trying to
configure one bridge for each customer/vm and then create a flow
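Roughly what I’m trying (bridge, ports, key and remote IP are placeholders,
and set_field on tun_dst is my assumption of the right action for flow-based
tunnels):

# one shared tunnel port that takes its destination and key from the flow
ovs-vsctl add-port br-tun gre0 -- set interface gre0 type=gre options:remote_ip=flow options:key=flow
# per-customer flow filling in the tunnel key and remote endpoint (port 2 = gre0)
ovs-ofctl add-flow br-tun "in_port=1,actions=set_tunnel:100,set_field:192.0.2.2->tun_dst,output:2"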
I’m wondering what the options are for isolating broadcast between VIFs on the
same bridge.
- Is it possible with flow rules? (I don’t see how it’s possible to filter
broadcast, especially when DHCP is needed - see the sketch below)
- Is using a separate bridge per customer an option? (I can’t seem to connect
the same
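What I had in mind with flow rules is something along these lines (bridge and
port numbers are placeholders; port 1 is assumed to be the uplink with the
DHCP server behind it):

# let DHCP client broadcasts from the VIF reach only the uplink
ovs-ofctl add-flow br0 "priority=110,in_port=5,udp,tp_src=68,tp_dst=67,actions=output:1"
# drop all other broadcast coming from that VIF
ovs-ofctl add-flow br0 "priority=100,in_port=5,dl_dst=ff:ff:ff:ff:ff:ff,actions=drop"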
Ah, it's the FDB in the NIC - iproute2 has a bridge command which works -
thanks.
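For the record, the workaround is along these lines (MAC and device name are
placeholders):

# add a static entry for the OVS-attached VM's MAC to the NIC's embedded FDB
bridge fdb add 02:00:00:00:00:01 dev eth0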
Kind regards
Kristoffer
On 09/08/2013, at 23.36.37, Jesse Gross wrote:
> On Fri, Aug 9, 2013 at 2:32 PM, Kristoffer Egefelt wrote:
>> Hi,
>>
>> Network between VMs connected to openvswitch
Hi,
Network between VMs connected to openvswitch and VMs connected with SR-IOV
interfaces (VFs) on the same host does not seem to work, due to special ARP
handling in the ixgbevf driver, where ARP traffic is not forwarded to the
physical interface.
If the NIC does not know about the ovs connect
Hi,
I have a setup with linux kernel 3.2, xen 4.1.2 and openvswitch 1.7.3 - on 10G
infrastructure.
The NAT firewall is connected to openvswitch, which causes connection delays and
dropped packets under high load.
With packet counts > 150,000/s, delay rises and CPU load exceeds 90%.
Packet counts > 400
Hi Jesse
The system is still experiencing delay with more than 12000 flows.
Is there anything I can do about this - will getting a faster CPU help?
Thanks
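For reference, what I look at is the datapath hit/miss counters, and the knob
I have been adjusting is the per-bridge flow eviction threshold - the bridge
name is a placeholder and the option name below is my assumption for this OVS
generation:

# datapath statistics: lookups hit/missed/lost and current flow count
ovs-dpctl show
# allow more flows in the kernel datapath before eviction kicks in
ovs-vsctl set bridge br0 other-config:flow-eviction-threshold=20000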
On 17/04/2013, at 10.19.21, Kristoffer Egefelt wrote:
> OK - any suggestions on how to calculate the right value ?
> Could you e
CPU usage is still high, > 94%.
But the latency actually went back down from 1.5 seconds to 500ms, which is
normal.
So this looks like it worked, even though the CPU usage remains high…
Thanks.
On 16/04/2013, at 21.52.46, Jesse Gross wrote:
> On Tue, Apr 16, 2013 at 3:47 AM, Kristoffer Egefelt
at 2:13 PM, Kristoffer Egefelt wrote:
>> OK thanks - however ovs-dpctl show:
>>
>> lookups: hit:142051685241 missed:16517079493 lost:215200
>> flows: 1544
>>
>> with cpu utilization around 80% and ~250.000 p/s
>>
>> (I hope this is the correct way
that all traffic going
through OVS would experience this delay?
Thanks for your help.
Regards
Kristoffer
On 08/04/2013, at 18.03.13, Jesse Gross wrote:
> On Mon, Apr 8, 2013 at 1:14 AM, Kristoffer Egefelt wrote:
>> Makes perfect sense - but with openvswitch 1.7.1 I'm see
>
> Regards,
> Peter
>
> On Apr 5, 2013, at 1:32 AM, Kristoffer Egefelt wrote:
>
>> Thanks for the input - I may not have explained myself properly though - I'm
>> not considering pci-passthrough.
>>
>> What I would like to confirm is, if there
varies. What is the need of sending
> traffic via OVS? Is there any decision making happening at the OVS side?
>
>
>
> On Thu, Apr 4, 2013 at 6:39 PM, Kristoffer Egefelt wrote:
> Hi,
>
> I want to use 10Gig Intel x520 NICs - should I:
>
> - Run storage (iscsi/
Hi,
I want to use 10Gig Intel x520 NICs - should I:
- Run storage (iscsi/nfs) over OVS?
- Create VFs and run storage and OVS on separate interfaces? (see the sketch below)
- Buy more physical NICs even if I don't need the bandwidth?
Any experiences with SR-IOV, storage latency or other issues to suggest one
over t
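For the VF option, the sketch I have in mind is simply this (interface name is
a placeholder; on older kernels the ixgbe max_vfs module parameter does the
same):

# create 4 virtual functions on the x520 port (sysfs interface, kernel 3.8+)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
ip link show eth0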
Hi,
Could I drop hardware Layer 2 switching entirely, and route traffic directly
from the hypervisor via OSPF?
I'm trying to eliminate STP, utilize multiple links, avoid downtime during
upgrades of the switch stack, etc. I was wondering if the following would be possible:
[truncated ASCII diagram of the two uplinks]
hash 161: 0 kB load
hash 166: 0 kB load
hash 178: 0 kB load
On Mon, Mar 5, 2012 at 6:06 PM, Ben Pfaff wrote:
> Yes, I'd expect that to work (assuming that "ab" means "application").
>
> What does "ovs-appctl bond/show " print?
>
> On Mon, Mar 05,
At any rate, balance-tcp balances
> > on L2 through L4 packet headers, including source and destination ports.
> >
> > On Fri, Mar 02, 2012 at 11:10:18AM +0100, Kristoffer Egefelt wrote:
> > > Ahh, yes - balance-tcp.
> > > I'ts balancing using d
Ahh, yes - balance-tcp.
It's balancing using destination IP/port - is it possible to do source
IP/port balancing?
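For reference, this is roughly how the bond is set up on my side (bridge,
bond and interface names are placeholders; balance-tcp needs LACP on the
physical switch):

ovs-vsctl add-bond br0 bond0 eth0 eth1
ovs-vsctl set port bond0 bond_mode=balance-tcp lacp=active
# inspect how the hashes are distributed over the slaves
ovs-appctl bond/show bond0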
Thanks
On Thu, Mar 1, 2012 at 4:34 PM, Ben Pfaff wrote:
> On Thu, Mar 01, 2012 at 03:55:41PM +0100, Kristoffer Egefelt wrote:
> > Outbound traffic seems to depend
Hi,
Outbound traffic seems to depend on SLB bonding (where a source switch port
is needed for every session to utilize a link).
Is it somehow possible to use policies like the ones the Linux kernel bonding
driver offers (layer2+3 / layer3+4)?
Thanks
Regards
Kristoffer
On Thu, May 5, 2011 at 5:39 PM, Justin Pettit wrote:
>
> On May 5, 2011, at 4:03 AM, Kristoffer Egefelt wrote:
>
> > From the pool master I get:
> >
> > #ovs-vsctl get-controller xapi5
> > ssl:10.10.3.250:6633
> >
> > Probably because I tried the
that the behavior changed, but you can change the fail mode to
> not "fail open" by running the following:
>
> ovs-vsctl set-fail-mode xapi5 secure
>
> --Justin
>
>
> On May 5, 2011, at 12:13 AM, Kristoffer Egefelt wrote:
>
> > Ah, that could be
Regards
Kristoffer
On Mon, May 2, 2011 at 10:01 PM, Ben Pfaff wrote:
> On Mon, May 02, 2011 at 01:43:36PM +0200, Kristoffer Egefelt wrote:
> > I'm trying to add rules to ovs to prevent virtual machines stealing ip
> > addresses from each other.
> > Using XCP, based on XE
Hi list,
I'm trying to add rules to ovs to prevent virtual machines from stealing IP
addresses from each other.
Using XCP, based on XenServer 5.6 FP1, with ovs version 1.0.2.
xapi5 is the switch.
port 5 (xapi13) is vlan8
port 8 (vif53.0) is the virtual machine I'm trying to lock down, with
ip: 10.10.8.7
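The kind of rules I'm trying look like this (priorities are arbitrary, and the
exact match syntax is my best guess for this ovs version):

# allow ARP and IPv4 from vif53.0 (port 8) only with source 10.10.8.7
ovs-ofctl add-flow xapi5 "priority=100,in_port=8,arp,nw_src=10.10.8.7,actions=normal"
ovs-ofctl add-flow xapi5 "priority=100,in_port=8,ip,nw_src=10.10.8.7,actions=normal"
# drop everything else coming from that port
ovs-ofctl add-flow xapi5 "priority=90,in_port=8,actions=drop"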