Hi Damjan,
I'm trying to build and install VPP in LXC. I got the VPP source code, built
it inside the container, and created the ".deb" packages. Now I've run into a
problem: after installing VPP, it cannot bind interfaces. In other words, VPP
shows only local0 when I execute the "vppctl show interface" command and does
not list any of the physical interfaces.
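
A common cause of this is that the NICs are not bound to a DPDK-compatible
driver on the host and/or not whitelisted in VPP's startup.conf. A minimal
sketch, assuming the interfaces are meant to be DPDK-managed inside the
container (the PCI address below is a placeholder):

  # on the host: bind the NIC to a userspace driver
  dpdk-devbind.py --bind=vfio-pci 0000:02:00.0

  # /etc/vpp/startup.conf inside the container
  dpdk {
    dev 0000:02:00.0
  }

The container also needs access to hugepages and the relevant /dev nodes
(e.g. /dev/vfio); without them VPP comes up with only local0.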
> -----Original Message-----
> From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
> Sent: Friday, February 17, 2017 5:06 PM
> To: Zhou, Danny
> Cc: vpp-dev
> Subject: Re: [vpp-dev] memif - packet memory interface
>
>
> On 17 Feb 2017, at 06:30, Zhou, Danny wrote:
>
> Very Interesting...
>
> Damjan,
>
> Do you think it makes sense to use virtio_user/vhost_user pairs to connect
> two VPP instances running inside two containers?
>
> Essentially, the memif and virtio_user/vhost_user pairs both leverage
> shared memory.
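
For context, both approaches exchange packets over memory shared between the
two VPP processes; they differ in the protocol used to negotiate the rings.
A rough sketch of the vhost_user/virtio_user variant, assuming both instances
use the DPDK plugin and can reach a shared socket path (the vdev names and
parameters follow DPDK conventions and are illustrative only):

  # container 1: VPP exposes a vhost-user backend
  dpdk {
    vdev eth_vhost0,iface=/var/run/shared/sock0
  }

  # container 2: VPP attaches as a virtio-user frontend
  dpdk {
    vdev virtio_user0,path=/var/run/shared/sock0
  }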
Excellent!
Thanks,
-daw-
On 02/16/2017 02:43 PM, Damjan Marion (damarion) wrote:
Looks like I was too optimistic when it comes to the syscalls I was planning to use.
I was not able to get more than 3 Mpps, so I switched to standard shared memory.
After a bit of tuning, I'm getting the following results:
broadwell 3.2GHz, TurboBoost disabled:
IXIA - XL710-40G - VPP1 - MEMIF - VPP2 -
I got the first pings running over the new shared memory interface driver.
Code [1] is still very fragile, but basic packet forwarding works ...
This interface defines a master/slave relationship.
Some characteristics:
- slave can run inside un-privileged containers
- master can run inside container, bu
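
The code at [1] was still very early at this point; purely for illustration,
the memif CLI that later landed in VPP looks roughly like the following (the
exact keywords, interface names and default socket path are assumptions and
may differ by release; both instances must be able to reach the same memif
socket, e.g. via a shared /run/vpp directory):

  # VPP instance 1 (master side)
  vppctl create interface memif id 0 master
  vppctl set interface state memif0/0 up
  vppctl set interface ip address memif0/0 192.168.1.1/24

  # VPP instance 2 (slave side, e.g. inside an un-privileged container)
  vppctl create interface memif id 0 slave
  vppctl set interface state memif0/0 up
  vppctl set interface ip address memif0/0 192.168.1.2/24

Once both sides are up, a ping between 192.168.1.1 and 192.168.1.2 exercises
the shared-memory path end to end.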