Thanks a lot Billy, it worked. Earlier I was trying to manually mount
hugepages inside the container, which didn't work.
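In case it helps anyone searching the archives, a minimal sketch of the
mapping-in approach, assuming a Docker-based container and 2 MB hugepages
(the image name is a placeholder):

  # on the host: reserve hugepages and make sure hugetlbfs is mounted
  sysctl -w vm.nr_hugepages=1024
  mount -t hugetlbfs none /dev/hugepages

  # map the host hugepage mount into the container rather than
  # trying to mount hugetlbfs from inside it
  docker run -it --privileged \
      -v /dev/hugepages:/dev/hugepages \
      my-vpp-image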
Best Regards,
Omer
On 2018-08-16 01:04, Billy wrote:
> So I would need to see how you are starting your container, but I think you
> are not mapping in hugepages. I do need to mor
Hi all,
I'm new to VPP and tried to compile the sample_plugin out of the source tree. I
followed the instructions in the wiki
(https://wiki.fd.io/view/VPP/How_To_Build_The_Sample_Plugin) and found that
vpp_plugin_configure is missing.
This tool was deleted in this commit:
https://gerrit.fd.io/r/gitw
It might be that the documentation is outdated, but we definitely build the
sample plugin out-of-tree as part of each commit verify job...
--
Damjan
> On 16 Aug 2018, at 10:48, ololjiiu373...@163.com wrote:
>
> Hi all,
>
> I'm new to VPP and tried to compile the sample_plugin out of the source tree. I
Can you share more details on the hardware used?
Both Dave and I noticed this on a Denverton device (Atom C3xxx) with onboard
10G ports.
I was able to repro this with DPDK testpmd (invocation below), so our issue,
at least, is not VPP related.
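If anyone wants to reproduce it the same way, a testpmd run roughly like this
is enough to take VPP out of the picture (core list and PCI address are
placeholders for your setup):

  ./testpmd -l 0-2 -n 4 -w 0000:03:00.0 -- -i --rxq=1 --txq=1
  testpmd> start
  testpmd> show port stats 0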
--
Damjan
> On 9 Aug 2018, at 00:45, j...@yeager1.com wrote:
>
> Quic
Hi,
I need to use a symmetric RSS hash in VPP while using the DPDK plugin, so
that both sides of my TCP flows land on the same worker.
Could you please advise on the correct way of achieving this in VPP via the
hardware (say, an Intel 82599 NIC)? I believe it is possible to configure an RSS
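For context, the usual trick at the DPDK level is to load a symmetric Toeplitz
key (the repeating 0x6d5a pattern), so that swapping src/dst hashes to the same
queue. A minimal sketch against the DPDK ethdev API, purely as illustration;
whether the VPP dpdk plugin exposes this is exactly my question:

  #include <rte_ethdev.h>

  /* Repeating 0x6d5a makes the Toeplitz hash symmetric: flows
   * (A->B) and (B->A) hash to the same RSS queue. */
  static uint8_t sym_rss_key[40] = {
      0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
      0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
      0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
      0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
      0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
  };

  struct rte_eth_conf conf = {
      .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
      .rx_adv_conf.rss_conf = {
          .rss_key     = sym_rss_key,
          .rss_key_len = sizeof(sym_rss_key),
          .rss_hf      = ETH_RSS_IP | ETH_RSS_TCP,
      },
  };
  /* then: rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf); */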
Thanks Florin,
It works fine now. :)
And two more questions:
1. Will eventfd replace the condvar approach? Will it become the default for
the event queue?
2. Will the socket API replace the VPE API in the future?
Thanks!
/Yalei
On Wed, Aug 15, 2018 at 11:29 PM, Florin Coras wrote:
> Hi Yalei,
>
> You definitely need api-
You should be able to do that by configuring the number of RX queues to equal
the number of workers + 1 (main thread) and setting the RX placement to match
the TX queue assignment. On the TX side VPP statically maps queues to threads,
so the main thread always uses queue 0, worker 0 uses queue 1, and so on.
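A minimal sketch of what that could look like with two workers (interface name
and PCI address are placeholders):

  # startup.conf: one RX queue per worker, plus one for main
  dpdk {
    dev 0000:06:00.0 {
      num-rx-queues 3
      num-tx-queues 3
    }
  }

  # at the CLI: check and adjust which worker polls which RX queue
  vpp# show interface rx-placement
  vpp# set interface rx-placement TenGigabitEthernet6/0/0 queue 1 worker 0
  vpp# set interface rx-placement TenGigabitEthernet6/0/0 queue 2 worker 1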
Issue is that this
Hi Yalei,
Great! For now, I’m planning on keeping both the condvars and eventfds, at
least until we completely understand the performance implications.
As for your second question, the session layer will probably at one point start
requesting that all attachments be done over the socket trans
Hi,
I want to configure VPP to use the shared memory API.
https://docs.fd.io/vpp/17.10/libmemif_doc.html
I could not find an example to follow. Does anyone have experience with this?
If so, could you share it with me?
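From the docs, a minimal VPP-side sketch would look something like the
following; I'm not sure it's the intended workflow, and the command name seems
to vary by release (17.10 used "create memif", newer trees use "create
interface memif"):

  vpp# create memif id 0 master
  vpp# show interface          # note the new memif interface name
  vpp# set interface state memif0/0 up   # name may differ on older releases
  vpp# set interface ip address memif0/0 192.168.1.1/24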
Thanks,
Aleksander,
This problem should be easy to figure out if you can gdb the code (a sketch
follows below). When the very first slave interface is added to the bonding
group via the command “bond add BondEthernet0 GigabitEtherneta/0/0/1”,
- The PTX machine schedules the interface with the periodic timer via
lacp_schedule
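A minimal way to poke at this with gdb, without having to guess exact symbol
names (rbreak takes a regex):

  # attach to the running vpp process
  gdb -p $(pidof vpp)
  (gdb) rbreak ^lacp_      # breakpoints on every lacp_* function
  (gdb) continue
  # then, in another shell, add the slave interface again:
  vppctl bond add BondEthernet0 GigabitEtherneta/0/0/1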
Hi,
I am trying to bond two Intel 82599 NICs on an Intel Atom C3000, but I am
getting a uio_interrupt error and BondEthernet0 is down. Each NIC works
by itself, without bonding.
[ 12.473426] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 12.480149] CPU: 2 PID: 0 Comm: swapper/2 Not tain
Hi,
I am trying to forward packets using the VPP classifier by following the
"VPP Sandbox/router" page on fd.io.
I am using VPP 18.04, but I am getting this error while configuring a classify
session:
session:
DBGvpp# classify table mask l3 ip4 p
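For comparison, a minimal sketch of the classify CLI shape, based on the
upstream CLI help (table settings, match values, and the interface name are
examples only):

  DBGvpp# classify table mask l3 ip4 proto buckets 16
  DBGvpp# classify session acl-hit-next permit table-index 0 match l3 ip4 proto 6
  DBGvpp# set interface input acl intfc GigabitEthernet0/8/0 ip4-table 0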
Thanks! Got it!
/Yalei
On Fri, Aug 17, 2018 at 12:00 AM, Florin Coras wrote:
> Hi Yalei,
>
> Great! For now, I’m planning on keeping both the condvars and eventfds, at
> least until we completely understand the performance implications.
>
> As for your second question, the session layer will probably at one po