Thomas,
Thanks for the reply. I unloaded the virtio_net module and
then tried executing testpmd, but the entire VM crashed.
Let me know if you want me to collect any specific data.
[root@localhost ~]# lsmod
Module                  Size  Used by
fuse                   76063  3
ppdev
Hi,
Well, GTP is the main use case.
We end up with a GTP tunnel between the two machines.
Ordinarily with the 82599, all the data ends up on a single queue and
therefore must be polled by a single core. Bottleneck.
But in general, if I want to employ all the CPU cores' horsepower simultaneous
Hi Bryan,
Regarding your 1st point, the single core becomes the RX bottleneck, which is
clearly not desirable.
I am not sure how to use what you mentioned in your 2nd point; is
there some DPDK API which lets me configure this? Kindly let me know.
Regards
-Prashant
From: Benson, Brya
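For reference, a minimal sketch of the usual multi-queue RSS configuration
through the DPDK ethdev API, in case that is what the 2nd point referred to.
setup_rss, the queue counts and the threshold values are illustrative, and
the names follow the DPDK releases of this period:

/* Hedged sketch: configure a port so the NIC spreads flows across
 * nb_rx_q RX queues via RSS. Caveat from this thread: RSS hashes
 * packet headers, so a single GTP tunnel (one outer tuple) still
 * lands on one queue. */
#include <rte_ethdev.h>
#include <rte_lcore.h>

static int
setup_rss(uint8_t port, uint16_t nb_rx_q, struct rte_mempool *pool)
{
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = {
            .rss_key = NULL,          /* use the default hash key */
            .rss_hf  = ETH_RSS_IPV4,  /* hash on IPv4 addresses */
        },
    };
    /* Threshold values are illustrative ones for the 82599. */
    static const struct rte_eth_rxconf rx_conf = {
        .rx_thresh = { .pthresh = 8, .hthresh = 8, .wthresh = 4 },
    };
    static const struct rte_eth_txconf tx_conf = {
        .tx_thresh = { .pthresh = 36, .hthresh = 0, .wthresh = 0 },
        .tx_free_thresh = 0,          /* 0 = use PMD defaults */
        .tx_rs_thresh = 0,
    };
    uint16_t q;
    int ret = rte_eth_dev_configure(port, nb_rx_q, 1, &conf);

    if (ret < 0)
        return ret;
    for (q = 0; q < nb_rx_q; q++) {
        ret = rte_eth_rx_queue_setup(port, q, 128, rte_socket_id(),
                                     &rx_conf, pool);
        if (ret < 0)
            return ret;
    }
    ret = rte_eth_tx_queue_setup(port, 0, 512, rte_socket_id(), &tx_conf);
    if (ret < 0)
        return ret;
    return rte_eth_dev_start(port);
}

Each worker core then polls its own queue with rte_eth_rx_burst(port, q, ...).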
Prashant,
1) I thought the same, but I was pleasantly surprised at how much a single core
can RX and distribute (from a single 10G port). It was a while back, but in my
experimentation with well-distributed incoming flows, I found nearly identical
bottleneck points between polling using one core
Hello,
05/12/2013 16:42, Michael Quicquaro :
> This is a good discussion and I hope Intel can see and benefit from it.
Don't forget that this project is Open Source.
So you can submit your patches for review.
Thanks for participating
--
Thomas
05/12/2013 16:19, Srinivasan J :
> tried executing testpmd, but the entire VM crashed.
Booting a VM, removing the virtio_net module and following the doc should work:
http://dpdk.org/doc/virtio-net-pmd
If it doesn't work, we need some logs to understand your problem.
--
Thomas
Hello,
04/12/2013 05:37, Srinivasan J :
> I am getting the "virtio-net device is already used by another driver"
Could you try "rmmod virtio_net" ?
Please tell us if it solves your problem.
--
Thomas
Hi Stephen,
The awfulness depends upon the use case.
I have, e.g., a use case where I want this round-robin behaviour.
I just want the NIC to give me a facility to use this.
Regards
-Prashant
-Original Message-
From: Stephen Hemminger [mailto:step...@networkplumber.org]
Sent: Thursday, Dece
This is a good discussion and I hope Intel can see and benefit from it.
For my "usecase", I don't necessarily need round robin on a per packet
level, but simply some normalized distribution among core queues that has
nothing to do with anything inside the packet. A good solution perhaps
could be
Hi,
It's a real pity that the Intel 82599 NIC (and possibly others) doesn't have
simple round-robin scheduling of packets across the configured queues.
I have requested this of Intel earlier, and am using this forum to request it
again -- please, please put this facility in the NIC so that if I drop N queues there and c
Hi,
If the traffic you manage is above MPLS or GTP encapsulations, then you can
use cards that provide flexible hash functions. The Chelsio cxgb5 provides a
combination of "offset", length and tuple that may help.
The only reason I would have loved to get a pure round-robin feature was to
pass certain
Prashant,
I assume your use case is not one of IP/UDP/TCP - or, if it is, you are dealing
with a single tuple that is not evenly distributed.
You have a few options with the NIC that I can think of.
1) Use a single core to RX each port's frames and use your own software
solution to RR to worker
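A minimal sketch of that first option, assuming one core drains the port
with rte_eth_rx_burst() and pushes each packet round-robin into per-worker
rte_rings; rx_distribute, BURST and the ring layout are illustrative:

/* Sketch: one RX core distributes packets round-robin to workers,
 * independent of anything inside the packet. Each worker dequeues
 * from its own ring, e.g. with rte_ring_dequeue_burst(). */
#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define BURST 32

static void
rx_distribute(uint8_t port, struct rte_ring **rings, unsigned nb_workers)
{
    struct rte_mbuf *pkts[BURST];
    unsigned next = 0;

    for (;;) {
        uint16_t i, n = rte_eth_rx_burst(port, 0, pkts, BURST);

        for (i = 0; i < n; i++) {
            /* Next worker in line, regardless of packet contents. */
            if (rte_ring_enqueue(rings[next], pkts[i]) == -ENOBUFS)
                rte_pktmbuf_free(pkts[i]);  /* worker ring full: drop */
            next = (next + 1) % nb_workers;
        }
    }
}

The ceiling is then whatever that one core can poll and enqueue, which is
consistent with the single-core RX numbers described above.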
Hi,
My setup is an Ubuntu VM with two 1GE ports bound to the igb_uio module.
I reserved 64 huge pages.
When I try to run the helloworld, test, or testpmd programs, I see that they
silently abort after trying to set up memory.
$ sudo examples/helloworld/build/helloworld -c 0xf -n 1
EAL: Detected lco
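If the abort really happens during EAL memory setup, a first step is to
check the value returned by rte_eal_init(), as the bundled examples do; a
minimal sketch (the causes named in the comment are an assumption, the
usual suspects for hugepage mapping failures):

/* Sketch: report an EAL init failure instead of exiting silently. */
#include <stdio.h>
#include <rte_eal.h>

int
main(int argc, char **argv)
{
    int ret = rte_eal_init(argc, argv);

    if (ret < 0) {
        /* Typical causes: hugetlbfs not mounted, or fewer huge
         * pages reserved than the requested -m / default memory. */
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    printf("EAL ready (consumed %d args)\n", ret);
    return 0;
}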