On Sat, Jan 23, 2016 at 5:49 PM, Luigi Rizzo wrote:
> On Sat, Jan 23, 2016 at 10:43 AM, Marcus Cenzatti
> wrote:
> >
> >
> > On 1/23/2016 at 1:31 PM, "Adrian Chadd" wrote:
> >>
> >>For random src/dst ports and IPs and on the chelsio t5 40gig
> >>hardware,
> >>I was getting what, uhm, 40mil tx pps and around 25ish mil rx pps?
On Sat, Jan 23, 2016 at 10:43 AM, Marcus Cenzatti wrote:
>
>
> On 1/23/2016 at 1:31 PM, "Adrian Chadd" wrote:
>>
>>For random src/dst ports and IPs and on the chelsio t5 40gig
>>hardware,
>>I was getting what, uhm, 40mil tx pps and around 25ish mil rx pps?
>>
>>The chelsio rx path really wants to be coalescing rx buffers, which
>>the netmap API currently doesn't support.
Oh and one other thing - on the cxgbe hardware, the netmap interfaces
(ncxl) have a different MAC. Things like broadcast traffic are
duplicated to cxlX AND ncxlX. So, if you're only using netmap and
you're testing promisc/bridging, you should bring /down/ the cxlX
interface and leave ncxlX up - othe
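A minimal sketch of the setup described above, assuming a single-port T5 whose interfaces are named cxl0/ncxl0 (the names vary per port; this is illustrative, not a definitive recipe):

```shell
# Bring the regular interface down so broadcast/multicast traffic is not
# duplicated to both cxl0 and ncxl0 during netmap bridging tests.
ifconfig cxl0 down
# Keep the netmap-mode interface up, promiscuous for promisc/bridging tests.
ifconfig ncxl0 up promisc
```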
ok, so it's .. a little more complicated than that.
The chelsio hardware (thanks jim!) and intel hardware (thanks
sean/limelight!) do support various kinds of traffic hashing into
different queues. The common subset of behaviour is the Microsoft RSS
requirement spec. You can hash on v4, v6 headers
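The hash the Microsoft RSS spec requires is the Toeplitz hash over the packet's address and port fields. A small Python sketch of it, using the default key and the first test vector from the spec's verification suite (the per-driver key programming and the indirection-table step that maps hash to queue are omitted):

```python
import socket
import struct

# Default RSS key from the Microsoft RSS verification suite (40 bytes).
RSS_KEY = bytes([
    0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
    0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
    0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
    0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
    0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
])

def toeplitz_hash(data: bytes, key: bytes = RSS_KEY) -> int:
    # For each set bit of the input (MSB first), XOR in the 32-bit
    # window of the key starting at that bit position.
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i in range(len(data) * 8):
        if (data[i // 8] >> (7 - i % 8)) & 1:
            result ^= (key_int >> (key_bits - 32 - i)) & 0xFFFFFFFF
    return result

def v4_tcp_input(src: str, dst: str, sport: int, dport: int) -> bytes:
    # Hash input for TCP over IPv4: source address, destination address,
    # source port, destination port, all in network byte order.
    return (socket.inet_aton(src) + socket.inet_aton(dst) +
            struct.pack(">HH", sport, dport))

# Verification-suite vector: src 66.9.149.187:2794 -> dst 161.142.100.80:1766.
h = toeplitz_hash(v4_tcp_input("66.9.149.187", "161.142.100.80", 2794, 1766))
print(hex(h))
```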
On 1/23/2016 at 1:31 PM, "Adrian Chadd" wrote:
>
>For random src/dst ports and IPs and on the chelsio t5 40gig
>hardware,
>I was getting what, uhm, 40mil tx pps and around 25ish mil rx pps?
>
>The chelsio rx path really wants to be coalescing rx buffers, which
>the netmap API currently doesn't support.
For random src/dst ports and IPs and on the chelsio t5 40gig hardware,
I was getting what, uhm, 40mil tx pps and around 25ish mil rx pps?
The chelsio rx path really wants to be coalescing rx buffers, which
the netmap API currently doesn't support. I've no idea if Luigi has
plans to add that. So, i
Hi!
Great job! Do you have any performance estimates?
On Wednesday, 20 January 2016, Adrian Chadd wrote:
> Ok, so, I mostly did this already:
>
> https://github.com/erikarn/netmap-tools/
>
> it has a multi-threaded, multi-queue bridge + ipv4 decap for testing.
>
>
>
> -a
>
--
Sincerely yours,
Ok, so, I mostly did this already:
https://github.com/erikarn/netmap-tools/
it has a multi-threaded, multi-queue bridge + ipv4 decap for testing.
-a
_______________________________________________
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
Hi
Yes, this approach works really well on Linux, but I have never tried to
do the same on FreeBSD.
I'm using a similar approach in fastnetmon and read data from the network
card in X threads, where each thread is assigned to a physical queue. So for
Linux you should use my custom (based on Intel's drivers
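The thread-per-queue pattern described here can be sketched in plain Python, with stand-in queues instead of real NIC rings (all names are illustrative; a real reader would open one netmap ring, e.g. netmap:ix0-N, per thread):

```python
import queue
import threading

NUM_QUEUES = 4

# Stand-ins for the NIC's hardware rings.
rings = [queue.Queue() for _ in range(NUM_QUEUES)]
counts = [0] * NUM_QUEUES

def worker(qid: int) -> None:
    # Each thread drains only its own ring, so the fast path needs no
    # locking between workers.
    while True:
        pkt = rings[qid].get()
        if pkt is None:  # shutdown sentinel
            return
        counts[qid] += 1

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_QUEUES)]
for t in threads:
    t.start()

# Simulate the hardware steering 100 packets across the rings.
for n in range(100):
    rings[n % NUM_QUEUES].put(b"packet")
for r in rings:
    r.put(None)
for t in threads:
    t.join()

print(counts)
```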
Hello all,
I have some doubts regarding netmap's direct queue usage.
If I open netmap:ix0, am I opening all queues 0-7? Are those queues FIFO among
themselves? I mean, will the first packets be available on netmap:ix0-0, and if
this queue fills up, will the next packets be on netmap:ix0-1, and via
net
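As noted earlier in the thread, the hardware hashes traffic into the queues, so they are not a FIFO chain: every packet of a given flow lands on the same ring regardless of how full the other rings are. An illustrative sketch (crc32 as a stand-in for the NIC's real RSS hash; addresses and ports are made up):

```python
import zlib

NUM_QUEUES = 8  # e.g. ix0 rings 0-7

def ring_for(src: str, dst: str, sport: int, dport: int) -> int:
    # Stand-in flow hash; real hardware uses the Toeplitz RSS hash
    # plus an indirection table.
    key = f"{src}:{sport}->{dst}:{dport}".encode()
    return zlib.crc32(key) % NUM_QUEUES

# All packets of one flow map to the same ring...
flow_ring = ring_for("10.0.0.1", "10.0.0.2", 1234, 80)
assert all(ring_for("10.0.0.1", "10.0.0.2", 1234, 80) == flow_ring
           for _ in range(1000))

# ...while different flows spread across the rings.
rings = {ring_for("10.0.0.1", "10.0.0.2", sport, 80)
         for sport in range(1024, 2048)}
print(sorted(rings))
```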