On 09/27/2017 02:26 AM, Jesper Dangaard Brouer wrote:
> On Tue, 26 Sep 2017 21:58:53 +0200
> Daniel Borkmann <dan...@iogearbox.net> wrote:
> 
>> On 09/26/2017 09:13 PM, Jesper Dangaard Brouer wrote:
>> [...]
>>> I'm currently implementing a cpumap type that transfers raw XDP frames
>>> to another CPU, where the SKB is allocated on the remote CPU.  (It
>>> actually works extremely well.)
>>
>> Meaning you let all the XDP_PASS packets get processed on a
>> different CPU, so you can reserve the whole CPU just for
>> prefiltering, right? 
> 
> Yes, exactly.  Except I use the XDP_REDIRECT action to steer packets.
> The trick is using the map-flush point to transfer packets in bulk to
> the remote CPU (single-call IPC is too slow), while still flushing
> single packets if NAPI didn't see a full bulk.
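
For anyone reading along, a minimal sketch of what the BPF side of such a
cpumap redirect could look like.  It assumes the BPF_MAP_TYPE_CPUMAP map
type from the patchset, keyed by CPU index with the remote kthread's queue
size as value; the section name and the fixed destination CPU are made up
for illustration and are not the actual sample code:

/* Sketch only: steer every frame to a remote CPU via a cpumap.
 * Frames are enqueued per-CPU and flushed in bulk at the map-flush
 * point after the driver's NAPI poll, as described above.
 */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") cpu_map = {
	.type        = BPF_MAP_TYPE_CPUMAP,
	.key_size    = sizeof(__u32),
	.value_size  = sizeof(__u32),	/* queue size for the remote CPU */
	.max_entries = 64,
};

SEC("xdp_cpu_redirect")
int xdp_prog_cpu_redirect(struct xdp_md *ctx)
{
	__u32 dest_cpu = 2;	/* hypothetical: move all processing to CPU 2 */

	return bpf_redirect_map(&cpu_map, dest_cpu, 0);
}

char _license[] SEC("license") = "GPL";
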
> 
>> Do you have some numbers to share at this point, just curious when
>> you mention it works extremely well.
> 
> Sure... I've done a lot of benchmarking on this patchset ;-)
> I have a benchmark program called xdp_redirect_cpu [1][2] that collects
> stats via tracepoints (at the moment I'm limiting bulking to 8 packets,
> and have tracepoints at the bulk spots, to amortize the tracepoint cost:
> 25ns/8 = 3.125ns).
> 
>  [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/xdp_redirect_cpu_kern.c
>  [2] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/xdp_redirect_cpu_user.c
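
A hedged sketch of the user-space side as well, assuming the map value is
the per-CPU queue size (so writing a non-zero qsize enables that CPU's
entry); the helper comes from the samples/bpf wrapper and map_fd from its
loader, so details may differ from the real xdp_redirect_cpu_user.c:

/* Sketch: enable remote CPU 'cpu' in the cpumap with a queue of
 * 'qsize' entries.  map_fd is assumed to come from the samples/bpf
 * loader after loading the _kern.o object; error handling trimmed.
 */
#include <linux/types.h>
#include "libbpf.h"	/* samples/bpf wrapper exposing bpf_map_update_elem() */

static int cpumap_enable_cpu(int map_fd, __u32 cpu, __u32 qsize)
{
	return bpf_map_update_elem(map_fd, &cpu, &qsize, 0);
}
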
> 
> Here I'm installing a DDoS program that drops UDP port 9 (pktgen
> packets) on RX CPU=0.  I'm forcing my netperf traffic to hit the same
> CPU that the 11.9Mpps DDoS attack is hitting.
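
The drop filter itself is roughly the classic XDP DDoS example.  A
self-contained sketch (not the actual prog_num:4 code), ignoring VLANs,
IPv6 and IP options:

/* Sketch: drop IPv4/UDP packets with destination port 9 (pktgen's
 * default), pass everything else to the normal stack.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include "bpf_helpers.h"

SEC("xdp_drop_udp9")
int xdp_prog_drop_udp9(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data     = (void *)(long)ctx->data;
	struct ethhdr *eth = data;
	struct iphdr  *iph = data + sizeof(*eth);
	struct udphdr *udph;

	/* Bounds check covers the Ethernet + IPv4 headers */
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != __constant_htons(ETH_P_IP))
		return XDP_PASS;
	if (iph->ihl != 5)		/* skip packets with IP options */
		return XDP_PASS;
	if (iph->protocol != IPPROTO_UDP)
		return XDP_PASS;

	udph = (void *)(iph + 1);
	if ((void *)(udph + 1) > data_end)
		return XDP_PASS;

	if (udph->dest == __constant_htons(9))
		return XDP_DROP;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
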
> 
> Running XDP/eBPF prog_num:4
> XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> XDP-RX          0       12,030,471     11,966,982  0          
> XDP-RX          total   12,030,471     11,966,982 
> cpumap-enqueue    0:2   63,488         0           0          
> cpumap-enqueue  sum:2   63,488         0           0          
> cpumap_kthread  2       63,488         0           3          time_exceed
> cpumap_kthread  total   63,488         0           0          
> redirect_err    total   0              0          
> 
> $ netperf -H 172.16.0.2 -t TCP_CRR  -l 10 -D1 -T5,5 -- -r 1024,1024
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate         
> bytes  Bytes  bytes    bytes   secs.    per sec   
> 
> 16384  87380  1024     1024    10.00    12735.97   
> 16384  87380 
> 
> The netperf TCP_CRR performance is the same as without XDP loaded.
> 

Just curious, could you also try this with RPS enabled (or does this setup
have RPS enabled already)?  RPS should effectively do the same thing, but
higher in the stack.  I'm curious what the delta would be.  It might be
another interesting case, and fairly easy to set up if you already have
the above scripts.
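
For reference, by "RPS enabled" I mean something like the below, steering
rx queue 0's packets to CPU 2 so it mirrors the cpumap setup.  The device
name and queue are placeholders, and normally one would just echo the mask
from a shell; this is only a sketch of writing the CPU bitmask to the
queue's rps_cpus file:

/* Sketch: software-steer RX queue 0 of a (placeholder) device to CPU 2
 * via RPS by writing a CPU bitmask to the queue's rps_cpus sysfs file.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "4\n");	/* bitmask 0x4 == CPU 2 */
	return fclose(f) ? 1 : 0;
}
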

Thanks,
John

[...]
