Interestingly enough, using workers mode seems to enable
zero copy (ZC) without zbalance_ipc:

    % /opt/suricata/bin/suricata -i enp4s0 -c /opt/suricata/etc/suricata/suricata.yaml --pfring -vv

    15/8/2016 -- 14:21:13 - <Info> - Using flow cluster mode for PF_RING (iface enp4s0)
    15/8/2016 -- 14:21:13 - <Info> - Going to use 22 thread(s)
    15/8/2016 -- 14:21:13 - <Perf> - Enabling zero-copy for enp4s0
    15/8/2016 -- 14:21:13 - <Perf> - (W#01-enp4s0) Using PF_RING v.6.5.0, interface enp4s0, cluster-id 1
    15/8/2016 -- 14:21:13 - <Perf> - Enabling zero-copy for enp4s0
    15/8/2016 -- 14:21:13 - <Perf> - (W#02-enp4s0) Using PF_RING v.6.5.0, interface enp4s0, cluster-id 1
    15/8/2016 -- 14:21:13 - <Perf> - Enabling zero-copy for enp4s0
    15/8/2016 -- 14:21:13 - <Perf> - (W#03-enp4s0) Using PF_RING v.6.5.0, interface enp4s0, cluster-id 1
    15/8/2016 -- 14:21:13 - <Perf> - Enabling zero-copy for enp4s0
    [...]

Relevant config: 

  - interface: enp4s0
    threads: 22
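
For context, a fuller workers-mode pfring section along these lines might look as follows; the cluster-id and cluster-type values here are illustrative assumptions, not taken from this thread:

```yaml
pfring:
  - interface: enp4s0
    threads: 22
    # cluster-id/cluster-type below are assumptions for illustration;
    # with ZC enabled the kernel clustering settings may be ignored anyway
    cluster-id: 99
    cluster-type: cluster_flow
```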

Drop rate appears to be about 3%, which is pretty good for this old
hardware (upgrades in progress). 
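
As a sanity check, drop rate can be computed from the capture.kernel_packets / capture.kernel_drops counters as drops / (packets + drops); a minimal sketch, using the counters quoted further down in this thread:

```shell
# Drop rate = kernel_drops / (kernel_packets + kernel_drops), using the
# stats.log counters quoted below for the two setups.
awk 'BEGIN {
  printf "non-ZC pf_ring: %.2f%%\n", 100 * 3867805   / (13211759633 + 3867805)
  printf "22 ZC queues:   %.2f%%\n", 100 * 619667008 / (92423279 + 619667008)
}'
# non-ZC pf_ring: 0.03%
# 22 ZC queues:   87.02%
```

which makes the difference between the two earlier runs concrete: a fraction of a percent dropped without ZC queues versus the overwhelming majority dropped with them.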

Jim
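
Alfredo's suggestion below, to verify with pfcount that traffic is evenly balanced across the ZC queues, could be scripted roughly like this; the cluster id (99) and queue count (22) are taken from the configs quoted in this thread:

```shell
# Emit one pfcount invocation per ZC queue; run them one at a time
# (or pipe to sh) and compare the per-queue packet rates to see
# whether zbalance_ipc is distributing traffic evenly.
for q in $(seq 0 21); do
  echo "pfcount -i zc:99@$q"
done
```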

On 08/12/2016 10:07 AM, Alfredo Cardigliano wrote:
> Hi Jim,
> 
> AFAIK you should use the “workers” mode (I'm not sure what your exact
> configuration is now) in order to run one Suricata processing thread
> directly on each ZC queue; otherwise Suricata still redistributes
> traffic from the queues (and using zbalance for the distribution is
> useless in that case). Please also note that this works well only as
> long as zbalance is able to properly load-balance your traffic across
> the Suricata queues (this depends on the IP distribution in your
> traffic). Please also try running pfcount on the ZC queues to make
> sure traffic is well balanced, and check that zbalance is not the
> bottleneck.
> 
> Regards Alfredo
> 
>> On 11 Aug 2016, at 21:27, Jim Hranicky <[email protected]> wrote:
>> 
>> [ Including the thread from osif-users for context ]
>> 
>> I'm testing out suricata and pfring. Currently, I'm running suri 
>> using 22 pinned threads on the default 10G/82599ES interface
>> (enp4s0), and it's working fairly well.
>> 
>> Suricata config :
>> 
>> - interface: enp4s0
>>   # Number of receive threads (>1 will enable experimental
>>   # flow pinned runmode)
>>   threads: 22
>> 
>> I tried using 22 ZC queues, configuring suri like this:
>> 
>> - interface: zc:99@0
>>   threads: 1
>> - interface: zc:99@1
>>   threads: 1
>> [...]
>> - interface: zc:99@21
>>   threads: 1
>> 
>> and zbalance like so:
>> 
>> ./zbalance_ipc -i zc:enp4s0 -m 4 -n 22,1 -c 99 -g 0 -S 1
>> 
>> and it seemed to work, however, the performance was drastically
>> reduced.
>> 
>> With normal pf_ring (-i enp4s0):
>> 
>> capture.kernel_packets | Total | 13211759633
>> capture.kernel_drops   | Total | 3867805
>> 
>> With 22 zc queues:
>> 
>> capture.kernel_packets | Total | 92423279
>> capture.kernel_drops   | Total | 619667008
>> 
>> I also tried just using one ZC queue, but found I seem to get
>> better performance with the multi-threaded non-ZC version.
>> 
>> Is there something I can do to get the benefits of both ZC and
>> multi-threading?
>> 
>> Thanks,
>> 
>> -- 
>> Jim Hranicky
>> Data Security Specialist
>> UF Information Technology
>> 105 NW 16TH ST Room #104
>> GAINESVILLE FL 32603-1826
>> 352-273-1341
>> 
>> -------- Forwarded Message --------
>> Subject: Re: [Oisf-users] suricata with PF_RING Zero Copy/Pinned CPUs
>> Date: Thu, 11 Aug 2016 10:51:36 -0400
>> From: Jim Hranicky <[email protected]>
>> To: Chris Wakelin <[email protected]>, [email protected]
>> 
>> Leaving off the -i seems to work, thanks for your help.
>> 
>> Unfortunately, performance drops significantly with this setup.
>> 
>> With normal pf_ring (-i enp4s0):
>> 
>> capture.kernel_packets | Total | 13211759633
>> capture.kernel_drops   | Total | 3867805
>> 
>> With 22 zc queues:
>> 
>> capture.kernel_packets | Total | 92423279
>> capture.kernel_drops   | Total | 619667008
>> 
>> I guess this is a question for the pfring list.
>> 
>> Jim
>> 
>> On 08/10/2016 09:08 PM, Jim Hranicky wrote:
>>> If you leave off the -i flag, will suri just use all the
>>> interfaces found in the config?
>>> 
>>> Thanks for the response, Jim
>>> 
>>> On 08/10/2016 07:40 PM, Chris Wakelin wrote:
>>>> I had this working when I still worked for the University of
>>>> Reading (which I left a year ago to join ET/Proofpoint). Alas I
>>>> don't have access to a machine running PF_RING at the moment.
>>>> 
>>>> I did have a message thread in November 2014 about ZC +
>>>> hugepages and Suricata on the PF_RING (ntop-misc) mailing list 
>>>> (http://lists.ntop.org/mailman/listinfo/ntop-misc - you need
>>>> to subscribe to see archives though).
>>>> 
>>>> It looks like I had (using pfdnacluster_master rather than
>>>> zbalance_ipc)
>>>> 
>>>> insmod ixgbe.ko RSS=1,1 mtu=1522 \
>>>>   adapters_to_enable=xx:xx:xx:xx:xx:xx num_rx_slots=32768 \
>>>>   num_tx_slots=0 numa_cpu_affinity=1,1
>>>> ifconfig dna0 up
>>>> 
>>>> echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
>>>> cat /proc/meminfo | grep Huge
>>>> mount -t hugetlbfs none /mnt/huge
>>>> 
>>>> pfdnacluster_master -i dna0 -c 1 -n 15,1 -r 15 -m 4 -u /mnt/huge -d
>>>> 
>>>> I was running ARGUS (flow-capture) on dnacl:1@15
>>>> 
>>>> My Suricata config looked like this (I know the cluster settings
>>>> are ignored):
>>>> 
>>>> pfring:
>>>>   - interface: dnacl:1@0
>>>>     threads: 1
>>>>     cluster-id: 99
>>>>     cluster-type: cluster_flow
>>>>   - interface: dnacl:1@1
>>>>     threads: 1
>>>>     cluster-id: 99
>>>>     cluster-type: cluster_flow
>>>> 
>>>> ...
>>>> 
>>>>   - interface: dnacl:1@14
>>>>     threads: 1
>>>>     cluster-id: 99
>>>>     cluster-type: cluster_flow
>>>> 
>>>> 
>>>> My problem with hugepages turned out to be that running Suricata
>>>> as a non-root user meant it didn't get the right permissions to
>>>> access them. I said I was going to investigate a fix in Suricata
>>>> to drop privileges later, but it seems I never got around to it.
>>>> I ran Suricata as root instead as a workaround (obviously not
>>>> ideal). I also had CPUs with 16 real cores and hyperthreading
>>>> disabled (the latter, I now understand, was probably a bad
>>>> idea!).
>>>> 
>>>> Hope this gives some useful pointers, Best Wishes, Chris
>>>> 
>>>> On 10/08/16 18:07, Jim Hranicky wrote:
>>>>> I'm able to run and get good results with using multiple
>>>>> threads on a pf-enabled interface when not running in ZC
>>>>> mode. I'm a little stumped though as to how to configure
>>>>> zbalance_ipc/suricata to use multiple threads using ZC.
>>>>> 
>>>>> When I run 1 queue for suri
>>>>> 
>>>>> ./zbalance_ipc -i zc:enp4s0 -m 4 -n 1,1 -c 99 -g 0 -S 1
>>>>> 
>>>>> then specify the interface like so
>>>>> 
>>>>> - interface: zc:99@0
>>>>>   threads: 22
>>>>> 
>>>>> and run this command
>>>>> 
>>>>> /opt/suricata/bin/suricata -i zc:99@0 -c /opt/suricata/etc/suricata/suricata.yaml --pfring -vv
>>>>> 
>>>>> I get this:
>>>>> 
>>>>> 10/8/2016 -- 13:00:01 - <Perf> - (RX#01) Using PF_RING v.6.5.0, interface zc:99@0, cluster-id 1
>>>>> 
>>>>> 10/8/2016 -- 13:00:01 - <Error> - [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open zc:99@0: pfring_open error. Check if zc:99@0 exists and pf_ring module is loaded.
>>>>> 
>>>>> 10/8/2016 -- 13:00:01 - <Error> - [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open zc:99@0: pfring_open error. Check if zc:99@0 exists and pf_ring module is loaded.
>>>>> 
>>>>> 10/8/2016 -- 13:00:01 - <Error> - [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open zc:99@0: pfring_open error. Check if zc:99@0 exists and pf_ring module is loaded.
>>>>> 
>>>>> Should I run zbalance_ipc with multiple queues? How do I
>>>>> specify the interfaces on the command line and in the config
>>>>> file? FWIW, I seem to get about 40% more events per second
>>>>> running with multiple threads than with 1 ZC queue.
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> -- 
>>>>> Jim Hranicky
>>>>> Data Security Specialist
>>>>> UF Information Technology
>>>>> 105 NW 16TH ST Room #104
>>>>> GAINESVILLE FL 32603-1826
>>>>> 352-273-1341
>>>>> 
>>>>> _______________________________________________
>>>>> Suricata IDS Users mailing list: [email protected]
>>>>> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
>>>>> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>>>> Suricata User Conference November 9-11 in Washington, DC: http://oisfevents.net
>> _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc


