Hi Alfredo,
We are using SSDs and wonder how that can be slow!
Please help us validate this.
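For a quick sanity check: sustained 10 Gbps playback needs about 1.25 GB/s of sequential reads from disk, which is more than a single SATA SSD typically delivers. A rough back-of-envelope sketch (the SATA SSD figure below is an assumed ballpark, not a measurement of this system):

```python
# Back-of-envelope: disk read rate needed to sustain a given line rate.
def required_read_gbytes_per_sec(line_rate_gbps: float) -> float:
    # Packets sent at N Gbps must be read from disk at N/8 GB/s.
    return line_rate_gbps / 8.0

target_gbps = 10.0
need = required_read_gbytes_per_sec(target_gbps)  # 1.25 GB/s

# Assumed ballpark: a single SATA SSD sustains roughly 0.55 GB/s
# of sequential reads, i.e. about 4.4 Gbps of playback.
sata_ssd_gbs = 0.55
print(f"Need {need:.2f} GB/s sustained reads; "
      f"a ~{sata_ssd_gbs} GB/s SATA SSD can feed only "
      f"{sata_ssd_gbs * 8:.1f} Gbps")
```

If the numbers come out like this, a single SATA SSD cannot feed 10 Gbps once the OS page cache is exhausted (which would match the initial burst followed by a drop to 2-2.6 Gbps); an NVMe drive or a RAID of SSDs would be needed. A tool such as `dd if=<pcap file> of=/dev/null bs=1M iflag=direct` or `fio` can measure the array's actual sustained read speed.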

Sent from my iPhone

> On Aug 23, 2019, at 1:49 PM, Alfredo Cardigliano <[email protected]> wrote:
> 
> Hi
> please note the "WARNING: [sender] waiting.. (buffer empty)" messages in 
> your output: they mean that the I/O is not reading data from disk fast 
> enough to feed disk2n. What kind of storage are you using?
> 
> Alfredo
> 
>> On 23 Aug 2019, at 10:05, Chandrika Gautam <[email protected]> 
>> wrote:
>> 
>> 
>> 
>> 
>> Begin forwarded message:
>> 
>>> From: Chandrika Gautam <[email protected]>
>>> Date: August 23, 2019 at 1:34:05 PM GMT+5:30
>>> To: "[email protected]" <[email protected]>
>>> Subject: Fw: Not able to achieve consistent 10Gbps on Napatech 2x40G card 
>>> for more than 10 seconds
>>> 
>>> Hi,
>>> 
>>> We are evaluating disk2n on a Napatech 2x40G card to send traffic at 
>>> more than 10 Gbps. 
>>> 
>>> We started with a 10 Gbps target, using 9 pcaps of around 1.5 GB each, 
>>> sent on Napatech port 0. 
>>> 
>>> For the first few seconds we observed 10 Gbps, but the rate then 
>>> gradually dropped to about 2.6 Gbps and stayed in the 2-2.6 Gbps range 
>>> for the rest of the run.
>>> 
>>> So we were not able to achieve consistent 10 Gbps traffic with disk2n 
>>> on the Napatech card.
>>> 
>>> We have also purchased the license for n2disk. 
>>> 
>>> Question: how can we achieve 10 Gbps traffic on the 2x40G Napatech card 
>>> with driver ntanl_package_3gd-11.6.0-linux?
>>> 
>>> OS details
>>> 
>>> # uname -r
>>> 3.10.0-957.1.3.el7.x86_64
>>> 
>>> # cat /etc/redhat-release 
>>> CentOS Linux release 7.6.1810 (Core) 
>>> 
>>> # numactl --hardware
>>> available: 2 nodes (0-1)
>>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 20 21 22 23 24 25 26 27 28 29
>>> node 0 size: 131034 MB
>>> node 0 free: 127156 MB
>>> node 1 cpus: 10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39
>>> node 1 size: 131071 MB
>>> node 1 free: 125674 MB
>>> node distances:
>>> node   0   1 
>>>   0:  10  20 
>>>   1:  20  10 
>>> 
>>> 
>>> 
>>> - Output of disk2n is shared in the attached file.
>>> 
>>> 
>>> 
>>> 
>>> We also tried without the caching option (-b); the max throughput was 2.5 Gbps.
>>> 
>>> # disk2n -i nt:0 -m /tmp/playpcap.txt -c 1 -w 2 -S 3   -C 1024 -I 450  -v
>>> 
>>> While browsing 
>>> https://www.ntop.org/guides/pf_ring/modules/napatech.html 
>>> we found the following in section
>>> 
>>> 8.4. Napatech and Packet Copy
>>> If you use the PF_RING (non-ZC) API packets are read in zero-copy. Instead 
>>> if you use PF_RING ZC API, a per-packet copy takes place, which is required 
>>> to move payload data from Napatech-memory to ZC memory. Keep this in mind!
>>> 
>>> It says that if we use the PF_RING ZC API, one per-packet copy takes 
>>> place. Could you please explain this?
>>> 
>>> -Iqbal
>>> 
>> <observation.txt>
>> _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc