Cool! :-)

Cheers, 
Florin

> On Feb 19, 2018, at 9:03 AM, Ray Kinsella <m...@ashroe.eu> wrote:
> 
> Insufficient hugepages in the VM - thanks for everyone's help!
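> 
> For reference, a quick way to check and (non-persistently) grow the hugepage pool inside the VM is roughly the following; the 1024-page count is only an example and should be sized to the VM's memory:
> 
>   grep -i huge /proc/meminfo
>   sudo sysctl -w vm.nr_hugepages=1024
> 
> The vpp packages also typically install /etc/sysctl.d/80-vpp.conf, which reserves hugepages persistently across reboots.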
> 
> 
> Ray K
> 
> On 19/02/2018 15:38, Ray Kinsella wrote:
>> Part of the mystery is solved in any case,
>> Looks to be related to running inside the VM.
>> DBGvpp# test tcp clients nclients 4 mbytes 256 test-timeout 100 uri tcp://192.168.1.1/9000
>> 4 three-way handshakes in 4.10 seconds .98/s
>> Test started at 10.538212
>> Test finished at 26.065026
>> 1073741824 bytes (1024 mbytes, 1 gbytes) in 15.53 seconds
>> 69154035.62 bytes/second full-duplex
>> .5532 gbit/second full-duplex
>> Ray K
>> On 14/02/2018 15:51, Florin Coras wrote:
>>> Hi Ray,
>>> 
>>> The only thing missing with memif is TSO, but that shouldn’t be a reason for 
>>> such a drop. I noticed you’re running a debug image, could you try with 
>>> release as well?
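>>> 
>>> For reference, a release image can be built and run from the VPP source tree roughly like this (a sketch, assuming the standard in-tree Makefile targets):
>>> 
>>>   make build-release
>>>   make run-release
>>> 
>>> The debug image enables a lot of extra checking, so its throughput numbers are not representative.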
>>> 
>>> Cheers,
>>> Florin
>>> 
>>>> On Feb 14, 2018, at 7:42 AM, Ray Kinsella <m...@ashroe.eu> wrote:
>>>> 
>>>> 
>>>> Hi Florin,
>>>> 
>>>> So I connected the two containers directly as Memif Master/Slave, taking 
>>>> the VPP vSwitch completely out. Performance has doubled, but it is still pretty 
>>>> awful.
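>>>> 
>>>> For reference, the back-to-back wiring is roughly the following on each side (a sketch; the memif id, addresses and default socket path are assumptions, and both containers need access to the same socket file):
>>>> 
>>>> --------------- cone ---------------
>>>>   create interface memif id 0 master
>>>>   set interface state memif0/0 up
>>>>   set interface ip address memif0/0 192.168.1.1/24
>>>> 
>>>> --------------- ctwo ---------------
>>>>   create interface memif id 0 slave
>>>>   set interface state memif0/0 up
>>>>   set interface ip address memif0/0 192.168.1.2/24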
>>>> 
>>>> Could this be because I am not using DPDK under the hood in either 
>>>> container?
>>>> 
>>>> Ray K
>>>> 
>>>> DBGvpp# test tcp clients nclients 1 mbytes 16 test-timeout 100 uri tcp://192.168.1.1/9000
>>>> 1 three-way handshakes in .02 seconds 40.67/s
>>>> Test started at 308.999241
>>>> Test finished at 318.521999
>>>> 16777216 bytes (16 mbytes, 0 gbytes) in 9.52 seconds
>>>> 1761802.18 bytes/second full-duplex
>>>> .0141 gbit/second full-duplex
>>>> 
>>>> --------------- cone ---------------
>>>> DBGvpp# show error
>>>>   Count                    Node                  Reason
>>>>     23498              session-queue             Packets transmitted
>>>>         4            tcp4-rcv-process            Packets pushed into rx fifo
>>>>     23498            tcp4-established            Packets pushed into rx fifo
>>>>         4             ip4-icmp-input             echo replies sent
>>>>         1                arp-input               ARP replies sent
>>>> DBGvpp# show ha
>>>>              Name                Idx   Link  Hardware
>>>> local0                             0    down  local0
>>>>  local
>>>> memif0/0                           1     up   memif0/0
>>>>  Ethernet address 02:fe:70:35:68:de
>>>>  MEMIF interface
>>>>     instance 0
>>>> 
>>>> --------------- ctwo ---------------
>>>> DBGvpp# show error
>>>>   Count                    Node                  Reason
>>>>     23522              session-queue             Packets transmitted
>>>>         2            tcp4-rcv-process            Packets pushed into rx fifo
>>>>     23522            tcp4-established            Packets pushed into rx fifo
>>>>         1                ip4-glean               ARP requests sent
>>>>         4             ip4-icmp-input             unknown type
>>>>         1                arp-input               ARP request IP4 source address learned
>>>> DBGvpp# show ha
>>>>              Name                Idx   Link  Hardware
>>>> local0                             0    down  local0
>>>>  local
>>>> memif0/0                           1     up   memif0/0
>>>>  Ethernet address 02:fe:a3:b6:94:cd
>>>>  MEMIF interface
>>>>     instance 0
>>>> 
>>>> 
>>>> On 13/02/2018 16:37, Florin Coras wrote:
>>>>> It would really help if I read the whole email!
>>>>> Apparently the test finishes, albeit with miserable performance! So, for 
>>>>> some reason lots and lots of packets are lost and that’s what triggers 
>>>>> the “heuristic” in the test client that complains the connection is 
>>>>> stuck. What does “show error” say? Does memif output something for “show 
>>>>> ha”?
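>>>>> For clarity, the full forms of those CLI commands are ("show ha" is just the abbreviation):
>>>>> 
>>>>>   show error
>>>>>   show hardware-interfaces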
>>>>> Florin
>>>>>> On Feb 13, 2018, at 8:20 AM, Ray Kinsella <m...@ashroe.eu> wrote:
>>>>>> 
>>>>>> Still stuck ...
>>>>>> 
>>>>>> DBGvpp# test tcp clients nclients 1 mbytes 16 test-timeout 100 uri tcp://192.168.1.1/9000
>>>>>> 1 three-way handshakes in .05 seconds 21.00/s
>>>>>> Test started at 205.983776
>>>>>> 0: builtin_client_node_fn:216: stuck clients
>>>>>> Test finished at 229.687355
>>>>>> 16777216 bytes (16 mbytes, 0 gbytes) in 23.70 seconds
>>>>>> 707792.53 bytes/second full-duplex
>>>>>> .0057 gbit/second full-duplex
>>>>>> 
>>>>>> As a complete aside - pings appear to be quite slow through the VPP 
>>>>>> vSwitch?
>>>>>> 
>>>>>> DBGvpp# ping 192.168.1.2
>>>>>> 64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=64.8519 ms
>>>>>> 64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=69.1016 ms
>>>>>> 64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=64.1253 ms
>>>>>> 64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=62.5618 ms
>>>>>> 
>>>>>> Ray K
>>>>>> 
>>>>>> 
>>>>>> On 13/02/2018 16:10, Florin Coras wrote:
>>>>>>> test-timeout 100
>>>> 
>>> 
>>> 
> 
> 
