Hi all,

I am testing the performance of memif when connecting multiple VPP
instances. In a simple test, I am finding that overall packet throughput
drops drastically with each additional memif connection.


1. I configured a single VPP instance and ran a baseline performance test.
   Configuration:

    ```
    set int ip address TenGigabitEthernet82/0/0 100.96.0.1/16
    set int ip address TenGigabitEthernet85/0/1 172.30.1.2/29

    set int state TenGigabitEthernet82/0/0 up
    set int state TenGigabitEthernet85/0/1 up

    ```
 Tested a download from a client at 100.96.2.1.
 The download speed of a 3 GB file was 9.30 Gb/s.

2. Connected 2 VPP instances using memif

  in VPP1
    ```
    set int ip address TenGigabitEthernet82/0/0 100.96.0.1/16
    set int state TenGigabitEthernet82/0/0 up
    create interface memif id 1 socket-id 0 master
    set int state memif0/1 up
    set int ip address memif0/1 10.10.3.1/24
    ip route add 0.0.0.0/0 via memif0/1 10.10.3.3
    ```
    Ran with main-core 1.
  in VPP2
  ```
    set int state TenGigabitEthernet85/0/1 up
    set int ip address TenGigabitEthernet85/0/1 172.30.1.2/29
    create interface memif id 1 socket-id 0 slave
    set int state memif0/1 up
    set int ip address memif0/1 10.10.3.3/24
    ip route add 0.0.0.0/0 via TenGigabitEthernet85/0/1 172.30.1.1
    ip route add 100.96.0.0/16 via memif0/1 10.10.3.1
  ```
    Ran with main-core 2.
 Tested a download from a client at 100.96.2.1.
 The download speed of a 3 GB file was 7.83 Gb/s.


3. Connected three VPP instances in a chain:
```
  +------+    +------+    +------+
  | VPP1 |----| VPP2 |----| VPP3 |
  +------+    +------+    +------+
```
   in VPP1
   ```
    set int ip address TenGigabitEthernet82/0/0 100.96.0.1/16
    set int state TenGigabitEthernet82/0/0 up

    comment{create interface memif id 1 socket-id 0 master}
    create memif socket id 2 filename /tmp/vppsh/memif.sock
    create interface memif id 0 socket-id 2 master
    set int state memif2/0 up
    set int ip address memif2/0 10.10.2.1/24
    ip route add 0.0.0.0/0 via memif2/0 10.10.2.2
  ```
    Ran with main-core 1.
   in VPP2
  ```
    create memif socket id 2 filename /tmp/vppsh/memif.sock
    create interface memif id 0 socket-id 2 slave
    set int state memif2/0 up
    set int ip address memif2/0 10.10.2.2/24
    create interface memif id 1 socket-id 0 master
    set int state memif0/1 up
    set int ip address memif0/1 10.10.3.2/24
    ip route add 100.96.0.0/16 via memif2/0 10.10.2.1
    ip route add 0.0.0.0/0 via memif0/1 10.10.3.3
  ```
    Ran with main-core 2.
  in VPP3

  ```
    set int state TenGigabitEthernet85/0/1 up
    set int ip address TenGigabitEthernet85/0/1 172.30.1.2/29
    create interface memif id 1 socket-id 0 slave
    set int state memif0/1 up
    set int ip address memif0/1 10.10.3.3/24
    ip route add 10.10.2.0/24 via memif0/1 10.10.3.2
    ip route add 0.0.0.0/0 via TenGigabitEthernet85/0/1 172.30.1.1
    ip route add 100.96.0.0/16 via memif0/1 10.10.3.2
  ```
    Ran with main-core 3.

Tested a download from a client at 100.96.2.1.

The download speed of a 3 GB file was 3.99 Gb/s.

As can be seen, throughput drops from 9.30 to 7.83 to 3.99 Gb/s as the
number of memif interconnects in the path increases.
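
For what it is worth, every memif interface above is created with default
parameters. From the `create interface memif` CLI help I assume the ring size
and queue count can be raised along these lines (untested, and I have not
checked whether extra queues actually help in my setup):

```
comment { untested - on the master side, e.g. VPP1 }
create interface memif id 1 socket-id 0 master rx-queues 2 tx-queues 2 ring-size 2048
comment { untested - on the slave side, e.g. VPP2 }
create interface memif id 1 socket-id 0 slave rx-queues 2 tx-queues 2 ring-size 2048
```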

Is this the expected behavior, or did I miss some tuning?
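
I have not dug into the per-node counters yet; my next step is to repeat the
test and run something like the following on each instance to see where the
cycles and drops go:

```
show memif
show run
show interface
show errors
```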

The CPU is an Intel(R) Xeon(R) E5-2640 v3 @ 2.60GHz, total RAM is 16 GB, and
CPU pinning is not done. The VPP version is 19.01.3-rc0.
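
If the missing pinning is the likely culprit, I assume the fix is a cpu
section in each instance's startup.conf along these lines (the core numbers
are just placeholders; each instance would get its own non-overlapping cores):

```
cpu {
  main-core 1
  corelist-workers 2-3
}
```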

Thanks and Regards,

Raj