Hello Neale,

Thanks for the quick answer. What I meant by "not working" is that traffic is not going through those paths once they are all up. The interfaces themselves are up and running, I have checked them.
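If a packet trace would help, I can collect one with something like the following to see which path the packets actually take (the packet count is arbitrary):

sudo vppctl -s /run/vpp/cli-vpprouter1.sock clear trace
sudo vppctl -s /run/vpp/cli-vpprouter1.sock trace add memif-input 50
sudo vppctl -s /run/vpp/cli-vpprouter1.sock show trace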

*Output with one path:*

sudo vppctl -s /run/vpp/cli-vpprouter1.sock sh ip fib 48.0.0.0/8


ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] epoch:0 flags:none locks:[adjacency:1, recursive-resolution:1, default-route:1, nat-hi:2, ]
48.0.0.0/8 fib:0 index:23 locks:2
  CLI refs:1 src-flags:added,contributing,active,
    path-list:[33] locks:2 flags:shared, uPRF-list:38 len:1 itfs:[2, ]
      path:[42] pl-index:33 ip4 weight=1 pref=0 recursive: oper-flags:resolved,
        via 10.10.3.2 in fib:0 via-fib:24 via-dpo:[dpo-load-balance:25]

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:26 buckets:1 uRPF:38 to:[19927080:916645680]]
    [0] [@12]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:28 to:[0:0] via:[1310268869:60272367974]]
          [0] [@5]: ipv4 via 10.10.3.2 memif0/0: mtu:9000 next:3 02fef98b60f802fe107edd530800


*Output with 2nd path added:*

sudo vppctl -s /run/vpp/cli-vpprouter1.sock sh ip fib 48.0.0.0/8

ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] epoch:0 flags:none locks:[adjacency:1, recursive-resolution:2, default-route:1, nat-hi:2, ]
48.0.0.0/8 fib:0 index:23 locks:2
  CLI refs:1 src-flags:added,contributing,active,
    path-list:[34] locks:2 flags:shared, uPRF-list:37 len:2 itfs:[2, 3, ]
      path:[40] pl-index:34 ip4 weight=1 pref=0 recursive: oper-flags:resolved,
        via 10.10.3.2 in fib:0 via-fib:24 via-dpo:[dpo-load-balance:25]
      path:[47] pl-index:34 ip4 weight=1 pref=0 recursive: oper-flags:resolved,
        via 10.10.6.2 in fib:0 via-fib:26 via-dpo:[dpo-load-balance:28]

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:26 buckets:2 uRPF:37 to:[205189323:9438708858]]
    [0] [@12]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:28 to:[0:0] via:[1487667797:68432718662]]
          [0] [@5]: ipv4 via 10.10.3.2 memif0/0: mtu:9000 next:3 02fef98b60f802fe107edd530800
    [1] [@12]: dpo-load-balance: [proto:ip4 index:28 buckets:1 uRPF:36 to:[0:0] via:[129101773:5938681558]]
          [0] [@5]: ipv4 via 10.10.6.2 memif2/0: mtu:9000 next:5 02fe0b17bc3602fe734e64440800


*Output with 3rd path added:*

sudo vppctl -s /run/vpp/cli-vpprouter1.sock sh ip fib 48.0.0.0/8

ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] epoch:0 flags:none locks:[adjacency:1, recursive-resolution:3, default-route:1, nat-hi:2, ]
48.0.0.0/8 fib:0 index:23 locks:2
  CLI refs:1 src-flags:added,contributing,active,
    path-list:[33] locks:2 flags:shared, uPRF-list:38 len:3 itfs:[2, 3, 4, ]
      path:[42] pl-index:33 ip4 weight=1 pref=0 recursive: oper-flags:resolved,
        via 10.10.3.2 in fib:0 via-fib:24 via-dpo:[dpo-load-balance:25]
      path:[44] pl-index:33 ip4 weight=1 pref=0 recursive: oper-flags:resolved,
        via 10.10.6.2 in fib:0 via-fib:26 via-dpo:[dpo-load-balance:28]
      path:[43] pl-index:33 ip4 weight=1 pref=0 recursive: oper-flags:resolved,
        via 10.10.12.2 in fib:0 via-fib:27 via-dpo:[dpo-load-balance:29]

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:26 buckets:16 uRPF:38 to:[366212091:16845756186]]
    [0] [@12]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:28 to:[0:0] via:[1570111449:72225126654]]
          [0] [@5]: ipv4 via 10.10.3.2 memif0/0: mtu:9000 next:3 02fef98b60f802fe107edd530800
    [1] [@12]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:28 to:[0:0] via:[1570111449:72225126654]]
          [0] [@5]: ipv4 via 10.10.3.2 memif0/0: mtu:9000 next:3 02fef98b60f802fe107edd530800
    [2] [@12]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:28 to:[0:0] via:[1570111449:72225126654]]
          [0] [@5]: ipv4 via 10.10.3.2 memif0/0: mtu:9000 next:3 02fef98b60f802fe107edd530800
    [3] [@12]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:28 to:[0:0] via:[1570111449:72225126654]]
          [0] [@5]: ipv4 via 10.10.3.2 memif0/0: mtu:9000 next:3 02fef98b60f802fe107edd530800
    [4] [@12]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:28 to:[0:0] via:[1570111449:72225126654]]
          [0] [@5]: ipv4 via 10.10.3.2 memif0/0: mtu:9000 next:3 02fef98b60f802fe107edd530800
    [5] [@12]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:28 to:[0:0] via:[1570111449:72225126654]]
          [0] [@5]: ipv4 via 10.10.3.2 memif0/0: mtu:9000 next:3 02fef98b60f802fe107edd530800
    [6] [@12]: dpo-load-balance: [proto:ip4 index:28 buckets:1 uRPF:36 to:[0:0] via:[198665280:9138602880]]
          [0] [@5]: ipv4 via 10.10.6.2 memif2/0: mtu:9000 next:5 02fe0b17bc3602fe734e64440800
    [7] [@12]: dpo-load-balance: [proto:ip4 index:28 buckets:1 uRPF:36 to:[0:0] via:[198665280:9138602880]]
          [0] [@5]: ipv4 via 10.10.6.2 memif2/0: mtu:9000 next:5 02fe0b17bc3602fe734e64440800
    [8] [@12]: dpo-load-balance: [proto:ip4 index:28 buckets:1 uRPF:36 to:[0:0] via:[198665280:9138602880]]
          [0] [@5]: ipv4 via 10.10.6.2 memif2/0: mtu:9000 next:5 02fe0b17bc3602fe734e64440800
    [9] [@12]: dpo-load-balance: [proto:ip4 index:28 buckets:1 uRPF:36 to:[0:0] via:[198665280:9138602880]]
          [0] [@5]: ipv4 via 10.10.6.2 memif2/0: mtu:9000 next:5 02fe0b17bc3602fe734e64440800
    [10] [@12]: dpo-load-balance: [proto:ip4 index:28 buckets:1 uRPF:36 to:[0:0] via:[198665280:9138602880]]
          [0] [@5]: ipv4 via 10.10.6.2 memif2/0: mtu:9000 next:5 02fe0b17bc3602fe734e64440800
    [11] [@12]: dpo-load-balance: [proto:ip4 index:29 buckets:1 uRPF:34 to:[0:0] via:[695832443:32008292378]]
          [0] [@5]: ipv4 via 10.10.12.2 memif4/0: mtu:9000 next:6 02feec1766b902fe7d3d1e0f0800
    [12] [@12]: dpo-load-balance: [proto:ip4 index:29 buckets:1 uRPF:34 to:[0:0] via:[695832443:32008292378]]
          [0] [@5]: ipv4 via 10.10.12.2 memif4/0: mtu:9000 next:6 02feec1766b902fe7d3d1e0f0800
    [13] [@12]: dpo-load-balance: [proto:ip4 index:29 buckets:1 uRPF:34 to:[0:0] via:[695832443:32008292378]]
          [0] [@5]: ipv4 via 10.10.12.2 memif4/0: mtu:9000 next:6 02feec1766b902fe7d3d1e0f0800
    [14] [@12]: dpo-load-balance: [proto:ip4 index:29 buckets:1 uRPF:34 to:[0:0] via:[695832443:32008292378]]
          [0] [@5]: ipv4 via 10.10.12.2 memif4/0: mtu:9000 next:6 02feec1766b902fe7d3d1e0f0800
    [15] [@12]: dpo-load-balance: [proto:ip4 index:29 buckets:1 uRPF:34 to:[0:0] via:[695832443:32008292378]]
          [0] [@5]: ipv4 via 10.10.12.2 memif4/0: mtu:9000 next:6 02feec1766b902fe7d3d1e0f0800
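If it helps, I believe the nested load-balance objects referenced above can also be dumped directly, something like the following (the indices are the ones from this output):

sudo vppctl -s /run/vpp/cli-vpprouter1.sock show load-balance 26
sudo vppctl -s /run/vpp/cli-vpprouter1.sock show load-balance 25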


Vanias


On 10/12/20 1:03 p.m., Neale Ranns (nranns) wrote:

Hello Anonymous,

In order to debug IP forwarding issues I’m going to need more info. Please collect:

  ‘sh ip fib <YOUR_PREFIX>’

From a working and non-working configuration.

All FIB load-balancing is per-flow. So if you don’t have enough flows you won’t [necessarily] get the load distribution that you want.
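As a side note, the set of fields fed into that flow hash can be tuned per FIB table; a minimal sketch, assuming the default table 0:

set ip flow-hash table 0 src dst sport dport proto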

/neale

*From: *<vpp-dev@lists.fd.io> on behalf of "vnaposto...@gmail.com" <vnaposto...@gmail.com>
*Date: *Wednesday 9 December 2020 at 16:43
*To: *"vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
*Subject: *[vpp-dev] VPP ip route add multiple paths

Hello to everyone,

I am having an issue when adding multiple paths on a running VPP towards 3 other VPPs.

Specifically, after 2 paths are up, when I add the 3rd path one or both of the first two stop working. I am using these commands:

sudo vppctl -s /run/vpp/cli-vpprouter1.sock ip route add 48.0.0.0/8 via 10.10.3.2

sudo vppctl -s /run/vpp/cli-vpprouter1.sock ip route add 48.0.0.0/8 via 10.10.6.2

sudo vppctl -s /run/vpp/cli-vpprouter1.sock ip route add 48.0.0.0/8 via 10.10.12.2
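To rule out resolution problems I also check that each next-hop has a neighbour entry, with something like the following (on older releases the command is 'show ip arp'):

sudo vppctl -s /run/vpp/cli-vpprouter1.sock show ip neighbors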

On the same subject, I have an issue when trying to set different weights on a route with 2 paths. As suggested in this doc https://docs.fd.io/vpp/21.01/de/db4/clicmd_src_vnet_ip.html#clicmd_ip_route I use:

sudo vppctl -s /run/vpp/cli-vpprouter1.sock ip route add 48.0.0.0/8 via 10.10.3.2 weight 9

sudo vppctl -s /run/vpp/cli-vpprouter1.sock ip route add 48.0.0.0/8 via 10.10.6.2 weight 1

but the traffic on the weight-1 route is not 1/10 of the total traffic; it is more like 4/10 of it. And that's the good scenario: when traffic gets heavier, VPP sends most of the traffic over the path with the lower weight, which is the opposite of what I expected.
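For reference, one way I measure the actual split is to compare the interface counters before and after a test run, roughly like this (the interface names are the ones from my setup):

sudo vppctl -s /run/vpp/cli-vpprouter1.sock clear interfaces
(generate traffic for a while)
sudo vppctl -s /run/vpp/cli-vpprouter1.sock show interface memif0/0 memif2/0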

All VPPs are connected through memif sockets.

Can anyone help me with either of these issues? Am I doing something extremely wrong?
