Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Pierre Pfister (ppfister)
Hello all,

As mentioned by Ed, introducing return traffic would dramatically reduce the 
performance of the solution.
-> Return traffic typically consists of data packets, whereas forward traffic 
mostly consists of ACKs. So you will have to have significantly more LB boxes 
if you want to support all your return traffic.
-> Having to deal with return traffic also means that we need to either make 
sure return traffic goes through the same core, or add locks to the structures 
(for now, everything is lockless, per-core), or steer traffic from core to core.

There is also something I am not sure I understand. You mentioned DNAT in 
order to steer the traffic to the AS, but how do you make sure the return 
traffic goes back to the LB? My guess is that all the traffic coming out of 
the ASs is routed toward one LB, is that right? How do you make sure the 
return traffic is evenly distributed between LBs?

It's a pretty interesting requirement that you have, but I am quite sure the 
solution will have to be quite far from MagLev's design, and probably less 
efficient.

- Pierre


On 25 Apr 2017, at 05:11, Zhou, Danny <danny.z...@intel.com> wrote:

Sharing my two cents as well:

Firstly, introducing GRE or whatever other tunneling protocol to the LB introduces 
performance overhead (for encap and decap) for both the load balancer and 
the network service. Secondly, some other mechanism on the network service node not 
only needs to decap the GRE but also needs to perform a DNAT operation in order 
to change the destination IP of the original frame from the LB’s IP to the service 
entity’s IP, which adds complexity to the network service.

Existing well-known load balancers such as Netfilter or Nginx do not adopt this 
tunneling approach; they simply do a service node selection followed by a 
NAT operation.

-Danny

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Ni, Hongjun
Sent: Tuesday, April 25, 2017 11:05 AM
To: Ed Warnicke mailto:hagb...@gmail.com>>
Cc: Li, Johnson mailto:johnson...@intel.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hi Ed,

Thanks for your prompt response.

This item is required to handle legacy ASs, because some legacy ASs do not want 
to change their underlay forwarding infrastructure.

Besides, some AS IPs are private and invisible outside the AS cluster domain, 
and are not allowed to be exposed to the external network.

Thanks,
Hongjun

From: Ed Warnicke [mailto:hagb...@gmail.com]
Sent: Tuesday, April 25, 2017 10:44 AM
To: Ni, Hongjun mailto:hongjun...@intel.com>>
Cc: vpp-dev@lists.fd.io; Li, Johnson 
mailto:johnson...@intel.com>>
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hongjun,

I can see this point of view, but it radically reduces the scalability of the 
whole system.
Wouldn't it just make sense to run vpp or some other mechanism to decap the GRE 
on whatever is running the AS and feed whatever we are
load balancing to? Forcing return traffic through the central load balancer 
radically reduces scalability (which is why
Maglev, which inspired what we are doing here, doesn't do it that way either).

Ed

On Mon, Apr 24, 2017 at 7:18 PM, Ni, Hongjun <hongjun...@intel.com> wrote:
Hey,

Currently, traffic received for a given VIP (or VIP prefix) is tunneled using 
GRE towards
the different ASs in a way that (tries to) ensure that a given session will
always be tunneled to the same AS.

But in a real environment, many Application Servers do not support the GRE feature.
So we raise a requirement for the LB in VPP:
(1). When traffic is received for a VIP, the LB needs to load balance it, then do 
DNAT to change the traffic’s destination IP from the VIP to the AS’s IP.
(2). When traffic is returned from an AS, the LB will first do SNAT to change 
the traffic’s source IP from the AS’s IP to the VIP, then go through the load-balance 
sessions, and then send it to the clients.
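
A minimal illustrative sketch of those two rewrites (Python, purely for
illustration; the addresses, names and selection function below are made up and
this is not the VPP LB plugin API):

# Sketch of the forward DNAT (1) and return SNAT (2) described above.
VIP = "10.0.0.100"                          # hypothetical VIP
AS_POOL = ["192.168.1.10", "192.168.1.11"]  # hypothetical application servers

def pick_as(five_tuple):
    # Stand-in for the LB's per-session selection (e.g. a consistent hash).
    return AS_POOL[hash(five_tuple) % len(AS_POOL)]

def forward(pkt):
    # (1) client -> VIP: load balance, then DNAT the destination from the VIP to the AS IP.
    assert pkt["dst"] == VIP
    pkt["dst"] = pick_as((pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"]))
    return pkt

def reverse(pkt):
    # (2) AS -> client: SNAT the source from the AS IP back to the VIP.
    if pkt["src"] in AS_POOL:
        pkt["src"] = VIP
    return pkt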

Any comments about this requirement are welcome.

Thanks a lot,
Hongjun


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Ni, Hongjun
Hi Pierre,

For the LB distribution case, I think we could assign a node IP to each LB box.
When packets are received from the client, the LB will do both SNAT and DNAT, i.e. source 
IP -> LB’s node IP, destination IP -> AS’s IP.
When packets are returned from the AS, the LB will also do both DNAT and SNAT, i.e. source IP -> 
AS’s IP, destination IP -> client’s IP.

Thanks,
Hongjun

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 3:12 PM
To: Zhou, Danny 
Cc: Ni, Hongjun ; Ed Warnicke ; Li, 
Johnson ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hello all,

As mentioned by Ed, introducing return traffic would dramatically reduce the 
performance of the solution.
-> Return traffic typically consists of data packets, whereas forward traffic 
mostly consists of ACKs. So you will have to have significantly more LB boxes 
if you want to support all your return traffic.
-> Having to deal with return traffic also means that we need to either make 
sure return traffic goes through the same core, or add locks to the structures 
(for now, everything is lockless, per-core), or steer traffic from core to core.

There is also something I am not sure I understand. You mentioned DNAT in 
order to steer the traffic to the AS, but how do you make sure the return 
traffic goes back to the LB? My guess is that all the traffic coming out of 
the ASs is routed toward one LB, is that right? How do you make sure the 
return traffic is evenly distributed between LBs?

It's a pretty interesting requirement that you have, but I am quite sure the 
solution will have to be quite far from MagLev's design, and probably less 
efficient.

- Pierre


On 25 Apr 2017, at 05:11, Zhou, Danny <danny.z...@intel.com> wrote:

Sharing my two cents as well:

Firstly, introducing GRE or whatever other tunneling protocol to the LB introduces 
performance overhead (for encap and decap) for both the load balancer and 
the network service. Secondly, some other mechanism on the network service node not 
only needs to decap the GRE but also needs to perform a DNAT operation in order 
to change the destination IP of the original frame from the LB’s IP to the service 
entity’s IP, which adds complexity to the network service.

Existing well-known load balancers such as Netfilter or Nginx do not adopt this 
tunneling approach; they simply do a service node selection followed by a 
NAT operation.

-Danny

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Ni, Hongjun
Sent: Tuesday, April 25, 2017 11:05 AM
To: Ed Warnicke mailto:hagb...@gmail.com>>
Cc: Li, Johnson mailto:johnson...@intel.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hi Ed,

Thanks for your prompt response.

This item is required to handle legacy ASs, because some legacy ASs do not want 
to change their underlay forwarding infrastructure.

Besides, some AS IPs are private and invisible outside the AS cluster domain, 
and are not allowed to be exposed to the external network.

Thanks,
Hongjun

From: Ed Warnicke [mailto:hagb...@gmail.com]
Sent: Tuesday, April 25, 2017 10:44 AM
To: Ni, Hongjun mailto:hongjun...@intel.com>>
Cc: vpp-dev@lists.fd.io; Li, Johnson 
mailto:johnson...@intel.com>>
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hongjun,

I can see this point of view, but it radically reduces the scalability of the 
whole system.
Wouldn't it just make sense to run vpp or some other mechanism to decap the GRE 
on whatever is running the AS and feed whatever we are
load balancing to? Forcing return traffic through the central load balancer 
radically reduces scalability (which is why
Maglev, which inspired what we are doing here, doesn't do it that way either).

Ed

On Mon, Apr 24, 2017 at 7:18 PM, Ni, Hongjun <hongjun...@intel.com> wrote:
Hey,

Currently, traffic received for a given VIP (or VIP prefix) is tunneled using 
GRE towards
the different ASs in a way that (tries to) ensure that a given session will
always be tunneled to the same AS.

But in a real environment, many Application Servers do not support the GRE feature.
So we raise a requirement for the LB in VPP:
(1). When traffic is received for a VIP, the LB needs to load balance it, then do 
DNAT to change the traffic’s destination IP from the VIP to the AS’s IP.
(2). When traffic is returned from an AS, the LB will first do SNAT to change 
the traffic’s source IP from the AS’s IP to the VIP, then go through the load-balance 
sessions, and then send it to the clients.

Any comments about this requirement are welcome.

Thanks a lot,
Hongjun


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] five tuple nat

2017-04-25 Thread otroan
Ewan,

> Do we have any plan to support five-tuple NAT like the Linux kernel?

That should already be supported in the SNAT plugin.
https://wiki.fd.io/view/VPP/SNAT

Best regards,
Ole


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Pierre Pfister (ppfister)

On 25 Apr 2017, at 09:52, Ni, Hongjun <hongjun...@intel.com> wrote:

Hi Pierre,

For the LB distribution case, I think we could assign a node IP to each LB box.
When packets are received from the client, the LB will do both SNAT and DNAT, i.e. source 
IP -> LB’s node IP, destination IP -> AS’s IP.
When packets are returned from the AS, the LB will also do both DNAT and SNAT, i.e. source IP -> 
AS’s IP, destination IP -> client’s IP.

I see.
Doing so, you completely hide the client's source address from the application.
You also require per-connection binding at the load balancer (MagLev does 
per-connection binding, but in a way which allows for hash collisions, because 
it is not a big deal if two flows use the same entry in the hash table. This 
allows for a smaller, fixed-size hash table, which also provides a performance 
advantage to MagLev).
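
To make the fixed-size-table idea concrete, here is a small, illustrative Python
sketch of a Maglev-style lookup: flows hash into a fixed-size table of backends,
so two flows may share an entry, but the table never grows and the per-packet
lookup is a single hash plus an index. This only shows the idea; it is not VPP's
or Google's actual implementation, and the backend names are made up.

import hashlib

TABLE_SIZE = 65537                       # fixed (prime) table size
BACKENDS = ["as-1", "as-2", "as-3"]      # hypothetical application servers

def _h(value):
    return int.from_bytes(hashlib.sha256(value.encode()).digest()[:8], "big")

# Naive population: each slot picks a backend. (Real Maglev uses per-backend
# permutations for an even, mostly-stable fill; this only shows the lookup idea.)
table = [BACKENDS[_h(str(i)) % len(BACKENDS)] for i in range(TABLE_SIZE)]

def lookup(five_tuple):
    # Two different flows may hash to the same slot; that is acceptable,
    # both simply go to the same backend.
    return table[_h(repr(five_tuple)) % TABLE_SIZE]

print(lookup(("198.51.100.7", 40312, "10.0.0.100", 80, "tcp")))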

In my humble opinion, using SNAT+DNAT is a terribly bad idea, so I would advise 
you to reconsider finding a way to either:
- Enable any type of packet tunneling protocol in your ASs (IPinIP, L2TP, 
whatever-other-protocol, and extend VPP's LB plugin with the one you pick).
- Put some box closer to the ASs (bump in the wire) for decap.
- If your routers support MPLS, you could also use it as encap.

If you really want to use SNAT+DNAT (god forbid), and are willing to suffer (or 
somehow like suffering), you may try to:
- Use VPP's SNAT on the client-facing interface. The SNAT will just change 
clients' source addresses to one of the LB's addresses.
- Extend VPP's LB plugin to support DNAT "encap".
- Extend VPP's LB plugin to support return traffic and stateless SNAT based on 
the LB flow table (and find a way to make that work on multiple cores...).
The client->AS traffic, in VPP, would do ---> client-facing-iface --> SNAT --> 
LB(DNAT) --> AS-facing-iface
The AS->client traffic, in VPP, would do ---> AS-facing-iface --> LB(Stateless 
SNAT) --> SNAT Plugin (doing DNAT-back) --> client-facing-iface

Now the choice is all yours.
But I will have warned you.

Cheers,

- Pierre


Thanks,
Hongjun

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 3:12 PM
To: Zhou, Danny mailto:danny.z...@intel.com>>
Cc: Ni, Hongjun mailto:hongjun...@intel.com>>; Ed 
Warnicke mailto:hagb...@gmail.com>>; Li, Johnson 
mailto:johnson...@intel.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hello all,

As mentioned by Ed, introducing return traffic would dramatically reduce the 
performance of the solution.
-> Return traffic typically consists of data packets, whereas forward traffic 
mostly consists of ACKs. So you will have to have significantly more LB boxes 
if you want to support all your return traffic.
-> Having to deal with return traffic also means that we need to either make 
sure return traffic goes through the same core, or add locks to the structures 
(for now, everything is lockless, per-core), or steer traffic from core to core.

There is also something I am not sure I understand. You mentioned DNAT in 
order to steer the traffic to the AS, but how do you make sure the return 
traffic goes back to the LB? My guess is that all the traffic coming out of 
the ASs is routed toward one LB, is that right? How do you make sure the 
return traffic is evenly distributed between LBs?

It's a pretty interesting requirement that you have, but I am quite sure the 
solution will have to be quite far from MagLev's design, and probably less 
efficient.

- Pierre


On 25 Apr 2017, at 05:11, Zhou, Danny <danny.z...@intel.com> wrote:

Sharing my two cents as well:

Firstly, introducing GRE or whatever other tunneling protocol to the LB introduces 
performance overhead (for encap and decap) for both the load balancer and 
the network service. Secondly, some other mechanism on the network service node not 
only needs to decap the GRE but also needs to perform a DNAT operation in order 
to change the destination IP of the original frame from the LB’s IP to the service 
entity’s IP, which adds complexity to the network service.

Existing well-known load balancers such as Netfilter or Nginx do not adopt this 
tunneling approach; they simply do a service node selection followed by a 
NAT operation.

-Danny

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Ni, Hongjun
Sent: Tuesday, April 25, 2017 11:05 AM
To: Ed Warnicke mailto:hagb...@gmail.com>>
Cc: Li, Johnson mailto:johnson...@intel.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hi Ed,

Thanks for your prompt response.

This item is required to handle legacy ASs, because some legacy ASs do not want 
to change their underlay forwarding infrastructure.

Besides, some AS IPs are private and invisible outside the AS cluster domain, 
and are not allowed to be exposed to the external network.

Thanks,
H

Re: [vpp-dev] How can I get API messages IDs?

2017-04-25 Thread otroan
Hi,

> Every API message has an ID number; where can I get the specific number?

The API client gets the message dictionary on connect.
The message ID numbers depend on the plugins loaded and so on.

There is an API where you can map name to ID.
vppapiclient.h:
  int vac_get_msg_index(unsigned char * name);

or you can iterate through the hash table that maps "name_crc" strings to message indices:
api_main_t *am = &api_main;
am->msg_index_by_name_and_crc;

Typically the language bindings would hide this for you. What are you trying to 
do?

Best regards,
Ole


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Zhou, Danny
Thanks Pierre, comments inline.

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 4:11 PM
To: Ni, Hongjun 
Cc: Zhou, Danny ; Ed Warnicke ; Li, 
Johnson ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP


On 25 Apr 2017, at 09:52, Ni, Hongjun <hongjun...@intel.com> wrote:

Hi Pierre,

For the LB distribution case, I think we could assign a node IP to each LB box.
When packets are received from the client, the LB will do both SNAT and DNAT, i.e. source 
IP -> LB’s node IP, destination IP -> AS’s IP.
When packets are returned from the AS, the LB will also do both DNAT and SNAT, i.e. source IP -> 
AS’s IP, destination IP -> client’s IP.

I see.
Doing so, you completely hide the client's source address from the application.
You also require per-connection binding at the load balancer (MagLev does 
per-connection binding, but in a way which allows for hash collisions, because 
it is not a big deal if two flows use the same entry in the hash table. This 
allows for a smaller, fixed-size hash table, which also provides a performance 
advantage to MagLev).

In my humble opinion, using SNAT+DNAT is a terribly bad idea, so I would advise 
you to reconsider finding a way to either:
- Enable any type of packet tunneling protocol in your ASs (IPinIP, L2TP, 
whatever-other-protocol, and extend VPP's LB plugin with the one you pick).
- Put some box closer to the ASs (bump in the wire) for decap.
- If your routers support MPLS, you could also use it as encap.
[Zhou, Danny] In a cloud environment where hundreds or thousands of ASs are 
dynamically deployed in VMs or containers, it is not easy for the orchestrator 
(which has the global view) to find close enough boxes that can be configured 
automatically to offload the encap/decap work. Most likely, it will still be 
software doing the encap/decap work. Secondly, if we are targeting small 
packet line rate performance, adding the tunnel headers increases the total 
packet size, hence decreases the packet efficiency and causes packet loss. I would 
consider adding GRE tunnels for LB an abuse of the tunneling protocol, as 
those tunneling protocols are not designed for this case. SNAT + DNAT has its 
own disadvantages, but it is widely used in software-centric cloud 
environments orchestrated by OpenStack or Kubernetes.
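
To put a rough number on the small-packet argument, here is a back-of-the-envelope
calculation in Python (illustrative assumptions, not a measurement: 10 Gbit/s line
rate, 64-byte minimum Ethernet frames, 20 bytes of preamble plus inter-frame gap
per frame on the wire, GRE over IPv4 adding 24 bytes of outer headers):

LINE_RATE_BPS = 10e9            # assumed 10 Gbit/s link
WIRE_OVERHEAD = 20              # preamble + inter-frame gap, bytes per frame
FRAME = 64                      # minimum Ethernet frame
GRE_IPV4_ENCAP = 20 + 4         # outer IPv4 header + basic GRE header

def pps(frame_bytes):
    return LINE_RATE_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)

plain, encap = pps(FRAME), pps(FRAME + GRE_IPV4_ENCAP)
print("plain: %.2f Mpps, GRE encap: %.2f Mpps (%.0f%% fewer packets/s)"
      % (plain / 1e6, encap / 1e6, (1 - encap / plain) * 100))
# With these assumptions: ~14.88 Mpps plain vs ~11.57 Mpps encapsulated, ~22% fewer.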

If you really want to use SNAT+DNAT (god forbid), and are willing to suffer (or 
somehow like suffering), you may try to:
- Use VPP's SNAT on the client-facing interface. The SNAT will just change 
clients' source addresses to one of the LB's addresses.
- Extend VPP's LB plugin to support DNAT "encap".
- Extend VPP's LB plugin to support return traffic and stateless SNAT based on 
the LB flow table (and find a way to make that work on multiple cores...).
The client->AS traffic, in VPP, would do ---> client-facing-iface --> SNAT --> 
LB(DNAT) --> AS-facing-iface
The AS->client traffic, in VPP, would do ---> AS-facing-iface --> LB(Stateless 
SNAT) --> SNAT Plugin (doing DNAT-back) --> client-facing-iface

Now the choice is all yours.
But I will have warned you.

Cheers,

- Pierre



Thanks,
Hongjun

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 3:12 PM
To: Zhou, Danny mailto:danny.z...@intel.com>>
Cc: Ni, Hongjun mailto:hongjun...@intel.com>>; Ed 
Warnicke mailto:hagb...@gmail.com>>; Li, Johnson 
mailto:johnson...@intel.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hello all,

As mentioned by Ed, introducing return traffic would dramatically reduce the 
performance of the solution.
-> Return traffic typically consists of data packets, whereas forward traffic 
mostly consists of ACKs. So you will have to have significantly more LB boxes 
if you want to support all your return traffic.
-> Having to deal with return traffic also means that we need to either make 
sure return traffic goes through the same core, or add locks to the structures 
(for now, everything is lockless, per-core), or steer traffic from core to core.

There is also something I am not sure I understand. You mentioned DNAT in 
order to steer the traffic to the AS, but how do you make sure the return 
traffic goes back to the LB? My guess is that all the traffic coming out of 
the ASs is routed toward one LB, is that right? How do you make sure the 
return traffic is evenly distributed between LBs?

It's a pretty interesting requirement that you have, but I am quite sure the 
solution will have to be quite far from MagLev's design, and probably less 
efficient.

- Pierre


On 25 Apr 2017, at 05:11, Zhou, Danny <danny.z...@intel.com> wrote:

Sharing my two cents as well:

Firstly, introducing GRE or whatever other tunneling protocol to the LB introduces 
performance overhead (for encap and decap) for both the load balancer and 
the network service. Secondly, some other mechanism on the network service node not 
only need

Re: [vpp-dev] vpp_papi: No such message type or failed CRC checksum

2017-04-25 Thread otroan
Hi,

This means that you gave the Python API a JSON definition for an API message 
that is not available on the running VPP instance.
That might be caused by a plugin not being loaded, a version mismatch...

Best regards,
Ole

> When I try to connect vpp using python api, it shows the debug messages.
> 
> My codes:
> 
> from vpp_papi import VPP
> vpp = VPP()
> vpp.connect('test')
> 
> 
> And the terminal shows:
> 
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> udp_ping_add_del_reply_a08dec44
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_transit_disable_reply_405af39d
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_transit_disable_ee3cf5f9
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_vni_disable_reply_2e8d61fa
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> udp_ping_add_del_req_a7280e39
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_vni_enable_reply_6a273d6e
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_export_enable_disable_20586df7
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> udp_ping_export_reply_7f8a6c87
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_disable_reply_711375e4
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_enable_6bf84bd6
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_vni_disable_27392af3
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_transit_enable_reply_4dc0cf51
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_vni_enable_489195ec
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_disable_1a373a3b
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_enable_reply_ca79fc00
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> ioam_cache_ip6_enable_disable_reply_67f2a36b
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> ioam_cache_ip6_enable_disable_de631cb6
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_transit_enable_2c399a17
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> udp_ping_export_req_e43a0203
> DEBUG:vpp_papi:No such message type or failed CRC checksum: 
> vxlan_gpe_ioam_export_enable_disable_reply_2baa825a
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How can I get API messages IDs?

2017-04-25 Thread Weitao Han
Thank you!

I'm using the Python binding for the VPP API. I want to receive
some asynchronous messages and use register_event_callback to register
a papi_event_handler, just like the wiki Python Language Binding page does.

But I don't know how to distinguish what kind of messages I received
asynchronously, so I want to know the message IDs for all API messages.

Is there any other way to do this without knowing message IDs?

Best regards
Weitao Han
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How can I get API messages IDs?

2017-04-25 Thread otroan
Weitao,

> I'm using the Python binding for the VPP API. I want to receive some asynchronous 
> messages and use register_event_callback to register a 
> papi_event_handler, just like the wiki Python Language Binding page does.
> 
> But I don't know how to distinguish what kind of messages I received 
> asynchronously, so I want to know the message IDs for all API messages.
> 
> Is there any other way to do this without knowing message IDs?

The callback is called with msgname and result (named tuple).
You should be able to demux based on the message name.
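
For example, a minimal sketch of such a demux with vpp_papi (the event names and
fields below are placeholders; use whichever events your VPP actually sends):

from vpp_papi import VPP

def papi_event_handler(msgname, result):
    # msgname is the API message name; result is a named tuple with its fields.
    if msgname == "sw_interface_event":          # placeholder event name
        print("interface event:", result)
    else:
        print("unhandled async message:", msgname, result)

vpp = VPP()
vpp.connect("example-client")
vpp.register_event_callback(papi_event_handler)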

Best regards,
Ole


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] An issue about SNAT when using different in and out interfaces

2017-04-25 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco)
Hi Hongjun,

What is your full vpp config? There should be a route or something like it, so 
vpp knows where to send packets destined to 10.10.23.46.
I've tried some older versions, and without "ip route add 10.10.23.0/24 via 
GigabitEthernet0/a/0" it doesn't work either.

Regards,
Matus

From: Ni, Hongjun [mailto:hongjun...@intel.com]
Sent: Tuesday, April 25, 2017 7:51 AM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
; vpp-dev@lists.fd.io
Cc: nsh_sfc-...@lists.fd.io
Subject: RE: [vpp-dev] An issue about SNAT when using different in and out 
interfaces

Hi Matus,

Yes. When the out interface is a virtual tunnel interface and has no address, it does 
not work.

Thanks a lot,
Hongjun

From: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
[mailto:matfa...@cisco.com]
Sent: Tuesday, April 25, 2017 1:34 PM
To: Ni, Hongjun mailto:hongjun...@intel.com>>; 
vpp-dev@lists.fd.io
Cc: nsh_sfc-...@lists.fd.io
Subject: RE: [vpp-dev] An issue about SNAT when using different in and out 
interfaces

Hi,

It looks like there is a bug when the snat interface doesn't have an address (if 
the snat interfaces have addresses, it works fine). I will fix the issue.

Regards,
Matus

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Ni, Hongjun
Sent: Tuesday, April 25, 2017 6:05 AM
To: vpp-dev@lists.fd.io
Cc: nsh_sfc-...@lists.fd.io
Subject: [vpp-dev] An issue about SNAT when using different in and out 
interfaces

Hey,

When I applied SNAT with different in and out interfaces, I ran into an issue:

My configuration:
set interface snat in TenGigabitEthernet5/0/0 out TenGigabitEthernet5/0/1
snat add static mapping local 192.168.50.76 external 10.10.23.45

I sent packets from TenGigabitEthernet5/0/0.
With the previous code from about a month ago, the packets were sent to 
TenGigabitEthernet5/0/1 as expected.

But with the current 17.04 code, packets are sent to TenGigabitEthernet5/0/0, which 
is not expected.
Could you give some advice on how to fix this issue?

Below is the interface and snat detail:
DBGvpp# sh int
              Name               Idx       State          Counter          Count
TenGigabitEthernet5/0/0           1         up       rx packets                1
                                                     rx bytes                 60
                                                     tx packets                1
                                                     tx bytes                 60
                                                     ip4                       1
TenGigabitEthernet5/0/1           2         up
local0                            0        down
DBGvpp#
DBGvpp# sh snat detail
SNAT mode: dynamic translations enabled
TenGigabitEthernet5/0/0 in
TenGigabitEthernet5/0/1 out
0 users, 0 outside addresses, 0 active sessions, 1 static mappings
Hash table in2out
0 active elements
0 free lists
0 linear search buckets
Hash table out2in
0 active elements
0 free lists
0 linear search buckets
Hash table worker-by-in
0 active elements
0 free lists
0 linear search buckets
Hash table worker-by-out
0 active elements
0 free lists
0 linear search buckets
static mappings:
local 192.168.50.76 external 10.10.23.45 vrf 0


Below is the packet trace:

00:02:16:415613: dpdk-input
  TenGigabitEthernet5/0/0 rx queue 0
  buffer 0xbf9c22: current data 14, length 46, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr 
0x28e6c780
packet_type 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
  IP4: 08:00:27:61:07:05 -> 90:e2:ba:48:7a:80
  UDP: 192.168.50.76 -> 10.10.23.46
tos 0x00, ttl 64, length 46, checksum 0x6693
fragment id 0x
  UDP: 63 -> 63
length 26, checksum 0xa2be
00:02:16:415653: ip4-input-no-checksum
  UDP: 192.168.50.76 -> 10.10.23.46
tos 0x00, ttl 64, length 46, checksum 0x6693
fragment id 0x
  UDP: 63 -> 63
length 26, checksum 0xa2be
00:02:16:415668: snat-in2out
  SNAT_IN2OUT_FAST_PATH: sw_if_index 1, next index 2, session -1
00:02:16:415685: snat-in2out-slowpath
  SNAT_IN2OUT_SLOW_PATH: sw_if_index 1, next index 0, session -1
00:02:16:415695: ip4-lookup
  fib 0 dpo-idx 3 flow hash: 0x
  UDP: 192.168.50.76 -> 10.10.23.46
tos 0x00, ttl 64, length 46, checksum 0x6693
fragment id 0x
  UDP: 63 -> 63
length 26, checksum 0xa2be
00:02:16:415703: ip4-rewrite
  tx_sw_if_index 1 dpo-idx 3 : ipv4 via 10.10.23.46 TenGigabitEthernet5/0/0: 
90e2ba48234590e2ba487a800800 flow hash: 0x
  : 90e2ba48234590e2ba487a800800452e3f116793c0a8324c0a0a
  0020: 172e003f003f001aa2be0

Re: [vpp-dev] vpp_papi: No such message type or failed CRC checksum

2017-04-25 Thread Weitao Han
Hi,

I installed the vpp 17.04 release on a new, clean ubuntu 16.04 server; apart from
that, I did nothing.

When I first connect to vpp using the python api, the terminal shows those DEBUG
messages.

Is this behavior expected?

Best regards,
Weitao Han

2017-04-25 16:47 GMT+08:00 :

> Hi,
>
> This means that you give the Python API a JSON definition for an API
> message not available on the running VPP instance.
> That might be caused by a plugin not loaded, version mismatch...
>
> Best regards,
> Ole
>
> > When I try to connect vpp using python api, it shows the debug messages.
> >
> > My codes:
> >
> > from vpp_papi import VPP
> > vpp = VPP()
> > vpp.connect('test')
> >
> >
> > And the terminal shows:
> >
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> udp_ping_add_del_reply_a08dec44
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_transit_disable_reply_405af39d
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_transit_disable_ee3cf5f9
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_vni_disable_reply_2e8d61fa
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> udp_ping_add_del_req_a7280e39
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_vni_enable_reply_6a273d6e
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_export_enable_disable_20586df7
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> udp_ping_export_reply_7f8a6c87
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_disable_reply_711375e4
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_enable_6bf84bd6
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_vni_disable_27392af3
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_transit_enable_reply_4dc0cf51
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_vni_enable_489195ec
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_disable_1a373a3b
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_enable_reply_ca79fc00
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> ioam_cache_ip6_enable_disable_reply_67f2a36b
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> ioam_cache_ip6_enable_disable_de631cb6
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_transit_enable_2c399a17
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> udp_ping_export_req_e43a0203
> > DEBUG:vpp_papi:No such message type or failed CRC checksum:
> vxlan_gpe_ioam_export_enable_disable_reply_2baa825a
> > ___
> > vpp-dev mailing list
> > vpp-dev@lists.fd.io
> > https://lists.fd.io/mailman/listinfo/vpp-dev
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] vpp gerrit 6387 virl sim start failure

2017-04-25 Thread Dave Barach (dbarach)
Please see https://gerrit.fd.io/r/#/c/6387. Any idea why this happened?


13:06:41 + VIRL_SID[${index}]='ERROR: Simulation started OK but devices never 
changed to ACTIVE state
13:06:41 Last VIRL response:
13:06:41 {u'\''session-OYbjfr'\'': {u'\''~mgmt-lxc'\'': {u'\''vnc-console'\'': 
False, u'\''subtype'\'': u'\''mgmt-lxc'\'', u'\''state'\'': u'\''ABSENT'\'', 
u'\''management-protocol'\'': u'\''ssh'\'', u'\''management-proxy'\'': 
u'\''self'\'', u'\''serial-ports'\'': 0}, u'\''tg1'\'': {u'\''vnc-console'\'': 
True, u'\''subtype'\'': u'\''server'\'', u'\''state'\'': u'\''ABSENT'\'', 
u'\''management-protocol'\'': u'\''ssh'\'', u'\''management-proxy'\'': 
u'\''lxc'\'', u'\''serial-ports'\'': 1}, u'\''sut1'\'': {u'\''vnc-console'\'': 
True, u'\''subtype'\'': u'\''vPP'\'', u'\''state'\'': u'\''ABSENT'\'', 
u'\''management-protocol'\'': u'\''ssh'\'', u'\''management-proxy'\'': 
u'\''lxc'\'', u'\''serial-ports'\'': 1}, u'\''sut2'\'': {u'\''vnc-console'\'': 
True, u'\''subtype'\'': u'\''vPP'\'', u'\''state'\'': u'\''ABSENT'\'', 
u'\''management-protocol'\'': u'\''ssh'\'', u'\''management-proxy'\'': 
u'\''lxc'\'', u'\''serial-ports'\'': 1}}}'
13:06:41 + retval=1
13:06:41 + '[' 1 -ne 0 ']'
13:06:41 + echo 'VIRL simulation start failed on 10.30.51.29'
13:06:41 VIRL simulation start failed on 10.30.51.29


Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Li, Johnson
Hi Ed,
My main concern is the challenge for the application server and the
management system, too. For Maglev, the backend AS not only needs to
de-capsulate the GRE tunneling header, it also needs to handle the VIP.

Adding the GRE tunneling absolutely decreases performance, and handling the
same VIP on thousands of backend ASs results in network management complexity:
the ASs need at least three planes of network configuration (GRE tunneling VTEP,
VIP, and the data-plane network to other services in the cluster). So how to
balance scalability and complexity is quite a hard question to answer.

Waiting for your opinions, thanks!

Best Regards,
-Johnson

From: Ni, Hongjun
Sent: Tuesday, April 25, 2017 11:05 AM
To: Ed Warnicke 
Cc: vpp-dev@lists.fd.io; Li, Johnson 
Subject: RE: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hi Ed,

Thanks for your prompt response.

This item is required to handle legacy ASs, because some legacy ASs do not want 
to change their underlay forwarding infrastructure.

Besides, some AS IPs are private and invisible outside the AS cluster domain, 
and are not allowed to be exposed to the external network.

Thanks,
Hongjun

From: Ed Warnicke [mailto:hagb...@gmail.com]
Sent: Tuesday, April 25, 2017 10:44 AM
To: Ni, Hongjun mailto:hongjun...@intel.com>>
Cc: vpp-dev@lists.fd.io; Li, Johnson 
mailto:johnson...@intel.com>>
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hongjun,

I can see this point of view, but it radically reduces the scalability of the 
whole system.
Wouldn't it just make sense to run vpp or some other mechanism to decap the GRE 
on whatever is running the AS and feed whatever we are
load balancing to? Forcing return traffic through the central load balancer 
radically reduces scalability (which is why
Maglev, which inspired what we are doing here, doesn't do it that way either).

Ed

On Mon, Apr 24, 2017 at 7:18 PM, Ni, Hongjun <hongjun...@intel.com> wrote:
Hey,

Currently, traffic received for a given VIP (or VIP prefix) is tunneled using 
GRE towards
the different ASs in a way that (tries to) ensure that a given session will
always be tunneled to the same AS.

But in a real environment, many Application Servers do not support the GRE feature.
So we raise a requirement for the LB in VPP:
(1). When traffic is received for a VIP, the LB needs to load balance it, then do 
DNAT to change the traffic’s destination IP from the VIP to the AS’s IP.
(2). When traffic is returned from an AS, the LB will first do SNAT to change 
the traffic’s source IP from the AS’s IP to the VIP, then go through the load-balance 
sessions, and then send it to the clients.

Any comments about this requirement are welcome.

Thanks a lot,
Hongjun


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Thomas F Herbert



On 04/25/2017 04:45 AM, Zhou, Danny wrote:


Thanks Pierre, comments inline.

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 4:11 PM
To: Ni, Hongjun
Cc: Zhou, Danny; Ed Warnicke; Li, Johnson; vpp-dev@lists.fd.io

Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

On 25 Apr 2017, at 09:52, Ni, Hongjun <hongjun...@intel.com> wrote:

Hi Pierre,

For the LB distribution case, I think we could assign a node IP to
each LB box.

When packets are received from the client, the LB will do both SNAT and DNAT,
i.e. source IP -> LB’s node IP, destination IP -> AS’s IP.

When packets are returned from the AS, the LB will also do both DNAT and SNAT, i.e.
source IP -> AS’s IP, destination IP -> client’s IP.

Does NSH solve this problem of transparently forwarding the traffic?


I see.

Doing so, you completely hide the client's source address from the 
application.


You also require per-connection binding at the load balancer (MagLev 
does per-connection binding, but in a way which allows for hash 
collisions, because it is not a big deal if two flows use the same 
entry in the hash table. This allows for a smaller, fixed-size hash 
table, which also provides a performance advantage to MagLev).


In my humble opinion, using SNAT+DNAT is a terribly bad idea, so I 
would advise you to reconsider finding a way to either:


- Enable any type of packet tunneling protocol in your ASs (IPinIP, 
L2TP, whatever-other-protocol, and extend VPP's LB plugin with the one 
you pick).


- Put some box closer to the ASs (bump in the wire) for decap.

- If your routers support MPLS, you could also use it as encap.

[Zhou, Danny] In a cloud environment where hundreds or thousands 
of ASs are dynamically deployed in VMs or containers, it is not easy 
for the orchestrator (which has the global view) to find close enough boxes 
that can be configured automatically to offload the encap/decap work. 
Most likely, it will still be software doing the encap/decap work. 
Secondly, if we are targeting small-packet line-rate performance, adding 
the tunnel headers increases the total packet size, hence decreases the 
packet efficiency and causes packet loss. I would consider adding GRE 
tunnels for LB an abuse of the tunneling protocol, as those tunneling 
protocols are not designed for this case. SNAT + DNAT has its own 
disadvantages, but it is widely used in software-centric cloud 
environments orchestrated by OpenStack or Kubernetes.


If you really want to use SNAT+DNAT (god forbid), and are willing to 
suffer (or somehow like suffering), you may try to:


- Use VPP's SNAT on the client-facing interface. The SNAT will just 
change clients source addresses to one of LB's source addresses.


- Extend VPP's LB plugin to support DNAT "encap".

- Extend VPP's LB plugin to support return traffic and stateless SNAT 
based on the LB flow table (and find a way to make that work on multiple 
cores...).


The client->AS traffic, in VPP, would do ---> client-facing-iface --> 
SNAT --> LB(DNAT) --> AS-facing-iface


The AS->client traffic, in VPP, would do ---> AS-facing-iface --> 
LB(Stateless SNAT) --> SNAT Plugin (doing DNAT-back) --> 
client-facing-iface


Now the choice is all yours.

But I will have warned you.

Cheers,

- Pierre



Thanks,

Hongjun

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 3:12 PM
To: Zhou, Danny <danny.z...@intel.com>
Cc: Ni, Hongjun <hongjun...@intel.com>; Ed Warnicke <hagb...@gmail.com>; Li, Johnson <johnson...@intel.com>; vpp-dev@lists.fd.io

Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hello all,

As mentioned by Ed, introducing return traffic would dramatically
reduce the performance of the solution.

-> Return traffic typically consists of data packets, whereas
forward traffic mostly consists of ACKs. So you will have to have
significantly more LB boxes if you want to support all your return
traffic.

-> Having to deal with return traffic also means that we need to
either make sure return traffic goes through the same core, or add
locks to the structures (for now, everything is lockless,
per-core), or steer traffic from core to core.

There is also something I am not sure I understand. You
mentioned DNAT in order to steer the traffic to the AS, but how do
you make sure the return traffic goes back to the LB? My guess is
that all the traffic coming out of the ASs is routed toward one
LB, is that right? How do you make sure the return traffic is
evenly distributed between LBs?

It's a pretty interesting requirement that you have, but I am
quite sure the solution will have to be quite far from MagLev's
design, and probably less efficient.

- Pierre

On 25 Apr 2017, at 05:11, Zhou, 

Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Zhou, Danny


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Thomas F Herbert
Sent: Tuesday, April 25, 2017 10:01 PM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP




On 04/25/2017 04:45 AM, Zhou, Danny wrote:
Thanks Pierre, comments inline.

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 4:11 PM
To: Ni, Hongjun 
Cc: Zhou, Danny ; Ed 
Warnicke ; Li, Johnson 
; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP


On 25 Apr 2017, at 09:52, Ni, Hongjun <hongjun...@intel.com> wrote:

Hi Pierre,

For the LB distribution case, I think we could assign a node IP to each LB box.
When packets are received from the client, the LB will do both SNAT and DNAT, i.e. source 
IP -> LB's node IP, destination IP -> AS's IP.
When packets are returned from the AS, the LB will also do both DNAT and SNAT, i.e. source IP -> 
AS's IP, destination IP -> client's IP.
Does NSH solve this problem of transparently forwarding the traffic?
[Zhou, Danny] No, this has nothing to do with NSH. We are trying to use VPP to 
replace the in-kernel iptables/Netfilter-based distributed load balancer 
(controlled by kube-proxy) for high-performance container networking in an NFV 
environment. And our learning from the NSH work is that even though VPP's VTEP 
implementation has much higher performance than the in-kernel VTEP, it still brings a 
significant negative performance impact in comparison to processing non-tunneled 
packets (as you can see from the published CSIT performance reports), so the legacy 
DNAT/SNAT-based approach still has its unique benefits when processing small 
packets.


I see.
Doing so, you completely hide the client's source address from the application.
You also require per-connection binding at the load balancer (MagLev does 
per-connection binding, but in a way which allows for hash collisions, because 
it is not a big deal if two flows use the same entry in the hash table. This 
allows for a smaller, fixed-size hash table, which also provides a performance 
advantage to MagLev).

In my humble opinion, using SNAT+DNAT is a terribly bad idea, so I would advise 
you to reconsider finding a way to either:
- Enable any type of packet tunneling protocol in your ASs (IPinIP, L2TP, 
whatever-other-protocol, and extend VPP's LB plugin with the one you pick).
- Put some box closer to the ASs (bump in the wire) for decap.
- If your routers support MPLS, you could also use it as encap.
[Zhou, Danny] In a cloud environment where hundreds or thousands of ASs are 
dynamically deployed in VMs or containers, it is not easy for the orchestrator 
(which has the global view) to find close enough boxes that can be configured 
automatically to offload the encap/decap work. Most likely, it will still be 
software doing the encap/decap work. Secondly, if we are targeting small 
packet line rate performance, adding the tunnel headers increases the total 
packet size, hence decreases the packet efficiency and causes packet loss. I would 
consider adding GRE tunnels for LB an abuse of the tunneling protocol, as 
those tunneling protocols are not designed for this case. SNAT + DNAT has its 
own disadvantages, but it is widely used in software-centric cloud 
environments orchestrated by OpenStack or Kubernetes.

If you really want to use SNAT+DNAT (god forbid), and are willing to suffer (or 
somehow like suffering), you may try to:
- Use VPP's SNAT on the client-facing interface. The SNAT will just change 
clients' source addresses to one of the LB's addresses.
- Extend VPP's LB plugin to support DNAT "encap".
- Extend VPP's LB plugin to support return traffic and stateless SNAT based on 
the LB flow table (and find a way to make that work on multiple cores...).
The client->AS traffic, in VPP, would do ---> client-facing-iface --> SNAT --> 
LB(DNAT) --> AS-facing-iface
The AS->client traffic, in VPP, would do ---> AS-facing-iface --> LB(Stateless 
SNAT) --> SNAT Plugin (doing DNAT-back) --> client-facing-iface

Now the choice is all yours.
But I will have warned you.

Cheers,

- Pierre




Thanks,
Hongjun

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 3:12 PM
To: Zhou, Danny mailto:danny.z...@intel.com>>
Cc: Ni, Hongjun mailto:hongjun...@intel.com>>; Ed 
Warnicke mailto:hagb...@gmail.com>>; Li, Johnson 
mailto:johnson...@intel.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hello all,

As mentioned by Ed, introducing return traffic would dramatically reduce the 
performance of the solution.
-> Return traffic typically consists of data packets, whereas forward traffic 
mostly consists of ACKs. So you will have to have significantly more LB boxes 
if you want to support all your return traffi

Re: [vpp-dev] some problem about sub-interface

2017-04-25 Thread Damjan Marion

> On 24 Apr 2017, at 12:43, 薛欣颖  wrote:
> 
> Hi guys,
> I created a host-interface and a sub-interface; when I created a bridge on the 
> sub-interface, the two sub-interfaces could not communicate. Also, in an L3 setup, 
> ARP cannot be learned normally.
> 
> Is anyone else seeing this problem? 
> 
> Does vpp support this feature now? If not, are there any plans to develop this 
> feature?
> 

No, at the moment we don't have support for sub-interfaces on af-packet. The story 
is that the VLAN ID is transmitted out-of-band, so some code change will be 
needed to take care of it.
Nothing terribly hard to implement, but it requires time. Volunteers are welcome...

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Ed Warnicke
Hongjun,

Thinking it through a bit, there are *many* different approaches to load
balancers.  I would suggest if you want to support a different one, then
starting a new plugin for it may be a good move :)

Ed

On Mon, Apr 24, 2017 at 8:04 PM, Ni, Hongjun  wrote:

> Hi Ed,
>
>
>
> Thanks for your prompt response.
>
>
>
> This item is required to handle legacy ASs, because some legacy ASs do not
> want to change their underlay forwarding infrastructure.
>
>
>
> Besides, some AS IPs are private and invisible outside the AS cluster
> domain, and are not allowed to be exposed to the external network.
>
>
>
> Thanks,
>
> Hongjun
>
>
>
> *From:* Ed Warnicke [mailto:hagb...@gmail.com]
> *Sent:* Tuesday, April 25, 2017 10:44 AM
> *To:* Ni, Hongjun 
> *Cc:* vpp-dev@lists.fd.io; Li, Johnson 
> *Subject:* Re: [vpp-dev] Requirement on Load Balancer plugin for VPP
>
>
>
> Hongjun,
>
>
>
> I can see this point of view, but it radically reduces the scalability of
> the whole system.
>
> Wouldn't it just make sense to run vpp or some other mechanism to decap
> the GRE on whatever is running the AS and feed whatever we are
>
> load balancing to? Forcing return traffic through the central load balancer
> radically reduces scalability (which is why
>
> Maglev, which inspired what we are doing here, doesn't do it that way
> either).
>
>
>
> Ed
>
>
>
> On Mon, Apr 24, 2017 at 7:18 PM, Ni, Hongjun  wrote:
>
> Hey,
>
>
>
> Currently, traffic received for a given VIP (or VIP prefix) is tunneled
> using GRE towards
>
> the different ASs in a way that (tries to) ensure that a given session
> will
>
> always be tunneled to the same AS.
>
>
>
> But in a real environment, many Application Servers do not support the GRE
> feature.
>
> So we raise a requirement for the LB in VPP:
>
> (1). When traffic is received for a VIP, the LB needs to load balance it, then
> do DNAT to change the traffic’s destination IP from the VIP to the AS’s IP.
>
> (2). When traffic is returned from an AS, the LB will first do SNAT to change
> the traffic’s source IP from the AS’s IP to the VIP, then go through the
> load-balance sessions, and then send it to the clients.
>
>
>
> Any comments about this requirement are welcome.
>
>
>
> Thanks a lot,
>
> Hongjun
>
>
>
>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP and VXLAN tunnels

2017-04-25 Thread Patricio Latini
Hi,
First of all, I would like to apologize if this is not the right forum 
to post this question; if so, please redirect me to the right one.

I have been working in my lab to test OpenDaylight+honeycomb+vpp as a 
supporting network infrastructure for OpenStack. I have got it all working; however, 
I have found a problem where the vxlan tunnel is not created between the 
compute and controller nodes. Doing some troubleshooting, I found that when the 
ODL VBD process tries to build the topology there is an exception happening in 
Honeycomb that translates to an exception in ODL.

I am using 17.01 VPP and tried both master and Carbon ODL. I also tried to use 
earlier versions of VPP; however, I found another problem.

Am I missing anything trivial here? Any hint to solve this?

Thanks a lot

Patricio

Logs



On ODL Side

2017-04-25 12:39:47,600 | WARN  | CommitFutures-1  | VbdNetconfTransaction  
  | 314 - org.opendaylight.honeycomb.vbd.impl - 1.1.0.SNAPSHOT | Netconf 
READ transaction failed to ReadFailedException{message=read execution failed, 
errorList=[RpcError [message=read execution failed, severity=ERROR, 
errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, 
cause=java.lang.IllegalArgumentException: Unable to read data: 
Optional.of(/(urn:ietf:params:xml:ns:yang:ietf-interfaces?revision=2014-05-08)interfaces-state),
 errors: [RpcError [message=Unexpected error, severity=ERROR, 
errorType=APPLICATION, tag=operation-failed, applicationTag=null, 
info=java.lang.IllegalStateException: Unexpected size of list: 
[Mapping{getIndex=1, getName=GigabitEthernet2/3/0, augmentations={}}, 
Mapping{getIndex=1, getName=neutron_port_dfbfab9e-ad41-4b1b-b2fc-ca242047f666, 
augmentations={}}]. Single item expected, cause=null}. Restarting 
transaction ... 
2017-04-25 12:39:48,200 | WARN  | CommitFutures-1  | VbdNetconfTransaction  
  | 314 - org.opendaylight.honeycomb.vbd.impl - 1.1.0.SNAPSHOT | Netconf 
READ transaction unsuccessful. Maximal number of attempts reached. Trace: {}
java.util.concurrent.ExecutionException: ReadFailedException{message=read 
execution failed, errorList=[RpcError [message=read execution failed, 
severity=ERROR, errorType=APPLICATION, tag=operation-failed, 
applicationTag=null, info=null, cause=java.lang.IllegalArgumentException: 
Unable to read data: 
Optional.of(/(urn:ietf:params:xml:ns:yang:ietf-interfaces?revision=2014-05-08)interfaces-state),
 errors: [RpcError [message=Unexpected error, severity=ERROR, 
errorType=APPLICATION, tag=operation-failed, applicationTag=null, 
info=java.lang.IllegalStateException: Unexpected size of list: 
[Mapping{getIndex=1, getName=GigabitEthernet2/3/0, augmentations={}}, 
Mapping{getIndex=1, getName=neutron_port_dfbfab9e-ad41-4b1b-b2fc-ca242047f666, 
augmentations={}}]. Single item expected, cause=null}
at 
org.opendaylight.yangtools.util.concurrent.MappingCheckedFuture.wrapInExecutionException(MappingCheckedFuture.java:64)[69:org.opendaylight.yangtools.util:1.1.0.SNAPSHOT]
at 
org.opendaylight.yangtools.util.concurrent.MappingCheckedFuture.get(MappingCheckedFuture.java:77)[69:org.opendaylight.yangtools.util:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.transaction.VbdNetconfTransaction.read(VbdNetconfTransaction.java:132)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.transaction.VbdNetconfTransaction.read(VbdNetconfTransaction.java:140)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.transaction.VbdNetconfTransaction.read(VbdNetconfTransaction.java:140)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.transaction.VbdNetconfTransaction.read(VbdNetconfTransaction.java:140)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.VppModifier.readIpAddressFromVpp(VppModifier.java:296)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.VppModifier.readIpAddressesFromVpps(VppModifier.java:270)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.VbdBridgeDomain.getTunnelEndpoints(VbdBridgeDomain.java:563)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.VbdBridgeDomain.addVxlanTunnel(VbdBridgeDomain.java:603)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.VbdBridgeDomain.access$400(VbdBridgeDomain.java:99)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.VbdBridgeDomain$2.apply(VbdBridgeDomain.java:294)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
org.opendaylight.vbd.impl.VbdBridgeDomain$2.apply(VbdBridgeDomain.java:285)[314:org.opendaylight.honeycomb.vbd.impl:1.1.0.SNAPSHOT]
at 
com.google.common.util.concurrent.Futures$2.apply(Futures.java:760)[65:com.googl

[vpp-dev] OpenStack networking-vpp 17.04 released

2017-04-25 Thread Jerome Tollet (jtollet)
Dear FD.io community,
The new version of networking-vpp, the OpenStack Neutron ML2 driver for VPP 
17.04, has been released today.
This version contains significant evolutions, including:
- VXLAN-GPE support to set up the overlay
- Layer 3 (Neutron routers) with full support for IPv6, floating IPs and SNAT
- State resync to enable seamless component restarts

Many thanks to the VPP team for your support, as well as to the people who contributed to 
networking-vpp development.
You’ll find attached the original announcement sent on the OpenStack mailing 
list.
Jerome

--- Begin Message ---
In conjunction with the release of VPP 17.04, I'd like to invite you all to try 
out networking-vpp for VPP 17.04.  VPP is a fast userspace forwarder based on 
the DPDK toolkit, and uses vector packet processing algorithms to minimise the 
CPU time spent on each packet and maximise throughput.  networking-vpp  is a 
ML2 mechanism driver that controls VPP on your control and compute hosts to 
provide fast L2 forwarding under Neutron.

This version has a few additional features:
- resync - this allows you to upgrade the agent while packets continue to flow 
through VPP, and to update VPP and get it promptly reconfigured, and should 
mean you can do maintenance operations on your cloud with little to no network 
service interruption  (per NFV requirements)
- VXLAN GPE - this is a VXLAN overlay with a LISP-based control plane to 
provide horizontally scalable networking with L2FIB propagation.  You can also 
continue to use the standard VLAN and flat networking.
- L3 support - networking-vpp now includes a L3 plugin and driver code within 
the agent to use the L3 functionality of VPP to provide Neutron routers.

Along with this, there have been the usual bug fixes, code and test 
improvements.

The README [1] explains how you can try out VPP using devstack, which is even 
simpler than before: the devstack plugin will deploy etcd, the mechanism driver 
and VPP itself, and should give you a working system with a minimum of hassle.

We will be continuing development between now and VPP's 17.07 release in July.  
There are several features we're planning to work on (you'll find a list in our 
RFE bugs at [2]), and we welcome anyone who would like to come help us.

Everyone is welcome to join our new biweekly IRC meetings, every Monday 
(including next Monday), 0900 PST = 1600 GMT. 

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
-- 
Ian.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--- End Message ---
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP and VXLAN tunnels

2017-04-25 Thread Patricio Latini
Well, doing a REST call on honeycomb for the interfaces list, it seems the problem 
is related to getting the interface list, as there appears to be a problem with vpp 
reusing the same index (1 in this case) for all the interfaces.

Looking in jira I found a resolved bug, https://jira.fd.io/browse/HONEYCOMB-220, 
that seemed to be related to this behaviour; however, it is marked as resolved and 
merged into master. Due to the date it seems that it should be part of 17.01, right? 
Also, this doesn't seem to use PBB when creating the interfaces.

any clue?


http://10.0.0.12:8183/restconf/operational/ietf-interfaces:interfaces-state




Error 500 Server Error


HTTP ERROR 500
Problem accessing 
/restconf/operational/ietf-interfaces:interfaces-state. Reason:

Server Error

Caused by:
java.lang.IllegalStateException: Unexpected size of list: 
[Mapping{getIndex=1, getName=GigabitEthernet2/3/0, augmentations={}}, 
Mapping{getIndex=1, getName=neutron_port_dfbfab9e-ad41-4b1b-b2fc-ca242047f666, 
augmentations={}}]. Single item expected
at 
io.fd.honeycomb.translate.util.RWUtils.lambda$singleItemCollector$0(RWUtils.java:53)
at java.util.function.Function.lambda$andThen$1(Function.java:88)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:503)
at 
io.fd.hc2vpp.common.translate.util.NamingContext.getName(NamingContext.java:81)
at 
io.fd.hc2vpp.v3po.interfacesstate.InterfaceCustomizer.lambda$getAllIds$4(InterfaceCustomizer.java:179)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
io.fd.hc2vpp.v3po.interfacesstate.InterfaceCustomizer.getAllIds(InterfaceCustomizer.java:188)
at 
io.fd.honeycomb.translate.impl.read.GenericListReader.getAllIds(GenericListReader.java:94)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReader$CompositeListReader.readList(CompositeReader.java:199)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReader.readChildren(CompositeReader.java:121)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReader.readCurrentAttributes(CompositeReader.java:145)
at 
io.fd.honeycomb.translate.util.read.AbstractGenericReader.readCurrent(AbstractGenericReader.java:61)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReader.read(CompositeReader.java:84)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReaderRegistry.read(CompositeReaderRegistry.java:122)
at 
io.fd.honeycomb.data.impl.ReadableDataTreeDelegator.readNode(ReadableDataTreeDelegator.java:136)
at 
io.fd.honeycomb.data.impl.ReadableDataTreeDelegator.read(ReadableDataTreeDelegator.java:102)
at 
io.fd.honeycomb.data.impl.ReadOnlyTransaction.read(ReadOnlyTransaction.java:76)
at 
org.opendaylight.netconf.sal.restconf.impl.BrokerFacade.readDataViaTransaction(BrokerFacade.java:524)
at 
org.opendaylight.netconf.sal.restconf.impl.BrokerFacade.readDataViaTransaction(BrokerFacade.java:516)
at 
org.opendaylight.netconf.sal.restconf.impl.BrokerFacade.readOperationalData(BrokerFacade.java:182)
at 
org.opendaylight.netconf.sal.restconf.impl.RestconfImpl.readOperationalData(RestconfImpl.java:735)
at 
org.opendaylight.netconf.sal.restconf.impl.StatisticsRestconfServiceWrapper.readOperationalData(StatisticsRestconfServiceWrapper.java:116)
at 
org.opendaylight.netconf.sal.rest.impl.RestconfCompositeWrapper.readOperationalData(RestconfCompositeWrapper.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodD

Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Ni, Hongjun
Hi Pierre,

Thank you for giving some helpful choices.

For a newly created network, the first choice is a good one, because it is 
flexible and scalable.
For the legacy AS case, we intend to adopt the second solution.

Thanks a lot,
Hongjun

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 4:11 PM
To: Ni, Hongjun 
Cc: Zhou, Danny ; Ed Warnicke ; Li, 
Johnson ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP


Le 25 avr. 2017 à 09:52, Ni, Hongjun 
mailto:hongjun...@intel.com>> a écrit :

Hi Pierre,

For the LB distribution case, I think we could assign a node IP to each LB box.
When packets are received from the client, the LB will do both SNAT and DNAT, 
i.e. source IP -> LB’s node IP, destination IP -> AS’s IP.
When packets return from the AS, the LB also does both DNAT and SNAT, i.e. 
source IP -> AS’s IP, destination IP -> client’s IP.
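To spell out the rewrites with a hypothetical client address (e.g. 198.51.100.10), 
the return path essentially reverses the forward path:

  forward (client -> VIP):  src: 198.51.100.10 -> LB node IP,  dst: VIP -> AS IP
  return  (AS -> client):   src: AS IP -> VIP,                 dst: LB node IP -> 198.51.100.10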

I see.
Doing so, you completely hide the client's source address from the application.
You also require per-connection binding at the load balancer (MagLev does 
per-connection binding, but in a way which allows for hash collisions, because 
it is not a big deal if two flows use the same entry in the hash table. This 
allows for a smaller, fixed-size hash table, which also gives MagLev a 
performance advantage).

In my humble opinion, using SNAT+DNAT is a terribly bad idea, so I would advise 
you to reconsider and find a way to either:
- Enable some type of packet tunneling protocol in your ASs (IPinIP, L2TP, 
whatever-other-protocol), and extend VPP's LB plugin with the one you pick.
- Put some box closer to the ASs (bump in the wire) to do the decap.
- If your routers support MPLS, you could also use it as the encap.

If you really want to use SNAT+DNAT (god forbid), and are willing to suffer (or 
somehow like suffering), you may try to:
- Use VPP's SNAT on the client-facing interface. The SNAT will just change the 
clients' source addresses to one of the LB's source addresses.
- Extend VPP's LB plugin to support a DNAT "encap".
- Extend VPP's LB plugin to support return traffic and stateless SNAT based on 
the LB flow table (and find a way to make that work on multiple cores...).
The client->AS traffic, in VPP, would go ---> client-facing-iface --> SNAT --> 
LB(DNAT) --> AS-facing-iface
The AS->client traffic, in VPP, would go ---> AS-facing-iface --> LB(stateless 
SNAT) --> SNAT plugin (doing DNAT-back) --> client-facing-iface
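As a very rough sketch of what that could end up looking like on the CLI (the 
snat command and the lb vip/as syntax exist today; the NAT-style "encap" option 
and the return-traffic handling do not, so that part is purely illustrative, and 
the interface names and addresses are made up):

set interface snat in GigabitEthernet0/a/0 out GigabitEthernet0/b/0   (client-facing SNAT, exists today)
lb vip 90.1.2.3/32 encap nat4 new_len 1024                            (hypothetical DNAT-style "encap")
lb as 90.1.2.3/32 10.0.0.1 10.0.0.2                                   (existing syntax to attach ASs to a VIP)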

Now the choice is all yours.
But don't say I didn't warn you.

Cheers,

- Pierre



Thanks,
Hongjun

From: Pierre Pfister (ppfister) [mailto:ppfis...@cisco.com]
Sent: Tuesday, April 25, 2017 3:12 PM
To: Zhou, Danny mailto:danny.z...@intel.com>>
Cc: Ni, Hongjun mailto:hongjun...@intel.com>>; Ed 
Warnicke mailto:hagb...@gmail.com>>; Li, Johnson 
mailto:johnson...@intel.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hello all,

As mentioned by Ed, introducing return traffic would dramatically reduce the 
performance of the solution.
-> Return traffic typically consists of data packets, whereas forward traffic 
mostly consists of ACKs. So you will have to have significantly more LB boxes 
if you want to support all your return traffic.
-> Having to deal with return traffic also means that we need to either make 
sure return traffic goes through the same core, or add locks to the structures 
(for now, everything is lockless, per-core), or steer traffic for core to core.

There also is something that I am not sure to understand. You mentioned DNAT in 
order to steer the traffic to the AS, but how do you make sure the return 
traffic goes back to the LB ? My guess is that all the traffic coming out of 
the ASs is routed toward one LB, is that right ? How do you make sure the 
return traffic is evenly distributed between LBs ?

It's a pretty interesting requirement that you have, but I am quite sure the 
solution will have to be quite far from MagLev's design, and probably less 
efficient.

- Pierre


Le 25 avr. 2017 à 05:11, Zhou, Danny 
mailto:danny.z...@intel.com>> a écrit :

Share  my two cents as well:

Firstly, introducing GRE or whatever other tunneling protocols to LB introduces 
performance overhead (for encap and decap) to both the load balancer as well as 
the network service. Secondly, other mechanism on the network service node not 
only needs to decap the GRE but also needs to perform a DNAT operation in order 
to change the destination IP of the original frame from LB’s IP to the service 
entity’s IP, which introduces the complexity to the network service.

Existing well-known load balancers such as Netfilter or Nginx do not adopt this 
tunneling approach, they just simply do a service node selection followed by a 
NAT operation.

-Danny

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Ni, Hongjun
Sent: Tuesday, April 25, 2017 11:05 AM
To: Ed Warnicke mailto:hagb...@

Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

2017-04-25 Thread Ni, Hongjun
Hi Ed,

Good idea. We will give it a try.

Thanks,
Hongjun

From: Ed Warnicke [mailto:hagb...@gmail.com]
Sent: Wednesday, April 26, 2017 12:04 AM
To: Ni, Hongjun 
Cc: vpp-dev@lists.fd.io; Li, Johnson 
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hongjun,

Thinking it through a bit, there are *many* different approaches to load 
balancers.  I would suggest that if you want to support a different one, 
starting a new plugin for it may be a good move :)

Ed

On Mon, Apr 24, 2017 at 8:04 PM, Ni, Hongjun 
mailto:hongjun...@intel.com>> wrote:
Hi Ed,

Thanks for your prompt response.

This item is required to handle legacy AS, because some legacy AS does not want 
to change their underlay forwarding infrastructure.

Besides, some AS IPs are private and invisible outside the AS cluster domain, 
and not allowed to expose to external network.

Thanks,
Hongjun

From: Ed Warnicke [mailto:hagb...@gmail.com]
Sent: Tuesday, April 25, 2017 10:44 AM
To: Ni, Hongjun mailto:hongjun...@intel.com>>
Cc: vpp-dev@lists.fd.io; Li, Johnson 
mailto:johnson...@intel.com>>
Subject: Re: [vpp-dev] Requirement on Load Balancer plugin for VPP

Hongjun,

I can see this point of view, but it radically reduces the scalability of the 
whole system.
Wouldn't it just make sense to run vpp or some other mechanism to decap the GRE 
on whatever is running the other AS and feed whatever we are
load balancing to?  Forcing back traffic through the central load balancer 
radically reduces scalability (which is why
Maglev, which inspired what we are doing here, doesn't do it that way either).

Ed

On Mon, Apr 24, 2017 at 7:18 PM, Ni, Hongjun 
mailto:hongjun...@intel.com>> wrote:
Hey,

Currently, traffic received for a given VIP (or VIP prefix) is tunneled using 
GRE towards
the different ASs in a way that (tries to) ensure that a given session will
always be tunneled to the same AS.

But in real environments, many Application Servers do not support the GRE feature.
So we raise a requirement for the LB in VPP:
(1). When traffic is received for a VIP, the LB needs to do load balancing, then 
DNAT to change the traffic’s destination IP from the VIP to the AS’s IP.
(2). When traffic returns from the AS, the LB will do SNAT first to change the 
traffic’s source IP from the AS’s IP to the VIP, then go through the load-balance 
sessions, and then send it to the clients (a rough sketch follows below).
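As a minimal illustration with made-up addresses, for a single AS this degenerates 
to what the existing SNAT static mapping already expresses:

snat add static mapping local 10.0.0.1 external 90.1.2.3

where 10.0.0.1 would be the AS's IP and 90.1.2.3 the VIP. The new part of the 
requirement is selecting the AS (load balancing) before that DNAT, and reversing 
the translation on the return path.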

Any comments about this requirement are welcome.

Thanks a lot,
Hongjun


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev



Re: [vpp-dev] An issue about SNAT when using different in and out interfaces

2017-04-25 Thread Ni, Hongjun
Hi Matus,

Yes, you are right. The root cause is the lack of an IP address on nsh_tunnel1.

When I configured an IP address on nsh_tunnel1, it worked well.
DBGvpp# sh ip fib
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
10.0.35.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:17 to:[1:36]]
[0] [@5]: ipv4 via 0.0.0.0 nsh_tunnel1:

When no IP address is configured on nsh_tunnel1, it does not work.
DBGvpp# sh ip fib
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
10.0.35.2/32
  UNRESOLVED
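
For reference, configuring the address is roughly just the following (the 
address and prefix length here are only examples):

set interface ip address nsh_tunnel1 10.0.35.1/24
set interface state nsh_tunnel1 up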

Thank you so much,
Hongjun

From: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
[mailto:matfa...@cisco.com]
Sent: Tuesday, April 25, 2017 7:02 PM
To: Ni, Hongjun ; vpp-dev@lists.fd.io
Cc: nsh_sfc-...@lists.fd.io
Subject: RE: [vpp-dev] An issue about SNAT when using different in and out 
interfaces

Hi Hongjun,

What is your full vpp config? There should be a route or something like it, so 
vpp knows where to send packets destined to 10.10.23.46.
I've tried some older versions, and without "ip route add 10.10.23.0/24 via 
GigabitEthernet0/a/0" it doesn't work either.

Regards,
Matus

From: Ni, Hongjun [mailto:hongjun...@intel.com]
Sent: Tuesday, April 25, 2017 7:51 AM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
mailto:matfa...@cisco.com>>; 
vpp-dev@lists.fd.io
Cc: nsh_sfc-...@lists.fd.io
Subject: RE: [vpp-dev] An issue about SNAT when using different in and out 
interfaces

Hi Matus,

Yes. When the out interface is a virtual tunnel interface and has no address, it 
does not work.

Thanks a lot,
Hongjun

From: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
[mailto:matfa...@cisco.com]
Sent: Tuesday, April 25, 2017 1:34 PM
To: Ni, Hongjun mailto:hongjun...@intel.com>>; 
vpp-dev@lists.fd.io
Cc: nsh_sfc-...@lists.fd.io
Subject: RE: [vpp-dev] An issue about SNAT when using different in and out 
interfaces

Hi,

It looks like there is some bug when the snat interface doesn't have an address 
(if snat interfaces have an address it works fine). I will fix the issue.

Regards,
Matus

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Ni, Hongjun
Sent: Tuesday, April 25, 2017 6:05 AM
To: vpp-dev@lists.fd.io
Cc: nsh_sfc-...@lists.fd.io
Subject: [vpp-dev] An issue about SNAT when using different in and out 
interfaces

Hey,

When I applied SNAT with different in and out interfaces, I ran into an issue:

My configuration:
set interface snat in TenGigabitEthernet5/0/0 out TenGigabitEthernet5/0/1
snat add static mapping local 192.168.50.76 external 10.10.23.45

I sent packets from TenGigabitEthernet5/0/0.
With the code from about a month ago, the packets were sent to 
TenGigabitEthernet5/0/1 as expected.

But with the current 17.04 code, packets are sent to TenGigabitEthernet5/0/0, 
which is not expected.
Could you give some advice on how to fix this issue?

Below is the interface and snat detail:
DBGvpp# sh int
              Name               Idx   State  Counter          Count
TenGigabitEthernet5/0/0           1     up     rx packets           1
                                               rx bytes            60
                                               tx packets           1
                                               tx bytes            60
                                               ip4                  1
TenGigabitEthernet5/0/1           2     up
local0                            0     down
DBGvpp#
DBGvpp# sh snat detail
SNAT mode: dynamic translations enabled
TenGigabitEthernet5/0/0 in
TenGigabitEthernet5/0/1 out
0 users, 0 outside addresses, 0 active sessions, 1 static mappings
Hash table in2out
0 active elements
0 free lists
0 linear search buckets
Hash table out2in
0 active elements
0 free lists
0 linear search buckets
Hash table worker-by-in
0 active elements
0 free lists
0 linear search buckets
Hash table worker-by-out
0 active elements
0 free lists
0 linear search buckets
static mappings:
local 192.168.50.76 external 10.10.23.45 vrf 0


Below is the packet trace:

00:02:16:415613: dpdk-input
  TenGigabitEthernet5/0/0 rx queue 0
  buffer 0xbf9c22: current data 14, length 46, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr 
0x28e6c780
packet_type 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
  IP4: 08:00:27:61:07:05 -> 90:e2:ba:48:7a:80
  UDP: 192.168.50.76 -> 10.10.23.46
tos 0x00, ttl 64, length 46, checksum 0x6693
  

Re: [vpp-dev] VPP and VXLAN tunnels

2017-04-25 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Hi,

not sure if vbd master/carbon works with honeycomb 17.01.

Please try it with vpp 17.04 and honeycomb 17.04-RC2.

In case of issues, please attach the full honeycomb log.

Moving thread to hc2vpp-dev.

Regards,

Marek


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Patricio Latini
Sent: 26 kwietnia 2017 00:47
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP and VXLAN tunnels

Well, doing a REST call on honeycomb for the interfaces list, it seems the problem 
is related to getting the interface list, as there appears to be a problem with vpp 
reusing the same index (1 in this case) for all the interfaces.

Looking in jira I found a resolved bug, https://jira.fd.io/browse/HONEYCOMB-220, 
that seemed to be related to this behaviour; however, it is marked as resolved and 
merged into master. Due to the date it seems that it should be part of 17.01, right? 
Also, this doesn't seem to use PBB when creating the interfaces.

any clue?


http://10.0.0.12:8183/restconf/operational/ietf-interfaces:interfaces-state




Error 500 Server Error


HTTP ERROR 500
Problem accessing 
/restconf/operational/ietf-interfaces:interfaces-state. Reason:

Server Error

Caused by:
java.lang.IllegalStateException: Unexpected size of list: 
[Mapping{getIndex=1, getName=GigabitEthernet2/3/0, augmentations={}}, 
Mapping{getIndex=1, getName=neutron_port_dfbfab9e-ad41-4b1b-b2fc-ca242047f666, 
augmentations={}}]. Single item expected
at 
io.fd.honeycomb.translate.util.RWUtils.lambda$singleItemCollector$0(RWUtils.java:53)
at java.util.function.Function.lambda$andThen$1(Function.java:88)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:503)
at 
io.fd.hc2vpp.common.translate.util.NamingContext.getName(NamingContext.java:81)
at 
io.fd.hc2vpp.v3po.interfacesstate.InterfaceCustomizer.lambda$getAllIds$4(InterfaceCustomizer.java:179)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
io.fd.hc2vpp.v3po.interfacesstate.InterfaceCustomizer.getAllIds(InterfaceCustomizer.java:188)
at 
io.fd.honeycomb.translate.impl.read.GenericListReader.getAllIds(GenericListReader.java:94)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReader$CompositeListReader.readList(CompositeReader.java:199)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReader.readChildren(CompositeReader.java:121)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReader.readCurrentAttributes(CompositeReader.java:145)
at 
io.fd.honeycomb.translate.util.read.AbstractGenericReader.readCurrent(AbstractGenericReader.java:61)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReader.read(CompositeReader.java:84)
at 
io.fd.honeycomb.translate.impl.read.registry.CompositeReaderRegistry.read(CompositeReaderRegistry.java:122)
at 
io.fd.honeycomb.data.impl.ReadableDataTreeDelegator.readNode(ReadableDataTreeDelegator.java:136)
at 
io.fd.honeycomb.data.impl.ReadableDataTreeDelegator.read(ReadableDataTreeDelegator.java:102)
at 
io.fd.honeycomb.data.impl.ReadOnlyTransaction.read(ReadOnlyTransaction.java:76)
at 
org.opendaylight.netconf.sal.restconf.impl.BrokerFacade.readDataViaTransaction(BrokerFacade.java:524)
at 
org.opendaylight.netconf.sal.restconf.impl.BrokerFacade.readDataViaTransaction(BrokerFacade.java:516)
at 
org.opendaylight.netconf.sal.restconf.impl.BrokerFacade.readOperationalData(BrokerFacade.java:182)
at 
org.opendaylight.netconf.sal.restconf.impl.RestconfImpl.readOperationalData(RestconfImpl.java:735)
at 
org.opendaylight.netconf.sal.restconf.impl.StatisticsRestconfServiceWrapper.readOperationalData(StatisticsRestconfServiceWrapper.java:116)
at 
org.opendaylight.netconf.sal.rest.impl.RestconfCompositeWrapper.readOperationalData(RestconfCompositeWrapper.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

[vpp-dev] build on ubuntu 14.04.1 failed

2017-04-25 Thread yug...@telincn.com
Hey,
Here are my compile errors, any guidance?

root@ubuntu:/usr/vpp/vpp# git checkout v17.04
root@ubuntu:/usr/vpp/vpp# make install-dep

Fetched 4,223 kB in 1min 20s (52.8 kB/s)
W: Failed to fetch 
bzip2:/var/lib/apt/lists/partial/ppa.launchpad.net_openjdk-r_ppa_ubuntu_dists_trusty_main_binary-amd64_Packages
  Hash Sum mismatch

W: Failed to fetch 
bzip2:/var/lib/apt/lists/partial/ppa.launchpad.net_openjdk-r_ppa_ubuntu_dists_trusty_main_binary-i386_Packages
  Hash Sum mismatch

W: Failed to fetch 
bzip2:/var/lib/apt/lists/partial/ppa.launchpad.net_openjdk-r_ppa_ubuntu_dists_trusty_main_i18n_Translation-en
  Hash Sum mismatch

E: Some index files failed to download. They have been ignored, or old ones 
used instead.
make: *** [install-dep] Error 100


root@ubuntu:/usr/vpp/vpp/build-root# apt-get install libtool
Reading package lists... Done
Building dependency tree   
Reading state information... Done
libtool is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 630 not upgraded.
root@ubuntu:/usr/vpp/vpp/build-root# 

root@ubuntu:/usr/vpp/vpp/build-root# ./bootstrap.sh 
Saving PATH settings in /usr/vpp/vpp/build-root/path_setup
Source this file later, as needed
Compile native tools
 Arch for platform 'native' is native 
 Finding source for tools 
 Makefile fragment found in /usr/vpp/vpp/build-root/packages/tools.mk 
 Source found in /usr/vpp/vpp/src 
 Configuring tools in /usr/vpp/vpp/build-root/build-tool-native/tools 
/usr/vpp/vpp/build-root/../src/configure: line 2231: LT_INIT: command not found
configure: error: cannot find install-sh, install.sh, or shtool in . 
"/usr/vpp/vpp/build-root/../src"/.
make: *** [tools-configure] Error 1
root@ubuntu:/usr/vpp/vpp/build-root# 
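
A couple of usual workarounds that may be worth trying here (not verified on 
this particular setup): clear the corrupted apt lists so install-dep can run 
cleanly, then drive the build through the top-level make targets instead of 
calling bootstrap.sh directly:

rm -rf /var/lib/apt/lists/partial/*
apt-get clean && apt-get update
make install-dep
make bootstrap && make build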

Regards,
Ewan


yug...@telincn.com
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev