This was discussed on 6MAN: a Linux patch killed IPv6 Anycast application load
balancing that hashes sessions using the RFC 6437 flow label.

https://www.spinics.net/lists/ietf/msg108595.html

I believe 6MAN is working on fixing this.
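A rough sketch of what is at stake (a hypothetical hash function, not the actual kernel code): per-flow ECMP hashing that includes the RFC 6437 flow label in the key. If the flow label is rewritten or zeroed mid-session, packets of the same session can hash to a different path, and hence a different anycast instance.

```python
import hashlib

def ecmp_bucket(src, dst, flow_label, n_paths):
    """Map a flow (src, dst, RFC 6437 flow label) onto one of n_paths."""
    key = f"{src}|{dst}|{flow_label}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# Same flow label -> same bucket: the session keeps path affinity.
a = ecmp_bucket("2001:db8::1", "2001:db8::2", 0x12345, 4)
b = ecmp_bucket("2001:db8::1", "2001:db8::2", 0x12345, 4)
assert a == b

# A rewritten (here, zeroed) flow label can land the same session
# on a different bucket, breaking anycast session affinity.
c = ecmp_bucket("2001:db8::1", "2001:db8::2", 0x0, 4)
```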

On Sat, Nov 13, 2021 at 12:26 AM Gyan Mishra <[email protected]> wrote:

> Hi Linda and Jeffrey
>
> I have worked on designs using Anycast application VIPs for fast
> failover recovery, and reading through my notes, what we found was that
> Anycast failover from a network fault recovered much better, essentially
> instantly, based on core/DC convergence for IP application traffic. By
> comparison, DNS GEO load balancing, which relies on DNS to provide a
> different address in a failover scenario, is much slower.
>
> As Jeffrey pointed out, almost all router vendors support flow-based ECMP
> load balancing by default, rather than per-packet, to avoid out-of-order
> packet issues, and the session has affinity to the hashed path for its
> duration. So when the application VIP in the primary DC fails, the session
> is reset (TCP RST) and the application flow is re-established at the next
> closest proximity data center.
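A minimal sketch of the anycast failover behavior described above (DC names and metrics are made up): every DC advertises the same VIP, routing selects the lowest-metric reachable DC, and when the primary DC's VIP is withdrawn, re-established sessions land on the next closest DC.

```python
def best_dc(dcs, reachable):
    """dcs: {name: igp_metric}; return the lowest-metric reachable DC
    advertising the anycast VIP."""
    candidates = {name: m for name, m in dcs.items() if name in reachable}
    return min(candidates, key=candidates.get)

dcs = {"dc-east": 100, "dc-west": 300}
# Steady state: traffic follows the closest (lowest-metric) DC.
assert best_dc(dcs, {"dc-east", "dc-west"}) == "dc-east"
# Primary DC's VIP withdraws: sessions reset and re-establish at the
# next closest DC as soon as the core/DC converges.
assert best_dc(dcs, {"dc-west"}) == "dc-west"
```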
>
> So the issue to be careful of with Anycast designs is that the distance
> from the core exit point to each DC exit point must have enough variation
> in metric; otherwise the result is session instability and flapping, and
> as traffic patterns change or links within the core fail over, sessions
> may get constantly reset. I recommend not relying on Anycast within a
> region or in close proximity, as that can result in instability.
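The metric-spread requirement can be illustrated with a toy selection function (metrics are made up): with near-equal metrics, a minor core metric change flips the anycast selection and resets sessions, while an ample spread keeps it stable.

```python
def select_dc(metrics):
    """Return the DC with the lowest advertised metric for the VIP."""
    return min(metrics, key=metrics.get)

close = {"dc-a": 100, "dc-b": 105}
assert select_dc(close) == "dc-a"
close["dc-a"] += 10               # a minor core metric change...
assert select_dc(close) == "dc-b"  # ...flips the selection: session reset

far = {"dc-a": 100, "dc-b": 500}
far["dc-a"] += 10                 # same perturbation, ample metric spread
assert select_dc(far) == "dc-a"    # selection stays stable
```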
>
> This is described in detail in the link below and matches my Anycast
> design experiences.
>
> https://weborigin.cachefly.com/anycast-think-before-you-talk-part-ii/
>
> In Linda’s 5G DC POP architecture, since it is regional with only a
> nominal metric difference between ECMP paths (iBGP tie-breaker or iBGP
> ECMP per-flow load sharing in the core), the problem is the UE flipping
> between DC POPs: a TCP reset upon DC POP failure. If there are multiple
> DC POP ECMP paths and multiple DC failures occur, the session could be
> reset a few times. Since the flow has affinity to its path with
> flow-based load balancing, only when the ECMP hashed path used for the
> DC flow fails would the TCP session reset and fail over to an alternate
> POP, but that should still be nearly instantaneous.
>
> The link below discusses the issues with relying on application server
> and content load balancers for stickiness: it relies on a cookie sent to
> the browser, and in some cases the browser could block the cookie, so it
> has to be accepted by the client.
>
>
> https://stackoverflow.com/questions/53703163/aws-alb-sticky-cookie-issue
>
> This link discusses the Citrix NetScaler load balancer, which I am
> familiar with, and the issues related to persistence for a flow, which
> can yield uneven load balancing and polarization of flows over the core
> links.
>
>
>    - Persistence – When enabled, persistence (by definition) creates
>    uneven loading because the NetScaler appliance must direct requests
>    from a specific client to a specific server. Only unique or expired
>    clients get load balanced according to the load balancing method.
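The uneven-loading effect of persistence can be seen in a toy simulation (assumed behavior for illustration, not NetScaler code): returning clients are pinned to their original server, so only new clients are distributed by the load balancing method.

```python
from collections import Counter
from itertools import cycle

servers = ["s1", "s2", "s3"]
rr = cycle(servers)       # the LB method: round robin over servers
pinned = {}               # persistence table: client -> pinned server
load = Counter()

# One chatty client and three quiet ones.
requests = ["c1"] * 50 + ["c2", "c3", "c4"]
for client in requests:
    if client not in pinned:       # only unseen clients get load balanced
        pinned[client] = next(rr)
    load[pinned[client]] += 1      # repeat requests follow the pin

# c1's 50 requests all land on s1, so loading is heavily uneven:
# load == Counter({"s1": 51, "s2": 1, "s3": 1})
```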
>
>
> https://support.citrix.com/article/CTX120542
>
>
> Kind Regards
>
> Gyan
>
> On Thu, Nov 11, 2021 at 12:44 PM Linda Dunbar <[email protected]>
> wrote:
>
>> Forget to mention:
>>
>> The local DNS that schedules traffic from different ingress routers to
>> different load balancers can become a bottleneck if network conditions
>> are not leveraged.
>>
>>
>> Linda
>>
>> _____________________________________________
>> *From:* Linda Dunbar
>> *Sent:* Thursday, November 11, 2021 9:18 AM
>> *To:* Jeffrey (Zhaohui) Zhang <[email protected]>; [email protected]; 'IPv6
>> List' <[email protected]>; 'idr@ietf. org' <[email protected]>
>> *Subject:* RE: Comments about draft-dunbar-lsr-5g-edge-compute,
>> -idr-5g-edge-compute-app-meta-data and -6man-5g-edge-compute-sticky-service
>>
>>
>> Jeffrey,
>>
>> You are not alone in thinking the problem is best solved at the "app
>> layer". My sister, who works in a Cloud Operator's Layer 4 & Layer 7
>> load balancer division, debates this with me all the time.
>>
>> They can:
>> - deploy many load balancers, each managing traffic among many active
>> servers;
>> - use local DNS to schedule traffic from different ingress routers to
>> different load balancers.
>>
>> As pointed out in this NANOG presentation:
>> https://pc.nanog.org/static/published/meetings/NANOG77/2082/20191030_Wang_An_Architecture_Of_v1.pdf
>>
>> Leveraging the network condition to optimize traffic can solve those
>> issues.
>>
>> From the forwarding perspective, it is the same as introducing TE
>> metrics into path computation: when TE metrics, bandwidth, etc. change,
>> the path changes accordingly.
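A small sketch of that point (topology and costs are made up): shortest-path computation over TE metrics re-selects the forwarding path when a link metric changes.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: {neighbor: te_metric}}; returns the node list."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + c, nbr, path + [nbr]))
    return None

g = {"in": {"a": 10, "b": 15}, "a": {"dst": 10}, "b": {"dst": 10}, "dst": {}}
assert shortest_path(g, "in", "dst") == ["in", "a", "dst"]   # cost 20 via a
g["in"]["a"] = 30   # the TE metric on in->a degrades...
assert shortest_path(g, "in", "dst") == ["in", "b", "dst"]   # ...path moves
```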
>>
>> Linda Dunbar
>>
>> -----Original Message-----
>> From: Jeffrey (Zhaohui) Zhang <[email protected]>
>> Sent: Thursday, November 11, 2021 8:08 AM
>> To: Linda Dunbar <[email protected]>; [email protected]; 'IPv6 List'
>> <[email protected]>; 'idr@ietf. org' <[email protected]>
>> Subject: Comments about draft-dunbar-lsr-5g-edge-compute,
>> -idr-5g-edge-compute-app-meta-data and -6man-5g-edge-compute-sticky-service
>>
>> Hi,
>>
>> I did not have time to comment during today's LSR session so I am
>> bringing this to the list. I am also adding IDR and 6man lists because all
>> these three drafts are about the same use case.
>>
>> There was a long email discussion on the 6man draft back in March:
>> https://mailarchive.ietf.org/arch/msg/ipv6/4rw-pBcNZN7mzkArjUtVUzLcQJU/
>>
>> There are basically two problems here:
>>
>> 1. which server to pick
>> 2. how to stick to the same server when a UE (mobile device) moves
>>
>> The long discussion on the 6man list is mainly focused on #2. I don't
>> know if we can say there was a conclusion, but some people (me included)
>> believe that both problems are best solved at "app layer" instead of
>> routing layer - just put a load balancer next to the 5G UPF.
>>
>> Jeffrey
>>
>> -----Original Message-----
>> From: Jeffrey (Zhaohui) Zhang
>> Sent: Thursday, March 25, 2021 3:46 PM
>> To: Linda Dunbar <[email protected]>; Kaippallimalil John <
>> [email protected]>; IPv6 List <[email protected]>; idr@ietf.
>> org <[email protected]>
>> Subject: questions about draft-dunbar-idr-5g-edge-compute-app-meta-data
>> and draft-dunbar-6man-5g-edge-compute-sticky-service
>>
>> Hi Linda, John,
>>
>> I have the following questions.
>>
>> The two related drafts listed the following three problems respectively:
>>
>>       1.3. Problem#1: ANYCAST in 5G EC Environment.............. 6
>>       1.4. Problem #2: Unbalanced Anycast Distribution due to UE
>> Mobility.................................................. 7
>>       1.5. Problem 3: Application Server Relocation............. 7
>>
>>       1.2. Problem #1: ANYCAST in 5G EC Environment.............. 4
>>       1.3. Problem #2: sticking to original App Server........... 5
>>       1.4. Problem #3: Application Server Relocation............. 5
>>
>> Why is problem #2 different in the two drafts? Is it true that neither
>> draft addresses problem #3?
>> The idr draft talks about the "soft anchoring" problem and solution -
>> how is that different from the "sticky service"?
>>
>> Thanks.
>> Jeffrey
>>
>> Juniper Business Use Only
>>
>> --------------------------------------------------------------------
>> IETF IPv6 working group mailing list
>> [email protected]
>> Administrative Requests: https://www.ietf.org/mailman/listinfo/ipv6
>> --------------------------------------------------------------------
>>
> --
>
> <http://www.verizon.com/>
>
> *Gyan Mishra*
>
> *Network Solutions A**rchitect *
>
> *Email [email protected] <[email protected]>*
>
>
>
> *M 301 502-1347*
>
_______________________________________________
Lsr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lsr
