Acee,

> I think a distributed flooding algorithm is more robust and will converge 
> faster when there is more than one concurrent failure in the flooding topology.


No doubt. However, we do not normally attempt to protect against multiple 
concurrent failures. Regardless of how the flooding topology is computed, 
multiple failures can partition it and convergence will become seriously 
problematic.  In the face of multiple failures, no node could have a complete 
and consistent database, and even with a distributed algorithm, you may have 
multiple iterations of computing a new topology, flooding, and computing again.

Let’s not kid ourselves: distributed is no panacea. 

We do not design our redundant hardware systems to deal with multiple 
concurrent failures, simply because the complexity of such failures is so high. 
It’s paid off as these things simply don’t happen with significant frequency.  
What is important is that we have eventual consistency.


> Additionally, if there is a node failure of the flooding leader and that node 
> is on the flooding topology, there could be greater delays.


All nodes are on the flooding topology, necessarily.

If the leader fails, then it’s similar to any other node failure.  Flooding 
happens around the failed node. Assuming the flooding topology was 
bi-connected, flooding is not interrupted and the network should converge in a 
single flood/SPF cycle.
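To make the bi-connectivity condition concrete: a flooding topology survives any single node failure exactly when removing any one node leaves the rest connected. A minimal sketch (graph representation and node names are illustrative, not from any draft):

```python
def is_biconnected(adj):
    """adj: dict mapping node -> set of neighbor nodes.
    Return True if the graph stays connected after removing
    any single node (so one failure cannot partition flooding)."""
    nodes = list(adj)
    if len(nodes) < 3:
        # Two nodes with a link between them trivially survive.
        return len(nodes) == 2 and nodes[1] in adj[nodes[0]]

    def connected_without(removed):
        remaining = [n for n in nodes if n != removed]
        seen = {remaining[0]}
        stack = [remaining[0]]
        while stack:
            for nbr in adj[stack.pop()]:
                if nbr != removed and nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        return len(seen) == len(remaining)

    return all(connected_without(n) for n in nodes)

# A 4-node ring is bi-connected: removing any one node leaves a path,
# so flooding simply routes around the failure.
ring = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_biconnected(ring))  # True
```

The same check returns False for a star topology, which is why a leader would not choose one: the hub is a single point of partition.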

If the old leader returns to the topology quickly, then there is no issue.

If the old leader does not return, then there is a leader 
election/computation/flooding sequence, but that happens while there is a 
functioning flooding topology. NBD.

>  
> The same solution suggested for the centralized approach of flooding on the 
> union of the old and new topologies could be used for transitions. However, 
> pruning to the newer topology could be quicker, since it should be easier to 
> ensure downstream neighbors have converged on the new topology than the whole 
> domain.  


You can’t assume that you had complete information when computing the 
distributed version.
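For concreteness, the union-based transition from the quoted text amounts to something like the following sketch (link representation, names, and the convergence signal are illustrative assumptions):

```python
def transition_links(old_topo, new_topo, neighbors_converged):
    """old_topo, new_topo: sets of links, each link a frozenset pair.
    While transitioning, flood on the union of both topologies so no
    node is cut off; once neighbors are known to have converged on
    the new topology, prune back to it alone."""
    if neighbors_converged:
        return set(new_topo)
    return set(old_topo) | set(new_topo)

old = {frozenset({"A", "B"}), frozenset({"B", "C"})}
new = {frozenset({"A", "C"}), frozenset({"B", "C"})}
# Three links while transitioning, two after pruning.
print(len(transition_links(old, new, False)), len(transition_links(old, new, True)))
```

Tony's point stands regardless of the mechanics: a node running this distributedly cannot know its inputs were complete when it computed `new`.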

Tony

_______________________________________________
Lsr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lsr
