>> I think distributed is more practical too.

> I would appreciate more detailed insights as to why you (and others) feel
> this way. It is not at all obvious to me.

An IGP is distributed in nature. Computing the flooding topology in a
distributed way, like distributed SPF, keeps the IGP distributed. Introducing
a centralized feature into an IGP breaks its distributed nature, which may
cause issues and problems.
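To make the point concrete, here is a minimal sketch (toy topology and all
names hypothetical, not from any IGP implementation) showing why distributed
SPF stays consistent: every node runs a deterministic computation over the
same link-state database, and even two different algorithms agree on the
resulting metrics when fed identical input.

```python
import heapq

# Hypothetical link-state database shared by all nodes:
# cost of each directed link, as flooded to every router.
LSDB = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def spf_dijkstra(lsdb, root):
    """Classic Dijkstra SPF, as each node runs it independently."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in lsdb[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def spf_bellman_ford(lsdb, root):
    """A different shortest-path algorithm; same input, same answer."""
    dist = {n: float("inf") for n in lsdb}
    dist[root] = 0
    for _ in range(len(lsdb) - 1):
        for u in lsdb:
            for v, cost in lsdb[u].items():
                if dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
    return dist

# Two nodes running different implementations still agree,
# because both are deterministic functions of the same LSDB.
assert spf_dijkstra(LSDB, "A") == spf_bellman_ford(LSDB, "A")
```

The consistency comes from the input (the synchronized LSDB), not from every
node running byte-identical code, which is the same property the thread
appeals to below for normal versus incremental SPF.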
>> For computing routes, we have been using distributed SPF on every node for
>> many years.

> True, but that algorithm is (and was) very well known and a fixed algorithm
> that would clearly solve the problem at the time. If we were in a similar
> situation, where we were ready to set an algorithm in concrete, I might well
> agree, but it's quite clear that we are NOT at that point yet. We will need
> to experiment and modify algorithms, and as discussed, that's easier with a
> centralized approach.

After flooding reduction is deployed in an operational (ISP) network, will we
be allowed to experiment on that network? And once an algorithm has been
determined and selected, is it likely to be replaced by another algorithm in
the short term?

>> In fact, we may not need to run the exact algorithm on every node. As long
>> as the algorithms running on different nodes generate the same result, that
>> would work.

> Insuring a globally consistent result without running the exact same
> algorithm on the exact same data will be quite a trick. Debugging
> distributed problems at scale is already a hard problem. Having different
> algorithms in different locations would add another order of magnitude in
> difficulty. No thank you.

In some existing networks, some nodes run an IGP implementation from one
vendor, other nodes run one from another vendor, and so on. Some may use
normal SPF while others use incremental SPF, yet they reach the same result.
We have been living with such cases for many years.

> Tony

Best Regards,
Huaimo

_______________________________________________
Lsr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lsr
