Robert,

>> In both cases, this call is, IMO, a signaling capability from the
>> receiver to the sender.
>
> So essentially you are asking for a per-peer flooding queue.
You already have a per-interface flooding 'queue' through the
implementation of the SRM bit in the LSDB, which must be managed on a
per-interface basis.

> Now this gets a little bit tricky (especially if you are dealing with
> relatively small timers) if one peer sends you 1 ms, a second 50 ms,
> and a 10th 250 ms.

If the peers are on the same interface, then you clearly have to not
overrun the slower peer. Note that both are slower than the currently
mandated 33 ms. If the peers are on different interfaces, then this
tells you how to configure the per-interface timers.

> Imagine that the LSP to be flooded to the 10th peer is already
> overwritten due to a new LSP but is still sitting in the out queue ...
> do you drain that queue and start over with the new LSP, or do you
> replace the old one in place, keeping the running timer?
>
> I am just curious what will happen under the hood :)

You're assuming a strict queue, when in fact what happens under the
hood is simpler: each LSP has a per-interface bit saying that it must
be sent on that interface. When it's time to transmit, you walk the
LSDB looking for an LSP with the bit set, and then transmit it. If an
LSP is overwritten, that's fine: the bit is still set and you would
send the newer version.

Walking the LSDB every time is obviously inefficient, so an
implementation is free to maintain other data structures to optimize
this if it chooses, but the semantics are quite clear: send LSPs that
haven't been sent yet.

Regards,
Tony
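(For readers outside the thread: the SRM-bit semantics above can be sketched roughly as follows. This is a minimal illustration, not an actual IS-IS implementation; all class and method names here are hypothetical, and real implementations clear/reset SRM bits differently on point-to-point vs. LAN circuits.)

```python
# Sketch of SRM-bit flooding semantics (hypothetical names, not real IS-IS code).
# Each LSP in the LSDB carries a per-interface SRM ("send routing message")
# bit; flooding walks the LSDB and transmits any LSP whose bit is set for
# the interface in question.

class LSP:
    def __init__(self, lsp_id, seq_num):
        self.lsp_id = lsp_id
        self.seq_num = seq_num
        self.srm = set()   # interfaces on which this LSP must still be sent

class LSDB:
    def __init__(self, interfaces):
        self.interfaces = interfaces
        self.db = {}       # lsp_id -> LSP

    def install(self, lsp_id, seq_num):
        """Install a new or updated LSP and mark it for flooding everywhere.

        If an older copy was still pending transmission, it is simply
        overwritten; the SRM bits stay set, so the newer version is what
        actually gets sent -- the point Tony makes above.
        """
        lsp = LSP(lsp_id, seq_num)
        lsp.srm = set(self.interfaces)
        self.db[lsp_id] = lsp

    def flood_one(self, interface):
        """Per-interface timer fired: walk the LSDB, send one pending LSP.

        On a real point-to-point circuit the SRM bit would only be cleared
        on acknowledgment; here we clear it on transmit for simplicity.
        """
        for lsp in self.db.values():
            if interface in lsp.srm:
                lsp.srm.discard(interface)
                return lsp   # "transmit" it
        return None          # nothing pending on this interface
```

Overwriting a pending LSP before its timer fires then just means the walk finds the newer sequence number:

```python
lsdb = LSDB(["eth0", "eth1"])
lsdb.install("A.00-00", 1)
lsdb.install("A.00-00", 2)      # overwritten while still pending
lsdb.flood_one("eth0")          # transmits seq 2, never seq 1
```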
_______________________________________________
Lsr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lsr
