Hi all,

I’d like to pick up the discussion where we left off last night.

The use case I posited was one where we have 1000 LSPs to
flood. This is an interesting case that can arise when a large
network partitions and then heals: all LSPs from the other side
of the partition are going to need to be updated.

Let’s further suppose that the LSPs have an average size of 1KB.  Thus, the
entire transfer is around 1MB.

Suppose that we’re doing this on a 400Gb/s link. If we were to transmit the
whole batch of LSPs at once, it would take a whopping 20us.  Not
milliseconds, microseconds.  2x10^-5s.  Clearly, we are not going to be rate
limited by bandwidth.
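
For anyone who wants to check the arithmetic, a quick back-of-envelope
sketch (taking 1KB as 1024 bytes; the figures are the hypothetical ones
above):

    # Serialization time for the whole batch on a 400Gb/s link.
    lsps = 1000                 # LSPs to flood
    lsp_size = 1024             # bytes per LSP (~1KB average)
    link_rate = 400e9           # bits/second

    total_bits = lsps * lsp_size * 8    # ~8.2e6 bits, i.e. ~1MB
    print(total_bits / link_rate)       # ~2.05e-05 s, i.e. ~20us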

Note that 20us is only a theoretical lower bound: today, we cannot
reasonably expect a node to absorb 1k PDUs back to back without loss, on
top of all of its other responsibilities.

At the opposite end of the spectrum, suppose we transmit one PDU every
33ms.  Flooding the full batch would then take 33 seconds. Unreasonably
slow.
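
The same one-line check:

    # Same batch, paced at one PDU every 33ms:
    print(1000 * 0.033)     # 33.0 seconds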

How can we then maximize our goodput?  We know that the receiver has a set
of buffers and a processing rate that it can support. The processing rate
will vary depending on other loads.

What we would like the transmitter to do is to transmit enough to create a
small processing queue on the receiver and then transmit at the receiver’s
processing rate.
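
To make that concrete, here is a minimal sketch of the transmitter
behavior I have in mind. The names (send, acked_count) and the queue depth
of 10 are purely illustrative assumptions, not a protocol proposal; on a
point-to-point link the acknowledgements would be the PSNPs we already
get.

    import time

    def flood(lsps, send, acked_count, target_queue=10):
        # Illustrative pacer (names and numbers are mine): burst
        # until ~target_queue PDUs are in flight, then clock
        # transmissions off the receiver's observed drain rate.
        sent = 0
        start = time.monotonic()
        while sent < len(lsps):
            if sent - acked_count() < target_queue:
                send(lsps[sent])        # top the queue back up
                sent += 1
            else:
                # Estimate the receiver's processing rate from the
                # acks seen so far; wait roughly one service time.
                elapsed = max(time.monotonic() - start, 1e-6)
                rate = max(acked_count() / elapsed, 1.0)
                time.sleep(1.0 / rate)

In steady state the standing queue sits at roughly target_queue and each
acknowledgement gates one new transmission, which is exactly the
receiver's processing rate.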

Can we agree on this goal?

Tony