Les,

[LES:] Let’s use a very simple example.

A and B are neighbors
For LSPs originated by Node C, here is the current state of the LSPDB:

A has (C.00-00(Seq 10), C.00-01(Seq 8), C.00-02(Seq 7)) Merkle hash: 0xABCD
B has (C.00-00(Seq 10), C.00-01(Seq 9), C.00-02(Seq 6)) Merkle hash: 0xABCD
(unlikely that the hashes match -  but possible)
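To make the example concrete, here is a minimal sketch of one way to hash a per-node LSP set. The construction (SHA-256 over sorted (LSP ID, sequence number) pairs, truncated to 16 bits) is purely illustrative and not the draft's encoding; with a 16-bit value the two views will almost certainly hash differently, but by pigeonhole, colliding pairs like the 0xABCD case above must exist.

```python
import hashlib

def lspdb_hash(entries, bits=16):
    """Hash a set of (lsp_id, seqno) pairs down to `bits` bits.
    Illustrative construction only -- not the draft's encoding."""
    data = b"".join(f"{lsp_id}:{seq};".encode()
                    for lsp_id, seq in sorted(entries))
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# A's and B's views of the LSPs originated by C, from the example above.
a_view = {("C.00-00", 10), ("C.00-01", 8), ("C.00-02", 7)}
b_view = {("C.00-00", 10), ("C.00-01", 9), ("C.00-02", 6)}

# The sets differ, and their 16-bit hashes will almost always differ
# too -- but with only 2**16 possible outputs, distinct databases that
# share a hash value must exist.
print(hex(lspdb_hash(a_view)), hex(lspdb_hash(b_view)))
```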

When A and B exchange hash TLVs they will think they have the same set of LSPs 
originated by C even though they don’t.
They would clear any SRM bits currently set to send updated LSPs received from 
C on the interface connecting A-B.
We have just broken the reliability of the update process.

By that metric, the update process has always been unreliable.  All it takes is 
two LSPs with different contents and the same checksum.  This breaks CSNPs.  As 
Tony P. has said, we are now very much into the realm of stochastic processes.  
CSNPs work in practice because the odds of a collision are quite small. The 
HSNP approach carries that forward.

The analogy to the use of the Fletcher checksum on PDU contents is not a good one. 
The checksum allows a receiver to determine whether any bit errors occurred in 
the transmission. If a bit error occurs and is undetected by the checksum, that 
is bad – but it just means that a few bits in the data are wrong – not that we 
are missing the entire LSP.

I disagree. The Fletcher checksum is actually a very good hash. A mismatch does 
not imply that just a few bits are wrong. It could be that the LSP is wholly 
different. The Fletcher checksum takes a full LSP as input and produces 2 
octets (in the IS-IS usage, there are also higher Fletcher checksums). There 
are necessarily collisions as there are more than 65,535 possible LSPs.
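For reference, a minimal sketch of the 8-bit Fletcher checksum (the mod-255 variant applied over IS-IS LSP contents, ignoring the check-octet placement procedure of ISO 8473), together with a trivial collision: the one-octet inputs 0x00 and 0xFF both sum to zero mod 255, so wholly different data can share a checksum.

```python
def fletcher16(data: bytes) -> int:
    """8-bit Fletcher checksum yielding a 2-octet value (mod-255
    variant, simplified: no check-octet placement)."""
    c0 = c1 = 0
    for byte in data:
        c0 = (c0 + byte) % 255
        c1 = (c1 + c0) % 255
    return (c1 << 8) | c0

# Pigeonhole in action: two different inputs, one checksum value.
assert fletcher16(b"\x00") == fletcher16(b"\xff") == 0
```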

You are correct in that with HSNP, we are covering more of the database with a 
single checksum and that the consequences of a collision are potentially 
higher.  Yes, that’s a good reason to talk about the hash function and ensure 
that we’re getting statistics that we feel comfortable with. That’s not a 
reason to avoid HSNPs.
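The statistics are easy to bound. Assuming an ideal n-bit hash (an assumption on my part; the hash function and its width are exactly what would need to be agreed), the chance that two different databases nevertheless produce the same value is about 2^-n per comparison:

```python
# Per-comparison odds that two *different* databases share a hash,
# assuming an ideal n-bit hash (the function and width are illustrative
# assumptions, not fixed by the draft).
def collision_odds(bits: int) -> float:
    return 2.0 ** -bits

print(collision_odds(16))   # 16-bit, Fletcher-sized value: ~1.5e-5
print(collision_odds(64))   # 64-bit hash: ~5.4e-20
```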


I appreciate there is no magic here – but I think we can easily agree that 
improving scalability at the expense of reliability is not a tradeoff we can 
accept.

Assuming that we do it right, there is no meaningful compromise to reliability 
here.

2) Do we need a more sophisticated hash calculation in order to guarantee 
uniqueness? If the argument is the update process is already reliable even 
without CSNPs/HSNPs - that HSNPs are simply an optimization and don't have to 
be 100% reliable, then I think this implies that periodic CSNPs are not needed 
at all. And if the hash has a significant possibility of being non-unique, 
relying on HSNPs during adjacency bringup might actually be a hindrance, not a 
help.


Periodic CSNPs are not needed.  A periodic HSNP is sufficient, and if there are 
inconsistencies, then they will devolve into CSNPs to isolate the exact portion 
of the database that is inconsistent.  We intentionally re-use the CSNP and 
PSNP mechanisms as we saw no point in re-inventing them.
[LES:] My argument is that periodic xSNPs (be that CSNPs or HSNPs) may not be 
needed at all.

I disagree. I’ve seen too many errors where one system made a flooding mistake 
and it was eventually corrected by the CSNP.  Not doing a periodic check is a 
serious hit to reliability.  The original spec does not require periodic CSNPs 
on PTP links. As a pragmatic matter, we found that having it enabled on PTP 
links was a very practical benefit.  With HSNPs, we can continue that benefit 
at scale for lower overhead.

3) I would like to raise the question as to whether we should prioritize a 
solution that aids initial LSPDB sync on adjacency bringup over a solution 
which works well after LSPDB synchronization (periodic CSNPs).

Our solution works well in both cases.  In the case of initial bringup, our 
mechanism exchanges a logarithmic number of packets to isolate the exact LSPs 
that are inconsistent.  In the case where databases are already synchronized, 
this means that only a single top-level HSNP is required.

This is also true in the case of continuing verification of synchronized 
databases.

[LES:] The solution you have proposed works much better when the LSPDBs on the 
neighbors are “almost the same” because the ranges of LSPs covered in each hash 
are more likely to be the same.
At adjacency bringup this is less likely to be the case – meaning that every 
time I receive an HSNP from you I am more likely to need to calculate the hash 
the way you did rather than simply check a cached hash value.
(BTW – the use of cached hash values is mentioned in the draft as desirable – I 
did not invent this goal. 😊)
One way of improving this is to limit the hash TLV to LSPs from a single node 
(no range required).
This improves xSNP scalability from per LSP to per node.
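Les's per-node alternative might look something like the following sketch. The grouping logic and the SHA-256 digest are my assumptions, not an encoding from the draft: one hash entry per originating system, covering all of that system's fragments, so an xSNP carries one entry per node instead of one per LSP.

```python
import hashlib
from collections import defaultdict

def per_node_hashes(db):
    """db maps LSP IDs like 'C.00-01' (toy ID format mirroring the
    example above) to sequence numbers; returns one digest per
    originating system rather than one entry per LSP."""
    by_node = defaultdict(list)
    for lsp_id, seq in db.items():
        by_node[lsp_id.split(".")[0]].append((lsp_id, seq))
    return {node: hashlib.sha256(repr(sorted(frags)).encode()).hexdigest()
            for node, frags in by_node.items()}
```

A change to any one fragment perturbs only that node's hash, which is what limits the recomputation a receiver has to do.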

We can have this debate at the mic. I am very happy to go down a route where we 
have a very clear way of aggregating hashes and building the tree. Others don’t 
agree with me, but I consider this to be less substantive than getting the HSNP 
architecture in place.

If there is a bulk disconnect, then reception of a mismatching HSNP should 
provoke the transmission of the next lower ‘degree’ of hashes. This can iterate 
back and forth until it becomes a complete CSNP.
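The iteration can be sketched as a recursive range comparison. This is illustrative only: direct function calls stand in for HSNP round trips, and the binary split and hash choice are mine, not the draft's.

```python
import hashlib

def range_hash(db, ids):
    """Hash one contiguous range of LSP IDs (stand-in for one HSNP entry)."""
    return hashlib.sha256(
        "".join(f"{i}:{db.get(i)};" for i in ids).encode()).digest()

def find_mismatches(db_a, db_b, ids=None):
    """Return the LSP IDs on which two databases (mapping LSP ID to
    sequence number) disagree, comparing only a logarithmic number of
    range hashes along each disagreeing path."""
    if ids is None:
        ids = sorted(set(db_a) | set(db_b))
    if range_hash(db_a, ids) == range_hash(db_b, ids):
        return []                      # range agrees: prune it
    if len(ids) == 1:
        return ids                     # single LSP left: it differs
    mid = len(ids) // 2                # otherwise descend one 'degree'
    return (find_mismatches(db_a, db_b, ids[:mid]) +
            find_mismatches(db_a, db_b, ids[mid:]))
```

On the A/B example earlier in the thread, this prunes the agreeing C.00-00 range immediately and isolates C.00-01 and C.00-02 as the LSPs needing CSNP/PSNP-style repair.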

If we feel that that logarithmic exchange is too painful, we can also add more 
database details (e.g., a count of LSPs) to help gauge the amount of 
disagreement and help trigger CSNPs directly.

The need for periodic CSNPs arose from early attempts at flooding optimizations 
(mesh groups) where an error in the manual configuration could jeopardize the 
reliability of the Update Process. In deployments where standards-based 
flooding optimizations are used, the need for periodic CSNPs is lessened, as the 
standards-based solution should be well tested. Periodic CSNPs become the 
"suspenders" in a "belt" based deployment (or if you prefer the "belt" in a 
"suspenders" based deployment). I am wondering if we should deemphasize the use 
of periodic CSNPs?  In any case, the size of a full CSNP set is a practical 
issue in scale deployments - especially where a node has a large number of 
neighbors. Sending the full CSNP set on adjacency UP is a necessary step and 
therefore I would like to see this use case get greater attention over the 
optional periodic CSNP case.

Historically, we saw disconnects that were problematic and not just confined to 
mesh groups. All it takes is a minor implementation issue in anyone’s code and 
the entire domain is confused.  Not good.

Sending a full set of CSNPs is not always optimal.  Consider bringing up 
another parallel link between two systems with an existing adjacency: the 
databases are already synchronized, so the CSNP contributes nothing of 
significance.  The same is true when adding a link to an otherwise stable 
connected network.

Since this now reduces to sending a single top level HSNP, and I like having a 
belt and suspenders (figuratively), things are already much cheaper and I would 
favor retaining that.


4) You chose to define new PDUs - which is certainly a viable option. But I am 
wondering if you considered simply defining a new TLV to be included in 
existing xSNPs. I can imagine cases - especially in PSNP usage - where a 
mixture of existing LSP entries and new Merkle Hash entries could usefully be 
sent in a PSNP to request/ack LSPs as we do today. The use of the hash TLV in 
PSNPs could add some efficiency to LSP acknowledgments.


We chose to define new PDUs so as not to risk interoperability problems. We 
could easily see ourselves wanting to generate packets that include only HSNP 
information and no legacy CSNP/PSNP information.
[LES:] I am cautious about new PDUs because it translates into new PDUs per 
level and – somewhere down the road – new PDUs to support new scopes (RFC 
7356). (The 256-LSP-per-node limit is another limitation that we may yet have 
to deal with.)
Given that we are already negotiating the use of the new TLV per neighbor – and 
that in IS-IS unsupported TLVs are always ignored – I don’t see that the new 
TLV approach is any riskier.

Another fine debate at the mic.

6) You do not discuss the use of HSNPs on LANs. It would seem intuitive that 
HSNPs could only be used when all neighbors on the LAN support it. But some 
discussion of LANs would be desirable.

Agreed.  Given the decreasing usage of actual LAN situations, I think that this 
is not a significant concern.
[LES:] Agreed – but for completeness it should be discussed.

Fair.

T




_______________________________________________
Lsr mailing list -- [email protected]
To unsubscribe send an email to [email protected]