Inline, just iterating on some good ideas in this mail (otherwise I tend to
agree with all the points TL makes, from my experience).

On Sat, Jul 19, 2025 at 11:55 AM Tony Li <[email protected]>
wrote:

>
>
> Les,
>
>
>
> *[LES:] The solution you have proposed works much better when the LSPDBs
> on the neighbors are “almost the same” because the ranges of LSPs covered
> in each hash are more likely to be the same.*
>
> *At adjacency bringup this is less likely to be the case – meaning that
> every time I receive an HSNP from you I am more likely to need to calculate
> the hash the way you did rather than simply check a cached hash value.*
>
> *(BTW – the use of cached hash values is mentioned in the draft as
> desirable – I did not invent this goal. **😊**)*
>
> *One way of improving this is to limit the hash TLV to LSPs from a single
> node (no range required).*
>
> *This improves xSNP scalability from per LSP to per node.*
>
>
>
> We can have this debate at the mic. I am very happy to go down a route
> where we have a very clear way of aggregating hashes and building the tree.
> Others don’t agree with me, but I consider this to be less substantive than
> getting the HSNP architecture in place.
>
>
>
> If there is a bulk disconnect, then reception of a mismatching HSNP should
> provoke the transmission of the next lower ‘degree’ of hashes. This can
> iterate back and forth until it becomes a complete CSNP.
>
>
>
> If we feel that that logarithmic exchange is too painful, we can also add
> more database details (e.g., a count of LSPs) to help gauge the amount of
> disagreement and help trigger CSNPs directly.
>

A counter would be a simple, clever thing ;-) It may however cost up to 32
bits per hash for a range (since we can easily blow through 65K total LSPs
that a hash covers).

Generally, getting more "information at lower collision probability" will
always cost computation. As an example: XOR'ing all the LSP hashes is the
fastest way to go about the business, but it does NOT take the sequence into
account, which a checksum would do. However, the "adjust for range mismatch"
step then becomes massively more expensive, since we cannot just "XOR away
the excess hashes" (addition may be simpler).
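To make the trade-off concrete, here is a minimal sketch (all names hypothetical, nothing from the draft): an order-independent XOR aggregate allows an O(1) "XOR away" adjustment when the ranges mismatch, while an order-sensitive digest must be recomputed over the whole remaining sequence.

```python
import hashlib
from functools import reduce

def lsp_hash(lsp: bytes) -> int:
    """Illustrative 32-bit hash of a single LSP (stand-in, not the draft's)."""
    return int.from_bytes(hashlib.sha256(lsp).digest()[:4], "big")

def xor_aggregate(lsps) -> int:
    """Order-independent aggregate: XOR of the per-LSP hashes."""
    return reduce(lambda acc, lsp: acc ^ lsp_hash(lsp), lsps, 0)

def seq_digest(lsps) -> bytes:
    """Order-sensitive aggregate: digest over the sequence of per-LSP hashes."""
    h = hashlib.sha256()
    for lsp in lsps:
        h.update(lsp_hash(lsp).to_bytes(4, "big"))
    return h.digest()[:4]

lsps = [b"lsp-A", b"lsp-B", b"lsp-C"]

# Range mismatch: lsp-C falls outside the peer's range. With XOR this is an
# O(1) adjustment -- we just "XOR away" the excess hash...
adjusted = xor_aggregate(lsps) ^ lsp_hash(b"lsp-C")
assert adjusted == xor_aggregate(lsps[:2])

# ...but the order-sensitive digest must be recomputed from scratch, and it
# changes when the sequence order changes:
assert seq_digest([b"lsp-A", b"lsp-B"]) != seq_digest([b"lsp-B", b"lsp-A"])
```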

As I suggested, we can double-hash; we can however also use a fast,
lightweight checksum over the sequence of all checksums of the lower level.
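A sketch of that second idea, assuming Adler-32 purely as a stand-in for "fast, lightweight checksum" (the draft mandates no particular one): strong per-LSP hashes at the bottom level, and a cheap checksum over their ordered concatenation one level up.

```python
import zlib

def level_checksum(lower_hashes) -> int:
    """Fast checksum over the ordered sequence of lower-level hashes.
    Adler-32 is a stand-in here; any cheap order-sensitive checksum works."""
    return zlib.adler32(b"".join(lower_hashes))

# Unlike a plain XOR, the result depends on the order of the inputs:
h1, h2 = b"\x00\x01\x02\x03", b"\x04\x05\x06\x07"  # stand-in lower-level hashes
assert level_checksum([h1, h2]) != level_checksum([h2, h1])
```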

Generally, as a first-order thought experiment on the likelihood of collision:

assume sequence X1 in range R generates a hash X1'
and at the same time sequence X2 in the same range R generates a hash X2'

_if_ the hash production is ideally evenly distributed _and_ every bit in the
hash is statistically independent of the others, then the collision likelihood is

(1/2) ^ 32    i.e. any given bit in X1' and X2' has a 50% chance of being the
same, and since the probabilities are independent we end up with a .00000000023283064365
chance of collision. That's around the age of stars in terms of collisions,
I'd venture ;-)

Let's assume we have a super-degenerate case where 16 bits become 100%
dependent (basically half the hash is useless, since it either does not pick
up entropy or the distribution of things in X1 and X2 causes the hash function
to degenerate in terms of the distribution of generated values [hence the
double-hashing proposal: it's much, much harder to degenerate 2 hashes]).

With only 16 independent bits, the collision likelihood is

.00001525878906250000

which is about 1.5 thousandths of a percent, i.e. statistically speaking we'd
need on the order of 65,000 such completely degenerate sequences over range R
to generate a collision (observe that the indicated range on the hash MUST be
the same to collide, so collisions across LSP ranges that differ are
irrelevant), and they would have to follow each other fairly closely (in less
time than aging and so on).
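The back-of-the-envelope numbers above can be checked directly (same assumptions: ideal distribution, fully independent bits):

```python
p_ideal = 0.5 ** 32   # two 32-bit hashes over the same range colliding
p_degen = 0.5 ** 16   # degenerate case: only 16 independent bits remain

print(f"{p_ideal:.20f}")   # 0.00000000023283064365
print(f"{p_degen:.20f}")   # 0.00001525878906250000

# expected number of fully degenerate sequence pairs before one collision
print(round(1 / p_degen))  # 65536
```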



>
>
>
> *4)You choose to define new PDUs - which is certainly a viable option. But
> I am wondering if you considered simply defining a new TLV to be included
> in existing xSNPs. I can imagine cases - especially in PSNP usage - where a
> mixture of existing LSP entries and new Merkle Hash entries could usefully
> be sent in a PSNP to request/ack LSPs as we do today. The use of the hash
> TLV in PSNPs could add some efficiency to LSP acknowledgments.*
>
>
>
>
>
> We chose to go to new PDUs to not risk interoperability problems. We could
> easily see ourselves wanting to generate packets that only include HSNP
> information and no legacy CSNP/PSNP information.
>
> *[LES:] I am cautious about new PDUs because it translates into new
> PDUs/level and – somewhere down the road – new PDUs to support new scopes
> (RFC 7356). (The 256 LSP limit per node is another limitation that we may
> yet have to deal with.)*
>
> *Given we are already negotiating the use of the new TLV/neighbor – and
> that in IS-IS unsupported TLVs are always ignored – I don’t see that the
> new TLV approach is more risky.*
>
>
>

On the IIH, this "ignore unknown" behavior serves a fine purpose here as
signalling: if both sides show HSNP support, the machinery starts; otherwise
it doesn't (either way, discussion outstanding). For the HSNP itself, trying
to shove those into PSNPs via some TLV trickery will just generate a ball of
yarn, while the "new PDU in new scope" argument seems a red herring to me.
Those packets are strictly CSNP scope; there is no magic here.


>
>
>
_______________________________________________
Lsr mailing list -- [email protected]
To unsubscribe send an email to [email protected]
