Robert –

As has been stated multiple times before, there is nothing today – without
MP-TLV support – that prevents an attacker from sending many TLVs for the same
object. The introduction of MP-TLV does not alter that attack vector.
As was discussed in this thread with Med, if anything the addition of MP-TLV
support makes an implementation better equipped to handle such attacks, since
MP-TLVs are recognized and not treated as duplicates.
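To make the point concrete, here is a purely illustrative sketch in Python
(the data structures and names are hypothetical; nothing here is prescribed
by the draft) of how an MP-TLV-aware receiver handles multiple TLVs for the
same object:

    # Illustrative sketch only -- hypothetical structures, not from the draft.
    from collections import defaultdict

    def group_mp_tlvs(tlvs):
        """Group received TLVs by (type, key), where 'key' stands for the
        fixed portion that identifies the object (e.g. a neighbor or prefix)."""
        groups = defaultdict(list)
        for tlv in tlvs:
            groups[(tlv["type"], tlv["key"])].append(tlv)
        return groups

    def merge_group(group):
        """An MP-TLV-aware receiver concatenates the variable portions of all
        TLVs advertised for the same object, instead of keeping only one."""
        merged = dict(group[0])
        merged["values"] = [v for tlv in group for v in tlv["values"]]
        return merged

An implementation without MP-TLV support would typically keep a single TLV per
object and treat the rest as duplicates – which is the exposure that already
exists today.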

   Les

From: Robert Raszuk <rob...@raszuk.net>
Sent: Friday, March 28, 2025 2:59 PM
To: Les Ginsberg (ginsberg) <ginsb...@cisco.com>
Cc: mohamed.boucad...@orange.com; The IESG <i...@ietf.org>; 
draft-ietf-lsr-multi-...@ietf.org; lsr-cha...@ietf.org; lsr@ietf.org; 
yingzhen.i...@gmail.com
Subject: Re: [Lsr] Re: Mohamed Boucadair's Yes on draft-ietf-lsr-multi-tlv-11: 
(with COMMENT)

Hi Les,

> we now have to do an upgrade

The way I understand Mohamed's suggestion is not to have a static
implementation constant, but a limit configurable by the operator. If you
prefer the default to be infinity, so be it. If this is just configuration,
no upgrade would be necessary.

But how do you protect the network from malicious or accidental injection of
an MP-TLV consisting of thousands of pieces? It seems better to ignore a TLV
exceeding such a limit than to break the entire level.

So let's assume there is no limit ... how do you protect against such attack
vectors, or simply bugs, especially if what is exceeded is just a link-state
opaque value carried by IS-IS merely for the convenience of other upper-layer
applications?

Ideally, such a limit would need to be settable per MP-capable TLV.
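Purely to illustrate the kind of knob I have in mind (the names, defaults and
TLV code points below are examples only; none of this is in the draft), a
per-TLV-type limit with an "unlimited" default could be as simple as:

    # Illustrative sketch only -- the knob and its default are hypothetical.
    import math

    # Operator-configurable maximum number of MP-TLV pieces accepted per
    # object, keyed by TLV type; the default is "no limit".
    mp_tlv_limit = {
        22: math.inf,   # Extended IS Reachability
        135: math.inf,  # Extended IP Reachability
    }

    def accept_mp_tlv_group(tlv_type, pieces):
        """Return True if the number of pieces advertised for one object is
        within the operator-configured limit for this TLV type."""
        limit = mp_tlv_limit.get(tlv_type, math.inf)
        if len(pieces) > limit:
            # Ignore (and log) the offending object rather than breaking the
            # entire level.
            return False
        return True

The point is only that the operator, not the specification, picks the number,
and that the reaction is to ignore the offending object rather than anything
more drastic.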

Thx,
R.


On Fri, Mar 28, 2025 at 9:47 PM Les Ginsberg (ginsberg)
<ginsberg=40cisco....@dmarc.ietf.org> wrote:
Med –

V13 of the draft has been posted, which addresses the logging text and the
Security section text.

There is then one open issue:


> > > # Section 5 (*)
> > >
> > > CURRENT:
> > >    When processing MP-TLVs, implementations MUST NOT impose a
> > >    minimum length check.
> > >
> > > Agree... however, should we have a max of MP-TLVs to be used as a
> > > guard for splitting the information into a large number of TLVs?
> >
> > [LES:] I see no reason to impose a maximum. Any number chosen would be
> > arbitrary and risks becoming "too small" in the future.
>
> [Med] I'm not asking to pick a random max value, but to have a knob for
> Operators to control a max based on their local policy. We need some guards
> here.
>

[LES:] I am not comfortable specifying (or even suggesting) a max value.

[Med] As I said earlier, I'm not asking for that. The exact value will be up
to the taste of the operator. My request here is to give that control.



It isn't needed, and I think it introduces additional issues.

[Med] Misbehaviors from within networks happen, and bugs and misconfiguration
happen as well. I still think the guard does not introduce any issues, but
rather helps fix them.



The only reasons an excessively large number of TLVs for the same object would
be sent are:

a) They are actually needed because of the amount of information required in a
given deployment
b) The sending implementation has a bug
c) There is an attacker



Regardless of the reason, the receiver has to deal with this. Specifying an 
arbitrary max isn’t going to help when the need is legitimate - and it 
obviously won’t help prevent the pathological cases.

Note that Section 5 (last paragraph) provides guidance on how to deal with 
duplicated information.



[Med] I’m afraid that para does not cover this point.



[LES2:] Let’s dig a bit deeper here.

You propose that each implementation locally choose a maximum value for the 
number of MP-TLVs it supports for a given object.

Suppose that Node A chooses “2” and Node B chooses “3”.

Both of them receive advertisements which have 3 MP-TLVs for a given object. 
What will happen?

Node A will use 2 of the TLVs – Node B will use all 3 of the TLVs.

Which means we are in an equivalent situation to having a node which doesn’t 
support MP-TLVs at all in the network.

And we are vulnerable to the same problems – forwarding loops/black holes.
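To make the divergence concrete (an illustrative sketch only, with made-up
data):

    # Illustrative sketch only -- shows how differing local limits diverge.
    def usable_pieces(advertised, local_limit):
        """A node that caps MP-TLVs at 'local_limit' only uses that many of
        the pieces advertised for a given object."""
        return advertised[:local_limit]

    advertised = ["TLV#1", "TLV#2", "TLV#3"]   # 3 MP-TLVs for one object
    node_a = usable_pieces(advertised, 2)      # Node A chose a limit of 2
    node_b = usable_pieces(advertised, 3)      # Node B chose a limit of 3

    # node_a != node_b: the two nodes run SPF on different link-state data
    # for the same object -- the same inconsistency caused by a node that
    # does not support MP-TLV at all.
    assert node_a != node_b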



One way of dealing with this would be to specify a global maximum instead of 
leaving it to individual nodes to choose a maximum.

But this means if the number proves too small over time, we now have to do an 
upgrade and get all nodes to support a larger number.



And even with a consistent maximum, receivers still have to deal with whatever
they receive, i.e., nodes cannot simply ignore the additional TLVs. Even if
they don't actively use the information in those TLVs, they have to track all
of the TLVs associated with a given object so that they become aware when that
number falls into the acceptable range – noting that you don't know which of
the TLVs may be withdrawn in the next update.
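In other words, even a receiver that enforces a local maximum needs
bookkeeping along these lines (an illustrative sketch; the structure is
hypothetical):

    # Illustrative sketch only -- hypothetical tracking structure.
    class MpTlvTracker:
        def __init__(self, local_max):
            self.local_max = local_max
            self.pieces = {}  # object key -> list of currently advertised TLVs

        def update(self, key, tlvs):
            """Called on every LSP update: replace the full set of TLVs for the
            object, since any of them may be withdrawn in the next update."""
            self.pieces[key] = list(tlvs)

        def usable(self, key):
            """Only use the object once the count is back within range -- which
            requires having tracked every TLV all along."""
            tlvs = self.pieces.get(key, [])
            return tlvs if len(tlvs) <= self.local_max else []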



I appreciate the concern that you have – but imposing a limit isn’t going to 
help – it is only going to create additional problems.



    Les


_______________________________________________
Lsr mailing list -- lsr@ietf.org
To unsubscribe send an email to lsr-le...@ietf.org