On May 6, 2019, at 6:47 AM, Anoop Kumar Pandey <an...@cdac.in> wrote:
> Section 3 talks about various reasons for a certificate being large. The 
> Subject Alternative Name field is typically used for multi-domain or wildcard 
> certificates (fb.com, *.facebook.com, facebook.net, messenger.com) where all 
> domains are protected by the same certificate. I doubt that SAN will be a 
> reason for larger certificates in the IoT world. 

  EAP-TLS certificates are used in areas other than IoT.
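
  As a rough illustration, the SAN contribution can be measured directly.  A 
minimal sketch using the Python "cryptography" package; "server.pem" is a 
placeholder path, not a file from the draft:

# Sketch: measure a certificate's DER size and list its SAN DNS names.
# Assumes the "cryptography" package; "server.pem" is a placeholder.
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

der_len = len(cert.public_bytes(Encoding.DER))
try:
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    names = san.value.get_values_for_type(x509.DNSName)
    print(f"{der_len} bytes DER, {len(names)} SAN DNS names: {names}")
except x509.ExtensionNotFound:
    print(f"{der_len} bytes DER, no SAN extension")

  Each extra DNS name costs only its encoded length plus a few bytes of ASN.1 
overhead, so SAN alone rarely explains a very large certificate.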

> Similarly, the key size and signature fields are measured in bits (e.g. a 
> 2048-bit key, a 2048-bit signature, or a 224-bit key if ECC is used). So 
> these also shouldn't contribute to a larger certificate.
> 
> The same can be said about OIDs and user groups. Key usage is limited to 
> digital signature (80) [just one usage] in an end-device certificate.  

  No.  Key usage is in every intermediate certificate.  That is, if we have 
CA1 -> CA2 -> CA3 -> end certs, then all intermediate certs carry OIDs that 
allow them to sign subsequent certificates.
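
  Each cert in the chain can be checked for this.  A minimal sketch with the 
Python "cryptography" package (the file name "chain.pem" is a placeholder):

# Sketch: walk a PEM bundle (CA1 -> CA2 -> CA3 -> end cert) and show
# which certs are allowed to sign further certificates.
# Assumes "cryptography" >= 39; "chain.pem" is a placeholder.
from cryptography import x509

with open("chain.pem", "rb") as f:
    certs = x509.load_pem_x509_certificates(f.read())

for cert in certs:
    try:
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
        ku = cert.extensions.get_extension_for_class(x509.KeyUsage).value
        print(f"{cert.subject.rfc4514_string()}: "
              f"CA={bc.ca}, keyCertSign={ku.key_cert_sign}")
    except x509.ExtensionNotFound:
        print(f"{cert.subject.rfc4514_string()}: extension missing")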

> Section 4.1.1 talks about shortening or avoiding certain fields in the 
> certificate. Mostly the changes are marginal, and the authors have not 
> quantitatively shown how much saving in certificate size could be achieved, 
> or whether it would be sufficient.

  I agree that more quantitative data would help here.

  The reality is that some organizations treat certificates as a dumping 
ground for information.  Every intermediate cert carries names, addresses, 
and anything else they can think of.  Then the CA chain mirrors the corporate 
reporting chain.

  The result is a large certificate chain.  This happens in the real world:

http://lists.freeradius.org/pipermail/freeradius-users/2009-February/035621.html

  ... We are testing
WPA2/EAP-TLS authentication, with large certificate chains (just under
64K in PEM format).  Some individual cert sizes in the chain approach
10K in DER format.

  The solution there was "don't use large chains".
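
  Back-of-the-envelope arithmetic shows why.  A sketch, assuming roughly 1000 
usable bytes per EAP packet (actual fragment sizes depend on the link MTU):

# Sketch: estimate EAP round trips needed to carry a certificate chain.
# The 1000-byte fragment size is an assumption; real EAP MTUs vary.
import math

chain_size = 64 * 1024   # the ~64K chain from the thread above
fragment_size = 1000     # assumed usable bytes per EAP packet

round_trips = math.ceil(chain_size / fragment_size)
print(f"{round_trips} round trips just to deliver the chain")
# => 66, well past the 40-50 packet limit many implementations enforce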

> Section 4.2.3 talks about compressing the certificate. This is achievable; 
> however, the capability to compress, decompress, and perform ECC certificate 
> validation within a small IoT device is questionable. 

  Again, this applies to more than just IoT.
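
  For a rough feel of what compression buys, here is a minimal sketch using 
DEFLATE via Python's zlib (one of the algorithms defined for TLS certificate 
compression); "cert.der" is a placeholder file:

# Sketch: compare a DER certificate's size before and after DEFLATE.
# "cert.der" is a placeholder; actual savings vary with cert content.
import zlib

with open("cert.der", "rb") as f:
    der = f.read()

compressed = zlib.compress(der, level=9)
saving = 100 * (1 - len(compressed) / len(der))
print(f"DER: {len(der)} bytes, compressed: {len(compressed)} bytes "
      f"({saving:.0f}% smaller)")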

> P.S.: In EAP-TLS, isn't there clear direction on how many packet exchanges 
> are allowed before dropping the EAP session? Why do some implementations drop 
> the session after only 40-50 packets? 

  From the above thread, Jouni Malinen (hostap / wpa_supplicant) responds:

http://lists.freeradius.org/pipermail/freeradius-users/2009-February/035654.html

The main (well, more or less, the only) reason for that limit on
number of round trips is to work around issues where the EAP peer and
server ended up in an infinite loop ACKing their messages. I would
prefer to change that to be based on whether any real progress has
been made during the last round trip or two, i.e., to remove the hard
limit and allow as many round trips as it takes to get through the
authentication (or whatever else one adds into EAP, e.g., TNC)

  Ten years later, the implementation still just counts exchanges.  It's fine; 
it works.
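
  The difference between the two approaches is small in code.  A minimal 
sketch of both checks; the names are hypothetical, not wpa_supplicant's 
actual code:

# Sketch: two ways to bound an EAP conversation.
# Hypothetical names; neither is real wpa_supplicant code.

MAX_ROUND_TRIPS = 50  # the hard limit most implementations use today

def hard_limit_exceeded(round_trips: int) -> bool:
    """Current behaviour: just count exchanges."""
    return round_trips > MAX_ROUND_TRIPS

def stalled(bytes_moved_last_two_rounds: int) -> bool:
    """Jouni's alternative: drop only when round trips stop making progress."""
    return bytes_moved_last_two_rounds == 0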

  An additional reason to forbid longer exchanges is that they could 
potentially be used for bulk data transfer.  If the exchange is unlimited, 
then a client and home server can collude to "tunnel" generic IP traffic over 
EAP.

  But the biggest reason to limit the exchanges is that there is simply no 
reason to allow them.  Even in the above situation (64K certificate chain), 
there's no *technical* reason for such large chains.

> In the standard of EAP-TLS, can the number of exchanged packets be 
> increased? If yes, apart from latency, what other impact could it have?

  Realistically, it can't be increased.  There are probably tens of millions of 
access points and switches in production.  They are the de facto standard.  If 
you publish a new RFC saying "allow up to 128 packets", it won't affect those 
legacy devices.  The new standard *might* start showing up in new devices, once 
the standard is published.  Or it might not.  I've seen standards take 10+ 
years to *start* getting adopted.

  Allowing more packets wouldn't generally increase latency, because 
99.99999999% of authentications won't need more packets.

  And realistically, if you can't authenticate someone after exchanging 50K of 
data, you're likely doing something wrong.

  Alan DeKok.
