On May 8, 2019, at 7:16 AM, Anoop Kumar Pandey <an...@cdac.in> wrote:
> 
>> The reality is that there are some organizations who treat certificates as a 
>> dumping ground for information.
> 
> 3 Tier chained certificate with organization validated certificate in DER 
> encoded binary mode has a size of just 1588 bytes (1.55 KB) [Attached].  It 
> has a lot of Information including SANs and OIDs. 
> 
> You need to show quantitatively with example the effect of the fields that 
> are responsible for the larger certificate.

  I did.  64K certificate chains contain large certs.  Which fields contribute 
to that size is less important.

  If you really care, it's trivial to dump large strings into a certificate.  
1K octet email addresses aren't *technically* forbidden.  Similar arguments go 
for physical addresses.
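  To make that concrete, here's a minimal sketch (plain Python, value chosen 
arbitrarily) of DER's IA5String encoding, which is what an rfc822Name SAN uses. 
Nothing in the encoding itself caps the length: a 1K-octet email address costs 
only four bytes of encoding overhead.

```python
def der_ia5string(value: bytes) -> bytes:
    """Encode an ASN.1 IA5String (tag 0x16) in DER.

    DER uses short-form length for values under 128 bytes and
    long-form length (a byte count prefix) for anything larger --
    which is how arbitrarily large strings fit in a certificate.
    """
    if len(value) < 0x80:
        length = bytes([len(value)])          # short form: one length byte
    else:
        n = len(value).to_bytes((len(value).bit_length() + 7) // 8, "big")
        length = bytes([0x80 | len(n)]) + n   # long form: 0x8N + N length bytes
    return b"\x16" + length + value

# A hypothetical 1K-octet local part -- not *technically* forbidden.
email = b"x" * 1000 + b"@example.org"
encoded = der_ia5string(email)
print(len(encoded))  # -> 1016: 1012 bytes of value, 4 bytes of DER overhead
```

  The specs leave the limit to "whatever the encoding allows", and the encoding 
allows essentially anything.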

  The point here is that no, I *don't* have to quantitatively demonstrate which 
fields are responsible for larger certificates.  The specs don't forbid large 
values in any field, so the fields can contain large values.  QED.

  My argument is that we need to explain to administrators *why* such practices 
are wrong.  Right now, admins who have poor practices discover that the 
certificates don't work, for unknown reasons.  So to fix it, people poke at the 
certs until they work.

  I'm suggesting that explicit guidance is better than random guesses.

>> And additional reason to forbid longer exchanges is that it could 
>> potentially be used for bulk data transfer.  If the exchange is unlimited, 
>> then a client and home server can collude to "tunnel" generic IP traffic 
>> over EAP.
> 
> The protocol designer needs to take care of such situations.

  They didn't.  Therefore, the implementors chose to enforce a limit.  And that 
limit is 40-50 packets.

  We now have to live with that, and bake it into the spec as best practices 
for "here's what will work".
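  For a sense of scale, here's the back-of-the-envelope arithmetic.  The ~1400 
bytes of TLS data per EAP round trip is an assumed typical fragment size; the 
real value depends on the deployment.

```python
# Assumption: ~1400 bytes of TLS handshake data carried per EAP round trip
# (a common EAP-TLS fragment size; deployment-dependent).
FRAGMENT_SIZE = 1400
chain_size = 64 * 1024   # a 64K certificate chain

round_trips = -(-chain_size // FRAGMENT_SIZE)  # ceiling division
print(round_trips)  # -> 47
```

  That's 47 round trips for the certificate chain alone, before the rest of the 
TLS handshake, which is already at the edge of a 40-50 packet limit.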

>>> Realistically, it can't be increased.  There are probably tens of millions 
>>> of access points and switches in production.  They are the de-facto 
>>> standard.  If you publish a new RFC saying "allow up to 128 packets", it 
>>> won't affect those legacy devices.  The new standard *might* start showing 
>>> up in new devices, once the standard is published.  Or it might not.  I've 
>>> seen standards take 10+ years to *start* getting adopted.
> 
> This limitation would be with any protocol. If we ask everyone to use ECC or 
> certificate caching or certificate compression, that will also take time.

  These are upgrades to end user devices.

> Or if the customer insists or reports, OEM will have to provide firmware 
> upgrade or device replacement with new protocol implemented. 

  Those are upgrades to WiFi access points and switches.

  There are tens of millions, if not hundreds of millions of WiFi access points 
out there.  For many, the vendor is out of business.  For others, the vendor no 
longer supports that product, maybe due to losing the source.  For others, the 
admins don't know the passwords to the devices.  For most, admins don't have 
the time, budget, or inclination to upgrade or replace them when the devices 
already work.

  I'm not sure what the disagreement is here.  I'm saying that the practical 
limits are ~50 round trips, and maybe ~64K certificate chains.  You're not 
disagreeing, but you're asking me to justify my position, and are arguing 
against it.  I'm not clear what point you're trying to make.

  Alan DeKok.

_______________________________________________
Emu mailing list
Emu@ietf.org
https://www.ietf.org/mailman/listinfo/emu
