> On 17 Mar 2015, at 10:55 pm, Alec Muffett <al...@fb.com> wrote:
>> How does the certificate "dead line" affect (non-)DNS for .onion?
> 
> Permit me to quote Brad Hill:
> 
> Quote: "The end date for the internal names loophole* is October - all
> public certs [which are issued] not for public namespaces MUST be revoked
> at that point. CAs can continue to issue up to that time, but they all
> must expire or be revoked on Oct 1, and no new ones issued. Those certs
> are really extremely dangerous, and [Brad has] been working for NINE YEARS
> now to make them go away. Can't happen soon enough."

        More details on the dangers associated with these certificates, in the
context of active gTLD expansion, can be found in ICANN SSAC document SAC057:
https://www.icann.org/en/system/files/files/sac-057-en.pdf
        As per that document, the ICANN security team has been among the groups
pressuring to have the internal names loophole closed for at least a couple
of years now. The problem has already scuttled some gTLD applications that
are regarded as too tainted by the issue (e.g. .corp).

        I agree with Richard Barnes that the special-purpose behaviour of
.onion (always returning NXDOMAIN where possible, to prevent information
leakage) is enough to justify its inclusion on the Special-Use List. While
it will be difficult to update all resolver implementations everywhere, it
shouldn’t be hard to achieve significant compliance (can’t you implement
this requirement with a very small amount of RPZ config?), and thus achieve
significant mitigation of the information leakage issue.
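        For illustration, a minimal BIND Response Policy Zone along the lines
suggested above might look like the following (the zone name "rpz.onion-block"
and file name are placeholders, not from the original message):

    # named.conf (excerpt): enable the response policy zone
    options {
        response-policy { zone "rpz.onion-block"; };
    };

    zone "rpz.onion-block" {
        type master;
        file "rpz.onion-block.db";
    };

    ; rpz.onion-block.db: in RPZ, the action "CNAME ." means answer NXDOMAIN
    $TTL 1h
    @        IN SOA  localhost. root.localhost. ( 1 1h 15m 30d 2h )
             IN NS   localhost.
    onion    IN CNAME .    ; NXDOMAIN for onion itself
    *.onion  IN CNAME .    ; NXDOMAIN for every name under .onion

With this in place the resolver answers NXDOMAIN locally for any .onion
query, so the query never leaks to the root servers.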
        I’m generally in favour of this proposal.

        David


_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
