in draft-ietf-dnsop-5966bis-01 (about which more, separately), this text appears:
    There are several advantages and disadvantages to the increased use
    of TCP as well as implementation details that need to be considered.
    This document addresses these issues and therefore extends the
    content of [RFC5966], with additional considerations and lessons
    learned from new research and implementations
    [Connection-Oriented-DNS].

the [Connection-Oriented-DNS] paper (USC/ISI Technical Report ISI-TR-693, http://www.isi.edu/publications/trpublic/files/tr-693.pdf) is controversial and should not be used as the basis for standards work. examples of this technical report's failings include those shown below:

-------- (beginning of ISI-TR-693 comments)

> Well established techniques protect DNS servers from TCP-based DoS
> attacks [..., 70]

reference [70] is TCPCT (RFC 6013), which was rejected by the IETF TCPM WG in favour of google's less secure "tcp-fastopen", and is therefore not "well established."

> We do not have data to quantify the number of DNS amplification
> attacks. However, measurements of source IP
> spoofing shows that the number of networks that allow spoofing has
> been fairly steady for six years [7].

that study (http://www.internetsociety.org/doc/initial-longitudinal-analysis-ip-source-spoofing-capability-internet) uses the MIT Spoofer client, which must be downloaded by interested parties inside of the networks being tested. so, another explanation for the fairly steady results over the last six years may be a fairly steady selection bias over the same period. also, from the network security trench warfare perspective, the frequency and volume of the use of spoofing are both increasing over time, no matter what's happening to the spoof-capable launch footprint.

> We assume clients and servers use current commodity hardware and
> operating systems, and DNS client and server software with the changes
> we suggest. An implicit cost of our work is therefore the requirement
> to upgrade legacy hardware, and to deploy software with our
> extensions. This requirement implies full deployment concurrent with
> the natural business hardware upgrade cycle, perhaps 3 to 7 years. The
> client and server software changes we describe are generally small,
> and our prototypes are freely available.
> ...
> For clients, it may not be economical to field-upgrade embedded
> clients such as home routers. We suggest that such systems still often
> have an upgrade cycle, although perhaps a longer one that is driven by
> improvements to the wireless or wireline access networks.

measured hardware and software update cycles in the DNS field are between three months and forever, with a bright spot at about 15 years. source data: the EDNS deployment curve; the DNS SPR deployment curve; the BIND4 and BIND8 update curves. it's wise to assume that anything not updated within ten years will never be updated, but rather replaced when it fails. the implicit cost shown above will therefore not be paid on time, and any changes we wish to make to the protocol must (a) first, do no harm to the installed base, and (b) provide more benefit than cost during what will be an extensive transition period. CPE devices, in particular, are only upgraded when they fail, and not due to improvements in wireless access technologies.

> Even if TCP reduces DoS attacks, we must ensure it does not create new
> risks. Fortunately TCP security
> is well studied due to the web ecosystem.
> ...
> We show that TCP and TLS-over-TCP can provide near-UDP performance
> with connection caching.

it's known because of the web that TCP (which has been well studied, as above) can provide near-UDP performance using connection caching (as above). this means a new TCP-based system, running on a new port number and deployed only voluntarily by operators able to use modern hardware and operating systems, could duplicate the excellent laboratory results shown in this paper.
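for context, the connection caching behind those laboratory results rests on RFC 1035's two-byte length framing for DNS over TCP, which lets many queries and responses share one connection. a minimal sketch of that framing follows (the function names are mine, not the paper's):

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix a DNS message with its two-byte length (RFC 1035 section 4.2.2)."""
    return struct.pack("!H", len(msg)) + msg

def unframe(stream: bytes) -> list:
    """Split a TCP byte stream back into whole DNS messages."""
    msgs, off = [], 0
    while off + 2 <= len(stream):
        (length,) = struct.unpack_from("!H", stream, off)
        if off + 2 + length > len(stream):
            break  # partial message still in flight
        msgs.append(stream[off + 2 : off + 2 + length])
        off += 2 + length
    return msgs

# two queries reusing one (cached) connection share a single byte stream
wire = frame(b"query-1") + frame(b"query-2")
print(unframe(wire))  # -> [b'query-1', b'query-2']
```

the framing itself is cheap and long-deployed; it is the connection management around it that is at issue.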
those laboratory results, however, tell us nothing about what to expect from the installed base of existing TCP/53 speakers, nor of deployment of new TCP connection management among that installed base.

> We have several implementations of these protocols. Our primary client
> implementation is a custom client
> resolver that we use for performance testing. This client implements
> all protocol options discussed here and uses either the OpenSSL or
> GnuTLS libraries. We also have some functionality in a version of dig.

several of us have done the same work, using HTTP and HTTPS, with similar proxies and similar SSL libraries, and have achieved similar results. and if you were arguing for proxying DNS over HTTPS, even using binary payloads rather than the JSON or XML encodings I have sometimes proposed, then these excellent results would be indicative of what to expect from global deployment that does not rely on current behaviour of current TCP/53 speakers -- many of which will *never* be upgraded. the difference is not the port number per se, but rather that HTTP and HTTPS have unambiguous and scalable connection management.

> T-DNS deployment is technically feasible because our changes are
> backwards compatible with current DNS
> deployments. TLS negotiation is designed to disable itself when either
> the client or server is unaware, or if a middlebox prevents
> communication. (Individuals may choose to operate without DNS privacy
> or not if TLS is denied.) DNS already supports TCP, so clients and
> servers can upgrade independently and will get better
> performance with our implementation guidelines. Gradual deployment
> does no harm; as clients and servers
> upgrade, privacy becomes an option and performance for large responses
> improves.
if this protocol negotiates the use of TCP itself, and especially the use of persistent TCP, on the basis shown above for TLS -- that is, "disable itself when the client or server is unaware" -- then that's not obvious from the technical report. what's apparent from this technical report is that TLS is negotiable, not that the use of persistent TCP is negotiable. it is the use of TCP, and especially persistent TCP, which requires modern hardware and operating systems. it is also the use of TCP, and especially persistent TCP, which is not backward compatible with the installed base, due to unfortunate logic in RFC 1035 concerning TCP connection management.

> T-DNS deployment is feasible and motivations exist for deployment, but
> the need for changes to hardware
> and software suggests that much deployment will likely follow the
> natural hardware refresh cycle.

this assertion ("T-DNS deployment is feasible") is not justified by the rest of the text.

> A secondary challenge is optimizing TCP and TLS use servers so that
> they do not create new DoS opportunities. Techniques to manage TCP SYN
> floods are well understood [23], and large web providers have
> demonstrated infrastructure that serves TCP and TLS in the face of
> large DoS attacks. We are certain that
> additional work is needed to transition these practices to TCP-based
> DNS operations.

i agree: additional work is needed to transition these practices to TCP-based DNS operations. we could build a system that behaves as described here, but we cannot do it unilaterally and presumptively, without taking account of the reasonable expectations of implementers and operators who followed RFC 1035's unfortunate TCP connection logic advice to the letter.

> DNS-over-HTTPS (perhaps using XML or JSON encodings) has been proposed
> as a method that gets through middleboxes. We believe DNS-over-HTTPS
> has greater protocol overheads than T-DNS: both use TCP, but use of
> HTTP adds a layer of HTTP headers. It also requires deployment of a
> completely new DNS resolution infrastructure in parallel with the
> current infrastructure. Its main advantage is avoiding concerns about
> transparent DNS middleboxes that would be confused by TLS. We suggest
> that the degree to which this problem actually occurs should be
> studied before "giving up" and just doing HTTP.

first, as the web moves to HTTP/2, the size of the headers will shrink. even today they are nowhere near the difference between the size of UDP headers and the size of TCP headers, a difference that is already assumed by this proposal. that is, if header size is a problem, then we should not be using TCP. second, the deployment of any HTTP-based solution would be opportunistic, exactly as described for this TCP/53 deployment -- that is, it won't have any benefit until both ends of a prospective transaction are upgraded. so, in the sense that completely new infrastructure would be required for DNS-over-HTTP, the exact same assertion applies in the case of this proposal's use of persistent TCP/53. third, i can't and won't argue against the need for further study, but if further study is warranted for DNS-over-HTTP, then it is also warranted for DNS-over-TCP/53. there is no stated comparative disadvantage here.

> Use of a new port for T-DNS would avoid problems where middleboxes
> misinterpret TLS-encoded communication on the DNS port. It also allows
> skipping TLS negotiation, saving one RTT in setup. Other protocols
> have employed STARTTLS to upgrade existing protocols with TLS, but an
> experimental study of interference on the DNS reserved port is future
> work. The DNS ports are often permitted through firewalls today, so
> use of a new port may avoid problems with DNS-inspecting middleboxes
> only to create problems with firewalls requiring an additional open port.
since deployers of a new DNS-over-HTTP system would be "opt-in", they would know that they had to open a hole in their firewalls to make the new system accessible to them. this is similar to the requirement that deployers of the proposed DNS-over-TCP/53 system would have to use modern hardware and operating systems. there is no stated comparative disadvantage here.

> Additional safety comes from our approach to deal with all resource
> exhaustion: a server can always close connections to shed resources if
> it detects resource contention, such as running low on free memory.

as proven by the last decade or so of botnets, any single TCP listener can be overwhelmed by repeated idle connections up to the limit of the server's state capacity. the cost of renting a 10K-bot network for this purpose is about twenty US dollars per month, as long as it's not used for more than a few hours per day. this is similar to LOIC attacks, but less traceable. while we know that any large TCP-based service (such as akamai or google) can use massive provisioning to put these attacks out of reach for most attackers, we cannot require that this level of provisioning be done for every public-facing DNS server.

-------- (end of ISI-TR-693 comments)

the backdrop of these comments is that the DNS is a large, complex, and mostly-working system. as the stewards of its protocol, we have an obligation to do no avoidable harm. since a TCP/53 listener may have been developed and deployed by someone not within the sound of DNSOP's voice today, but who faithfully followed the unfortunate advice in RFC 1035 regarding TCP connection management, we have to respect their choices. what that really means is we have to avoid saturating their small TCP connection capacity (as managed by their long session timeouts), since that TCP capacity is the only hope of any UDP initiator who hears TC=1 from them.
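for concreteness, that TC=1 signal is a single bit in the twelve-byte DNS header, and nothing more than the check below stands between a truncated UDP answer and a TCP retry. a minimal sketch (the function name is mine; the header layout is RFC 1035 section 4.1.1):

```python
import struct

def is_truncated(reply: bytes) -> bool:
    """True if the DNS reply's TC bit is set (RFC 1035 section 4.1.1),
    meaning the UDP initiator must retry the query over TCP."""
    if len(reply) < 12:          # every DNS message starts with a 12-byte header
        raise ValueError("short DNS message")
    (flags,) = struct.unpack_from("!H", reply, 2)
    return bool(flags & 0x0200)  # TC is bit 9 of the flags word

# flags word 0x8200: QR=1 (response) with TC=1; 0x8000: QR=1, TC=0
truncated_reply = b"\x12\x34\x82\x00" + b"\x00" * 8
full_reply      = b"\x12\x34\x80\x00" + b"\x00" * 8
print(is_truncated(truncated_reply), is_truncated(full_reply))  # True False
```

when is_truncated() returns true, the initiator's only recourse is that same small TCP capacity.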
i argue that while any attacker can take down that TCP capacity at will with a trivial one-line perl script, that does not give us the right to take it down willfully, by using persistent TCP by default.

Paul Vixie

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop