[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
On 09/07/2024 11:06, Kazunori Fujiwara wrote:
> Dear DNSOP,
>
> I submitted a new draft that proposes to consider "Upper limit value for
> DNS". If you are interested, please read and comment on it.

I disagree with the rationale for 13 name servers.

The root (and .com) have that number because it was what would fit into packets of a particular size, given their naming scheme and that scheme's efficient compressibility.

If there is to be a recommended limit, it should be justified specifically by packet size, and not just "because this is what the root does".

IIRC, Vixie et al. wrote a draft on this, but it didn't reach RFC status.

Ah, there it is:

https://datatracker.ietf.org/doc/html/draft-ietf-dnsop-respsize-15.txt

Ray

___ DNSOP mailing list -- dnsop@ietf.org To unsubscribe send an email to dnsop-le...@ietf.org
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
On 10 Jul 2024, at 11:22, Ray Bellis wrote:

> On 09/07/2024 11:06, Kazunori Fujiwara wrote:
>> Dear DNSOP,
>> I submitted a new draft that proposes to consider "Upper limit value
>> for DNS". If you are interested, please read and comment on it.
>
> I disagree with the rationale for 13 name servers.
>
> The root (and .com) have that because it was what would fit into packets
> of a particular size given their naming scheme and that scheme's
> efficient compressibility.

More than that: 13 nameservers was the maximum number you could fit in a priming response's additional section without EDNS(0), assuming a maximally compressible naming scheme and v4-only nameservers. The corresponding limit is different once every nameserver's glue includes AAAA as well as A records.

It's a number that is historically interesting, and it seems to be empirically reasonable given that priming in 2024 seems to work, but it's no longer special for hard, prescriptive reasons.

Priming responses are special because the QNAME has a fixed size. This is not generally true for referral responses, so the number is even less suitable as a limit there.

> IIRC, Vixie et al. wrote a draft on this, but it didn't reach RFC status.
>
> Ah, there it is:
>
> https://datatracker.ietf.org/doc/html/draft-ietf-dnsop-respsize-15.txt

Yes, I like that draft. From memory it doesn't impose hard limits; it anticipates partial glue in referral responses and gives indicative guidance about the potential for failure instead, which I think is a better approach. I seem to recall it contains code written in Perl, which I might argue has not aged well.

Joe
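The "would fit into packets of a particular size" arithmetic is easy to reproduce. Below is a back-of-envelope sketch of a pre-EDNS(0) 512-octet priming response under the assumptions Joe lists (maximally compressible "?.root-servers.net" names, v4-only glue). The constants are standard DNS wire-format field sizes; the function itself is my own illustration, not code from any implementation.

```python
# Back-of-envelope size of a root priming response: 13 NS records plus 13
# IPv4 glue records, under the pre-EDNS(0) 512-octet UDP limit, with the
# maximally compressible "?.root-servers.net" naming scheme.

HEADER = 12                 # fixed DNS header
QUESTION = 1 + 2 + 2        # QNAME "." (one null octet) + QTYPE + QCLASS
RR_FIXED = 2 + 2 + 4 + 2    # TYPE + CLASS + TTL + RDLENGTH

def priming_size(nservers):
    # Answer section: NS RRset owned by "." (owner name is 1 octet).
    first_ns = 1 + RR_FIXED + 20     # "a.root-servers.net." spelled out once
    other_ns = 1 + RR_FIXED + 4      # one-letter label + compression pointer
    answer = first_ns + (nservers - 1) * other_ns
    # Additional section: one A RR per server, owner via 2-octet pointer.
    additional = nservers * (2 + RR_FIXED + 4)
    return HEADER + QUESTION + answer + additional

print(priming_size(13))  # 436 -- comfortably under 512 octets
```

Swapping the 4-octet A rdata for 16-octet AAAA rdata (or carrying both) is what changes the arithmetic in the dual-stack case Joe mentions.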
[DNSOP] Dnsdir last call review of draft-ietf-dnsop-zoneversion-10
Reviewer: Nicolai Leymann
Review result: Ready

I am the designated DNS Directorate reviewer for draft-ietf-dnsop-zoneversion. This is the fourth review I am doing; in my previous reviews, of the -08 and -09 versions, I came to the conclusion that the document is ready for publication.

The draft is going to be published as an Informational RFC. The document is well written and defines an EDNS option which can be used for debugging purposes. Again, compared to the -09 version there were only a few minor changes (raised during WGLC) to make the document clearer and more readable. In addition, some of the references were changed (removed).

Overall I think the document is ready for publication.
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
> I disagree with the rationale for 13 name servers.
>
> The root (and .com) have that because it was what would fit into
> packets of a particular size given their naming scheme and that
> scheme's efficient compressibility.
>
> If there is to be a recommended limit, it should be specifically
> for packet size reasons, and not just "because this is what the
> root does".

In this case, what the root does is a minimum; otherwise things would break. But there is more at play than packet sizes.

From a packet size point of view, the limit on RRsets is around 64 KB per RRset. For CNAMEs, all CNAMEs plus the result RRset need to fit in 64 KB. For delegations, all NS records plus required glue need to fit, etc.

However, those limits provide an opportunity to completely DoS a recursive resolver. No recursive resolver just keeps following CNAMEs until a 64 KB limit is reached, so in practice recursive resolvers have far lower limits on the number of CNAMEs they are willing to follow. What we see is that some names cannot be resolved by some resolvers because the CNAME chain is longer than what the resolver accepts.

Currently, security researchers seem to have a hard time finding interesting bugs in DNS software, so they mainly focus on DoS attacks. The net result is that for recursive resolver software there is a push to reduce the limits of what the resolver accepts. When the limits get too low, things start breaking.

So the question becomes: do we want some limits in an RFC that everybody agrees on, or do we want to keep the current informal system where limits are not fixed and people can get unlucky if they exceed limits they didn't know existed?
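Philip's point that resolvers cap CNAME chains far below anything the 64 KB message limit implies can be illustrated with a toy chase loop. The limit value here is purely illustrative; each real resolver picks its own, which is exactly the informality under discussion.

```python
# Toy CNAME chase illustrating why resolvers impose their own chain limits
# far below anything the 64 KB message limit would allow. MAX_CNAME_CHAIN
# is a purely illustrative, implementation-chosen value, not a proposal.

MAX_CNAME_CHAIN = 8  # hypothetical per-implementation bound

def resolve(name, records):
    """Follow CNAMEs in `records` (owner -> target) until reaching a name
    with no CNAME, giving up after MAX_CNAME_CHAIN hops."""
    for _ in range(MAX_CNAME_CHAIN):
        if name not in records:
            return name          # terminal name: the answer lives here
        name = records[name]     # follow one CNAME hop
    return None                  # chain too long: this resolver gives up

chain = {f"c{i}.example": f"c{i + 1}.example" for i in range(12)}
print(resolve("c0.example", chain))   # None -- 12 hops exceed the limit
print(resolve("c5.example", chain))   # c12.example -- 7 hops fit
```

A zone whose chain length sits between two resolvers' chosen bounds resolves for some users and not others, which is the breakage Philip describes.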
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
On 10/07/2024 15:27, Philip Homburg wrote:
> So the question becomes: do we want some limits in an RFC that everybody
> agrees on, or do we want to keep the current informal system where
> limits are not fixed and people can get unlucky if they exceed limits
> they didn't know existed?

I do find the possible values in the document very strict at the moment, and further categorizing by QTYPE is stricter still. For example, the KeyTrap vulnerability that is mentioned is handled by the validator logic; I don't see a reason to restrict to only 3 DSes and hinder future operations and protocol development. My first attempt would be to bring the number of RRs down to a "sensible number" from the current as-many-as-fit.

In contrast, I do think that there should be a low limit on CNAME chains and NS records, since they already allow for (resource) amplification factors that are not trivially tied to rogue users of resolvers.

In general, having limits in an RFC that people can point to goes much further than developers trying to argue with users, for example about what a sensible length for a CNAME chain is.

Best regards,
-- Yorgos
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
> On 10 Jul 2024, at 14:27, Philip Homburg wrote:
>
> So the question becomes: do we want some limits in an RFC that everybody
> agrees on, or do we want to keep the current informal system where limits
> are not fixed and people can get unlucky if they exceed limits they didn't
> know existed?

I'd prefer somewhere in between. Nailing down fixed limits could be tricky because there are too many moving parts: transport, DNSSEC flavours, Do[TQH], what (not) to drop from the Additional Section, etc. And those limits may change as the DNS and/or the Internet evolves.

The current informal arrangements may well be too loose. The info isn't in one place, making it hard for DNS operators.

IMO documenting the trade-offs in response sizes could be a better option, i.e. if the response > X, it breaks foo; if it's > Y, it breaks bar.
[DNSOP] Stateless Hash-Based Signatures in Merkle Tree Ladder Mode (SLH-DSA-MTL) for DNSSEC
Hi,

since draft-fregly-dnsop-slh-dsa-mtl-dnssec-02 and draft-harvey-cfrg-mtl-mode-03 have been published now, I would like to discuss something I noticed when this was first brought to my attention during the IETF meeting in Prague.

Section 6.2 says:

> As described in 9.2 of [I-D.harvey-cfrg-mtl-mode], when a verifier
> receives a condensed signature, the verifier determines whether any of
> the MTLs it has previously verified includes a rung that is compatible
> with the authentication path in the condensed signature. If not, then
> the verifier requests a new signed ladder. [...]
> Accordingly, a resolver SHOULD first query a name server without the
> mtl-mode-full option, and then, if needed, re-issue the query with the
> mtl-mode-full option. Since responses to queries with the mtl-mode-full
> option are expected to be large, it is RECOMMENDED that queries with the
> mtl-mode-full option be issued over transports (e.g., TCP, TLS, QUIC)
> that support large responses without truncation and/or fragmentation.

I have pointed out that a malicious zone operator can return a different rung every time, effectively making the resolver request a new signed ladder on every query. This removes any benefit that resolvers gain from using the MTL mode.

Again, if I am understanding the protocol correctly, it should even be possible to pre-generate the different answers and just mess with the resolver by invalidating the previously received response, using low TTL numbers and providing a different answer every time.

Please correct me if I am wrong.

Cheers,
Ondrej

--
Ondřej Surý (He/Him)
ond...@isc.org

My working hours and your working hours may be different. Please do not feel obligated to reply outside your normal working hours.
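To make the concern concrete, here is a deliberately simplified toy model of the interaction (not the real MTL mode wire protocol): a verifier caches the rungs it has seen, a condensed signature is only useful if its authentication path hangs off a cached rung, and a server that references a fresh rung on every response forces a full-ladder fetch every time. The class name and the rung-as-integer simplification are mine.

```python
# Toy model of the interaction Ondrej describes (NOT the real MTL mode
# protocol): a condensed signature verifies cheaply only if it references
# a rung the verifier already holds; otherwise the verifier must fetch a
# full signed ladder. A server returning a fresh rung each time defeats
# the amortization entirely.

class Verifier:
    def __init__(self):
        self.known_rungs = set()
        self.full_ladder_fetches = 0

    def receive_condensed(self, rung_id):
        if rung_id not in self.known_rungs:
            # No compatible rung cached: must request the full ladder.
            self.full_ladder_fetches += 1
            self.known_rungs.add(rung_id)

honest = Verifier()
for _ in range(100):
    honest.receive_condensed(7)       # server reuses a stable rung

abused = Verifier()
for i in range(100):
    abused.receive_condensed(i)       # malicious: fresh rung each time

print(honest.full_ladder_fetches)  # 1
print(abused.full_ladder_fetches)  # 100
```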
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
On 10/07/2024 14:27, Philip Homburg wrote:
> So the question becomes: do we want some limits in an RFC that everybody
> agrees on, or do we want to keep the current informal system where
> limits are not fixed and people can get unlucky if they exceed limits
> they didn't know existed?

I'm all for a recommended limit. But the rationale for the current one in the draft is bogus. All the root NS set does is put a lower bound on any proposal.

Ray
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
Fujiwara san,

On Tue, Jul 09, 2024 at 07:06:27PM +0900, Kazunori Fujiwara wrote:
> Dear DNSOP,
>
> I submitted new draft that proposes to consider "Upper limit value for DNS".
> If you are interested, please read and comment it.

Some of the recent CVEs to do with excessive processing could indeed do with some kind of limits, for example on the numbers of RRs in DNS messages. However, some CVEs are also caused by unsuitable data structures in current implementations.

The current DNS protocols have been able to evolve so well since 1987 because of their flexibility. I suggest that limits be left to implementations rather than be set in stone in an RFC. Extraordinary DNS data could still produce surprises that vary by implementation, but I feel it's better to leave the flexibility of the protocol as it is.

Mukund
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
Hi Mukund,

On 10/07/2024 16:57, Mukund Sivaraman wrote:
> The current DNS protocols have been able to evolve so well since 1987
> because of their flexibility. I suggest that limits be left to
> implementations rather than be set in stone in an RFC. Extraordinary DNS
> data could still produce surprises that vary by implementation, but I
> feel it's better to leave the flexibility of the protocol as it is.

I agree about the flexibility and evolution in general, but with my implementer hat on I don't want those kinds of limits to be left to implementations, because implementations cannot back arbitrary limit choices without documents/research. CVEs help with that though :)

And in order to resolve a dispute on limit values and get past the "but it works on x.x.x.x" arguments, we resort to flag days.

When I mentioned a "sensible number" for RRs before, I was thinking of roughly a generous doubling of the value someone would normally expect, which could leave ample room for evolution while still being more restrictive than the unlimited practice of today.

Best regards,
-- Yorgos
[DNSOP] Re: Dnsdir last call review of draft-ietf-dnsop-zoneversion-10
> On Jul 10, 2024, at 3:09 AM, Nicolai Leymann via Datatracker wrote:
>
> The draft is going to be published as Informational RFC. The document is
> well written and defines an EDNS option which can be used for debugging
> purposes.

Thank you for the review! Note that as of -09 the intended status has been changed to Standards Track.

DW
[DNSOP] Re: Fwd: New Version Notification for draft-ietf-dnsop-ns-revalidation-07.txt
Thanks for the reference Gio (and Raffaele, who also pointed this out to me).

We're citing your paper now in our work-in-progress copy (see https://github.com/shuque/ns-revalidation/commit/5e52689 ), so it will be part of the next version.

-- Willem

Op 08-07-2024 om 12:55 schreef Giovane C. M. Moura:
> Hi Willem,
>
> We've got a peer-reviewed reference[0] that can help back up some of the
> claims in the draft.
>
> ```
> 2. Motivation
>
> There is wide variability in the behavior of deployed DNS resolvers
> today with respect to how they process delegation records. Some of them
> prefer the parent NS set, some prefer the child, and for others, what
> they preferentially cache depends on the dynamic state of queries and
> responses they have processed.
> ```
>
> Section 4 in [0] covers a bunch of such cases with RIPE Atlas, and we
> see just that, and Section 5 evaluates some resolver software
> individually. In short: it backs up what you say.
>
> ```
> The delegation NS RRset at the bottom of the parent zone and the apex
> NS RRset in the child zone are unsynchronized in the DNS protocol.
> Section 4.2.2 of [RFC1034] says "The administrators of both zones
> should insure that the NS and glue RRs which mark both sides of the
> cut are consistent and remain so."
> ```
>
> We found 13M domains having parent/child NS set inconsistency, from
> .com, .org, and .net, which amounts to 8% of the total.
>
> thanks,
>
> /giovane
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
On Wed, Jul 10, 2024 at 05:10:47PM +0200, Yorgos Thessalonikefs wrote:
> Hi Mukund,
>
> I agree about the flexibility and evolution in general but with my
> implementer hat on I don't want those kind of limits to be left to
> implementations because implementations cannot back arbitrary limit
> choices without documents/research. CVEs help with that though :)

Even RFC 1034 says "Bound the amount of work", but explicitly prescribed numbers in an RFC may end up being arbitrary.

When there is a difference between something working well and not working well, it may be due either to too much work (too much NS lookup indirection, for example) or to implementation inefficiency (use of data structures that are inefficient, or resource limits on that particular platform, such as the amount of memory available).

A recent CVE had to do with O(N^2) linked list traversals that were problematic when parsing large DNS messages. A different implementation could have parsed such messages somewhat more efficiently. DNS messages are not well suited to being parsed efficiently, as their contents have to be scanned sequentially to get at further data, and matched up (e.g., in building RRsets). So even an efficient implementation may not perform well on thousands of RRs in a message. A limit on section counts may therefore be appropriate, but the limit that an implementation is able to perform well with is best decided by its implementors. There may be implementations that perform well with thousands of RRs on commodity hardware and support very large RRsets.

I cannot describe other CVEs that are in the works, but some of them are due to deficient data structures that need not affect other implementations. There are already limits in place in implementations for a variety of functions. A draft can document cases that may be limited, with details of what would happen without limits, so implementors are aware. My only suggestion is that prescribing numbers seems like it might go too far.

Mukund
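The O(N^2) linked-list pattern Mukund alludes to is a generic bug class rather than anything specific to one resolver. A minimal sketch of it, and of the O(N) tail-pointer fix, looks like this (pure illustration; names are mine, not code from any CVE'd implementation):

```python
# Minimal sketch of the O(N^2) bug class: appending each parsed record to
# a singly linked list by walking from the head costs O(list length) per
# append, so parsing a message with N records costs O(N^2) overall.
# Keeping a tail pointer makes the same job O(N).

class Node:
    __slots__ = ("rr", "next")
    def __init__(self, rr):
        self.rr, self.next = rr, None

def parse_quadratic(n):
    """Append via full traversal from the head: O(n^2) total."""
    head = None
    for i in range(n):
        node = Node(i)
        if head is None:
            head = node
        else:
            cur = head
            while cur.next:          # walk the whole list on every append
                cur = cur.next
            cur.next = node
    return head

def parse_linear(n):
    """Append via a remembered tail pointer: O(n) total."""
    head = tail = None
    for i in range(n):
        node = Node(i)
        if tail is None:
            head = tail = node
        else:
            tail.next = node
            tail = node
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.rr)
        head = head.next
    return out

print(to_list(parse_quadratic(5)) == to_list(parse_linear(5)))  # True
```

Both variants build the same list; only the cost differs, which is why a message crafted with tens of thousands of RRs hurts one implementation and not another.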
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
I see several different directions this could go that might be useful.

1. "DNS at the 99th percentile"

Rather than normatively declare limits on things like NS count or CNAME chain length, it would be interesting to measure behaviors out in the real world. How long can your CNAME chain be before resolution failure rates exceed 1%? How many NS records or RRSIGs can you add before 99% of resolvers won't try them all?

2. "DNS Lower Limits"

Similar to the current draft, but a change of emphasis: instead of setting upper bounds on the complexity of zones, focus on setting lower bounds on the capability of resolvers.

3. "DNS Intrinsic Limits"

Given the existing limits in the protocol (e.g. 64 KB responses, 255-octet names), document the extreme cases that might be challenging to resolve. This could be used to create a live test suite, allowing implementors to confirm that their resolvers scale to the worst-case scenarios.

4. "DNS Proof of Work"

In most of these cases, the concern is that a hostile stub can easily request resolution of a pathological domain, resulting in heavy load on the recursive resolver. This is a problem of asymmetry: the stub does much less work than the resolver in each transaction. We could compensate for this by requiring the stub to increase the amount of work that it does. For example, we could:

* Recommend that resolvers limit the amount of work they will do for UDP queries, returning TC=1 when the limit is reached.
* Create a system where stubs pad their UDP query packets to prevent reflection-amplification attacks.
* Develop a novel proof-of-work extension, e.g. a continuation system that requires the stub to reissue heavy queries several times before getting the answer.

--Ben

From: Kazunori Fujiwara
Sent: Tuesday, July 9, 2024 6:06 AM
To: dnsop@ietf.org
Subject: [DNSOP] draft-fujiwara-dnsop-dns-upper-limit-values

> Dear DNSOP,
>
> I submitted new draft that proposes to consider "Upper limit value for
> DNS". If you are interested, please read and comment it. I will attend
> the IETF Hackathon. I would like to hear comments about the draft.
>
> Abstract: There are parameters in the DNS protocol that do not have
> clear upper limit values. If a protocol is implemented without
> considering the upper limit, it may become vulnerable to DoS attacks,
> and several attack methods have been proposed. This draft proposes
> reasonable upper limit values for DNS protocols.
>
> Name: draft-fujiwara-dnsop-dns-upper-limit-values
> Revision: 00
> Title: Upper limit value for DNS
> Date: 2024-07-08
> Group: Individual Submission
> Pages: 6
> URL: https://www.ietf.org/archive/id/draft-fujiwara-dnsop-dns-upper-limit-values-00.txt
> Status: https://datatracker.ietf.org/doc/draft-fujiwara-dnsop-dns-upper-limit-values/
> HTMLized: https://datatracker.ietf.org/doc/html/draft-fujiwara-dnsop-dns-upper-limit-values
>
> -- Kazunori Fujiwara, JPRS
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
> I see several different directions this could go that might be useful.
>
> 1. "DNS at the 99th percentile"
>
> Rather than normatively declare limits on things like NS count or CNAME
> chain length, it would be interesting to measure behaviors out in the
> real world. How long can your CNAME chain be before resolution failure
> rates exceed 1%? How many NS records or RRSIGs can you add before 99% of
> resolvers won't try them all?

That has a bit of a risk that we need a new document every year.

> 2. "DNS Lower Limits"
>
> Similar to the current draft, but a change of emphasis: instead of
> setting upper bounds on the complexity of zones, focus on setting lower
> bounds on the capability of resolvers.

This is the same thing. If some popular resolvers implement the lower bound, then it effectively becomes an upper bound on the complexity of zones.

> 3. "DNS Intrinsic Limits"
>
> Given the existing limits in the protocol (e.g. 64 KB responses,
> 255-octet names), document the extreme cases that might be challenging
> to resolve. This could be used to create a live test suite, allowing
> implementors to confirm that their resolvers scale to the worst-case
> scenarios.

Why? Do we really care if a resolver limits the size of RRsets to 32 KB? Tests can help to make sure that resolvers don't crash. But they may just return early when they see something ridiculous.

> 4. "DNS Proof of Work"
>
> In most of these cases, the concern is that a hostile stub can easily
> request resolution of a pathological domain, resulting in heavy load on
> the recursive resolver. This is a problem of asymmetry: the stub does
> much less work than the resolver in each transaction. We could
> compensate for this by requiring the stub to increase the amount of work
> that it does. For example, we could:
>
> * Recommend that resolvers limit the amount of work they will do for UDP
> queries, returning TC=1 when the limit is reached.

That immediately prompts the question of what the 'limit' is. For example, a resolver could set TC=1 after encountering 2 CNAMEs. But I'm sure that will make a lot of people very unhappy.

> * Create a system where stubs pad their UDP query packets to prevent
> reflection-amplification attacks.

That seems unrelated to this draft.

> * Develop a novel proof-of-work extension, e.g. a continuation system
> that requires the stub to reissue heavy queries several times before
> getting the answer.

That raises exactly the same question: what is 'heavy'?
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
On Jul 10, 2024, at 1:03 PM, Philip Homburg wrote:

>> I see several different directions this could go that might be useful.
>>
>> 1. "DNS at the 99th percentile"
>>
>> Rather than normatively declare limits on things like NS count or CNAME
>> chain length, it would be interesting to measure behaviors out in the
>> real world. How long can your CNAME chain be before resolution failure
>> rates exceed 1%? How many NS records or RRSIGs can you add before 99%
>> of resolvers won't try them all?
>
> That has a bit of a risk that we need a new document every year.

That's fine. Not every useful DNS-related document has to be an IETF RFC.

>> 2. "DNS Lower Limits"
>>
>> Similar to the current draft, but a change of emphasis: instead of
>> setting upper bounds on the complexity of zones, focus on setting lower
>> bounds on the capability of resolvers.
>
> This is the same thing. If some popular resolvers implement the lower
> bound, then it effectively becomes an upper bound on the complexity of
> zones.

That's a pretty big "if", especially when multiplied across all the recommendations in the draft. Even then, it wouldn't apply to zones with an unusual client base.

>> 3. "DNS Intrinsic Limits"
>>
>> Given the existing limits in the protocol (e.g. 64 KB responses,
>> 255-octet names), document the extreme cases that might be challenging
>> to resolve. This could be used to create a live test suite, allowing
>> implementors to confirm that their resolvers scale to the worst-case
>> scenarios.
>
> Why? Do we really care if a resolver limits the size of RRsets to 32 KB?

Yes. Unnecessary limits restrict our flexibility even if mainstream use cases don't exist today. Large RRsets have been considered in many contexts over the years, most recently for post-quantum keys and signatures.

> Tests can help to make sure that resolvers don't crash. But they may
> just return early when they see something ridiculous.
>
>> 4. "DNS Proof of Work"
>>
>> In most of these cases, the concern is that a hostile stub can easily
>> request resolution of a pathological domain, resulting in heavy load on
>> the recursive resolver. This is a problem of asymmetry: the stub does
>> much less work than the resolver in each transaction. We could
>> compensate for this by requiring the stub to increase the amount of
>> work that it does. For example, we could:
>>
>> * Recommend that resolvers limit the amount of work they will do for
>> UDP queries, returning TC=1 when the limit is reached.
>
> That immediately prompts the question of what the 'limit' is.

The limit is not standards-relevant. It could be "10 milliseconds of CPU time" or "3 cache misses" or whatever. The stub doesn't need to know; it just retries over TCP as already required.

> For example, a resolver could set TC=1 after encountering 2 CNAMEs. But
> I'm sure that will make a lot of people very unhappy.

A resolver can return TC=1 for all UDP queries if it wants, and this is often discussed as a DoS defense mechanism. Returning TC=1 for 1% of queries should not be a serious problem for anyone.

>> * Create a system where stubs pad their UDP query packets to prevent
>> reflection-amplification attacks.
>
> That seems unrelated to this draft.
>
>> * Develop a novel proof-of-work extension, e.g. a continuation system
>> that requires the stub to reissue heavy queries several times before
>> getting the answer.
>
> That raises exactly the same question: what is 'heavy'?

Implementation-defined. There's no need to standardize it; the stub just "continues" the query until it gets an answer or loses patience.
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
> On 10 Jul 2024, at 9:23 AM, Ben Schwartz wrote:
>
> I see several different directions this could go that might be useful.
>
> 1. "DNS at the 99th percentile"
> ...
> 2. "DNS Lower Limits"
> ...
> 3. "DNS Intrinsic Limits"
> ...
> 4. "DNS Proof of Work"
> ...

The 99th percentile begs the obvious question: 99% of what? Some "resolvers" handle queries for tens of millions of users (or more), some handle queries for a single user. This kind of threshold measurement runs the risk of assuming that all resolvers are "equal" in some sense when in fact they are not. I can see what you are trying to get at here, Ben, but there is a non-trivial set of unanswered measurement questions behind such a proposition.

We've seen in other scenarios (the IPv6 minimum unfragmented packet size, for example) that lower limits are more useful than upper bounds that do not have underlying protocol constraints. Setting a minimum capability level for resolvers, and saying that if a particular configuration exceeds such lower bounds of capability then not all resolvers may cope, seems (to me) a better way of defining such concepts.

Geoff
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
> From: Ray Bellis
> I disagree with the rationale for 13 name servers.
>
> The root (and .com) have that because it was what would fit into
> packets of a particular size given their naming scheme and that
> scheme's efficient compressibility.

Yes, I know where the "13" came from. The BCP document should include what is currently required and used (at least for the root and TLDs). Then, at least "13" name servers should be allowed.

> If there is to be a recommended limit, it should be specifically for
> packet size reasons, and not just "because this is what the root
> does".
>
> IIRC, Vixie et al wrote a draft on this, but it didn't reach RFC
> status.
>
> Ah, there it is:
>
> https://datatracker.ietf.org/doc/html/draft-ietf-dnsop-respsize-15.txt

I know the draft. If we think about packet-size-based limits, then even though TCP can handle 64 KB of DNS data, I would like to set a limit based on the sizes of 512, 1232, and 1400 octets that can be handled by UDP without fragmentation. In the case of PQC, I would like to discuss the part excluding the huge DNSKEY and RRSIG.

Regards,

-- Kazunori Fujiwara, JPRS
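Deriving limits from the 512/1232/1400-octet UDP budgets, as suggested here, reduces to simple arithmetic. The per-record sizes below are simplified wire-format estimates (owner names via 2-octet compression pointers, fixed illustrative name lengths), so the outputs are illustrative only, not proposed values.

```python
# Illustrative arithmetic for size-derived limits: given a UDP payload
# budget (512 classic, 1232 a common EDNS(0) default, 1400 near-MTU), how
# many nameservers, each with one A and one AAAA glue record, fit in a
# referral? All per-record sizes are simplified estimates.

HEADER = 12        # fixed DNS header
RR_FIXED = 10      # TYPE + CLASS + TTL + RDLENGTH

def max_nameservers(budget, qname_len=20, target_len=20):
    question = qname_len + 4                              # QNAME + QTYPE + QCLASS
    per_ns = 2 + RR_FIXED + target_len                    # one NS RR, owner compressed
    per_glue = (2 + RR_FIXED + 4) + (2 + RR_FIXED + 16)   # one A + one AAAA RR
    available = budget - HEADER - question
    return max(available // (per_ns + per_glue), 0)

for budget in (512, 1232, 1400):
    print(budget, max_nameservers(budget))
```

The interesting output is the shape, not the exact counts: each size budget implies a different nameserver ceiling, which is the kind of size-driven reasoning the respsize draft applied.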
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
> Even RFC 1034 says "Bound the amount of work", but explicitly
> prescribed numbers in an RFC may end up being arbitrary.
>
> When there is a difference between something working well and not
> working well, it may either be due to too much work (too much NS
> lookup indirection for example) or implementation inefficiency (use
> of data structures that are inefficient, or resource limits on that
> particular platform such as amount of memory available).

That is a different issue. The issue here is that recursive resolvers implement arbitrary default limits to avoid problems, and any zone that exceeds those limits is in trouble. At any moment a popular resolver can release a new version of the software with lower limits, breaking zones that worked before.

That is a very unstable situation, which can mostly be avoided by only ever increasing limits. But when new attacks suggest that limits need to be lowered, this becomes a real risk.

> So a limit on section counts may be appropriate, but the limit that
> an implementation is able to perform well with is best decided by
> its implementors. There may be implementations that may perform
> well with thousands of RRs on commodity hardware and support very
> large RRsets.

How does that help a zone owner? A zone that only works with some resolvers?
[DNSOP] Re: draft-fujiwara-dnsop-dns-upper-limit-values
> A resolver can return TC=1 for all UDP queries if it wants, and
> this is often discussed as a DoS defense mechanism. Returning
> TC=1 for 1% of queries should not be a serious problem for
> anyone.

It seems like a very interesting experiment: just set TC=1 after 8 CNAMEs. I think some content providers would become very unhappy if we did that.