Dear Fujiwara-san,
I find the idea of the draft very useful, especially as an implementer.
I understand the argument, which I often hear, that DNS has allowed
innovation because there are no limits.
I also hear that implementers should take the advice of RFC 1035 and
apply their own limits where an RFC does not set them explicitly.
These two positions contradict each other in practice: different
implementations choose different hand-wavy limits, and that has proven
to be non-interoperable.
One such example is CNAME chains.
So I am very much in favor of introducing universal, documented limits
on DNS and looking forward to the discussion.
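To illustrate the CNAME chain point, a minimal sketch (not code from
any shipping resolver; MAX_CNAME_CHAIN and resolve_one are invented
for this example):

    MAX_CNAME_CHAIN = 11  # hypothetical; every implementation picks its own

    def resolve(qname, resolve_one):
        # resolve_one(name) -> (answer, cname_target or None)
        for _ in range(MAX_CNAME_CHAIN):
            answer, target = resolve_one(qname)
            if target is None:
                return answer  # terminal answer, chain finished
            qname = target     # follow the CNAME and keep counting
        # A resolver with a larger cap would have kept going here.
        raise RuntimeError("SERVFAIL: CNAME chain exceeds local cap")

Two resolvers that disagree on the cap give different answers for the
same zone data, which is exactly the interoperability problem.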
Some comments on the draft's text:
On the authoritative side, I believe there needs to be a distinction
between primaries and secondaries.
Primaries can refuse to load a zone when limits are exceeded, but
secondaries are garbage-in/garbage-out most of the time.
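A sketch of that asymmetry, with an invented limit and invented
function names:

    MAX_RRS_PER_RRSET = 100  # hypothetical operator-configurable limit

    def load_zone_as_primary(rrsets):
        # A primary can reject the zone outright at load time.
        for owner, rrset in rrsets.items():
            if len(rrset) > MAX_RRS_PER_RRSET:
                raise ValueError("refusing to load: %s RRset too large" % owner)
        return rrsets

    def apply_zone_transfer(rrsets):
        # A secondary typically serves whatever arrived over AXFR/IXFR:
        # garbage in, garbage out.
        return rrsets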
The wording "desirable upper limit" and the low values in the table in
Section 5 caught me off guard.
Could I suggest renaming it to "observed max value"?
Unless I completely misunderstand the concept, which I may, given that
I see "number of CNAME/DNAME chains" with a value of 1.
I understand that the hard limit is the only limit that matters: the
one imposed by resolvers (and primaries, but resolvers are the ones
that matter, in my opinion).
I also understand that an empty "hard limit" column means unlimited.
With that in mind, I would like to comment on:
- DNS message size; I don't think we can or should limit that at all.
A limit here would kill the innovation part and/or PQC,
- number of RRs in an RRSet; this can be limited to a few hundred, I
suppose,
- number of NS RRs and number of glue RRs in a delegation; I believe we
need to do some research before imposing the root zone's operational
practice as a hard limit,
- number of RRSIG RRs for each name and type; a quick note about
Unbound and MAX_VALIDATE_RRSIGS for the discussion (see the sketch
after this list). This is the maximum number of RRSIGs that Unbound
will try to validate. There can still be more RRSIGs present that
Unbound will ignore, for example because it does not understand the
algorithm used. We need to do some research for the actual number
here, especially with the upcoming multi-signer scenarios.
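Here is the promised sketch of the MAX_VALIDATE_RRSIGS behaviour as I
described it; it is not Unbound's actual code, and the value and
helper names are placeholders:

    MAX_VALIDATE_RRSIGS = 8  # placeholder value, not necessarily Unbound's

    def validate_rrset(rrsigs, keys_by_algorithm, crypto_verify):
        attempts = 0
        for sig in rrsigs:
            key = keys_by_algorithm.get(sig.algorithm)
            if key is None:
                continue  # unknown algorithm: ignored, does not count
            attempts += 1
            if attempts > MAX_VALIDATE_RRSIGS:
                return "bogus"  # validation budget exhausted
            if crypto_verify(sig, key):
                return "secure"
        return "bogus"

Note that a wire-format limit on RRSIG count would also have to cover
the ignored signatures, which is why the actual number needs research.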
In general, for the "number of $things in a packet" categories, I
would like to see an observed current operational max value and a hard
limit of roughly double that number.
That would still leave space for innovation, restrict the presence of
thousands of $things, and could be revisited in the future if we ever
need to bump the value up.
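As a toy illustration of that rule (the observed values below are
placeholders, not measurements):

    observed_max = {"RRs per RRSet": 100, "NS RRs per delegation": 13}
    hard_limit = {name: 2 * value for name, value in observed_max.items()}
    # -> {"RRs per RRSet": 200, "NS RRs per delegation": 26}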
As for proceeding, if the working group finds this useful, implementers
can fish for other limits in their codebases and enrich the
recommendation table.
That will benefit homogeneous DNS resolution.
Personally, I would like to see such a document advance, along with a
flag day for resolver implementers.
For implementers specifically, it would free up a lot of time
currently spent responding to resource-exhaustion attacks on DNS and
to "but that other resolver works" arguments.
Best regards,
-- Yorgos
On 04/03/2025 10:45, Kazunori Fujiwara wrote:
Dear dnsop WG,
I submitted draft-fujiwara-dnsop-dns-upper-limit-values-02
Upper limit values for DNS.
URL:
https://datatracker.ietf.org/doc/draft-fujiwara-dnsop-dns-upper-limit-values/
- added "Desirable upper limit", "hard limit", "protocol limit",
(existing) implementation limit.
- added text about problems about too many/deep use of unrelated name
server names.
- added: "DNS software is expected to make these items configurable
parameters that operators can control."
I'm currently unsure how to proceed. Is the draft useful?
I'd like to hear comments from DNS implementers
who have actually implemented upper limits.
Regards,
--
Kazunori Fujiwara, JPRS <fujiw...@jprs.co.jp>
_______________________________________________
DNSOP mailing list -- dnsop@ietf.org
To unsubscribe send an email to dnsop-le...@ietf.org