On 1/6/2020 1:48 PM, John R Levine wrote:
Well, OK, here's a concrete example. I download the COM zone every
day from Verisign, and also a separate file with an MD5 hash of the
main file. Using RFC 2119 language, what do I do if the hash I get
doesn't match their hash? ...
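(For concreteness, the check being described is roughly the following - a
minimal sketch; the file names and the format of the separate digest file
are assumptions, not Verisign's actual layout:

    import hashlib

    def zone_file_matches(zone_path, md5_path):
        # Hash the downloaded zone file in chunks (the COM zone is large).
        h = hashlib.md5()
        with open(zone_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        # Assume the digest file holds the hex MD5 as its first
        # whitespace-separated token (an assumption, not a documented format).
        with open(md5_path) as f:
            published = f.read().split()[0].lower()
        return h.hexdigest() == published

The open question in this thread is what the caller does when that returns
False - retry, alert a human, keep using yesterday's copy, or something
else.)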
Ok - you've described half of this - the download and the validation.
Let's move on to the use. E.g. you now have a zone with a good ZONEMD -
what application do you throw it into? Or you now have a zone with a bad
(unable to validate) ZONEMD - do you still throw it into the application?
Does the application check the ZONEMD, or do you do that manually? If you
throw the zone into the application without validation, then what? Do you
retry the download? How often, and how long between tries?
No matter how many times you ask, the answer is the same: it depends
on the application.
*Sigh* Not exactly. If you have no automated applications that will use
this, then it depends on the human and I think that's what you mean by
"application". Otherwise, I think you'll find that most automated
applications will want to (or probably should want to) use the same logic.
If it's an AXFR to a secondary authoritative server you might do one
thing; if it's someone collecting stats on TLD zone files, they might
do something else.
I don't see any benefit to anyone to try to guess how people we don't
know will use this in applications we don't know about at unknown
times in the future. Not only are we not the DNS Police, we're not
the Omniscient DNS Experts either.
Please provide a general rule for automated handling of failed
validations.
"Do the same thing you do now when a zone is invalid."
Given that there's no fixed definition of when a "zone" is invalid, I
don't think I can do the "same thing"? See below for a screed on
data protocol vs transport protocol.
All of the examples below are transport protocol violations, and each and
every one of them (except possibly the AXFR one?) results in "reject the
message as being invalidly encoded". And I'm pretty sure that each and
every one of them has specific language in some RFC that says what to do.
Do the same thing you do now when a name is more than 255 octets.
Do the same thing you do now when an RR has an RRTYPE of 252 or 255.
Do the same thing you do now when an SOA isn't long enough to include
all seven fields, or the SOAs at the beginning and end of an AXFR
don't match.
Do the same thing you do now when a TXT character string is longer
than the RR's RDLENGTH.
Do the same thing you do now when the offset in a compressed name
points past the end of the packet, or the pointers create a loop.
I'm sure you can come up with others.
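A couple of those checks, sketched out (illustrative only, not a full DNS
parser; the function name is mine):

    def read_name(msg, offset):
        # Walk a (possibly compressed) domain name in a raw DNS message and
        # raise ValueError for the malformed cases above: names longer than
        # 255 octets, labels longer than 63 octets, compression pointers
        # that point past the end of the packet, and pointer loops.
        total_len = 0
        seen = set()          # label/pointer offsets already visited
        while True:
            if offset >= len(msg):
                raise ValueError("name runs past the end of the message")
            if offset in seen:
                raise ValueError("compression pointer loop")
            seen.add(offset)
            length = msg[offset]
            if (length & 0xC0) == 0xC0:               # compression pointer
                if offset + 1 >= len(msg):
                    raise ValueError("truncated compression pointer")
                target = ((length & 0x3F) << 8) | msg[offset + 1]
                if target >= len(msg):
                    raise ValueError("pointer past the end of the message")
                offset = target
            elif length > 63:
                raise ValueError("label longer than 63 octets")
            else:
                total_len += length + 1               # label plus length octet
                if total_len > 255:
                    raise ValueError("name longer than 255 octets")
                if length == 0:                       # root label ends the name
                    return
                offset += length + 1

In every one of those cases the automated answer really is "reject the
message as invalidly encoded" - which is the kind of rule I'm asking for
here.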
DNS is both a transport protocol and a data protocol. In the beginning,
the data protocol was pretty simple: Change a record in the database
and change the SOA serial at the same time. The client data protocol
was mostly "did the name resolve after following the NS records from the
root?". The primary to secondary protocol was "did the SOA change, if
so download the zone again". There were optimizations here, but that's
really all there was. Even then you could find things like lame
delegations that were violations of the data protocol, but mostly those
were readily identifiable even from the client side.
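For reference, that primary/secondary check amounts to nothing more than a
serial comparison - a sketch, using RFC 1982 sequence-space arithmetic
(function names are mine):

    def serial_gt(s1, s2):
        # RFC 1982 32-bit sequence-space comparison: True if serial s1 is
        # "greater than" serial s2, allowing for wraparound.
        return ((s1 > s2) and (s1 - s2 < 2**31)) or \
               ((s1 < s2) and (s2 - s1 > 2**31))

    def should_transfer(primary_soa_serial, local_soa_serial):
        # The whole pre-DNSSEC "data protocol" for a secondary: re-fetch
        # the zone only when the primary's serial has moved forward.
        return serial_gt(primary_soa_serial, local_soa_serial)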
Then came DNSSEC, which had the effect of turning the zone database from a
completely incoherent set of data (e.g. records were mostly not related to
other records outside their own RRSets, and changes to one did not require
changes to others - aside from exceptions such as CNAME with its special
handling, and of course the SOA serial, which provided a hint to the
secondaries) to a partially coherent database (e.g. additions and deletions
of names required changes to NSEC/NSEC3, additions or changes to an RR
within an RRSet required updates to the signature records, changes to the
DNSKEYs required changes to the DS records in the parent zone, etc.).
DNSSEC can still have data protocol errors creep in that aren't trivially
identified by normal query validation failure (e.g. old signatures, names
where NSEC says there aren't supposed to be names, missing names or records
where NSEC says there are supposed to be names), and we still don't
actually know what to do - automatically - when DNSSEC validation fails.
E.g. what action does your home MTA take when DNSSEC validation fails on an
outbound message server name lookup? How about for IoT clients phoning
home? Yes, the last two may be per-application treatment of validation
failure, but I don't think ZONEMD falls into the same category. Note that
there are a number of other interesting corner cases (e.g. a CNAME
reference from a secured zone to an unknown or untrusted zone generally
gets reported as secure rather than unknown or untrusted).
Now comes ZONEMD, which wants to make the zone database wholly coherent -
ANY change in the contents of the zone will result in a change to the
ZONEMD record, and to verify it you MUST download exactly the same data
as was digested previously. Unlike changes to the SOA serial, this
record binds the entire zone to a very specific set of bits.
AFAICT, ZONEMD verification will output one of "valid" (hash matched),
"secure valid" (hash matched and the ZONEMD chained), "unknown" (no
ZONEMD), "invalid" (hash did not match, or the ZONEMD did not chain when
it was supposed to), or "???" (I was told there was a ZONEMD but it's not
here - where's my ZONEMD? Or why are there 6 ZONEMD records, and why are 3
of them duplicates? Or other failures of the data protocol that a
well-meaning human might introduce).
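Which is to say: the receiver-side decision is small enough to write down.
A sketch of the sort of guidance I'm asking for (the outcome names are
mine, not the draft's, and the handling shown is one possible policy, not
anything the draft recommends):

    from enum import Enum, auto

    class ZonemdOutcome(Enum):
        SECURE_VALID = auto()   # digest matched and the ZONEMD chained
        VALID        = auto()   # digest matched, no chain to the ZONEMD
        UNKNOWN      = auto()   # no ZONEMD in the zone at all
        INVALID      = auto()   # digest mismatch, or chain broken when expected
        MALFORMED    = auto()   # ZONEMD expected but missing, duplicates, etc.

    def accept_zone(outcome):
        # Return True if the freshly transferred copy should be put into use.
        if outcome in (ZonemdOutcome.SECURE_VALID, ZonemdOutcome.VALID):
            return True
        if outcome is ZonemdOutcome.UNKNOWN:
            return True    # policy choice: accept, but note it's unverifiable
        # INVALID / MALFORMED: keep the last good copy, schedule a re-fetch,
        # and tell an operator.
        return False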
Are you telling me that making a recommendation to a receiver at the front
of the RFC - covering, even skimpily, what it should do for each of those
outputs - is out of scope for the document? OK - then I say this is no
better than an EXPERIMENTAL RFC.
Later, Mike
Regards,
John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail. https://jl.ly