On 1/6/2020 6:21 PM, Wessels, Duane wrote:
Hello Mike, thanks for the feedback.

Hi Duane -




On Jan 4, 2020, at 5:14 PM, Michael StJohns <m...@nthpermutation.com> wrote:

Hi Tim et al -

I read through this back a few versions ago and mostly thought it harmless as 
an experimental RFC.  I'm not sure that it's quite ready for prime time as a 
Standards track RFC.

Here's what I think is missing:

1) A recommendation for the maximum size of the zone (and for that matter the 
maximum churn rate).  This is hinted at in the abstract, but missing from the 
body of the document.
I am reluctant to add this.  As John said, I think it won't age well.  I think 
there is no obvious size at which to make a recommendation.  For uses cases 
such as CZDS / zone file access, I see no harm at all to add ZONEMD for even 
very large zones.  What might be missing is a paragraph that says those that 
publish ZONEMD records need to be aware of the possible consequences it would 
have on the consumers of their zone data.

As I suggested in one of my messages, a good start would be to give an idea of how long it takes to digest zones of various sizes on commodity hardware. Going further, talk about the ratio of that time to the typical update frequency of the zone. E.g., a zone digest of 5 minutes and a transfer time of 10 minutes against a zone update every 24 hours gives ratios of 1:288 and 1:144, which are probably acceptable; a zone digest of 30 minutes and a transfer time of 2 hours against an update every 4 hours gives ratios of 1:8 and 1:2, which are probably stretching it.

At least give the zone admin and consumer some idea of when doing this is just going to be a lost cause.
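To put numbers on that, a quick back-of-the-envelope sketch (the figures are the hypothetical ones from the example above, not measurements):

```python
# Ratio of the zone's update interval to the time a task (digest or
# transfer) takes. Higher means more headroom before ZONEMD becomes
# a lost cause for that zone.
def headroom(task_minutes, update_minutes):
    return update_minutes / task_minutes

DAY = 24 * 60

# digest 5 min, transfer 10 min, zone updated daily -> 288:1 and 144:1
assert headroom(5, DAY) == 288
assert headroom(10, DAY) == 144

# digest 30 min, transfer 2 h, zone updated every 4 h -> 8:1 and 2:1
assert headroom(30, 4 * 60) == 8
assert headroom(2 * 60, 4 * 60) == 2
```

Where exactly the cutoff lies is a judgment call, but the document could at least name the ratio as the thing to watch.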




2) For each of the use cases, an explanation of how this RRSet actually 
mitigates or solves the identified problem.  E.g. at least a paragraph for 
each of the subsections of 1.3.  That paragraph should lay out why the receiver of 
the zone should actually want to do this verification and the cost/benefit of 
that for the end user.
As one of the coauthors I feel the use cases are pretty self explanatory, but 
I'm willing to be convinced by others.

OK.  The point is not to self-approve, but to get a few non-authors to see if they can figure out what you're talking about here and whether they're ever going to see this use case.  E.g. reach out to the consumer side for each of the 4 cases and see if you can get some idea of a) would they actually use this, and b) what would they do if validation failed or succeeded?




3)   Section 2 uses SHOULD or MUST related to data content rather than 
protocol.   That's a problem in that humans are notorious for making mistakes 
and screwing up the records.

Thanks, I think I see what you're saying.  Expect changes there.


  This section describes the ZONEMD Resource Record, including its
    fields, wire format, and presentation format.  The Type value for the
    ZONEMD RR is 63.  The ZONEMD RR is class independent.  The RDATA of
    the resource record consists of four fields: Serial, Digest Type,
    Parameter, and Digest.

    This specification utilizes ZONEMD RRs located at the zone apex.
    Non-apex ZONEMD RRs are not forbidden, but have no meaning in this
    specification.

Instead - "non-apex ZONEMD RRs MUST be ignored by the receiving client".
The current text was agreed to during earlier working group discussion.  The problem with 
"ignore" (as John points out) is that it could mean the non-apex RR should be 
omitted from the zone.

At one point the document said that non-apex ZONEMD was forbidden, with the 
implication that if found the whole zone should be rejected.  Similar to what 
you might do with a non-apex SOA.  But that seemed pretty harsh and in the end 
we settled on the current text.

Your text "have no meaning in this specification" doesn't actually tell me what to do when I receive a non-apex ZONEMD RR. Maybe instead "Receivers SHALL NOT attempt to validate non-apex ZONEMD RRs.  All other validation rules apply (e.g. inclusion in the HASH using the actual value)"....

Assume that someone will screw up and place a ZONEMD RR where it shouldn't be.  Figure out what that does to the validation process.   Is it included in the hash?  If so, does it get included with the actual value of its fields or using the placeholder format?  Or do you check it both ways?

If you said forbidden and if included, zone doesn't validate I'd be happy too and that would be the simplest.
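For concreteness, a minimal sketch of pulling the RDATA fields apart (the 32/8/8-bit field widths are assumed from the Section 2 description of Serial, Digest Type, Parameter, and Digest):

```python
import struct

# Parse ZONEMD RDATA as described in Section 2 of the draft:
# Serial (uint32), Digest Type (uint8), Parameter (uint8),
# Digest (everything remaining).
def parse_zonemd_rdata(rdata: bytes):
    serial, digest_type, parameter = struct.unpack("!IBB", rdata[:6])
    return serial, digest_type, parameter, rdata[6:]

# A SHA-384 digest is 48 bytes; serial value is illustrative only.
rdata = struct.pack("!IBB", 2020010401, 1, 0) + b"\x00" * 48
serial, dt, param, digest = parse_zonemd_rdata(rdata)
assert (serial, dt, param, len(digest)) == (2020010401, 1, 0, 48)
```

Whatever rule the document picks for non-apex records, a verifier walking the zone needs to know unambiguously which of these parsed RRs feed the validation step and which feed only the hash input.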



    A zone MAY contain multiple ZONEMD RRs to support algorithm agility
    [RFC7696] and rollovers.  Each ZONEMD RR MUST specify a unique Digest
    Type and Parameter tuple.

"A client that receives multiple ZONEMD RRs with the same DT and Parameter MUST try 
to verify each in turn and MUST accept the zone if any verify".
and "If there are multiple ZONEMD RRs with distinct DT and Parameters, the zone is 
acceptable if the client can verify at least one of those RRs"
I don't understand the use case for this.  IMO multiple ZONEMD RRs with same DT 
and Parameter, but different digest value is an error and should not be allowed.

Assume the human preparing the zone left the old RR in place and also forgot to update the serial number.   So you have two RRs one of which will validate and one of which won't.

After consideration, I think I'm actually OK with section 4 bullet 4 so you can void this comment.
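The uniqueness rule quoted above is mechanical enough to sketch (RR representation is hypothetical, just tuples for illustration):

```python
# Each ZONEMD RR in a zone must carry a distinct
# (Digest Type, Parameter) tuple; a repeat of the same tuple --
# e.g. a stale RR a human forgot to remove -- is an error.
def unique_tuples(zonemd_rrs):
    """zonemd_rrs: iterable of (serial, digest_type, parameter, digest)."""
    seen = set()
    for _serial, digest_type, parameter, _digest in zonemd_rrs:
        if (digest_type, parameter) in seen:
            return False
        seen.add((digest_type, parameter))
    return True

assert unique_tuples([(1, 1, 0, b"old"), (1, 2, 0, b"new")])        # rollover: OK
assert not unique_tuples([(1, 1, 0, b"old"), (1, 1, 0, b"new")])    # stale duplicate
```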



  It is RECOMMENDED that a zone include only
    one ZONEMD RR, unless the zone publisher is in the process of
    transitioning to a new Digest Type.

Lower case "recommended" here please.
I don't feel too strongly about it, but can you say why?  By my reading of the 
key words BCP upper case is appropriate?

Humans are not protocol elements and MUST/SHOULD can't really apply to them.  The inclusion of multiples is choice by a human, rather than a decision by a computer.




4) 2.1.3 - The parameter field MUST be set to 0 for SHA384-SIMPLE on creation, 
and the client MUST NOT accept the RR if this field is not set to zero for 
SHA384-SIMPLE.
Personally I would be willing to stipulate to that, but not in 2.1.3.  I would 
rather it go in section 4 (verification).

I think that's ok, except that you have "the Parameter field plays no role in digest calculation or verification" which I might read as "skip the fields when doing the digest calculation". Just delete the last sentence I think.
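A sketch of what that verification-time check would look like (the digest type code for SHA384-SIMPLE is assumed to be 1 per the draft):

```python
SHA384_SIMPLE = 1  # digest type code assumed from the draft

# For SHA384-SIMPLE the Parameter field is an invariant: it must be
# zero on creation, and a verifier must not accept the RR otherwise.
# Other digest types may define their own parameter semantics.
def parameter_ok(digest_type, parameter):
    if digest_type == SHA384_SIMPLE:
        return parameter == 0
    return True

assert parameter_ok(SHA384_SIMPLE, 0)
assert not parameter_ok(SHA384_SIMPLE, 7)
```

Note the Parameter field's bytes still go into the digest calculation as part of the RDATA; the check above is only about accepting or rejecting the RR.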



5) 3.1.2 - This is I believe different than how DNSSEC does it?  If it's the 
same, then this is fine, otherwise this protocol should be calculating the 
RRSet wire representation the same as DNSSEC does it.
In my experience, duplicates are suppressed either when a zone is loaded or 
when it is signed.  ZONEMD matches DNSSEC.


Here's how named-checkzone behaves:

$ named-checkzone -i none -o /dev/fd/1 example.com /dev/fd/0
$ORIGIN example.com.
@ 60 SOA a b 1 2 3 4 5
@ 60 NS ns
NS 60 A 192.168.1.1
@ 60 A 127.0.0.1
@ 60 A 127.0.0.1
zone example.com/IN: loaded serial 1
example.com.                                  60 IN SOA         a.example.com. b.example.com. 1 2 3 4 5
example.com.                                  60 IN NS          ns.example.com.
example.com.                                  60 IN A           127.0.0.1
NS.example.com.                               60 IN A           192.168.1.1
OK


And in ldns_dnssec_rrs_add_rr() at 
https://github.com/NLnetLabs/ldns/blob/develop/dnssec_zone.c#L46 you can see at 
the end that equal RRs are silently ignored.

Can you provide a cite?  Not disagreeing - just curious if it's been written down in an RFC somewhere.
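The behavior named-checkzone and ldns exhibit above amounts to this (a sketch, not either implementation's actual code):

```python
# Collapse RRs that are equal in owner, class, type, and RDATA to a
# single instance, preserving first-seen order -- the suppression
# step applied when a zone is loaded or signed.
def suppress_duplicates(rrs):
    """rrs: iterable of (owner, rrclass, rrtype, rdata) tuples."""
    seen = set()
    out = []
    for rr in rrs:
        if rr not in seen:
            seen.add(rr)
            out.append(rr)
    return out

rrs = [("example.com.", "IN", "A", "127.0.0.1"),
       ("example.com.", "IN", "A", "127.0.0.1")]
assert suppress_duplicates(rrs) == [("example.com.", "IN", "A", "127.0.0.1")]
```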




6) 3.2 - another set of data set MUSTs (the recommended isn't an issue, but 
should probably be lower case) that need guidance for the accepting 
client if the MUST doesn't hold because of human error.
Okay.

7) 3.3 - Probably lower case may for DNSSEC.  The rest of this is operational 
guidance that really doesn't give anything useful for the protocol.

Okay.


8) 3.4.1, there is no reason whatsoever to make the setting of the parameter 
field a SHOULD here.  MUST is correct.
As I said above I'm willing to make this a MUST, but I disagree there are no 
reasons whatsoever.

I think one legitimate reason would be to avoid ossification, or what I think 
they call GREASE in the TLS world.


Nope.  Not even close.  The parameter value has meaning only in conjunction with the digest type.  In this case, '0' means "no special parameters" for SHA384-SIMPLE.  For SHA384-SIMPLE, this is an invariant and you will not be changing the meaning of SHA384-SIMPLE at a later point; or if you do, you're going to be using a different digest number.

This more closely resembles AlgorithmIdentifier - the ASN.1 structure used in X.509 and other PKIX-related protocols to define an algorithm type.



9) 3.4.2 - Third bullet.  See above and also, as currently written here this 
implies you ignore ALL RRs at that owner/class/type/RDATA if there are 
duplicates.  Rephrase at least.
Is this better?

    Include only one instance of duplicate RRs with equal owner, class, type, 
and RDATA.
"Only one instance of duplicate RRs with equal owner, class, type and RDATA SHALL be included"

10) 3.5 - This section needs a bit of re-working.  Generally, what you want to 
say is that if you have ZONEMD RRs, that they have to be published at the same 
time as the matching SOA.
I think you're focusing on the last paragraph of 3.5.  I can attempt to clarify 
it.


You also want to probably make a note somewhere that if the SOA and ZONEMD RR 
do not match on receipt you do ... something?  Not sure what.
Yes, it's there, #5 in section 4.

Yup - found it.




11) I really need to write an RFC on "SHOULD considered harmful (if not 
qualified)".  For section 4, bullet 1 - explain what you mean by SHOULD - e.g. is 
this a configuration option, or an implementation option.  If a configuration option, 
in which cases might a recipient not want to do this?  If an implementation option, why isn't 
this a MUST?  Also, if you don't do the DNSSEC thing, identify the next step to be 
executed (i.e. 4).
How about removing the SHOULD and saying "The verifier first determines ....." ?

How about just "MUST" instead of SHOULD?

Also, I think it's still possible to have zones for which the state is "unknown" - e.g. DNSSEC can neither prove nor disprove whether there's a requirement for a zone to be signed (e.g. zones subordinate to provably insecure zones and for which the resolver has no trust anchors, but the zone itself might actually be signed).  Add guidance for that case please?



11.1) Sorry for the numbering - missed step 4 in section 4 - see point 3 and 
reconcile or remove 3rd and subsequent paras from section 2 to make this 
section the only normative one.

12) 4.1 vs my point 3 above - reconcile, or remove 3rd and subsequent paras 
from section 2 to make this section the only normative one.

13) Missing a section 4.2 which says what you do when a zone doesn't verify.  
Otherwise, what's the point?
I'm in alignment with John Levine's responses on this.  It depends.  And if 
folks are arguing for Experimental then I'd say it doesn't matter.

But if the WG supports Proposed Standard and wants to see such a section 4.2, 
then with the help of name server implementors I would be willing to add a 
section describing what name server software should do if the zone is signed 
with DNSSEC but the ZONEMD doesn't verify.

Let's continue to think about this.  If this is a tool with human-only consumers, then that's fine.  If this is designed to work automatically, provide value-add for the DNS infrastructure, and to "do something" if validation fails, then guidance is needed for what "do something" looks like.



14) Section 6.1, third paragraph is incorrect.  Note that Section 4 step 2 is part of the stuff that's 
skipped if the zone is provably insecure, or if you decide that SHOULD means "I'm lazy and I don't want 
to do it".  E.g. Section 4 does not "REQUIRE" this because of the preceding and enclosing 
"RECOMMENDED".
Okay, will try to fix that.

15) Add a section 6.3 to security considerations which describes the downsides 
of this RR - e.g. that it can make a zone more fragile by requiring 
complete coherence in the zone, and that this is a substantial change both to 
DNSSEC and the original design of DNS.  Or that, applied to a large dynamic zone, 
one may never be able to calculate a valid digest in time, nor have a recipient 
accept it.
I will add something to the security considerations.

DW

Thanks - Mike

I think Experimental is fine.  I'm not sure, without clear text addressing my 
points 1, 2, 13 and 15, that this is useful as a standards track document for 
general use.



Later, Mike






On 1/4/2020 5:30 PM, Tim Wicinski wrote:
All,

The chairs would like to welcome the new year with some work.
The authors and chairs feel this document is ready to move forward.

One thing to note: This document has the status "Experimental", but
the authors feel they've performed their experiments and their current
status is "Standards Track".

This starts a Working Group Last Call for "Message Digest for DNS Zones"

The current version of the draft is available here:
https://datatracker.ietf.org/doc/draft-ietf-dnsop-dns-zone-digest/

The Current Intended Status of this document is: *Standards Track*
Please speak out if the intended status seems incorrect.

Please review the draft and offer relevant comments.
If this does not seem appropriate please speak out.
If someone feels the document is *not* ready for publication,
please speak out with your reasons.

This starts a two week Working Group Last Call process, and ends on:
18 January 2020

thanks
tim



_______________________________________________
DNSOP mailing list

DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
