On 1 Jun 2018 at 12:51, Shane Kerr wrote:
Wessels, Duane:

On May 25, 2018, at 11:33 AM, 神明達哉 <jin...@wide.ad.jp> wrote:

At Wed, 23 May 2018 15:32:11 +0000,
"Weinberg, Matt" <mweinberg=40verisign....@dmarc.ietf.org> wrote:

We’ve posted a new version of draft-wessels-dns-zone-digest.  Of note,
this -01 version includes the following changes:
[...]
We plan to ask for time on the dnsop agenda in Montreal.  Your feedback
is welcome and appreciated.

I've read the draft.  I have a few high level comments and specific
feedback on the draft content:

- It was not really clear exactly what kind of problem this digest
   tries to solve, especially given that the primarily intended usage
   is for the root zone, which is DNSSEC-signed with NSEC.

Thank you for the feedback.  We will write a clearer problem statement
in the introduction for the next version.

As I mentioned, I have seen zones broken during transfer in production,
and having a standard way to check for this seems reasonable.

Hmm, can you share some details about your experience?
Did you find out where the data corruption took place?
a) network transfer
b) implementation bugs (e.g. incorrectly received IXFR)
c) on disk
d) some other option?

I wonder if this proposal is worth the complexity ... If we wanted a plain checksum we could use TSIG with a built-in key (all zeroes or so) and get a checksum for the network part with negligible implementation complexity. (The TSIG trick is Mukund's idea, not mine.)
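To illustrate what I mean (a minimal sketch only, not actual TSIG message processing; the all-zeros key and the function name are just placeholders): an HMAC computed with a fixed, well-known key degenerates into a plain integrity checksum that both ends of a transfer can compare.

import hmac
import hashlib

# All-zeros "key" known to everyone: integrity checking only, no authentication.
ZERO_KEY = bytes(16)

def transfer_checksum(wire_messages):
    """Checksum a sequence of wire-format XFR messages, in order."""
    mac = hmac.new(ZERO_KEY, digestmod=hashlib.sha256)
    for msg in wire_messages:
        mac.update(msg)
    return mac.hexdigest()

# Both ends run the same computation over the messages they sent/received;
# a mismatch indicates corruption somewhere on the path (it says nothing
# about who changed the data, since the key is public).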

If we wanted a checksum for on-disk data, well, the outside world has tools for this already ...
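For the at-rest case, even something as trivial as this does the job (a sketch using Python's hashlib; sha256sum or any comparable existing tool is equivalent):

import hashlib

def zone_file_checksum(path, chunk_size=1 << 20):
    # Hash the on-disk zone file in chunks to keep memory use bounded.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()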

This proposal smells a little bit of not-invented-by-dnsop syndrome because it does not build on existing technology. There are already technologies to protect data in transit and technologies to protect data at rest, so we might look at how to reuse them instead of reinventing the wheel just for DNS. Reinvented wheels tend to have sharp edges, especially in the security area :-D


- This digest can't be incrementally updated, that is, you'll need to
   calculate the digest over the entire zone even if just a single
   record is modified (am I correct?).  That's probably an inevitable
   cost for the motivation of providing cryptographically strong
   integrity check, but that's a pity for me.  One case I know of where
   things like "zone digest" is wanted is to ensure consistency for a
   very large zone between primary and secondary servers that are
   synchronized using IXFR.  In principle they must be consistent, but
   operators may want to have a lightweight (albeit not
   cryptographically strong) way to confirm no unexpected events (such
   as an implementation bug) quietly broke the consistency.  Perhaps
   such usage is just outside the scope of this proposal, but since I
   first expected I could use it for this kind of purpose, it was a bit
   disappointing and I wanted to mention it.

Incremental updates could be supported if the working group feels it is
important.  We have a working proof-of-concept implementation of this that
hashes individual RRsets and then XORs them into a final message digest
(thanks to Roy Arends for the suggestion).
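Roughly, the XOR-combining idea looks like this (a sketch only, not the actual proof-of-concept code; canonical RRset serialization is assumed to happen elsewhere and the names are illustrative):

import hashlib

DIGEST_LEN = 32  # SHA-256 output size in bytes

def rrset_hash(canonical_rrset_wire):
    return hashlib.sha256(canonical_rrset_wire).digest()

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def full_zone_digest(canonical_rrsets):
    # Hash each RRset individually and fold the hashes together with XOR.
    digest = bytes(DIGEST_LEN)
    for rrset in canonical_rrsets:
        digest = xor_bytes(digest, rrset_hash(rrset))
    return digest

def apply_incremental_change(digest, removed=(), added=()):
    # XOR is its own inverse, so removing an RRset's hash uses the same
    # operation as adding one; no full-zone rescan is needed.
    for rrset in removed:
        digest = xor_bytes(digest, rrset_hash(rrset))
    for rrset in added:
        digest = xor_bytes(digest, rrset_hash(rrset))
    return digest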

However, this complicates the implementation.  It almost certainly requires
more CPU processing but probably not to an extent that matters significantly.
We could do some simple benchmarks.

If there is desire to follow this path, then we should discuss whether or
not to keep having the zone digest algorithms exactly match the DS digest
algorithms.  For example, digest alg 2 could mean "SHA-256 over the zone
as a whole" while digest alg 3 could mean "SHA-256 digest and XOR individual
RRsets" to support incremental updates.

As I mentioned in my review of the first version of this document, I
think that inserting digest records periodically throughout the zone
could serve to allow incremental updates (as the recipient can cache
unchanged values) and also allow a degree of parallelism.
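Something along these lines, perhaps (a sketch under an assumed fixed-size chunking; the chunk size and serialization are illustrative and not taken from the draft):

import hashlib

CHUNK_SIZE = 1000  # RRsets per intermediate digest; purely illustrative

def chunk_digests(canonical_rrsets):
    # Each chunk digest depends only on its own RRsets, so a recipient can
    # cache digests for unchanged chunks and re-verify only what changed;
    # independent chunks can also be hashed in parallel.
    digests = []
    for i in range(0, len(canonical_rrsets), CHUNK_SIZE):
        h = hashlib.sha256()
        for rrset in canonical_rrsets[i:i + CHUNK_SIZE]:
            h.update(rrset)
        digests.append(h.digest())
    return digests

def final_digest(digests):
    # A digest over the intermediate digests ties the whole zone together.
    h = hashlib.sha256()
    for d in digests:
        h.update(d)
    return h.hexdigest()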

I certainly won't push for it. My main concern is that for large-ish
zones with frequent updates a digest is basically useless without such a
mechanism.

It depends on what you want to protect against: if it is corruption during network transfer, then TSIG is the answer, possibly with a built-in key (e.g. all zeroes to make it zero-config / checksum-only).

I think we first need to answer the question of why existing technologies do not fit the purpose.

--
Petr Špaček  @  CZ.NIC

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
