On Tue, Feb 23, 2021 at 8:50 AM Ben Schwartz <bemasc=
40google....@dmarc.ietf.org> wrote:

>
>
> On Tue, Feb 23, 2021 at 11:21 AM Samuel Weiler <wei...@watson.org> wrote:
> ...
>
>> Recognizing that I'm likely biased by my history of working on the
>> current "mandatory algorithm rules", I don't buy the need for this
>> complexity.  In practice our "weak" algorithms aren't _that_ weak.
>> And, if they are, we might as well stop signing with them entirely.
>
>
> I think that was true for a long time, but I'm not sure it's still true,
> or will stay true.
>



>   Validator update timelines are Very Slow, so we should be thinking about
> adding features we might need before we need them.
>
> Even if we are currently in a state where zone owners feel like they have
> simple, safe choices, I don't think we should assume that this will remain
> true indefinitely.
>
> This seems like unnecessary further loading of the camel.
>
>
TL;DR: I don't think the solution offered (Strict Mode) is worth pursuing,
but it is helpful in raising awareness of the problem(s).

Ben & Sam:

I think that, while Strict Mode is an interesting technical solution, more
thought may need to be given to the sources of the problems.

I also think some of these are things that can be measured and tested.

That measurement and testing would be an excellent project for the fine
folks at APNIC (the folks I think of as "Geoff and George", though the team
probably includes many others).
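
As a concrete example of the kind of per-resolver testing I have in mind,
here is a rough sketch of probing whether a particular resolver validates a
specific signing algorithm, by querying one correctly signed and one
deliberately broken name under that algorithm. This assumes Python with
dnspython; the test zone names and the resolver address are hypothetical
placeholders, not real infrastructure:

    import dns.flags
    import dns.resolver

    # Hypothetical test names: one validly signed, one deliberately bogus,
    # both signed with the algorithm under test (placeholders only).
    GOOD_NAME = "valid.alg15-test.example."
    BAD_NAME = "bogus.alg15-test.example."

    def probe(resolver_ip, qname):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [resolver_ip]
        r.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC records
        try:
            answer = r.resolve(qname, "A")
            ad = bool(answer.response.flags & dns.flags.AD)
            return ("answer", ad)
        except dns.resolver.NoNameservers:
            # dnspython reports SERVFAIL-from-every-server this way
            return ("servfail", False)
        except Exception:
            return ("error", False)

    def classify(resolver_ip):
        good, good_ad = probe(resolver_ip, GOOD_NAME)
        bad, _ = probe(resolver_ip, BAD_NAME)
        if good == "answer" and good_ad and bad == "servfail":
            return "validates this algorithm"
        if good == "answer" and bad == "answer":
            return "does not validate this algorithm (or at all)"
        return "indeterminate (timeouts, forwarder interference, ...)"

    print(classify("192.0.2.53"))  # placeholder resolver address

Run at scale (as in APNIC's ad-based studies), the same probes yield
population-level numbers rather than per-resolver ones, which is exactly
the data zone operators would want.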

Here's what I think the problem areas are, in a list that is neither
comprehensive nor well organized:

   - Validating resolvers that are being updated, but where deployed
   instances are not tracking those updates
   - Validating resolvers that no one is managing any more (attrition
   victims)
   - Resolvers (validating or not) embedded in firmware that is not
   supported and cannot be updated
   - Forwarders of any flavor
   - Resolvers that are not maintained at all
   - No ability to identify what category a particular server is in, even
   from the client side
   - No ability to identify how many "hops" of forwarding exist, or what
   the forwarding topology looks like (depth, tree/DAG/DG, per-hop
   characteristics)
   - EDNS is only single-hop capable
   - No ability to determine or contact operator of server

The set of algorithms supported by the deployed base of validating resolvers
(or forwarders) impacts the ability of zone operators to deploy new
algorithms.

The need to maintain old algorithms is a function both of how widely support
for new algorithms has been deployed, and of the population of validating
resolvers that do not support them.

One parameter that may be difficult to measure, or even to infer, is the
proportion of validating resolvers that are likely (or certain) never to
have new algorithms added, for some of the reasons itemized above.

Having said all of the above, I think there are a number of approaches
which might be worth considering, either individually or in combination:

   - Improving client capabilities to discover and potentially bypass
   "problem" forwarders (e.g. skipping past the first forwarder and reaching
   either a subsequent forwarder or the eventual recursive resolver)
   - Adding validation to the client, plus logic to either work with, or
   work past, upstream forwarder(s)/resolver(s), e.g. via use of the CD bit
   (see the sketch after this list)
   - Creating zones designed to "break" validators deliberately, for
   specific algorithms, as a "forcing function" (e.g. popular domains that
   will get the attention of the end users behind the bad validators)
   - Developing opt-in operator identification/notification specifications,
   so resolver operators can eventually learn that they need to fix security
   problems or upgrade, and can be alerted to upcoming flag days or a likely
   transition to "everything is indeterminate/insecure" states
   - Developing mechanisms, similar to the existing trust-anchor
   measurement/reporting, that allow collection of other useful measurement
   data (while still maintaining general anonymity?)
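
To make the CD-bit idea above slightly more concrete, here is a minimal
sketch of a client that queries its upstream with CD set (so it still gets
the records back even when the upstream's own validation fails, or when the
upstream does not support the algorithm) and checks the signatures locally.
It uses Python with dnspython; the upstream address is a placeholder and
example.com stands in for a real zone. It validates a single RRset against
the zone's DNSKEYs and does not walk the chain of trust to the root, so it
is an illustration rather than a complete validator:

    import dns.dnssec
    import dns.flags
    import dns.message
    import dns.name
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    UPSTREAM = "192.0.2.53"  # placeholder forwarder/resolver address
    ZONE = dns.name.from_text("example.com.")

    def cd_query(qname, rdtype):
        # CD=1: hand over the data even if your own validation fails;
        # want_dnssec sets DO=1 so RRSIGs are included for local checking.
        q = dns.message.make_query(qname, rdtype, want_dnssec=True)
        q.flags |= dns.flags.CD
        return dns.query.udp(q, UPSTREAM, timeout=3)

    # Fetch the zone's DNSKEY RRset and its signature.
    kresp = cd_query(ZONE, dns.rdatatype.DNSKEY)
    dnskeys = kresp.find_rrset(kresp.answer, ZONE, dns.rdataclass.IN,
                               dns.rdatatype.DNSKEY)
    dnskey_sig = kresp.find_rrset(kresp.answer, ZONE, dns.rdataclass.IN,
                                  dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

    # Fetch the record of interest, plus its signature.
    resp = cd_query(ZONE, dns.rdatatype.A)
    a_rrset = resp.find_rrset(resp.answer, ZONE, dns.rdataclass.IN,
                              dns.rdatatype.A)
    a_sig = resp.find_rrset(resp.answer, ZONE, dns.rdataclass.IN,
                            dns.rdatatype.RRSIG, dns.rdatatype.A)

    # Raises dns.dnssec.ValidationFailure if a signature does not verify.
    dns.dnssec.validate(dnskeys, dnskey_sig, {ZONE: dnskeys})
    dns.dnssec.validate(a_rrset, a_sig, {ZONE: dnskeys})
    print("validated locally, regardless of what the upstream concluded")

A real client would also have to fetch DS records toward the root, handle
TCP fallback, negative answers, and wildcards, which is where most of the
complexity (and the camel's load) actually lives.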

I think more data collection, testing, and analysis might be advisable
before picking a solution to the underlying problems.

Tackling this from a point of view that does not adequately characterize
the reasons for multiple algorithms seems unlikely to address the bigger
issues, and as such will only make the situation (proliferation of
algorithms deployed in parallel) worse.

My perspective is that most zone operators will want to deploy only a
single algorithm, and that improving the rate at which new algorithms can
feasibly be adopted should be an explicit goal.

Making it feasible to use new algorithms exclusively necessarily means
breaking validation for unmanaged or unmaintained resolvers and forwarders.

If it is possible to break validation for those devices/systems, without
adversely affecting clients (due to improved client logic and/or
capabilities), that would seem to be a good thing to pursue.

Having measurements that allow zone operators to determine the impact of
changing algorithms seems extremely valuable.

I don't think these necessarily involve protocol changes per se (at the
signing/validation level at least), but having interoperable
implementations that support measurement is technically still a "DNSOP"
thing, dovetailing with OARC work or other studies by researchers.

Brian
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
