Re: [DNSOP] Is DNSSEC a Best Current Practice?

2022-03-10 Thread Colm MacCárthaigh
On Thu, Mar 10, 2022 at 2:59 PM Grant Taylor wrote:
> Aside:  Maybe it's just me, but I feel like there is more perceived
> value in clarifying existing documentation, in the hopes that others
> will be more likely to adopt current best practices, than there is in
> updating things.  Dare I say it, but I feel some urgency to do this.

I think a single BCP doc is a good idea, but here I'd actually go much
further and argue for a significant section in the BCP that
acknowledges that it is also a best current practice not to enable
DNSSEC. That is objectively the most common practice, and it is very
often intentional. I think there's a way to frame it and lay out the
intrinsic trade-offs between internet stability risks and the security
benefits. That framing actually underscores the importance and urgency
of all the best practices that can mitigate the stability risks and
enhance the security. That might more effectively persuade DNSSEC
skeptics. Absent a big change in adoption, a BCP could otherwise seem
quite disconnected from reality (TLD-scale outages, stale
cryptography) and tone-deaf to the skepticism that's out there. "We
hear you" is powerful.

-- 
Colm

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] BCP on rrset ordering for round-robin? Also head's up on bind 9.12 bug (sorting rrsets by default)

2018-06-15 Thread Colm MacCárthaigh
Just a question on this: was the old/classic behavior really
random/shuffled? Or was it that bind would "rotate" through iterations
where the order was the same each time if you think of the rrset list as a
ring, but with a different start and end point within that ring? (That's
what's described here:
https://docstore.mik.ua/orelly/networking_2ndEd/dns/ch10_07.htm)
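The difference the question is getting at can be sketched like this (illustrative Python, neither function is BIND's actual implementation):

```python
import random
from itertools import count

_counter = count()

def cyclic_order(rrset):
    # BIND-style "cyclic": treat the RRset as a ring; each response starts
    # one position further along, but the relative order never changes.
    start = next(_counter) % len(rrset)
    return rrset[start:] + rrset[:start]

def random_order(rrset):
    # "random": a fresh, independent permutation for every response.
    out = list(rrset)
    random.shuffle(out)
    return out

rrset = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
print(cyclic_order(rrset))  # ['192.0.2.1', '192.0.2.2', '192.0.2.3']
print(cyclic_order(rrset))  # ['192.0.2.2', '192.0.2.3', '192.0.2.1']
```

With rotation, an observer who knows the ring can predict the next response; with shuffling, each response is independent.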


On Fri, Jun 15, 2018 at 1:17 PM, Erik Nygren  wrote:

> On Fri, Jun 15, 2018 at 3:52 PM, Mukund Sivaraman  wrote:
>
>> On Fri, Jun 15, 2018 at 02:38:00PM -0400, Bob Harold wrote:
>> > Round-robin is a documented feature that many applications use.
>> Removing
>> > it from DNS resolvers, and then having to add it to a much larger
>> number of
>> > applications, does not seem like a good trade-off.
>>
>> The _default_ in BIND 9.12 was changed from order random to order
>> none. It seems to be missing from the release notes by mistake, but the
>> administrator manual mentions what the default is
>>
>
> We have many years of software that relies on emergent behaviors from the
> current default.
> While pedantically it may be true that these should be treated as
> unordered sets and that
> applications or stub resolver libraries should do some permutations or
> randomized selection,
> that doesn't match the current reality for widely used software (eg, curl
> and ssh, which I'm
> sure is just the tip of the iceberg).
>
> Software should have safe defaults that match common expectations.
> Those common expectations, as demonstrated by the configuration of all
> of the large public resolvers I've tested, as well as by how common
> software behaves,
> is that the order of results is NOT consistent.  In many environments,
> this lack
> of consistency is relied upon for systems to work properly.  Switching to
> consistent
> order is no big deal on a small scale, but a widespread shift (eg, as
> would happen
> due to a change in default in popular software) would almost certainly
> have
> significant operational impact and is something that warrants significant
> discussion
> about the practical implications.
>
> This ambiguity in the current specifications, and the resulting mismatch
> between the pedantic view (rrsets are explicitly unordered, and a
> consistent order is a subset of that) and the current reality
> (applications and services rely on resolvers-at-scale to be explicitly
> inconsistent in the ordering of rrsets), is why I started off by
> proposing that we may need a BCP or informational RFC that describes the
> currently assumed defaults and best practices (i.e., round-robin is
> assumed in many places, so don't consistently order at-scale by default).
>
> Erik
>


-- 
Colm


Re: [DNSOP] BCP on rrset ordering for round-robin? Also head's up on bind 9.12 bug (sorting rrsets by default)

2018-06-15 Thread Colm MacCárthaigh
I think so too; and I wouldn't be so strict on backwards compatibility
there.

That behavior is a side-channel that defeats DNS privacy in some cases.
E.g. I can query a record, watch you send an encrypted query, then query
the record again, and tell what you queried - within some probability at
least.

For that reason, it'd be worth experimenting with an implementation that
does shuffle the results each time.
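A minimal sketch of the inference described above, assuming a cache that uses strict cyclic rotation (the CyclicCache class is hypothetical, not any real resolver's code):

```python
# With cyclic ordering, each answer advances the ring by exactly one
# position, so an attacker probing before and after a victim's encrypted
# query can count how many other queries hit the same cache entry.

class CyclicCache:
    def __init__(self, rrset):
        self.rrset = list(rrset)
        self.offset = 0

    def answer(self):
        out = self.rrset[self.offset:] + self.rrset[:self.offset]
        self.offset = (self.offset + 1) % len(self.rrset)
        return out

cache = CyclicCache(["192.0.2.1", "192.0.2.2", "192.0.2.3"])

before = cache.answer()   # attacker's first probe
_ = cache.answer()        # victim's query; its contents are hidden
after = cache.answer()    # attacker's second probe

# The ring advanced two places between the probes, so exactly one other
# query for this name happened in between.
intervening = before.index(after[0]) - 1
print(intervening)  # 1
```

Shuffling each response independently removes this counter, which is the privacy argument for it.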

On Fri, Jun 15, 2018 at 4:54 PM, Shumon Huque  wrote:

> On Fri, Jun 15, 2018 at 5:55 PM Colm MacCárthaigh wrote:
>
>>
>> Just a question on this: was the old/classic behavior really
>> random/shuffled? Or was it that bind would "rotate" through iterations
>> where the order was the same each time if you think of the rrset list as a
>> ring, but with a different start and end point within that ring? (That's
>> what's described here:
>> https://docstore.mik.ua/orelly/networking_2ndEd/dns/ch10_07.htm)
>>
>
> ISC veterans can confirm, but my recollection is that the earliest
> implementations were indeed as described above - the response RRset was
> cycled/rotated, rather than randomized.
>
> Shumon.
>
>


-- 
Colm


Re: [DNSOP] abandoning ANAME and standardizing CNAME at apex

2018-06-25 Thread Colm MacCárthaigh
On Mon, Jun 25, 2018 at 7:02 AM, Tony Finch  wrote:

> > Even that though requires that the authoritative server be capable of
> > waiting for an asynchronously retrieved value before it can respond.
> >
> > For some authoritative servers that might require a substantial redesign.
>
> That isn't required if the ANAME target records are fetched/updated by an
> out-of-band provisioning process. A server will want to do it this way if
> its query rate is bigger than the number of ANAME targets divided by
> their TTLs.
>

A challenge with that is that many people now use geographic or latency
based DNS routing based on the resolver IP address or EDNS-client-subnet.
That's one of the reasons why Route53's ALIAS works only for targets that
Route53 is authoritative for.

-- 
Colm


Re: [DNSOP] abandoning ANAME and standardizing CNAME at apex

2018-06-25 Thread Colm MacCárthaigh
On Mon, Jun 25, 2018 at 8:06 AM, Ray Bellis  wrote:

> On 25/06/2018 16:05, Paul Wouters wrote:
>
> > Then you might as well use that mechanism to update A/AAAA records and
> > skip the intermediate ANAME?
>
> +1
>
> Apex records are a provisioning problem, not a protocol one.
>

When we implemented ALIAS for Route 53 we looked at a model like that,
where it would be more like an instruction to merely import certain DNS
entries. But we found that didn't quite match CNAME in terms of who pays
for the queries (we charge by the query, which is a common model).

As a Route 53 customer, when you ALIAS to something, you don't pay for
queries to those names, the owner of the hidden target name does.  That's
because the target retains control of the TTL, and we didn't want the
target to be able to lower the TTL and increase your bill.

-- 
Colm


Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-01 Thread Colm MacCárthaigh
On Tue, Apr 1, 2014 at 5:39 AM, Olafur Gudmundsson  wrote:

> Doing these big jumps is the wrong thing to do, increasing the key size
> increases three things:
> time to generate signatures
> bits on the wire
> verification time.
>
> I care more about verification time than bits on the wire (as I think that
> is a red herring).
> Signing time increase is a self inflicted wound so that is immaterial.
>
>                     sign       verify     sign/s   verify/s
> rsa 1024 bits  0.000256s  0.000016s   3902.8   62233.2
> rsa 2048 bits  0.001722s  0.000053s    580.7   18852.8
> rsa 4096 bits  0.012506s  0.000199s     80.0    5016.8
>
> Thus doubling the key size decreases verification performance by roughly
> 70%.
>

With those numbers, if the validating resolver uses speculative/optimistic
concurrency [1], then jumping from 1024 to 4096 bits adds ~180us of
user-visible latency to the overall resolution time, and zero in the
cached case.
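The ~180us figure falls directly out of the verify rates quoted above:

```python
# Per-operation verify times, derived from the quoted verify/s columns.
verify_1024 = 1 / 62233.2   # ~16us per 1024-bit verification
verify_4096 = 1 / 5016.8    # ~199us per 4096-bit verification

# Extra latency on a cache miss, when only the final validation sits on
# the critical path (everything else overlaps network round trips).
added_us = (verify_4096 - verify_1024) * 1e6
print(round(added_us))  # 183
```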

There is an impact on the overall capacity of the resolver, though it's a
function of cache-miss-rate (since cache hits need not be verified). Large
centralised resolver operators may face some pressure (anyone doing
validation locally is unlikely to notice), but is it sensible to compromise
your zone security to accommodate that?

[1] There's no need to wait for a response to be validated before
recursing, a validating resolver can first recurse and later backtrack if
the parent signature doesn't verify.
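A toy sketch of that optimistic scheme (assumed names and stand-in delays, not a real resolver): validation of a referral overlaps the next network round trip, so only the last validation adds latency, and speculative work is discarded if validation fails.

```python
import asyncio

async def validate(response):
    # Stand-in for an RSA verification taking ~200us.
    await asyncio.sleep(0.0002)
    return response["valid"]

async def recurse_from(response):
    # Stand-in for the next query's network round trip.
    await asyncio.sleep(0.01)
    return {"answer": "192.0.2.1", "valid": True}

async def resolve(referral):
    # Start validating and recursing at the same time.
    validation = asyncio.create_task(validate(referral))
    speculative = asyncio.create_task(recurse_from(referral))
    if not await validation:
        speculative.cancel()   # backtrack: throw away the speculative work
        raise ValueError("bogus referral")
    return await speculative   # validation overlapped the round trip

result = asyncio.run(resolve({"valid": True}))
print(result["answer"])
```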

-- 
Colm


Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-01 Thread Colm MacCárthaigh
On Tue, Apr 1, 2014 at 6:39 AM, Phillip Hallam-Baker wrote:

> On Tue, Apr 1, 2014 at 9:05 AM, Nicholas Weaver wrote:
>
>> Let's assume a typical day of 1 billion external lookups for a major ISP
>> centralized resolver, and that all are verified.  That's less than 1 CPU
>> core-day to validate every DNSSEC lookup that day at 2048b keys.
>>
>
>
> Yes, I agree, but you are proposing a different DNSSEC model to the one
> they believe in.
>
> The DNS world has put all their eggs into the DNSSEC from Authoritative to
> Stub client model. They only view the Authoritative to Resolver as a
> temporary deployment hack.
>


I think even in the imagined future of validating stub resolvers, there's
still value in centralized caching; it speeds up lookup times. There's no
sense in intermediates caching bad answers, especially since it can lead to
denial of service, so there's still some value in validating centrally too.

-- 
Colm


Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-01 Thread Colm MacCárthaigh
On Tue, Apr 1, 2014 at 3:39 PM, Mark Andrews  wrote:
>
> As I have said many times.  There is a myth that recursive servers
> do not need to validate answers.  Recursive servers will always
> need to validate answers.  Stub resolvers can't recover from recursive
> servers that pass through bogus answers.


This too is going too far; of course they can, they can ask another
recursive resolver.

> Always set CD=1 is also bad advice.  Stub resolvers need to send
> both CD=1 and CD=0 queries and should default to CD=0.  CD=1 should
> be left to the case where they get a SERVFAIL result to the CD=0
> to handle the case where the recursive server's clock is broken or
> it has a bad trust anchor.
>

Defaulting to CD=0 renders DNSSEC, essentially, pointless. Resolvers, and
the path between resolvers and stubs, are the easiest components in the
lookup chain to subvert.

> > So they resisted the idea of an authenticated Stub-client <-> Resolver
> > protocol and they dumb down the crypto so their model will work.
>
> DNSSEC is quite capable of protecting that path.  Why do you need
> a second protocol?
>

That statement is not consistent with setting CD=0 on that path.

-- 
Colm


Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-01 Thread Colm MacCárthaigh
On Tue, Apr 1, 2014 at 5:31 PM, Mark Andrews  wrote:

> > This too is going too far; of course they can, they can ask another
> > recursive resolver.
>
> Which also passes through bogus answers.  I will repeat stub resolvers
> can't recover from recursive servers that pass through bogus answers.
>

DNSSEC is a mitigation against spoofed responses, man-in-the-middle
interception-and-rewriting and cache compromises. These threats are
endpoint and path specific, so it's entirely possible that one of your
resolvers (or its path) has been compromised, but not others. If all of
your paths have been compromised, then there is no recovery; only
detection. But that is always true for DNSSEC.


> > Defaulting to CD=0 renders DNSSEC, essentially, pointless. Resolvers, and
> > the path between resolvers and stubs, are the easiest components in the
> > lookup chain to subvert.
>
> CD=0 tells the resolver to validate the answers it gets if it is
> validating.  It has NOTHING to do with whether you are validating
> or not.  You have fallen for the myth that CD=1 indicates that you
> intend to validate and that CD=0 means that you are not validating.
> CD DOES NOT HAVE THOSE MEANINGS.
>
> DO=1 is the ONLY bit REQUIRED to be set if you are validating.
>
> If DO=1 is set you should assume the client may be validating.
> Named assumes this when deciding if it will intentionally break
> DNSSEC validation down stream.
>

As you pointed out, if I set CD=1, I always expect a meaningful answer
containing signatures that I can validate. If I set CD=0, then an empty
SERVFAIL response is valid. If I get a SERVFAIL, how do I validate that
it's a real error? Your suggestion is to fall back to the CD=1 case and
re-check (or maybe do your own recursion?). Why not just set CD=1 all
along?

Now I agree that a resolver should always validate the signatures anyway,
and if I were writing a caching resolver, I'd never cache rrsets that fail
validation, even if the user has CD set to 1. But that's separate.
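For reference, CD and AD are just flags in the DNS header flags word. A minimal sketch of building a query header with RD and CD set (standard wire format, illustrative code only):

```python
import struct

# DNS header flags (RFC 1035 / RFC 4035 bit positions).
RD = 0x0100  # recursion desired
AD = 0x0020  # authentic data
CD = 0x0010  # checking disabled

def header(qid, flags):
    # id, flags, qdcount, ancount, nscount, arcount
    return struct.pack("!6H", qid, flags, 1, 0, 0, 0)

# A validating stub's query: "give me the data even if it's bogus;
# I'll check the signatures myself."
hdr = header(0x1234, RD | CD)
print(struct.unpack("!6H", hdr)[1] == 0x0110)  # True
```

(The DO bit, by contrast, lives in the EDNS0 OPT record, not this flags word.)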

> > > DNSSEC is quite capable of protecting that path.  Why do you need
> > > a second protocol?
> >
> > That statement is not consistent with setting CD=0 on that path.
>
> I suggest that you go re-read all the DNSSEC RFCs if you believe
> that, because you are categorically WRONG.
>

Please stay civil, and also please don't assume that I haven't read the
DNSSEC RFCs.

If you set CD=0, you can't authenticate the failure case, empty SERVFAILs
can be spoofed or inserted towards the stub. And how do you disambiguate
between SERVFAILs that are validation errors and other server failures?
Without some kind of resolver redundancy (so recovering via retrying
another resolver) I don't see a way. Of course if all of your resolvers
return SERVFAIL, you're left in the same situation - but again, if every
path you have has been compromised, there is no escape.

But this can all be boiled down to;  As you've already written, you agree
that CD=1 is necessary in the failure case - it's the only hope of
authenticating the error. So why bother with CD=0 at all?

-- 
Colm


Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-01 Thread Colm MacCárthaigh
On Tuesday, April 1, 2014, Olafur Gudmundsson  wrote:
>
> you are assuming one validation per question ?
> what if the resolver needs to do 10? That is 1.8ms,
>

I'm not :) As I wrote, if the resolver validates after it has recursed,
only the final, end-of-the-line validation increases the overall latency.
Responses can be assumed to be valid and recursed upon, and that recursion
can be cancelled and backtracked if the response is found to be invalid.

> In all system design we need to take into account where the system can
> be subverted; right now the registration part of the DNS system is the
> weakest link, thus the most cost-effective way to gain hold of a domain
> is to divert the registration.
>

 There are several weak links, and it makes sense to work on them all.

> [1] There's no need to wait for a response to be validated before
> recursing, a validating resolver can first recurse and later backtrack if
> the parent signature doesn't verify.
>
>
> In the scope of things verification times are small compared to network
> delays but can add up if done as a batch operation.
>

Optimistic concurrency doesn't imply batching. Each response can be
validated, while the next question is awaiting a response - one at a time.



-- 
Colm


Re: [DNSOP] CD (Re: Whiskey Tango Foxtrot on key lengths...)

2014-04-01 Thread Colm MacCárthaigh
On Tue, Apr 1, 2014 at 7:49 PM, Evan Hunt  wrote:

> On Tue, Apr 01, 2014 at 06:25:12PM -0700, Colm MacCárthaigh wrote:
> > DNSSEC is a mitigation against spoofed responses, man-in-the-middle
> > interception-and-rewriting and cache compromises. These threats are
> > endpoint and path specific, so it's entirely possible that one of your
> > resolvers (or its path) has been compromised, but not others. If all of
> > your paths have been compromised, then there is no recovery; only
> > detection. But that is always true for DNSSEC.
>
> Consider the scenario in which one authoritative server for a zone
> has been compromised and the others have not, and that one happens to
> have the lowest round-trip time, so it's favored by your resolver.


> If you query with CD=0, a validating resolver detects the problem
> and tries again with another auth server.  It doesn't give up until
> the whole NS RRset has failed.


> If you query with CD=1, you get the bogus data and it won't validate.
>

I don't think this makes much sense for a coherent resolver. If I were
writing a resolver, the behaviour would instead be: try really hard to
find a valid response, exhaust every reasonable possibility. If it can't
get a valid response, then if CD=1 it's ok to pass back the invalid
response and its supposed signatures - maybe the stub will know better; at
least fail open. If CD=0, then SERVFAIL, fail closed.

Although CD means "checking disabled", I wouldn't actually disable
checking, simply because that's stupid (I don't mean to be impolite, but I
don't have a better word to use here). But by preserving the on-the-wire
semantics of the CD bit, I'd preserve effectiveness as a cache, and pass on
what's needed to validate even the failure cases.


-- 
Colm


Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread Colm MacCárthaigh
On Wed, Apr 2, 2014 at 6:30 AM, Edward Lewis wrote:

> I found that there are two primary reasons why 1024 bits is used in zone
> signing keys.
>
>  One - peer pressure.  Most other operators start out with 1024 bits.  I
> know of some cases where operators wanted to choose other sizes but were
> told to "follow the flock."
>
> Two - it works.  No one has ever demonstrated a failure of a 1024 bit key
> to provide as-expected protection.
>

Cryptographic failures are often undemonstrated for decades. If a state
actor has broken 1024b keys, they're unlikely to advertise that, just use
it now and then as quietly as they can.

Secondly, the application of signatures in DNS and the nature of the DNS
protocol itself presents significant risks that don't make a
straightforward comparison easy.

Suppose your goal is to intercept traffic, and you'd like to cause
www.example.com, a signed domain, to resolve to an IP address that you
control.  Now suppose you also happen to have a /16, not unreasonable for a
large actor - small even. If you can craft a matching signature for
www.example.com with even one of your 2^16 IP addresses, you've succeeded.
You don't have to care which particular IP address you happened to craft a
matching signature for.  This property makes it easier to sieve for
matching signatures.
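A back-of-envelope for that multi-target advantage (the generic argument, not a concrete attack on any particular algorithm):

```python
import math

# If ANY of the 2^16 addresses in a /16 you control is an acceptable
# forgery target, a brute-force search for a matching signature succeeds
# ~2^16 times sooner than a search for one fixed target; the effective
# work factor drops by 16 bits.
targets = 2 ** 16
effective_loss_bits = math.log2(targets)
print(effective_loss_bits)  # 16.0
```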

> From these two main reasons (and you'll notice nothing about cryptographic
> strength in there) a third very important influence must be understood - the
> tools operators use more or less nudge operators to the 1024 bit size.
> Perhaps via the default settings or perhaps in the tutorials and
> documentation that is read.
>

Do you think that this would be as relevant to the root zone and large TLDs
though?

-- 
Colm


Re: [DNSOP] CD (Re: Whiskey Tango Foxtrot on key lengths...)

2014-04-02 Thread Colm MacCárthaigh
On Wed, Apr 2, 2014 at 2:40 PM, Mark Andrews  wrote:

> > I don't think this makes much sense for a coherent resolver. If I were
> > writing a resolver, the behaviour would instead be;  try really hard to
> > find a valid response, exhaust every reasonable possibility. If it can't
> > get a valid response, then if CD=1 it's ok to pass back the invalid
> > response and its supposed signatures - maybe the stub will no better, at
> > least fail open. If CD=0, then SERVFAIL, fail closed.
>
> Guess what, resolvers do not work like that.  They are not required
> to work like that.


Nothing can compel any particular resolver to choose a particular
implementation - but I take note of
https://tools.ietf.org/html/rfc6840#section-5.9 and
https://tools.ietf.org/html/rfc6840#appendix-B which recommends it (as a
"SHOULD") and I generally agree with the good reasoning that's in the RFC.

As I wrote, if it were me writing a validating stub resolver, I would
always set CD=1 - and when acting as an intermediate resolver, I would
always make a reasonable effort to find a validating response, even if CD=0
is on the incoming query. I'm certain that at least one resolver does work
like this, and I suspect it's also how Google Public DNS works, based on
some experimentation.


-- 
Colm


Re: [DNSOP] DNS over DTLS (DNSoD)

2014-04-23 Thread Colm MacCárthaigh
TLS seems like a poor choice for any new cryptographic transport: it is a
very complicated protocol, with considerable implementation complexity and
computational and network costs. DTLS seems poorer still, as it is an
adaptation of primitives never intended for datagram transmission.

But feedback on the draft:

   * It's unclear how your protocol would really mitigate an active
attacker sending bogus responses. Won't the attacker still be able to
disrupt the DTLS session? Allowing session multiplexing by query-id likely
amplifies this risk.

   * In DTLS, the ClientHello is in the plain - this presents opportunities
for downgrade attacks and inference making. Considering the proposal
advocates for hardcoding the certificate, why not just use a key from the
off?

   * Some nameservers definitely don't just "not respond" when they get
messages they don't understand :/

   * Is the entire protocol subject to the simplest downgrade attack of
all? Just cause the first server response to be dropped and regular DNS
will be used?

   * How long should session state persist?

   * The network costs of certificate transmission probably pale in
comparison to the computational costs of key negotiation. How should
trivial key-exchange ddos attacks be prevented?

   * TLS Heartbeat messages do not permit asymmetric MTU discovery.

On Wed, Apr 23, 2014 at 6:47 AM, Dan Wing  wrote:

> For discussion.
>
>DNS queries and responses are visible to network elements on the path
>between the DNS client and its server.  These queries and responses
>can contain privacy-sensitive information which is valuable to
>protect.  An active attacker can send bogus responses causing
>misdirection of the subsequent connection.
>
>To counter passive listening and active attacks, this document
>proposes the use of Datagram Transport Layer Security (DTLS) for DNS,
>to protect against passive listeners and certain active attacks.  As
>DNS needs to remain fast, this proposal also discusses mechanisms to
>reduce DTLS round trips and reduce DTLS handshake size.  The proposed
>mechanism runs over the default DNS port and can also run over an
>alternate port.
>
> http://tools.ietf.org/html/draft-wing-dnsop-dnsodtls
>
> -d
>



-- 
Colm


Re: [DNSOP] call to work on edns-client-subnet

2014-05-06 Thread Colm MacCárthaigh
On Tue, May 6, 2014 at 10:18 AM, Joe Abley  wrote:

> On the authority side, support for this option has a potential impact on
> query load. On the recursive side, support for this option has a potential
> impact on cache size.
>

Just to add some limited data; CloudFront (a large CDN) has been using
EDNS0 client subnet for a few months now, and publicly announced a month
ago. In general, the uptick on the authority side has been surprisingly
modest.


> With multiple implementations, there are interop issues.
>

We've noticed some inconsistency around the subnet lengths being required
in responses, but nothing unmanageable. In general, it's been pretty smooth!

The biggest operational problem is probably a lack of support in diagnostic
tools, with an RFC, I'm hopeful we could get a patch into the standard
version of dig - which would be useful for debugging and so on.
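For reference, the option itself is tiny on the wire. A sketch of encoding it per the draft's wire format (which later became RFC 7871); this is an illustrative helper, not dig's code:

```python
import struct

def ecs_option(ipv4, source_prefix):
    # edns-client-subnet EDNS0 option: OPTION-CODE 8, then FAMILY (1 =
    # IPv4), SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH (0 in queries),
    # and the address truncated to the source prefix.
    addr = bytes(int(o) for o in ipv4.split("."))[: (source_prefix + 7) // 8]
    data = struct.pack("!HBB", 1, source_prefix, 0) + addr
    return struct.pack("!HH", 8, len(data)) + data

opt = ecs_option("192.0.2.0", 24)
print(opt.hex())  # 0008000700011800c00002
```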

There might also be a place for an informational document/rfc on
source-dependent answers in general. For example, even for those who
believe that source dependent answers has a place, having source-dependent
NS, DS records or delegation paths can be just plain unworkable.


-- 
Colm


Re: [DNSOP] call to work on edns-client-subnet

2014-05-06 Thread Colm MacCárthaigh
On Tue, May 6, 2014 at 6:04 PM, Jiankang Yao  wrote:

>   Section 3.1.1, Responses Tailored to the Originator, in
> draft-iab-dns-applications-07 has some related discussion of this topic.
>
> From the IAB draft, it seems that IAB does not prefer to tailor dns
> response based on the originator.
>

3.1.1 reads pretty neutral to me, even saying that it "introduces little
harm" (for web portals) and that it has broad adoption in the field.  It
just notes that it doesn't have much support in the community.

But it clearly has broad support on the internet. At this point a majority
of DNS responses are likely based on the originator (that's my guess based
on local data, but it'd be interesting to see real data).

-- 
Colm


Re: [DNSOP] call to work on edns-client-subnet

2014-05-08 Thread Colm MacCárthaigh
On Thu, May 8, 2014 at 9:15 AM, Ralf Weber  wrote:
>
> There is madness, but the madness is in mixing authoritative and recursive
> functions in one server and not in using DNS to direct traffic.


That's a pretty big assumption to jump to. It's pretty unlikely that all
ANAME implementors do that, as they'd have significant availability
problems. I'd say it's likely that some query the ANAME target periodically
and put what they find into the authoritative data store.

Of course that's even less compatible with edns-client-subnet, but there
you go.

-- 
Colm


Re: [DNSOP] call to work on edns-client-subnet

2014-05-16 Thread Colm MacCárthaigh
On Fri, May 16, 2014 at 6:41 AM, Ted Lemon  wrote:

> On May 16, 2014, at 8:18 AM, Andrew Sullivan 
> wrote:
> > But it seems to me we ought to
> > be more enthusiastic than resigned in this case, even if we have to
> > hold our collective nose as well.  Either those who understand how the
> > DNS works will document what to do, or else people who have no clue
> > will make more "improvements".
>
> The big can of worms to which I was referring in the previous message was
> DNSSEC.   Deploying CDN functionality with DNSSEC is hard.   Not
> impossible, but definitely hard.   I'm not convinced it's the right way to
> solve the problem.   But then, I'm not convinced that DNS is the right way
> to solve these problems generally, although as you say, those with
> operational skin in the game seem to have good reason to have chosen this
> solution out of those available.
>

Just to back that up; DNS tricks do play an important role in keeping the
internet robust and healthy. They're a key part of many DDOS mitigation
techniques, and network failure mitigation too. In my experience DNS tricks
are also much better than the alternatives (pure anycast, redirects, etc.).

It is harder to deploy DNSSEC around these tricks, and one must consider
that signed answers are replayable across "views" - but it's not that
significant in comparison to the overall challenges of deploying DNSSEC at
scale.

-- 
Colm


Re: [DNSOP] call to work on edns-client-subnet

2014-05-16 Thread Colm MacCárthaigh
On Fri, May 16, 2014 at 7:24 AM, Nicholas Weaver wrote:

> No its not.  All you have to be willing to do is release the constraint on
> "all signatures offline".  Doing online signatures allows all the CDN
> functionality you want to be DNSSEC validated (not like DNSSEC really does
> much good for A records anyway...).
>

There's no incompatibility between offline signing and returning different
answers to different source IPs; just sign every variant.

And even 4096b RSA signatures only take a handful of milliseconds to
> construct on the fly, you can cache signature validity for minutes even in
> the very dynamic case, and this is one of those operations that parallelize
> obscenely well.
>

You won't survive a trivial DOS from a wristwatch computer with that
approach :) Having static answers around greatly increases capacity, by
many orders of magnitude.

-- 
Colm


Re: [DNSOP] call to work on edns-client-subnet

2014-05-16 Thread Colm MacCárthaigh
On Fri, May 16, 2014 at 7:34 AM, Nicholas Weaver wrote:

>
> On May 16, 2014, at 7:29 AM, Colm MacCárthaigh  wrote:
> >> And even 4096b RSA signatures only take a handful of milliseconds to
> construct on the fly, you can cache signature validity for minutes even in
> the very dynamic case, and this is one of those operations that parallelize
> obscenely well.
> >>
> > You won't survive a trivial DOS from a wristwatch computer with that
> approach :) Having static answers around greatly increases capacity, by
> many orders of magnitude.
>
> Actually, you can.  You prioritize non-NSEC3 records, since that's a
> finite, identifiable, priority set, and cache the responses.  Thus if you
> have 10k valid names, each with 100 different possible responses, and have
> a max 1 minute TTL on signatures, that's only 16k signatures/s in the
> absolute worst case, which you can do on a single, 16 core computer.
>

16k/second is nothing, and I can generate that from a wristwatch computer.
Caching doesn't help, as the attackers can (and do) bust caches with
nonce-names and so on :/  A 16 core machine can do a million QPS relatively
easily - so it's a big degradation.
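The degradation claim as arithmetic (using the illustrative numbers from the thread):

```python
# A box serving ~1M cached, pre-signed answers per second versus ~16k
# on-the-fly signatures per second: once cache-busting nonce names force
# every answer to be signed fresh, capacity drops by a factor of ~62.
static_qps = 1_000_000
signing_qps = 16_000
degradation = static_qps / signing_qps
print(round(degradation))  # 62
```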

-- 
Colm


Re: [DNSOP] call to work on edns-client-subnet

2014-05-16 Thread Colm MacCárthaigh
On Fri, May 16, 2014 at 7:50 AM, Paul Vixie  wrote:
>
> what we do have is advice: "if you're going to do this, here is a way
> that works." in many cases, and DNSSEC is an example, the advice has an
> additional property: "if you want a system like this, here is how
> everybody else is doing it." in the past, the DNS advice offered by the
> IETF has all had both properties -- these things work and we're trying
> to get everybody to do it the same way because we have a vision of the
> whole internet having this new feature."
>

There may still be an opportunity to give sensible prescriptive advice. For
example, it might be sensible to strongly caution against variable NS or DS
rrsets, and that's a behavior resolvers could be advised to reject when
they see it. It's an example of something that's not common, but that some
authoritative operator might experiment with (it's interesting to think
about) and would likely cause some real chaos.

-- 
Colm


Re: [DNSOP] call to work on edns-client-subnet

2014-05-16 Thread Colm MacCárthaigh
On Fri, May 16, 2014 at 7:54 AM, Nicholas Weaver
wrote:

> > 16k/second is nothing, and I can generate that from a wristwatch
> computer. Caching doesn't help, as the attackers can (and do) bust caches
> with nonce-names and so on :/  A 16 core machine can do a million QPS
> relatively easily - so it's a big degradation.
>
> You miss my point.  That server is doing a million QPS, but its only
> providing ~16k/s distinct answers.
>

That's not a typical CDN environment though. CDNs typically have far more
names than that. But you're right; online signing + caching probably is
workable in some environments.


> Your wristwatch computer can only cause a dynamic server a problem if its
> competing with the legitimate query stream's priority category.  The
> "priority" category, assuming 10k names and 100 options/name and 1m max TTL
> requires only a single system to support.
>

I've never been able to make prioritisation really work at microsecond
scale. I can imagine a dedicated process for signing and having prioritized
queues to it, but that would need so much packet copying that it would
likely degrade throughput seriously. Alternatively the DNS handling process
may defer the signing and keep its own queue locally, but that introduces
scheduling overhead. Every time I've tried it, I've found that taking out
prioritisation and smart scheduling made the overall average faster.


> Thus your wristwatch loaders can only act to load the non-priority
> category, which would be NSEC3.  If you actually care about zone
> enumeration, you MUST generate NSEC3 records on the fly, because let's face
> it, NSEC3 in the static case doesn't stop trivial enumeration of the zone.
>

Another approach to this is to pre-sign a fixed number of NSEC3 records per
zone, regardless of the zone's real size or contents :)
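[For context on why on-the-fly NSEC3 is CPU-sensitive: a sketch of the RFC 5155 NSEC3 hash, which assumes the input is the wire-format, lowercased owner name. Each extra iteration is another SHA-1 over the previous digest plus the salt, and an authoritative server must compute this per nonce-name query when denying existence online. The salt and iteration count below are arbitrary.]

```python
# RFC 5155 NSEC3 hash: IH(salt, x, 0) = H(x || salt);
# IH(salt, x, k) = H(IH(salt, x, k-1) || salt).
import hashlib

def nsec3_hash(wire_name: bytes, salt: bytes, iterations: int) -> bytes:
    digest = hashlib.sha1(wire_name + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest

# "example.com." in wire format, with an arbitrary salt and iteration count
h = nsec3_hash(b"\x07example\x03com\x00", bytes.fromhex("aabbccdd"), 12)
```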

-- 
Colm


Re: [DNSOP] fyi [Pdns-users] Please test: ALIAS/ANAME apex record in PowerDNS

2014-09-21 Thread Colm MacCárthaigh
On Sun, Sep 21, 2014 at 11:37 AM, Paul Vixie  wrote:

> i'd be very interested in a standards-track (interoperable; including
> DNSSEC support and AXFR/IXFR) version of this feature. my hope is that you
> will remove out-of-zone capability here, that is, the target of ALIAS
> should have to be authority data in the same zone.
>

But then the feature is pointless; you could just include the record
directly at the apex if you knew what the value should be.

-- 
Colm


Re: [DNSOP] fyi [Pdns-users] Please test: ALIAS/ANAME apex record in PowerDNS

2014-09-22 Thread Colm MacCárthaigh
On Mon, Sep 22, 2014 at 7:06 AM, Tony Finch  wrote:
> The fun bit is that an auth server implementing some kind of proxying
> ANAME is in a position very like Google and OpenDNS. That is, if the
> target of the ANAME is a hostname provided by Akamai or CloudFlare or
> whoever, and if the auth server is going to proxy the answer faithfully,
> then it has to implement client-subnet.

I wonder if the best thing to do would be to define an ANAME/NAME
that can be negotiated by resolvers. If the resolver supports it (it
can let the auth know via EDNS0) then the ANAME/NAME is returned
without resolution. If the resolver doesn't support it, then a
synthetic A/AAAA can be returned.
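[A minimal sketch of how such a signal could ride in an EDNS0 option, using the RFC 6891 option TLV framing. The option code below is made up, drawn from the "Reserved for Local/Experimental Use" range; a real mechanism would need an IANA-assigned code and a full specification.]

```python
# Each EDNS option in the OPT RR's RDATA is:
#   OPTION-CODE (2 bytes) | OPTION-LENGTH (2 bytes) | OPTION-DATA
import struct

OPTION_CODE = 65001   # hypothetical: local/experimental EDNS option range
OPTION_DATA = b""     # presence alone could signal "I understand ANAME"

def edns_option(code: int, data: bytes) -> bytes:
    return struct.pack("!HH", code, len(data)) + data

opt = edns_option(OPTION_CODE, OPTION_DATA)
```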

-- 
Colm



Re: [DNSOP] New version of the DNS terminology draft

2015-01-21 Thread Colm MacCárthaigh
Awesome doc, just some small observations;

TTL: It might be worth using the word 'maximum' in relation to the
TTL; I think there is consensus that TTLs may be truncated.

RRSet:  Are the RRs in an RRSet required to have different data? For
types such as A/AAAA/SRV/MX this makes sense, but maybe not for TXT. I
also think views and other implementation specific features confuse
things here. A user might have 10 A records defined for a given name;
but if their DNS server returns one at a time (say it's using weighted
round robin) - I don't think of the 10 as an RRSet; but rather it's 10
RRSets. What's actually sent on the wire is what matters, I think.
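[A toy sketch of the distinction being drawn, illustrative only; real servers of course deal in wire-format messages, and the addresses are placeholders.]

```python
# A "cyclic" server returns the whole RRset each time, merely starting at a
# different point in the ring; a weighted-round-robin server returns a single
# record per response -- on the wire, ten distinct one-record RRsets rather
# than one ten-record RRset.
records = [f"192.0.2.{i}" for i in range(1, 11)]

def cyclic_response(start: int) -> list[str]:
    # Same ring, rotated: every response carries all ten records.
    return records[start:] + records[:start]

def one_at_a_time_response(start: int) -> list[str]:
    # One record per response, as a weighted round-robin server might send.
    return [records[start % len(records)]]

full = cyclic_response(3)
single = one_at_a_time_response(3)
```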

Stealth server:  this definition seems a bit contradictory. Starts out
by saying it's a slave, but then says it can also be a master.

Suggestions for section 6:  Open resolver, Public resolver

Suggestions for section 7: glue, child-centric, parent-centric





On Mon, Jan 19, 2015 at 2:16 PM, Paul Hoffman  wrote:
> Greetings again. Andrew, Kazunori, and I have done a massive revision on the 
> DNS terminology draft based on the input we got on the -00. We're sure we 
> have further to go, but we wanted people to look over the new version and 
> give feedback. Thanks!
>
> Name:   draft-hoffman-dns-terminology
> Revision:   01
> Title:  DNS Terminology
> Document date:  2015-01-19
> Group:  Individual Submission
> Pages:  14
> URL:
> http://www.ietf.org/internet-drafts/draft-hoffman-dns-terminology-01.txt
> Status: 
> https://datatracker.ietf.org/doc/draft-hoffman-dns-terminology/
> Htmlized:   http://tools.ietf.org/html/draft-hoffman-dns-terminology-01
> Diff:   
> http://www.ietf.org/rfcdiff?url2=draft-hoffman-dns-terminology-01
>



-- 
Colm



Re: [DNSOP] New version of the DNS terminology draft

2015-01-21 Thread Colm MacCárthaigh
On Wed, Jan 21, 2015 at 7:25 AM, Paul Vixie  wrote:
>
> RRSet: Are the RRs in an RRSet required to have different data? For
> types such as A//SRV/MX this makes sense, but maybe not for TXT. I
> also think views and other implementation specific features confuse
> things here. A user might have 10 A records defined for a given name;
> but if their DNS server returns one at a time (say it's using weighted
> round robin) - I don't think of the 10 as an RRSet; but rather it's 10
> RRSets. What's actually sent on the wire is what matters, I think.
>
>
> if their server returns only one RR at a time, then there are ten RRsets,
> as you say. however, such a server would not be speaking the DNS protocol
> as defined, if it starts from a zone file or zone transfer where there is
> within the zone ten RR's for a given name. so, by definition, the current
> text is correct.
>

If there are two zones for the same name, with different views, do the RRs
of a given name and type in both zones form a single rrset? I don't think
so. Zone files aren't a requirement of the DNS protocol either; and I don't
think there's any case to be made that the configuration of multiple rrsets
for the same name/type is not speaking the DNS protocol as defined.


>  Stealth server: this definition seems a bit contradictory. Starts out
> by saying it's a slave, but then says it can also be a master.
>
> in other words, what makes you a master is that someone is transferring from 
> you. the primary master is the only master that by definition cannot also be 
> a slave. the terms "master" and "slave" refer to protocol roles within the 
> AXFR/IXFR transaction.
>
It might be worth updating the text to say "is often also a master" to
make the non-exclusivity between master and slave a bit more clear.


-- 
Colm


Re: [DNSOP] [dns-operations] dnsop-any-notimp violates the DNS standards

2015-03-13 Thread Colm MacCárthaigh
On Thu, Mar 12, 2015 at 4:09 PM, Mark Andrews  wrote:
>
> In message <3d558422-d5da-4434-bded-e752ba353...@flame.org>, Michael Graff 
> writes:
>> What problem are we specifically trying to solve here again?
>
> A non-problem for most of us.
>
>> Michael
>
> If one really wants to reduce the number of packets required with
> SMTP processing, just write an RFC that says A and AAAA records should
> be returned in the additional section if no MX records exist at the
> qname.  This is currently permitted so vendors could do this today.

For some data; Route 53 does do this for MX, NS, SRV and CNAME. It's
never been a problem, and does seem to speed up processing a little.
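[A toy sketch of the server-side logic described: the zone data is hypothetical and this is not a claim about how Route 53 implements it, only an illustration of the additional-section shortcut.]

```python
# When answering MX/NS/SRV/CNAME, also return address records for the
# targets in the additional section so the client skips a follow-up query.
ZONE = {
    ("example.com.", "MX"): ["10 mail.example.com."],
    ("mail.example.com.", "A"): ["192.0.2.25"],
}

def answer(qname: str, qtype: str) -> tuple[list[str], list[str]]:
    answers = ZONE.get((qname, qtype), [])
    additional = []
    if qtype in ("MX", "NS", "SRV", "CNAME"):
        for rdata in answers:
            target = rdata.split()[-1]   # last field is the target name
            for t in ("A", "AAAA"):
                additional += ZONE.get((target, t), [])
    return answers, additional

ans, extra = answer("example.com.", "MX")
# extra carries the MX target's address, saving the client a round trip
```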

-- 
Colm
