On 29.3.2018 22:05, Mukund Sivaraman wrote:
> On Thu, Mar 29, 2018 at 03:18:45PM -0400, Suzanne Woolf wrote:
>>
>> On Mar 28, 2018, at 11:19 AM, Mukund Sivaraman <m...@isc.org> wrote:
>>
>>> On Wed, Mar 28, 2018 at 10:55:13AM -0400, tjw ietf wrote:
>>>> I would say that most things we have adopted in the past few years do have
>>>> some implementations to reference.
>>>> Not when drafts are adopted, but generally before we head to WGLC I've
>>>> always wanted to see someone
>>>> who implemented the option in some manner.
>>>>
>>>> But yes, agree.
>>>
>>> I'd raise the bar even higher, to see complete implementation in a major
>>> open source DNS implementation when it applies. Sometimes implementation
>>> problems are very revealing (client-subnet should have gone through
>>> this).
>>
>> This is actually quite a good example of another problem:
>> Client-subnet was documented for Informational purposes; it was
>> already in wide use in the public internet and had an EDNS opt code
>> codepoint allocated from the IANA registry.
>>
>> Nothing that happened in DNSOP or the IETF changed that, and I
>> strongly suspect nothing that happened in DNSOP or the IETF could have
>> changed it.
> 
> We found issues in the client-subnet description (in the draft stages)
> that were corrected before it became an RFC; this involved some
> behavioral changes. However, no opportunity was given to address
> fundamental design issues, so what's in the RFC largely documents the
> implementations that preceded it and still has issues. I didn't think
> client-subnet was in wide use when it came to dnsop. Even today it
> doesn't look like it's in wide use.
> 
> You have asked an interesting question, because what happens if
> something is already being used and there's no chance to update/correct
> problems because that would make it a different protocol?
> 
> IMO, we should not put anything through dnsop that does not allow
> reviewing and correcting problems. It is pointless for dnsop to shepherd
> a draft to RFC when there's no possibility to influence it.
> 
> During later stages of the draft via dnsop, we implemented client-subnet
> (resolver side) and found various problems. Some of these were addressed
> in the draft, but it was revealing how poor the state of the
> design/draft was. This is why I strongly suggest: A code contribution of
> a _complete implementation_ of everything described in the draft to one
> of the open source DNS implementations should come sometime during
> adoption, so that
> 
> (a) issues in production implementation are revealed
> 
> (b) it can be tried in the real world to understand how it behaves.
> 
> As you're a co-chair:
> 
> When Bert did that Camel presentation, I felt hooray, here's one of us
> who's talking about what the steady churn of ideas and RFCs is doing to
> DNS implementations. We finish implementing one draft and at almost
> breathless pace, a few more drafts queue up. It's not sustainable,
> esp. as all the existing features also need to be maintained for years.
> 
> Introducing a new RR type that doesn't involve additional processing is
> relatively trivial, and nobody objects to it.
> 
> Introducing something like TCP pipelining, or rather, out-of-order
> query processing, may seem trivial when describing the change, but
> implementing it may require fundamental restructuring of the
> implementation. A proper implementation of client-subnet requires
> changes everywhere within a nameserver, from the client message parsing
> code to the data structures used for the cache and how cache lifetime
> is managed.
> 
> Implementations are being stretched and bent five different ways to
> adapt to the length and breadth of all these newfangled DNS features,
> which are showing up at an alarming pace.
> 
> Bert really hit the mark at the right time. Something needs to be done
> to gate what becomes an RFC. A good way is to require an open source
> implementation. If draft authors cannot supply one, or convince an open
> source implementation to write support for it, the draft can go back to
> where it came from.
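
Mukund's point about client-subnet reaching into the cache can be sketched. The following toy Python is not from any real resolver (the class, names, and the naive linear lookup are purely illustrative); it only shows why an ECS-aware cache key must grow beyond (qname, qtype) to include the scope prefix returned by the authoritative server:

```python
import ipaddress

class EcsCache:
    """Toy resolver cache illustrating one client-subnet cost: the
    key must include the answer's ECS scope prefix, so a single
    (qname, qtype) pair can now occupy many cache slots."""

    def __init__(self):
        self._entries = {}  # (qname, qtype, scoped_net) -> answer

    def _scoped_net(self, client_ip, scope):
        # Truncate the client address to the scope prefix length
        # returned by the authoritative server.
        return ipaddress.ip_network(f"{client_ip}/{scope}", strict=False)

    def put(self, qname, qtype, client_ip, scope, answer):
        key = (qname, qtype, self._scoped_net(client_ip, scope))
        self._entries[key] = answer

    def get(self, qname, qtype, client_ip):
        # Without ECS a lookup was a single dict probe; with ECS we
        # must find any stored scope that covers this client.
        ip = ipaddress.ip_address(client_ip)
        for (name, typ, net), answer in self._entries.items():
            if name == qname and typ == qtype and ip in net:
                return answer
        return None

cache = EcsCache()
cache.put("www.example.com", "A", "192.0.2.57", 24, "203.0.113.1")
print(cache.get("www.example.com", "A", "192.0.2.99"))   # same /24 -> hit
print(cache.get("www.example.com", "A", "198.51.100.1")) # other net -> None
```

In a production resolver the linear scan would have to become something like a per-name tree of address prefixes, and eviction and TTL accounting would have to cope with the multiplied entries; that is exactly the kind of data-structure surgery the mail describes.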

I tend to agree with Mukund. What's the point of writing standards if
there is nothing to test against?

Let's imagine that, say, Cisco and Google propose a brand new feature of
the DNS protocol, but its implementation is available only to their
enterprise customers. Why should The Internet bother?

DNS is/was a mostly open-source place, and I have a feeling that in the
end RFCs are closely followed only by open-source implementations
anyway, and that the rest of the DNS players do whatever they want.
Examples:
- Empty non-terminals vs. Akamai
- EDNS vs. Cisco middleboxes
- EDNS ignorance (auth side) vs. Google
- <do not force me to copy&paste dns-violations here>

So, why should we (as dnsop) bother "ratifying" something for the big
folks who simply do not care enough to follow what we already published?

I can see a parallel with the TLS group, where the "big guys"
continually submit various drafts to accommodate their needs while
paying no attention to the impact on the rest of the ecosystem. The TLS
WG resists, and we as dnsop should resist as well.

-- 
Petr Špaček  @  CZ.NIC

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
