On Tue, 18 Apr 2023, Ralf Weber wrote:

[speaking as individual]

On 18 Apr 2023, at 13:11, Benjamin Schwartz wrote:

The draft's opening words are "DNS filtering is widely deployed for network
security".  This is true, but by far the "widest" deployment of DNS
filtering is for authoritarian national censorship, to prevent citizens
from engaging with forbidden ideas.

Do you have any data to back up this claim? I see far more solutions out
there protecting against attackers (malware, phishing, etc.) than there
are for “censoring” or parental controls.

That could be because the nation state filters are not as visible or
advertised.

Also, there are democratic
countries that, with the agreement of their citizens, block access to some
content using DNS.

I would avoid basing the reasoning on the term "democratic", as
that definition is not very clear for a lot of nation states.

But none of that should matter here, as the idea of
this draft is to help inform the user why something has happened,
using extended DNS errors, and I think that is a win for the end user.

Will it? Every time there is helpful security software, we see filters
that lie about the categories, like the ACLU being filtered for
being "pornographic".

I think the idea of "suberrors" for the "Censored" EDE code probably just
doesn't make sense.  By definition, this code indicates that the resolver
_doesn't_ know why the result was filtered.  (The resolver operator may
know a _claimed_ reason, but it has no way to know whether this rationale
is the real motivation.)  Thus, one way forward might be to exclude this
code from the suberror registry.

The resolver operator might subscribe to various filtering lists and so
might have some idea. It would be useful if it could use categories to
explain the block to the user, provided it also supplies an identifier
for the source of such claims.
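To make that concrete, here is a rough sketch (Python, assuming dnspython >= 2.1,
which ships EDE support) of what a resolver could attach to a blocked answer.
The JSON layout in EXTRA-TEXT, the "malware" label and the feed name are
placeholders for illustration only, not anything the draft specifies:

    # Sketch: attach EDE code 17 (Filtered) to a blocked answer, with a
    # category and the source of the claim carried in EXTRA-TEXT.
    # The JSON layout and the feed name are hypothetical.
    import json

    import dns.edns
    import dns.message
    import dns.rcode


    def filtered_response(query: dns.message.Message) -> dns.message.Message:
        extra = json.dumps({
            "category": "malware",                # hypothetical sub-error label
            "source": "example-threat-feed.net",  # who made the claim
        })
        response = dns.message.make_response(query)
        response.set_rcode(dns.rcode.NXDOMAIN)
        response.use_edns(
            edns=0,
            options=[dns.edns.EDEOption(dns.edns.EDECode.FILTERED, extra)],
        )
        return response

A client that understands such a payload could then show both the category and
who asserted it, which at least gives the user attribution even if the claimed
reason turns out to be wrong.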

Even for the other codes, I think this registry opens a terrible can of
worms that the IETF can and should avoid.  Shall we add codes for "adult
content"? "advertising"? "social media"? "political extremism"? "terrorist
content"? "CSAM"? "fake news"?

Whether we should or not, if the registry is FCFS, these types will be
added (as subcategories or main categories) and will end up being used.

The EDE draft manages this to some extent by presenting an initial list of
codes that are plainly technical or structural in nature.  This draft does
the opposite, by starting to enumerate all the perceived evils of the
Internet.

Let's not go down that road.

I'm also leaning towards not doing it, but I think it will happen inevitably.

OK, so for me sub-errors are useful, as they can give some further information
on why something was blocked, and that information can be conveyed to the
end user. The EDE categories are rather broad and hence don’t give enough
information on why exactly a site is blocked. Now we could also use free
text

Free text was explicitly excluded in the original EDE RFC because of
its potential for abuse, e.g. [go to www.evil.com - your computer is infected!]
or as another avenue for delivering advertisements.

or let each resolver operator pick their own, but maybe some categories that
we all can agree on would be good to put in here. I’m OK with not adding
stuff to the Censored EDE code, but we should allow sub-errors for Filtered
and Blocked.
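On the consuming side, a similarly hedged sketch (same dnspython assumption) of
how a stub or application could map the broad Blocked/Filtered/Censored codes,
plus whatever EXTRA-TEXT the server sent, into something it can show the user:

    import dns.edns
    import dns.message

    # Rough descriptions of the broad RFC 8914 codes; any sub-error or
    # category detail would have to come from EXTRA-TEXT.
    BROAD = {
        dns.edns.EDECode.BLOCKED: "blocked by the policy of the resolver operator",
        dns.edns.EDECode.CENSORED: "blocked due to a requirement imposed on the operator",
        dns.edns.EDECode.FILTERED: "filtered at the client's (or network's) request",
    }


    def explain(response: dns.message.Message) -> list:
        """Return human-readable notes for any EDE options on a response."""
        notes = []
        for opt in response.options:
            if isinstance(opt, dns.edns.EDEOption):
                note = BROAD.get(opt.code, "extended DNS error %d" % int(opt.code))
                if opt.text:  # sub-error / category detail, if any was sent
                    note += " (%s)" % opt.text
                notes.append(note)
        return notes

The broad code stays authoritative; anything carried in EXTRA-TEXT is only the
operator's claim and can be presented (or ignored) as such.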

What I find far more important is to fix RBLs so that the original DNS
data is still delivered, but in a containerized way, so that the
filtering works because it is desired (opt-in) and not because it is
enforced against the will of the end user.

Paul
