On 18 Aug 2017, at 11:33, Lanlan Pan <abby...@gmail.com> wrote:
> So, can you talk about how your proposal saves cost over using a heuristic?
> It can be used with a cache aging heuristic.
> The heuristic reads in aaa/bbb/ccc.foo.com, which expire and move out; then it 
> reads in xxx/yyy/zzz.foo.com, which expire and move out; and so on in a loop.
> => Mapping aaa/bbb/ccc/xxx/yyy/zzz.foo.com to *.foo.com when the heuristic 
> reads them in will reduce the load of moving entries in and out.

By "move out" you mean "remove," right?   Move out implies that you are moving 
it somewhere.   You haven't actually answered my question.  You say that SWILD 
will remove the load, but you don't give any evidence of this.
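
One way to provide that evidence would be a small simulation along these lines 
(a rough sketch only, with hypothetical names, cache size, and traffic volume), 
counting cache insertions and evictions with and without collapsing sibling 
names under a single wildcard entry:

    # Rough sketch (hypothetical names and parameters): compare cache churn
    # (insertions and evictions) when each one-off subdomain gets its own
    # cache entry versus when sibling names are collapsed under a single
    # wildcard entry.
    import random
    import string

    def random_label(length=8):
        return "".join(random.choices(string.ascii_lowercase, k=length))

    def simulate(queries, collapse_to_wildcard, cache_limit=1000):
        cache = set()
        inserts = evictions = 0
        for qname in queries:
            key = "*.foo.com" if collapse_to_wildcard else qname
            if key in cache:
                continue
            if len(cache) >= cache_limit:
                cache.pop()  # stand-in for LRU/aging eviction
                evictions += 1
            cache.add(key)
            inserts += 1
        return inserts, evictions

    random.seed(1)
    queries = [random_label() + ".foo.com" for _ in range(50000)]
    for collapse in (False, True):
        inserts, evictions = simulate(queries, collapse)
        print("collapse=%s: inserts=%d, evictions=%d" % (collapse, inserts, evictions))

In this toy run the per-name case churns the cache constantly while the 
collapsed case inserts a single entry, but the useful evidence would have to 
come from real resolver traces, not synthetic labels.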

> 
>> 2) cache miss
>> All of the temporary subdomain wildcards will encounter cache misses.
>> Query xxx.foo.com, then query yyy.foo.com, zzz.foo.com, ...
>> We can use SWILD to optimize this: query xxx.foo.com only the first time and 
>> get the SWILD record, avoiding sending yyy/zzz.foo.com queries to the 
>> authoritative server.
> 
> Can you characterize why sending these queries to the authoritative server is 
> a problem?
> 
> Ok, this is similar to RFC8198 section 6 
> <https://datatracker.ietf.org/doc/html/rfc8198#section-6>.
> It is a benefit rather than a problem: answers are returned directly from 
> cache, avoiding sending queries to the authoritative and waiting for the 
> response, which reduces latency.

Okay, but this isn't a reason to prefer this to existing, standardized 
technology.
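
For a signed zone, the existing mechanism, aggressive use of the 
DNSSEC-validated cache (RFC 8198), already lets a resolver answer these sibling 
names locally once the deduced wildcard and the covering NSEC record are 
cached. A deliberately simplified sketch of that cache-hit logic (hypothetical 
data; a real resolver must validate RRSIGs, use DNS canonical ordering, and 
check the NSEC type bitmap):

    # Deliberately simplified sketch of RFC 8198-style wildcard synthesis
    # from a DNSSEC-validated cache.  Hypothetical data; only the cache-hit
    # logic is shown.
    CACHE = {
        "*.foo.com": "192.0.2.10",                     # cached, validated wildcard A
        "nsec:foo.com": ("a.foo.com", "zzzz.foo.com"), # cached NSEC span (no names between)
    }

    def resolve(qname):
        if qname in CACHE:  # exact-match cache hit
            return CACHE[qname]
        lo, hi = CACHE["nsec:foo.com"]
        # Aggressive use: the cached NSEC proves qname does not exist, and a
        # wildcard RRset for the zone is cached, so synthesize locally.
        if lo < qname < hi and "*.foo.com" in CACHE:
            return CACHE["*.foo.com"]
        return "cache miss -> query the authoritative"

    for name in ("xxx.foo.com", "yyy.foo.com", "zzz.foo.com"):
        print(name, "->", resolve(name))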

>> 3) DDoS risk
>> The botnet DDoS risk and defense are similar to those for NSEC aggressive 
>> wildcard use, or for unsigned zones.
>> For example, [0-9]+.qzone.qq.com is a popular SNS website in China, like 
>> Facebook. If botnets send "popular website wildcard" queries to a recursive, 
>> the cache size of the recursive will rise, and the recursive cannot simply 
>> remove them as it can with some other random-label attacks.
>> We prefer the recursive to directly return the IP of subdomain wildcards, 
>> not grow the recursive cache, and not send repeated queries to the 
>> authoritative.
> 
> Why do you prefer this?   Just saying "we prefer ..." is not a reason for the 
> IETF to standardize something.
> 
> Sorry, my wording was at fault.
> 
> More details:
> 1) All of the attacking bots were customers of the ISP and sent queries to 
> the ISP recursive at a low rate, so all of the client IP addresses looked 
> "legitimate" and a simple ACL could not be used.
> 2) Normal users also visit [0-9]+.qzone.qq.com, so all of the randomly 
> queried domains also appeared "legitimate".
> => The client IP addresses and the random subdomains were all effectively 
> whitelisted, not blacklisted.
> 3) The ISP did not have any DNS firewall equipment (a very sad situation, 
> but it was true) to take over responses for "*.qzone.qq.com".
> 
> In this weaker scenario, it would be better to give the recursive more 
> information so that it can answer queries directly from cache and does not 
> have to send or cache many subdomain queries/responses.
> Of course, we can defend against the attack with professional operations and 
> solve the problem very well. But there are also many weaker recursives that 
> only run BIND, without any protection...
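
To make the cache-size point concrete, here is a toy comparison (made-up 
traffic; the wildcard-aware case simply assumes the recursive somehow knows 
that these names fall under *.qzone.qq.com):

    # Toy comparison (made-up traffic): peak cache size under a random-
    # subdomain query flood, with and without wildcard awareness.
    import random
    import string

    def flood(num_queries, wildcard_aware):
        cache = set()
        peak = 0
        for _ in range(num_queries):
            label = "".join(random.choices(string.digits, k=9))
            qname = label + ".qzone.qq.com"
            # wildcard-aware: one synthesizing entry; otherwise one per name
            cache.add("*.qzone.qq.com" if wildcard_aware else qname)
            peak = max(peak, len(cache))
        return peak

    random.seed(1)
    print("per-name cache peak:      ", flood(100000, wildcard_aware=False))
    print("wildcard-aware cache peak:", flood(100000, wildcard_aware=True))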

Maybe they should upgrade.

> I will reconsider these problems with the proposal and do an analysis of the 
> improvement on real-world caches before the next step.

Thanks!   However, I would really encourage you to step back from your proposal 
and see if there's a way to accomplish what you want without adding this 
resource record.   I think you can get the same results you want without SWILD, 
and the result will be a lot better for the DNS as a whole.

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
