Keep in mind that most cache systems use a Least Recently Used (LRU) algorithm for their cache, without any proactive removal of expired records.
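As a minimal illustration (a hypothetical sketch, not the code of any particular resolver), such a cache might look like this in Python: eviction is driven purely by capacity, and the TTL is only consulted at lookup time.

    import time
    from collections import OrderedDict

    class LRUCache:
        """Bounded cache: eviction is driven by capacity, not by TTL expiry."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()   # key -> (value, expiry_timestamp)

        def put(self, key, value, ttl):
            if key in self.entries:
                self.entries.move_to_end(key)
            self.entries[key] = (value, time.monotonic() + ttl)
            if len(self.entries) > self.capacity:
                # The least recently used entry is dropped, whether or not its
                # TTL has expired -- expired records are never swept out on
                # their own.
                self.entries.popitem(last=False)

        def get(self, key):
            if key not in self.entries:
                return None
            value, expires_at = self.entries[key]
            # The TTL is only checked here, at lookup time.
            if time.monotonic() > expires_at:
                del self.entries[key]
                return None
            self.entries.move_to_end(key)  # mark as most recently used
            return value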
So the reason that stuff gets thrown out is not because of TTL expiry, but rather because the cache is full. I don't know your exact test setup, but that might be why you are seeing so many resolvers not respecting your TTL.

I think that is one of the reasons why pre-fetching gives such a low gain. Every now and then you are going to prefetch something nobody wants, at the expense of throwing out something that somebody actually later wanted.

/Stephan

From: dns-operations-boun...@lists.dns-oarc.net [mailto:dns-operations-boun...@lists.dns-oarc.net] On Behalf Of Wiley, Glen
Sent: Thursday, November 07, 2013 9:18 AM
To: Edward Lewis; DNS Operations
Subject: Re: [dns-operations] Opinions sought .... have I come to the right place?

Be careful about conclusions you may draw from your data. It may be helpful to remember that many large recursive implementations are comprised of a non-trivial footprint of hosts that may not share a cache across the network. In this case, where you may find a TTL respected by a single host behind that recursive server VIP, or even across multiple nodes at a single site behind the VIP, it is possible that multiple successive queries land on different nodes with different caches.

--
Glen Wiley
KK4SFV
Sr. Engineer
The Hive, Verisign, Inc.

From: Edward Lewis <ed.le...@neustar.biz>
Date: Thursday, November 7, 2013 9:52 AM
To: DNS Operations <dns-operati...@mail.dns-oarc.net>
Cc: Edward Lewis <ed.le...@neustar.biz>
Subject: [dns-operations] Opinions sought .... have I come to the right place?

I've been studying TTL settings off and on for a few weeks, trying to decide what appropriate numbers are. In the past we taught the trade-off as: longer TTLs will reduce queries, while shorter TTLs will enable agility.

In looking at a set of data with a long TTL - 6 days - over a period of time, I noticed that 0.005% of all queriers respected the TTL setting I had. I don't want to fork over details, so you can even say "0.005% +/- 5%"; in any case, it's small. I'll admit my number here might be a little bit of an undercount; still, it's little.

In experimenting with some recursive servers (by no means an exhaustive set), some code bases did adhere to the "rules" and some code bases seem to ignore the "rules." I say this to the extent that the collective set of deployed tools out there is pretty much eating into the "longer TTLs will reduce queries" part of the above trade-off. I see that in the IETF there are drafts to pre-fetch expiring data sets - which one can't argue with - but it creates, for an authoritative server operator, even more uncertainty in planning TTLs.

And I'll throw in another factoid from history. During DNSSEC workshops eons ago, we found that if the TTLs got too low, DNSSEC had problems. (Presumably because it took longer to fetch the chain than the TTL of the queried data.) Has anyone found a TTL to be too low for DNSSEC?

So, I'm turning to this list... what is a good range for TTLs?

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis
NeuStar                  You can leave a voice message at +1-571-434-5468

Why is it that people who fear government monitoring of social media are
surprised to learn that I avoid contributing to social media?
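For anyone who wants to reproduce that kind of adherence measurement, here is a rough sketch. It assumes an authoritative-side query log whose lines carry an epoch timestamp, client address, and query name, in that order; that layout, the helper name, and the log itself are assumptions for illustration, not Ed's actual method, which isn't described in the thread.

    def ttl_respecters(log_lines, ttl):
        """Count clients that never re-queried a name before its TTL elapsed.

        Each log line is assumed to be: 'epoch_seconds client_ip qname'.
        Returns (number_of_respecting_clients, total_clients).
        """
        last_seen = {}   # (client, qname) -> timestamp of the previous query
        clients = set()
        early = set()    # clients seen re-querying before the TTL elapsed
        for line in log_lines:
            ts_str, client, qname = line.split()[:3]
            ts = float(ts_str)
            clients.add(client)
            key = (client, qname)
            prev = last_seen.get(key)
            if prev is not None and ts - prev < ttl:
                early.add(client)
            last_seen[key] = ts
        return len(clients - early), len(clients)

    # Usage sketch, for a record published with a 6-day TTL:
    # with open("queries.log") as f:
    #     ok, total = ttl_respecters(f, ttl=6 * 24 * 3600)
    # print(f"{100.0 * ok / total:.3f}% of queriers respected the TTL")

The caveats raised in the thread apply directly to such a count: as Glen points out, one client address can front many separate caches behind a VIP, and as Stephan points out, capacity-driven evictions also force early re-queries, so a low percentage does not necessarily mean the TTL was deliberately ignored.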