Using a sample of our US traffic over a 7-minute period, I see 112,044 lookups 
of www.facebook.com out of 4,081,834 total queries (2.74%).
That's 266 www.facebook.com queries per second.
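The arithmetic above can be checked quickly (all figures come from the sample in this message; the variable names are mine):

```python
# Sample window: 7 minutes of US traffic, figures from the message above.
window_seconds = 7 * 60
facebook_queries = 112_044
total_queries = 4_081_834

share = facebook_queries / total_queries      # fraction of all queries
rate = facebook_queries // window_seconds     # whole queries per second

print(f"{share:.2%}")   # 2.74%
print(rate)             # 266
```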

Doing a lookup where our resolver is only missing star.c10r.facebook.com (the 
record the www.facebook.com CNAME points at, which has a 60-second TTL) takes 
1.6ms.
Doing a lookup where everything's cached, our resolver takes 0.7ms.

Given that our prefetch implementation charges the first client that queries 
the record in the last 3 seconds of its TTL, and given that we now cap our 
resolvers at 6 threads, this means that in the 7-minute period we had 7 
clients waiting for the upstream rather than 265 times 7 clients.
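A minimal sketch of that charging policy (names and structure are mine, not OpenDNS code): a cache hit landing inside the final HAMMER_TIME seconds of the TTL makes the first such client trigger, and wait for, the upstream refresh, while everyone else keeps getting the still-valid cached answer:

```python
import time

HAMMER_TIME = 3  # seconds before expiry in which a hit triggers a refresh

class CacheEntry:
    def __init__(self, answer, ttl, now=None):
        now = now if now is not None else time.monotonic()
        self.answer = answer
        self.expires = now + ttl
        self.refreshing = False  # set once one client has been "charged"

def lookup(entry, refresh_upstream, now=None):
    """Return the cached answer; the first client seen in the final
    HAMMER_TIME seconds of the TTL pays for the upstream refresh."""
    now = now if now is not None else time.monotonic()
    if now < entry.expires - HAMMER_TIME:
        return entry.answer                  # plain cache hit
    if not entry.refreshing:
        entry.refreshing = True              # this client is charged
        answer, ttl = refresh_upstream()     # blocking upstream query
        entry.answer = answer
        entry.expires = now + ttl
        entry.refreshing = False
    return entry.answer                      # everyone else: cached data
```

With a 60-second TTL, only hits in the 57-60s window pay the 1.6ms upstream cost; everything else is a 0.7ms cached lookup.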

This is a relatively big win, but is also a drop in the ocean -- not much more 
than a second of latency shared among these clients.

The real win for us is that we don't ask Facebook the same question 6 times per 
minute (one per thread).  Instead, we ask once every 57 seconds.  This is 
bigger in the grand scheme of things because it makes the Facebook auth servers 
happier :)
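To see why the upstream rate drops, a back-of-the-envelope comparison (the constants come from this thread; the per-hour framing is mine):

```python
TTL = 60          # TTL of star.c10r.facebook.com, per the message
HAMMER_TIME = 3   # prefetch window before expiry
THREADS = 6       # resolver thread cap

# Without prefetch: each thread independently re-queries on expiry.
before = THREADS * (3600 // TTL)        # upstream queries per hour

# With prefetch: one shared refresh each time the record enters the
# HAMMER_TIME window, i.e. every TTL - HAMMER_TIME = 57 seconds.
after = 3600 // (TTL - HAMMER_TIME)     # upstream queries per hour

print(before, after)  # 360 63
```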

On Nov 6, 2013, at 2:48 PM, Daniel Migault <mglt.i...@gmail.com> wrote:

> Hi, 
> 
> Thanks for providing this information.
> 
> "- Under normal circumstances, when a record expires, all clients querying 
> that record suffer (unnecessary) latency while the record is being re-queried 
> upstream.  Fixing this was a convenient benefit."
> 
> Do you have specific metrics to measure/evaluate this benefit?
> 
> BR, 
> Daniel
> 
> On Wed, Nov 6, 2013 at 10:50 PM, Brian Somers <bsom...@opendns.com> wrote:
> Hi,
> 
> I mentioned at the dnsop talk at IETF88 yesterday that I have some 
> (hopefully) useful information regarding W.C.A. Wijngaards' prefetch work.
> 
> At OpenDNS, we implemented the same thing some months ago (without knowing 
> about this work) with the following differences:
> - Our HAMMER_TIME is set to 3 seconds
> - Our STOP is hardwired as HAMMER_TIME + 1
> 
> We saw similar non-results in our graphs - at ~50 billion queries per day, 
> prefetch didn't change anything... However, there were two other issues 
> addressed:
> - Under normal circumstances, when a record expires, all clients querying 
> that record suffer (unnecessary) latency while the record is being re-queried 
> upstream.  Fixing this was a convenient benefit.
> - Because our resolvers run multiple threads sharing the same cache, 
> previously, a popular record expiration would tend to result in an upstream 
> query from *each* thread.  This was the issue we wanted to address.
> 
> You could argue that our implementation should be clever enough to piggy-back 
> the upstream queries from different threads (one query, multiple clients on 
> multiple threads waiting for the response), but having the threads 
> interact/contend against each other for more than just cache lookups/updates 
> is undesirable at higher loads.
> 
> --
> Brian Somers
> bsom...@opendns.com
> 
> 
> 
> 
> 
> _______________________________________________
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop
> 
> 
> 
> 
> -- 
> Daniel Migault
> Orange Labs -- Security
> +33 6 70 72 69 58

--
Brian Somers
bsom...@opendns.com




