Andrew,

Andrew Miehs wrote:
> You may want to have a look at this
> 
> http://homepages.tesco.net/J.deBoynePollard/FGA/dns-round-robin-is-useless.html

No offense taken, but I wasn't (as the author of this piece asserts)
claiming that R-R DNS is an effective load balancer. In fact, I have
said quite the opposite: load balancing is best done by a component
which actually understands loads:

1. mod_jk (right? I've never done it, but there's a rough sketch below)
2. BigIP or some other hardware load balancer
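
For what it's worth, mod_jk balancing is (as I understand it) driven by
a workers.properties file along these lines. Hostnames and ports here
are made up for illustration, so treat this as a sketch, not a recipe:

  # Two Tomcat instances reachable over AJP, fronted by one lb worker
  worker.list=lb

  worker.tomcat1.type=ajp13
  worker.tomcat1.host=app1.foo.com
  worker.tomcat1.port=8009

  worker.tomcat2.type=ajp13
  worker.tomcat2.host=app2.foo.com
  worker.tomcat2.port=8009

  # The lb worker spreads requests across its members and routes
  # around members it detects as dead
  worker.lb.type=lb
  worker.lb.balance_workers=tomcat1,tomcat2

Apache (or whatever fronts mod_jk) then talks to "lb", which knows
which members are up... exactly the load-awareness that R-R DNS lacks.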

The /only/ reason, IMO, to ever use R-R DNS is to avoid single points of
failure. The author claims that there is no reason, ever (ever ever
ever) to use R-R DNS. I respectfully disagree.

If your client attempts to look up www.foo.com and it resolves to a
single IP address, which points to a single piece of hardware in your
data center, you might be screwed by:

1. Faulty wiring that happens to go bad at an inconvenient time.
2. Faulty hardware device (fw, lb, switch, anything) that dies.
3. Network or power going down (which is a stretch, since data
   centers are pretty good at keeping the lights on)

R-R DNS allows you to /partially/ weather this storm by diverting an
unpredictable amount of traffic to another hardware device (possibly in
another data center, which gets you around all of the above).

Sure, some of your clients won't be able to connect. But not /all/ of
them will be denied service.
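
To make that concrete, here's a rough sketch (in Java, since this is a
Tomcat list) of what a client /could/ do with a round-robin name. I'm
not claiming browsers actually do this... the point is only that the
resolver hands back every A record, so a client isn't doomed to the one
dead IP:

  import java.io.IOException;
  import java.net.InetAddress;
  import java.net.InetSocketAddress;
  import java.net.Socket;

  public class RoundRobinClient {
      public static Socket open(String host, int port) throws IOException {
          // A round-robin name resolves to several A records; the
          // order typically rotates from one lookup to the next.
          InetAddress[] records = InetAddress.getAllByName(host);
          IOException lastFailure = null;
          for (InetAddress addr : records) {
              try {
                  Socket socket = new Socket();
                  // Short timeout: a dead IP costs seconds, not minutes
                  socket.connect(new InetSocketAddress(addr, port), 2000);
                  return socket; // first record that answers wins
              } catch (IOException e) {
                  lastFailure = e; // this box is down; try the next one
              }
          }
          // getAllByName throws UnknownHostException rather than
          // returning an empty array, so lastFailure is non-null here
          throw lastFailure;
      }
  }

Even a client that just grabs records[0] and gives up still helps the
cause, since different clients see different orderings.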

This author claims that the following "foundations" are flawed reasons
for using R-R DNS. Note that I do not claim a single one of them:

1. Shuffling resource records affects client connection behavior.
   (I don't care... my only assumption is that not every client can
    possibly conspire to choose the same IP address every single time).
2. Shuffling resource records provides even or predictable distribution.
   (I don't care... it's enough that not all requests go to the same
    place. The distribution is irrelevant, as long as not every
    single request goes to the same IP every time).

The whole point is that you have to suffer 100% loss of your frontend
hardware in order to shut off 100% of your users. This is true no matter
how many "points of failure" you have... it's just that 1 point if
failure means that only one device has to go. If you have a dozen lb's
(or fw's with lb's, as Leon suggests), then losing 1 device loses you a
completely unpredictable 1/12th of your users. If R-R DNS works
perfectly (which it doesn't), then you still lose 1/12th of all
requests. But the worst case simply can't be that 1 of 12 servers going
down results in 100% request loss.

I am unaware of any other strategy which allows you to lose a primary
piece of hardware such as a load balancer and still be able to limp
along with at least /some/ requests going through.

I'm open to suggestions. (And SRV records don't count, since not a
single web browser supports them).

-chris
