Stable infrastructure needs deterministic behavior to be understandable. I
believe that mDNS limits the determinism of a system by requiring me to
accept that a machine will pick a random address. When I set up my DCs I
want each machine to have a single, constant IP address so that I don't
have to do a bunch of work to figure out which machine I'm trying to talk
to when things go wrong.

As such, I find it hard to believe that mDNS belongs in large DCs.

wt

On Thu, Jul 7, 2011 at 10:17 AM, Eric Yang <eric...@gmail.com> wrote:

> The Internet Assigned Numbers Authority has allocated 169.254.1.0 through
> 169.254.254.255 for the purpose of communication between nodes.  That is
> 65,024 IP addresses designed for local area networks only; they are not
> allowed to be routed.  Zeroconf randomly selects one address out of the
> 65,024 available and broadcasts an ARP message.  If no one is using that
> address, the machine takes the selected IP address and communicates with
> the Zeroconf service for name resolution.  If the address is already in
> use, the system starts over from scratch and picks another address.
> Hence, the actual limit is not 1,000 but 65,024.  In real life it is
> unlikely that all 65,024 would be used for name resolution, given the
> chance of packet loss on modern Ethernet (10^-12) and the delay caused by
> different hosts repeatedly selecting the same IP address.  This can
> easily push the limit to 10,000-20,000 nodes without losing reliability
> in a server farm setting.
>
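> For illustration, a minimal sketch of that selection loop (Python; the
> arp_probe() helper is hypothetical and stands in for the real ARP probe,
> so this is an outline rather than a working Zeroconf stack):
>
>   import random
>
>   def pick_link_local_address(arp_probe):
>       """Pick a free 169.254.x.y address, retrying on collisions."""
>       while True:
>           # 254 usable /24s (169.254.1 .. 169.254.254) * 256 = 65,024
>           candidate = "169.254.%d.%d" % (random.randint(1, 254),
>                                          random.randint(0, 255))
>           if not arp_probe(candidate):
>               # nobody answered the probe: claim the address and register
>               # it with the mDNS responder for name resolution
>               return candidate
>           # already in use: start over from scratch with a new random pick
>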
> It would be nice to support both dynamic discovery of the master and
> slaves and to preserve the existing configuration-style management for
> EC2-like deployments.  This is one innovation worth having.
>
> regards,
> Eric
>
> On Wed, Jul 6, 2011 at 5:49 PM, Allen Wittenauer <a...@apache.org> wrote:
> >
> > On Jul 6, 2011, at 5:05 PM, Eric Yang wrote:
> >
> >> Did you know that almost all Linux desktop systems come with avahi
> >> pre-installed and turned on by default?
> >
> >        ... which is why most admins turn those services off by
> > default. :)
> >
> >>  What is more interesting is
> >> that there are thousands of those machines broadcasting in large
> >> corporations without anyone noticing them.
> >
> >        That's because many network teams turn off multicast past the
> > subnet boundary and many corporate desktops are in class C subnets.
> > This automatically limits the host count down to 200-ish per network.
> > Usually just the unicast traffic is bad enough.  Throwing multicast
> > into the mix just makes it worse.
> >
> >> I have recently built a
> >> multicast DNS browser and looked into the number of machines running
> >> in a large company environment.  The number of desktop, laptop and
> >> printer machines running multicast DNS far exceeds 1,000 machines in
> >> the local subnet.
> >
> >        From my understanding of Y!'s network, the few /22's they have
> > (which would get you 1022 potential hosts on a subnet) have multicast
> > traffic dropped at the router and switch levels.  Additionally, DNS-SD
> > (the service discovery portion of mDNS) offers unicast support as
> > well.  So there is a very good chance that the traffic you are seeing
> > is from unicast, not multicast.
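> >
> >        As a quick sanity check of those host counts (an illustrative
> > Python snippet, nothing authoritative): usable hosts in an IPv4 prefix
> > is 2^(32 - prefix length) minus the network and broadcast addresses.
> >
> >   def usable_hosts(prefix_len):
> >       return 2 ** (32 - prefix_len) - 2
> >
> >   print(usable_hosts(24))  # 254  -- the "200-ish" class C case
> >   print(usable_hosts(22))  # 1022 -- hosts on a /22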
> >
> >        The 1000 number, BTW, comes from Apple.  I'm sure they'd be
> > interested in your findings given their role in ZC.
> >
> >        BTW, I'd much rather hear that you set up a /22 with many many
> > machines running VMs trying to actually use mDNS for something useful.
> > A service browser really isn't that interesting.
> >
> >> They are all happily working fine without causing any issues.
> >
> >        ... that you know of.  Again, I'm 99% certain that Y! is
> > dropping multicast packets into the bit bucket at the switch
> > boundaries.  [I remember having this conversation with them when we
> > set up the new data centers.]
> >
> >>  Printers work fine,
> >
> >        Most admins turn SLP and other broadcast services on printers
> > off.  For large networks, one usually sees print services enabled via
> > AD or master print servers broadcasting the information on the local
> > subnet.  This allows a central point of control rather than
> > randomness.  Snow Leopard (I don't think Leopard did this) actually
> > tells you where the printer is coming from now, so that's handy to see
> > if they are ZC or AD or whatever.
> >
> >> iTunes sharing from someone
> >> else works fine.
> >
> >        iTunes specifically limits its reach so that it can't extend
> > beyond the local subnet and definitely does unicast in addition to ZC,
> > so that doesn't really say much of anything, other than potentially
> > invalidating your results.
> >
> >>  For some reason, things tend to work better on my
> >> side of the universe. :)
> >
> >        I'm sure they do, but not for the reasons you think they do.
> >
> >> Allen, if you want to stay stuck on stone-age
> >> tools, I won't stop you.
> >>
> >
> >        Multicast has a time and place (mainly for small, non-busy
> > networks).  Using it without understanding the network impact is never
> > a good idea.
> >
> >        FWIW, I've seen multicast traffic bring down an entire campus
> > of tens of thousands of machines due to routers and switches having
> > bugs where they didn't subtract from the packet's TTL.  I'm not the
> > only one with these types of experiences.  Anything multicast is going
> > to have a very large uphill battle for adoption because of these
> > widespread problems.  Many network vendors really don't get this one
> > right, for some reason.
>
