Giampaolo Tomassoni wrote:
-----Original Message-----
From: mouss [mailto:[EMAIL PROTECTED]
Sent: Sunday, August 31, 2008 7:23 PM
Cc: users@spamassassin.apache.org
Subject: Re: Handy script for generating /etc/resolv.conf

Giampaolo Tomassoni wrote:
-----Original Message-----
From: Nix [mailto:[EMAIL PROTECTED]
Sent: Sunday, August 31, 2008 5:12 PM
To: Marc Perkel
Cc: users@spamassassin.apache.org
Subject: Re: Handy script for generating /etc/resolv.conf

On 28 Aug 2008, Marc Perkel told this:

Here's something I threw together to make sure the /etc/resolv.conf
points to a working nameserver. I run this once a minute.
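(For context, the sort of script being discussed might look like the following sketch. It is not Marc's actual script; the `probe_ns` helper, the `host`-based probe, and the injectable `PROBE` hook are illustrative.)

```shell
# Sketch only -- not the actual script from the thread. Pick the first
# nameserver that answers a query and write it into the given
# resolv.conf path. The probe command is injectable via PROBE so the
# selection logic can be exercised without a live network.
PROBE=${PROBE:-probe_ns}

probe_ns() {
    # assumption: `host` from bind-utils is available; -W sets the
    # query timeout in seconds
    host -W 2 example.com "$1" >/dev/null 2>&1
}

write_resolv() {
    out=$1; shift
    for ns in "$@"; do
        if "$PROBE" "$ns"; then
            echo "nameserver $ns" > "$out"
            return 0
        fi
    done
    return 1    # no nameserver answered; leave the file untouched
}
```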
How do you arrange that all the existing programs that have already
sucked in resolv.conf note the change? They're generally not going to
unless you restart them: nothing polls resolv.conf looking for changes
to it, as far as I know; that would be far too inefficient.
Depending on the specific implementation of the resolver library, the
application may check for changes in the resolv.conf file. Maybe they
don't check on each and every resolve request, however: they may
instead check for changes every, say, 10 seconds, or maybe every 1,000
requests. This way, looking for changes in the /etc/resolv.conf file
is not that inefficient...
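(A minimal sketch of that mtime-based check, in shell rather than inside a resolver library; the function name and the grep-based "re-read" are illustrative.)

```shell
# Sketch of the "check only now and then" idea: stat the file and
# re-read it only when its modification time differs from the last
# one seen. A real resolver library would do this in C, every N
# lookups rather than on each one.
last_mtime=""

check_resolv() {
    resolv=${1:-/etc/resolv.conf}
    # `stat -c %Y` prints the mtime as an epoch (GNU coreutils;
    # BSD stat spells it `stat -f %m`)
    mtime=$(stat -c %Y "$resolv" 2>/dev/null) || return 1
    if [ "$mtime" != "$last_mtime" ]; then
        last_mtime=$mtime
        grep '^nameserver' "$resolv"   # "re-read" the configuration
    fi
}
```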
as you say, this is generally inefficient.

No, I'm saying the exact opposite: I'm saying that a naive implementation
may be inefficient. I'm also saying that, because of this, many
implementations don't take that naive approach to the problem.

the implementation you showed is inefficient. stat-ing the filesystem every now and then is silly.


Finally, restarting a whole set of apps just because the /etc/resolv.conf
file changed actually *IS* inefficient.

if the apps are only restarted when a change is detected, and if change detection is done correctly, then this is better than polling. except if you have an unstable setup that changes all the time, but then your problem is serious: ask a doctor ;-p



most resolver implementations don't do that.

No, come on: most do.

I defy you to list systems where you _know_ this is as you say.


At least by the time the Internet went mass-market: most connections
were dialup ones with dynamic IPs, and NAT routers were expensive. You
didn't have to restart all your running apps once connected just because
/etc/resolv.conf had been modified by the pppd implementation...


what are you talking about? a system you developed? which systems have an /etc/resolv.conf that changes all of a sudden? and since when do unix systems support dynamic setups? (as of today, the unix implementation generally relies on the horrible isc dhclient).


and even then, not all applications will obey that (the
mozilla family is known to play bad games here).

I don't know about mozilla, but please note that special apps may bring
along their own implementation of the resolver library. While perhaps
Mozilla is one of them, I don't believe its resolver library ignores
changes to the /etc/resolv.conf content.

/etc/resolv.conf was designed to be a "stable" file. in an environment where it changes now and then, it is simply not appropriate. many chrooted apps need a copy in their cage, in which case patching the resolver to check for resolv.conf changes doesn't help (besides being a horrible kiddy hack).
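(To illustrate the chroot point: a sketch of the copy step such setups need. The function name and the example jail paths are illustrative, not a recommendation.)

```shell
# Sketch: chrooted daemons read their own copy of resolv.conf inside
# the jail, so a change to /etc/resolv.conf must be pushed into every
# jail by hand -- patching the resolver library doesn't reach them.
sync_resolv() {
    src=$1; shift
    for jail in "$@"; do
        # only touch jails that actually carry an /etc directory
        if [ -d "$jail/etc" ]; then
            cp "$src" "$jail/etc/resolv.conf"
        fi
    done
}

# e.g. sync_resolv /etc/resolv.conf /var/named/chroot /var/spool/postfix
```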


Is mozilla involved in this, anyway?


It was an example of a "long running" application. people who rush to patch glibc should think about such apps, or document their lib to explicitly state that their API is not compatible with well-known practice.



It is better to run a dns server on the machine and do all your stuff
there. you can restart it, reload the zone, ... without caring about
resolver- or application-specific behaviour. This also "conforms" to
modularity as was seen in plan9: let servers do the job.

Right, I agree with you on this. It is a much more flexible and polite
solution, but it is not easy for everybody to implement: you need to know
what a "zone" and a "reverse zone" are, how to configure them, some basic
knowledge of DNS server setup and, finally, even what a DNS server is... :)

come on. most unix admins are capable of installing and running a basic dns server. filtering mail is far more difficult. the "it's difficult" argument is often used when it should not be. I've seen "basic" $lusers do things that many vendors claim are too hard (but the claim is only a marketing defense to justify their bad choices). More generally, any usability argument should be justified with rigorous arguments and clear evidence.
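(To give an idea of the scale involved: a minimal caching-resolver configuration, sketched here for dnsmasq as one illustration; BIND, unbound, or any other server would do. The upstream addresses are examples. With /etc/resolv.conf pointing at 127.0.0.1, applications never need to notice upstream changes.)

```
# /etc/dnsmasq.conf -- illustrative minimal caching resolver
listen-address=127.0.0.1
no-resolv              # don't take upstreams from /etc/resolv.conf
server=192.0.2.1       # upstream nameservers (example addresses)
server=192.0.2.2
cache-size=1000
```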


that said, there is a better argument for your "goal". running bind adds a security risk. but even this argument doesn't stand: it is possible to minimize bind's risks. and whatever you do, you rely on DNS (which is not very secure. nor is the internet).


Please note that a lot of Linux distributions do provide some means to
dynamically update the /etc/resolv.conf file. They don't impose the use
of a local DNS server. And the reason, to me, is to avoid burdening the
user with (unneeded) complexity.


no. the reason to me is that
- support for "dynamic networking" is not yet stable under unix systems (people are still playing with NetworkManager, avahi, dhclient, ... etc),
- there is no "definitive" replacement for BIND
- resolution is still primitive (/etc/hosts, resolv.conf, dns, dhclient, ... etc).
