new version of bind

2009-06-08 Thread Mohammed Ejaz
 

Hi, 

I am a sysadmin at one of the leading ISPs in Saudi Arabia, and I am going
to upgrade BIND from 9.3.4-P1 to the latest version. Can anyone confirm
that the latest BIND 9.6.0-P1 works well in an ISP environment? I
experienced some issues earlier when I installed BIND 9.5.1-P2, such as
problems opening websites and slow browsing.

 

Ejaz 

___
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Single Zone Forwarding Dilemma

2009-06-08 Thread Matus UHLAR - fantomas
On 06.06.09 01:10, Ben Croswell wrote:
> If you want to force forwarding you will probably want to add the forward
> only; directive.

> By default your server will try to follow NS delegations and then forward if
> it can't follow them

I think it's the opposite - the server will try to query the configured
forwarders first, then continue with the usual NS resolution.

> Forward only; tells it to not even bother trying to follow NS delegations.

and thus I recommend not to use this for public zones - if the forwarders
are unavailable or for some reason can't answer, the classic resolution
will be used.

I guess the configured forwarders have one of these problems
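
To make the distinction concrete, here is a hedged named.conf sketch (the
zone names and forwarder addresses are invented placeholders) of the two
forwarding modes:

```conf
// Hypothetical named.conf fragment - names and addresses are examples only.

// "forward first" (the default once forwarders are listed): ask the
// forwarders, then fall back to normal iterative resolution on failure.
zone "fwd.zone.net" {
    type forward;
    forward first;
    forwarders { 192.0.2.1; 192.0.2.2; };
};

// "forward only": never fall back - if the forwarders fail, the query fails.
zone "only.zone.net" {
    type forward;
    forward only;
    forwarders { 192.0.2.1; 192.0.2.2; };
};
```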
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Windows 2000: 640 MB ought to be enough for anybody


Re: new version of bind

2009-06-08 Thread Kal Feher
The issues you describe are likely not related to the version, but rather
to the configuration.

Should you suffer those symptoms again, post a description of them and your
config here and we'll try to help out as best we can :)

When upgrading anything of value I would suggest trying it on a test system.
Luckily with BIND, that should be fairly easy.

Depending on how you feel, this might be an opportunity to clean up an old
config. If not, then you can use your existing config and test how the
upgrade will affect it without causing your company problems.

>confirm that the latest BIND 9.6.0-P1 can be helpful in ISP's environment
Yes, it can be helpful for an ISP. Check the announcement that accompanies
each major release for full details. Most significantly, 9.6 contains far
more DNSSEC support and features than 9.3.4.

Here is a very brief page with the highlights:
https://www.isc.org/software/bind/new-features

Also note that 9.3.4 is no longer a current release.

On 8/6/09 1:00 PM, "Mohammed Ejaz"  wrote:

>  
> Hi, 
> I am sysadmin of one of the leading ISPs of Saudi Arabia, I am going to
> upgrade the bind which is from BIND 9.3.4-P1 to the latest one, so please can
> any one confirm that the latest BIND 9.6.0-P1 can be helpful in ISP's
> environment. As I have experienced some issues earlier when I installed the
> BIND 9.5.1-P2 version, such as problem in opening the websites and slow
> browsing issues etc...
>  
> Ejaz 
> 
> 


-- 
Kal Feher



Re: Single Zone Forwarding Dilemma

2009-06-08 Thread Kal Feher
First, you should check that you can receive a valid response for the
intended zone from your forwarders, querying from your caching server, not
from your PC. It wasn't clear from your initial email whether this is what
you did.

yourcacheserver ~ # dig @forwarder_address A host.fwd.zone.net

Although it may seem appropriate to mask the domain you are looking up,
doing so makes solving your problem quite difficult. If the above test
works yet other queries fail, I would suggest providing the full result of:

yourlocalpc ~ # dig @yourcacheserver A host.fwd.zone.net

You may also wish to provide the query logs for this query.
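
If query logging is not already configured, a minimal logging sketch for
named.conf might look like this (the channel name and file path are
illustrative assumptions, not the only way to do it):

```conf
// Hypothetical named.conf logging fragment - channel name and path are examples.
logging {
    channel query_log {
        file "/var/log/named/queries.log" versions 3 size 10m;
        severity info;
        print-time yes;    // timestamp each logged query
    };
    category queries { query_log; };
};
```

Query logging can also be toggled at runtime with "rndc querylog" without
editing the config.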


On 8/6/09 4:01 PM, "Matus UHLAR - fantomas"  wrote:

> On 06.06.09 01:10, Ben Croswell wrote:
>> If you want to force forwarding you will probably want to add the forward
>> only; directive.
> 
>> By default your server will try to follow NS delegations and then forward if
>> it can't follow them
> 
> I think it's the opposite - the server will try to query the configured
> forwarders first, then continue with the usual NS resolution.
> 
>> Forward only; tells it to not even bother trying to follow NS delegations.
> 
> and thus I recommend not to use this for public zones - if the forwarders
> are unavailable or for some reason can't answer, the classic resolution
> will be used.
> 
> I guess the configured forwarders have one of these problems

-- 
Kal Feher



Delegation of already loading zones?

2009-06-08 Thread Todd Snyder
Good day,

Looking through configuration of one of my servers (ns01.local), I have
example.com loading, and test.example.com loading.

In example.com, someone has delegated test.example.com back to the
server:

test.example.com    IN  NS  ns01.local

Since I am loading test.example.com specifically, that delegation
appears redundant.  Are there cases where that delegation is required?
Is there a standard that says I should do that for all zones I'm loading
that are subzones of another zone I'm loading?  Is this just an oddball
configuration that should be cleaned up?

Thanks,

Todd.


-
This transmission (including any attachments) may contain confidential 
information, privileged material (including material protected by the 
solicitor-client or other applicable privileges), or constitute non-public 
information. Any use of this information by anyone other than the intended 
recipient is prohibited. If you have received this transmission in error, 
please immediately reply to the sender and delete this information from your 
system. Use, dissemination, distribution, or reproduction of this transmission 
by unintended recipients is not authorized and may be unlawful.


Re: Delegation of already loading zones?

2009-06-08 Thread Kevin Darcy
There's a standard that says *all* zones beneath the root should be 
delegated hierarchically. It's a very broad rule that doesn't depend on 
what you happen to be "loading" at any particular time on any particular 
server.


It may seem overkill now, if all of your nameservers slave all of your 
zones, but what if some day you want to set up a slave of example.com 
that does *not* happen to also be a slave of test.example.com? In the 
absence of those delegation records in its slave copy of the zone, it 
won't know that test.example.com exists. If any client queries such a 
nameserver for a test.example.com name, it'll get a "no such name" 
(NXDOMAIN) response.


It's good form, and a good habit to get into, to always delegate zones, 
even if the delegated zone is slaved on all of your (current set of) 
nameservers. Give yourself some flexibility and scalability for the future.
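
As a small sketch using the names from the question, the parent zone file
keeps the delegation even though the same server also loads the child zone:

```conf
; Hypothetical fragment of the example.com zone file.
; The NS record below delegates test.example.com; it stays in place even
; though ns01.local happens to load the child zone itself, so that any
; other server slaving only example.com can still find the child.
test.example.com.   IN  NS  ns01.local.
```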


- Kevin

Todd Snyder wrote:

Good day,

Looking through configuration of one of my servers (ns01.local), I have
example.com loading, and test.example.com loading.

In example.com, someone has delegated test.example.com back to the
server:

test.example.com    IN  NS  ns01.local

Since I am loading test.example.com specifically, that delegation
appears redundant.  Are there cases where that delegation is required?
Is there a standard that says I should do that for all zones I'm loading
that are subzones of another zone I'm loading?  Is this just an oddball
configuration that should be cleaned up?

Thanks,

Todd.




  




2GB Memory Limits on Solaris 10

2009-06-08 Thread Raymond Popowich
Hello,

I am running several Bind 9.6.0-P1 DNS resolvers on Solaris 10.  The largest
does around 2500 queries/second at peak times.  They are configured with
--enable-largefile support.  About once a month I am having a problem with
the largest resolvers breaking when the named process hits 2GB.  I've logged
a few different errors including file descriptor limits which I increased
when that happened, to increasing the option for max-cache-size, to my
current errors such as ns_client_replace() failed: out of memory.  The
servers have 8GB of physical memory.  I am OK with telling bind to use an
unlimited amount of resources or specifying a double in the current maximum
up to 4GB.  Would it be possible for someone to provide a full list of all
of the named.conf options that I need to specify in named.conf and increase
from the default settings?  I've been fixing these errors one at a time for
a while now and I really can't afford to keep troubleshooting this problem
by waiting for new errors to happen.

Thank you for your time,

-Raymond

RE: 2GB Memory Limits on Solaris 10

2009-06-08 Thread Matthew Huff
--enable-largefile turns on 64-bit file support, but not 64-bit memory.
Normally under Solaris even a 32-bit process should be able to use the full
4GB address space (or at least 3.5-3.8GB). Try checking your ulimits in the
script that starts the process.

 

BTW, by default the named process is compiled in 32-bit mode even on a
64-bit system. The main reason is that any other libraries it might use
(OpenSSL, etc.) would also need to have 64-bit versions.
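
There is no single canonical list, but as a hedged sketch, the named.conf
options most commonly raised on a busy resolver that is hitting memory or
descriptor limits include the following (every value here is an
illustrative example to adapt, not a recommendation):

```conf
// Hypothetical named.conf fragment - all values are examples to tune.
options {
    max-cache-size 3072M;     // cap the cache below the process limit
    recursive-clients 4000;   // concurrent recursive lookups allowed
    tcp-clients 1000;         // concurrent TCP client connections
    files 65536;              // open file descriptor limit for named
    cleaning-interval 60;     // minutes between cache-cleaning passes
};
```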

 


Matthew Huff   | One Manhattanville Rd
OTA Management LLC | Purchase, NY 10577
http://www.ox.com  | Phone: 914-460-4039
aim: matthewbhuff  | Fax:   914-460-4139



 

From: bind-users-boun...@lists.isc.org
[mailto:bind-users-boun...@lists.isc.org] On Behalf Of Raymond Popowich
Sent: Monday, June 08, 2009 3:35 PM
To: bind-users@lists.isc.org
Subject: 2GB Memory Limits on Solaris 10

 

Hello,

I am running several Bind 9.6.0-P1 DNS resolvers on Solaris 10.  The largest
does around 2500 queries/second at peak times.  They are configured with
--enable-largefile support.  About once a month I am having a problem with
the largest resolvers breaking when the named process hits 2GB.  I've logged
a few different errors including file descriptor limits which I increased
when that happened, to increasing the option for max-cache-size, to my
current errors such as ns_client_replace() failed: out of memory.  The
servers have 8GB of physical memory.  I am OK with telling bind to use an
unlimited amount of resources or specifying a double in the current maximum
up to 4GB.  Would it be possible for someone to provide a full list of all
of the named.conf options that I need to specify in named.conf and increase
from the default settings?  I've been fixing these errors one at a time for
a while now and I really can't afford to keep troubleshooting this problem
by waiting for new errors to happen.

Thank you for your time,

-Raymond


Re: looking for reference to correct behavior

2009-06-08 Thread Kevin Darcy
It doesn't really matter whether the vendor claims that the data is 
"also" cached data, since the RFC clearly states "if the desired data is 
present in authoritative form [...] use the authoritative data in 
preference to cached data". In other words, authoritative data trumps 
cached data. This would apply even if one were to view the authoritative 
data as "also" being cached data.


RFC 2181 clarified this even further, by assigning "trustworthiness" 
levels to different kinds of data, with "Data from a primary zone file, 
other than glue data" and "data from a zone transfer, other than glue" 
being the two most trustworthy, and "non-authoritative data from the 
answer section of authoritative answers", being a few levels of 
"trustworthiness" lower than either of those. (When one or more CNAMEs 
are present in the Answer Section, only the CNAME matching QNAME is 
authoritative, everything else is non-authoritative, so this category 
matches your situation as described). Less-trustworthy data is never 
supposed to overwrite/override more-trustworthy data.


It's quite obvious that the vendor is not in compliance with the RFCs 
here, the main challenge is to pierce their obfuscations enough to make 
this plain to the Powers That Be.


- Kevin

Maria Iano wrote:
By saying things like "We load the authoritative data into memory so 
that is also cached data" and other nonsense the vendor is stating 
that this behavior is in compliance with the RFCs and refusing to fix 
their code. Very frustrating, as I believe this behavior is clearly 
wrong and also seems to me to be a security issue.


Thanks for your help anyway!
Maria

On May 11, 2009, at 2:33 PM, Kevin Darcy wrote:


The "resolver algorithm" in RFC 1034, Section 5.3.3, states

1. See if the answer is in local information, and if so return

it to the client.


and is further detailed as

Step 1 searches the cache for the desired data. If the data is in the
cache, it is assumed to be good enough for normal use. Some resolvers
have an option at the user interface which will force the resolver to
ignore the cached data and consult with an authoritative server. This
is not recommended as the default. If the resolver has direct access to
a name server's zones, it should check to see if the desired data is
present in authoritative form, and if so, use the authoritative data in
preference to cached data.

This would be a case where the resolver "has direct access to the 
name server's zones", so there is no debate, in my opinion, that the 
resolver in question is doing The Wrong Thing.


RFC 2181 also makes it clear that authoritative data ranks higher 
than cached data, so that could also be used as a relevant normative 
reference.


- Kevin

Maria Iano wrote:

My apologies if this is considered to be too off-topic.

I have a situation where my company uses a number of servers with a 
commercial DNS implementation (in addition to our BIND servers). The 
other implementation is Windows DNS, and there is some behavior that 
I do not think is acceptable, but which the vendor claims is 
acceptable behavior. I really want them to fix this bug (as I 
consider it), but first I need to get general agreement that it is a 
bug. I will be looking through the RFCs as time allows, but haven't 
found what I need yet. Since my next meeting with the 
vendor is tomorrow, I thought I would also ask if anyone can already 
point me to a relevant RFC or other reference.


Here is the behavior that I think is not acceptable.

We have configured a zone on the windows server - dmz.example.com. 
This zone contains an A record for foo.dmz.example.com with IP 
address 10.240.240.240. The zone example.com is hosted elsewhere and 
contains a CNAME record foo.example.com pointing to 
foo.dmz.example.com.


If the cache has just been cleared, and a client asks the Windows 
DNS server for foo.example.com, it has a forwarding server to which 
it forwards the request. The forwarding server hands it back the 
CNAME record but it also hands back an A record for 
foo.dmz.example.com pointing to an incorrect IP address 
192.168.240.240. The Windows DNS server accepts this A record for 
foo.dmz.example.com with an incorrect IP address into its cache, and 
hands out that incorrect information to the client. Even though it 
concurrently has dmz.example.com configured on it as a primary zone 
with a record for that same owner name pointing to a different IP 
address.


I believe it shouldn't do that. Since it hosts dmz.example.com as a 
primary zone, I think it should discard that bad A record and hand 
back its own.


The vendor's argument is that it should blindly trust the forwarding 
resolver.


Can anyone point me to an RFC or reference about this?

Thanks,
Maria







Re: Trying to understand DNSSEC and BIND versions better

2009-06-08 Thread Mark Andrews

In message <99e6a67a9da87041a8020fbc11f480b3031cc...@exvs01.dsw.net>,
"Jeff Lightner" writes:
> BIND versions on RHEL (e.g. 9.3.4-6.0.3.P1.el5_2) have backported
> patches from later BIND versions so it isn't exactly the same animal as
> the EOL 9.3 which is why it isn't listed simply as 9.3

I've yet to see a vendor back-port every bug fix, and that is what
would be required to really support a product in an OS which the
producer has EOL'd.

Mark

> -Original Message-
> From: bind-users-boun...@lists.isc.org
> [mailto:bind-users-boun...@lists.isc.org] On Behalf Of Mark Andrews
> Sent: Friday, June 05, 2009 12:23 AM
> To: Chris Adams
> Cc: comp-protocols-dns-b...@isc.org
> Subject: Re: Trying to understand DNSSEC and BIND versions better
> 
> 
> In message , Chris Adams writes:
> > Since I read that the root is supposed to be signed by the end of the
> > year, I am just trying to understand DNSSEC support and the various
> > versions of BIND a little better here, so please don't throw too many
> > rocks if I ask something stupid...
> >=20
> > I run the nameservers for an ISP.  For the recursive servers, what are
> > the hazzards in enabling DNSSEC (once the root is signed, so no DLV
> > necessary I guess)?
> 
>   Once the root is signed you will be able to validate answers
>   where there is an unbroken chain of trust.  DLV will still be
>   useful for zones where the TLD isn't yet signed or there is
>   another break in the chain of trust.
> 
> > I know the things that generally break with
> > "regular" DNS, but I don't know that with DNSSEC (I know there have
> been
> > DLV troubles but that's it).
> 
>   Not having a clean EDNS path between the validator and
>   authoritative server can result in validation failures.
>   EDNS responses are bigger than plain DNS and may result in
>   fragmented responses.  You need to ensure that any NATs and
>   firewalls are configured to handle fragmented UDP responses
>   of up to 4096 bytes with a modern BIND.  Any forwarders used
>   should also support EDNS and preferably be performing
>   validation as well.
> 
>   Failure to re-sign a zone will cause lookups to fail.
>   Failure to update DS records on DNSKEY changes will cause
>   lookups to fail.  Failure to update DLV records on DNSKEY
>   changes will cause lookups to fail.
> 
>   "dig +cd +dnssec <name>" is your friend.  This will let
>   you see what is failing to validate.
> 
>   "dig +cd +multi DNSKEY <zone>" will provide you with the
>   keyids necessary to check the signatures.
> 
>   "dig +cd +multi DS <zone>" will provide you with the DS
>   records so you can check the linkage between parent and
>   child.  Look at the key id field.
> 
>   "dig +cd +multi DLV <zone>.<dlv-domain>" will provide you with the
> DS
>   records so you can check the linkage between parent and
>   child.  Look at the key id field.
> 
>   If the zone is using NSEC3 then nsec3hash can be used to
>   check whether the NSEC3 records are sane.
> 
>   "date -u +%Y%m%d%H%M%S" returns the system date in a form
>   that is easy to compare to the dates in the RRSIG records.
> 
>   An understanding of how DNSSEC works is useful.
> 
>   Checking if you get a DNSKEY returned, without +cd, at each zone
>   cut is useful for working out where to examine more closely.
> 
>   dig, date and an understanding of DNSSEC is all you should
>   need to identify a configuration error.  If the key ids match
>   and timestamps are good and the associated NSEC/NSEC3 records
>   appear to be sane then you will most probably have found an
>   implementation bug.
> 
> > Currently, my servers run BIND 9.3.4-10.P1 (as patched by Red Hat in
> > RHEL; we typically stick with their security patched version, since
> > that's what we pay them for).  What does that mean with .ORG for
> > example, where NSEC3 is used?  Would we just not see NXDOMAIN
> responses
> > as validated (and what happens to unvalidated responses)?  I've put in
> a
> > request to Red Hat to update to a version that supports NSEC3 but I
> > don't know what their response will be yet.
> 
>   BIND 9.3 is already at EOL.
> 
> > For our authoritative servers, we'll need to set up a system to sign
> the
> > zones.  Is it expected that ISPs will sign every zone they serve, or
> > just the domains we consider "important"?  What kind of problems might
> > be expected here?
> >=20
> > In both cases, what kind of CPU and/or RAM overhead will large-scale
> use
> > of DNSSEC add?
> > --=20
> > Chris Adams 
> > Systems and Network Administrator - HiWAAY Internet Services
> > I don't speak for anybody but myself - that's enough trouble.
> > ___
> > bind-users mailing list
> > bind-users@lists.isc.org
> > https://lists.isc.org/mailman/listinfo/bind-users
> --
> Mark Andrews, ISC
> 1 Seymour St., Dundas Valley, NSW 2117, Australia
> PHONE: +61 2 98

RE: 2GB Memory Limits on Solaris 10

2009-06-08 Thread Igor V. Ruzanov
|From: bind-users-boun...@lists.isc.org
|[mailto:bind-users-boun...@lists.isc.org] On Behalf Of Raymond Popowich
|Sent: Monday, June 08, 2009 3:35 PM
|To: bind-users@lists.isc.org
|Subject: 2GB Memory Limits on Solaris 10
|
| 
|
|Hello,
|
|I am running several Bind 9.6.0-P1 DNS resolvers on Solaris 10.  The largest
|does around 2500 queries/second at peak times.  They are configured with
|--enable-largefile support.  About once a month I am having a problem with
|the largest resolvers breaking when the named process hits 2GB.  I've logged
|a few different errors including file descriptor limits which I increased
|when that happened, to increasing the option for max-cache-size, to my
|current errors such as ns_client_replace() failed: out of memory.  The
|servers have 8GB of physical memory.  I am OK with telling bind to use an
|unlimited amount of resources or specifying a double in the current maximum
|up to 4GB.  Would it be possible for someone to provide a full list of all
|of the named.conf options that I need to specify in named.conf and increase
|from the default settings?  I've been fixing these errors one at a time for
|a while now and I really can't afford to keep troubleshooting this problem
|by waiting for new errors to happen.
|
|Thank you for your time,
|
|-Raymond
|
First of all, check which kernel your system is booting; it should be the
64-bit version (this is printed during the Solaris 10 boot process). A
32-bit kernel would need PAE support to address that much memory.
In GRUB the kernel path looks like:

kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive

You can also run prtconf | grep Mem
to see how much RAM is available in the system.

+---+
! CANMOS ISP Network!
+---+
! Best regards  !
! Igor V. Ruzanov, network operational staff!
! e-Mail: ig...@canmos.ru   !
+---+