Re: Add TXT records for SPF when CNAME exists in same sub-domain

2022-11-29 Thread G.W. Haywood via bind-users

Hi there,

On Tue, 29 Nov 2022, Mark Andrews wrote:


Chris Liesfield wrote:



> It appears TXT and CNAME records for the same string/host cannot
> co-exist. We are able to specify an SPF record for the origin only
> in each sub-domain.
> 
> Open to any suggestions on how to get around this issue.


Place the TXT record at the target of the CNAME.


See also RFC 2181, section 10.
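A hypothetical zone-file sketch of that workaround (all names invented): TXT queries for the aliased name are redirected to the CNAME target, so the SPF record published there is what resolvers will see.

```
; In the sub-domain's zone: the aliased name can hold only the CNAME.
www.sub.example.com.   IN CNAME  target.example.net.

; In the target's zone: publish the SPF TXT record there instead.
target.example.net.    IN TXT    "v=spf1 mx -all"
```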

--

73,
Ged.
--
Visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe from 
this list

ISC funds the development of this software with paid support subscriptions. 
Contact us at https://www.isc.org/contact/ for more information.


bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


forwarder cache

2022-11-29 Thread Hamid Maadani
Hi there,

I am running two instances of named on the same server (BIND 9.16.33 on Alpine 
3.16). They run from completely separate config directories, and they have 
separate working directories as well as control ports. Let's call them NS1 
and NS2.

NS1 is a forwarding instance. It listens on any:53 and forwards all requests to 
127.0.0.1:153.
NS2 is a normal BIND 9 instance. It has one zone (test.com) and listens on 
127.0.0.1:153.

My understanding is that when NS1 receives a request for "test.com", it will 
initially forward that query to NS2 for resolution, and then cache the result 
in memory for the TTL of that record. The next request coming in for "test.com" 
should be served from NS1's in-memory cache, and NS2 should be out of the 
picture.
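For reference, a minimal hypothetical sketch of what NS2's config might look like under that description (paths, zone file name, and the pid-file choice are invented for illustration):

```
// NS2: authoritative-only instance on the loopback, port 153
options {
    listen-on port 153 { 127.0.0.1; };
    listen-on-v6 { none; };
    directory "/var/cache/ns2";
    recursion no;
    pid-file none;    // avoid clashing with NS1's PID file
};

zone "test.com" {
    type master;
    file "/etc/bind/ns2/test.com.zone";
};
```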

Based on that, I am running some tests. Initial dump of NS1's memory shows an 
empty cache:
/ # cat /var/cache/ns1/named_dump.db
;
; Start view _default
;
;
; Cache dump of view '_default' (cache _default)
;
; using a 86400 second stale ttl
$DATE 20221129172701
;
; Address database dump

Next, I send an A record request for test.com to NS1, which returns the correct 
result. Dumping the cache:
;
; Start view _default
;
;
; Cache dump of view '_default' (cache _default)
;
; using a 86400 second stale ttl
$DATE 20221129172835
; authanswer
; stale
test.com. 86390 IN A 10.10.10.10
;
; Address database dump

This shows that the A record is cached by NS1 at this point and should be 
valid for the next 86390 seconds.
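The remaining TTL can be read straight off the dump line; a minimal Python sketch of parsing such a line (assuming the one-line RRset format shown above):

```python
def parse_cache_line(line: str) -> dict:
    """Parse one RRset line from a named_dump.db cache dump,
    e.g. 'test.com. 86390 IN A 10.10.10.10'."""
    name, ttl, rclass, rtype, rdata = line.split()
    return {"name": name, "ttl": int(ttl),
            "class": rclass, "type": rtype, "rdata": rdata}

rec = parse_cache_line("test.com. 86390 IN A 10.10.10.10")
print(rec["ttl"])  # -> 86390, seconds left before the record expires
```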
The next test is to kill NS2 and query the record again; the desired outcome 
is NS1 resolving the query without needing NS2.
After killing NS2, however, NS1 fails to resolve the query. Looking at NS1's cache:
;
; Start view _default
;
;
; Cache dump of view '_default' (cache _default)
;
; using a 86400 second stale ttl
$DATE 20221129173157
; authanswer
; stale
test.com. 86188 IN A 10.10.10.10
;
; Address database dump

This shows me that the cache entry still exists and is valid. Looking at the logs:
29-Nov-2022 17:31:52.014 serve-stale: info: test.com resolver failure, stale 
answer unavailable
29-Nov-2022 17:31:52.014 query-errors: info: client @0x7feeb7f1b308 
192.168.56.1#59506 (test.com): query failed (SERVFAIL) for test.com/IN/A at 
query.c:5871

which tells me the query fails because the stale result is unavailable.
In NS1's config, I have:
options {
listen-on port 53 { any; };
listen-on-v6 { none; };

directory "/var/cache/ns1";

recursion yes;
allow-transfer { none; };
allow-query { any; };

forwarders {
127.0.0.1 port 153;
};
forward only;

stale-answer-enable yes;
stale-answer-ttl 300;

dnssec-validation yes;

statistics-file "/var/run/named.ns1.stats";

auth-nxdomain no;
};
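(For context, and as I understand the ARM: the "86400 second stale ttl" in the dump headers above comes from a setting not present in this config, shown here with its default purely for illustration.)

```
max-stale-ttl 86400;   // how long expired RRsets are retained in cache
                       // for possible serve-stale use
```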

Two questions about this situation:
1. Why is the test.com entry in the cache marked stale if its TTL has not 
expired yet? Ideally the forwarder would not reach out to NS2 unless necessary. 
Am I misunderstanding the stale-record concept?
2. Why is the stale answer unavailable in this scenario, even though stale 
answers are enabled and the cache entry exists and is valid? Am I missing some 
configuration?

Any help would be appreciated.

Regards
Hamid Maadani


Re: forwarder cache

2022-11-29 Thread Darren Ankney
I have a somewhat similar configuration on my home network.  I have two
recursive servers and two "authoritative" servers (for a domain I call
"mylocal", which has a forward zone and also an in-addr.arpa zone for my
inside network).  These all run on one Intel NUC.  The only difference is
that my "authoritative" servers are not running at 127.0.0.1 but rather at
192.168.40.142 and 40.182.  The recursive servers are at 192.168.40.42 and
40.82.  I had to use the "forwarders" statement to send lookups for the
local domain, and reverse lookups for local IPs, to the authoritative pair.
Another difference is that these all run in chroot jails with separate
directories.  The last difference is that I don't have any settings
regarding stale entries.  I can shut off the "authoritative" servers and
the recursive servers will still answer questions about "mylocal" hosts and
in-addr.arpa queries, as long as they had previously looked up such
answers.  I'm not sure why it isn't working for you.  I do have my
forwarders set up differently (i.e., only at the per-zone level instead of
at the options level).  Example:

zone "40.168.192.in-addr.arpa" {
  type forward;
  // this defines the addresses of the resolvers to
  // which queries for this zone will be forwarded
  forwarders {
192.168.40.142;
192.168.40.182;
  };
  // indicates all queries for this zone will be forwarded
  forward only;
};
zone "mylocal" {
  type forward;
  // this defines the addresses of the resolvers to
  // which queries for this zone will be forwarded
  forwarders {
192.168.40.142;
192.168.40.182;
  };
  // indicates all queries for this zone will be forwarded
  forward only;
};

The reason I did it that way was that I didn't think it made sense to send
other recursive queries to the "authoritative" servers, which won't have an
answer and have no way to get one for "www.microsoft.com", for example.
I'm not sure how that would make a difference for the problem you are
having, however.

On Tue, Nov 29, 2022 at 12:47 PM Hamid Maadani  wrote:

Re: forwarder cache

2022-11-29 Thread Hamid Maadani
Thank you for your response, Darren. Appreciate that.

> I do have my forwarders setup differently (ie: I have them only on a per 
> domain level instead of at the options level)
> Not sure how that would make a difference for the problem you are having, 
> however.

Just to double-check, I changed my config to match that: I set up a specific 
forwarding zone in NS1 and turned off global forwarding. With NS2 on, all good. 
With NS2 off, no dice.
If I query test.com, I get the stale error in the logs.
If I comment out the stale config options, reload, and query test.com, I just 
get this in the logs:
29-Nov-2022 21:57:49.931 queries: info: client @0x7f325e5a2108 
192.168.56.1#57660 (test.com): query: test.com IN A +E(0) (172.17.0.3)
29-Nov-2022 21:57:49.931 resolver: debug 1: fetch: test.com/A
29-Nov-2022 21:57:49.933 query-errors: info: client @0x7f325e5a2108 
192.168.56.1#57660 (test.com): query failed (SERVFAIL) for test.com/IN/A at 
query.c:7375

> The only difference is that my "authoritative" servers are not running at 
> 127.0.0.1 but rather 192.168.40.142 and 40.182

Changed the IP to the eth0 IP instead of loopback; no difference.

> Another difference is that these are all running in chroot jails with 
> separate directories

In my setup, I have separated the instances by:
- using separate config directories (/etc/bind/ns1/ , /etc/bind/ns2/)
- using separate working directories (/var/cache/ns1/ , /var/cache/ns2/)
- turning off the PID file for NS2 (only one PID file exists, and it is for NS1)
- separating the ports they listen on (NS1 -> 53, NS2 -> 153)
- separating the control ports for rndc (953 for NS1 and 1953 for NS2)
instead of chrooting them.

As far as I know, this should keep the instances completely separate, unless 
I'm missing something. Also, they are separate processes, so they do not share 
any memory unless there is something about BIND that I do not know.

I still cannot figure out why the cache does not work on NS1!

Regards
Hamid Maadani


Re: forwarder cache

2022-11-29 Thread Darren Ankney
On Tue, Nov 29, 2022 at 5:27 PM Hamid Maadani  wrote:
> If I comment out the stale config options, reload and query test.com, I just 
> get this in logs:
> 29-Nov-2022 21:57:49.931 queries: info: client @0x7f325e5a2108 
> 192.168.56.1#57660 (test.com): query: test.com IN A +E(0) (172.17.0.3)
> 29-Nov-2022 21:57:49.931 resolver: debug 1: fetch: test.com/A
> 29-Nov-2022 21:57:49.933 query-errors: info: client @0x7f325e5a2108 
> 192.168.56.1#57660 (test.com): query failed (SERVFAIL) for test.com/IN/A at 
> query.c:7375

That looks like, if the stale config options are removed, NS1 can't get an
answer from NS2 at all?  Or are you saying that's what you get if NS2 isn't
running and you query NS1 for test.com without the stale config options?

> In my setup, I have separated the instances by:
> - using separate config directories (/etc/bind/ns1/ , /et/bind/ns2/)
> - using separate work directories (/var/cache/ns1/ , /var/cache/ns2/)
> - turning off PID file for NS2 (only one PID file exists, and it is for NS1)
> - separating ports the listen on (NS1 -> 53, NS2 -> 153)
> - separating control ports for rndc (953 for NS1 and 1953 for NS2)
> instead of chrooting them.
>
> For what I know, this should completely separate the instances apart, unless 
> I'm missing something. Also, they are separate processes, so they do not 
> share any memory unless there is something different in bind that I do not 
> know about.
>

That seems reasonable to me.


Re: forwarder cache

2022-11-29 Thread Hamid Maadani
> That looks like, if the stale config options are removed, then NS1
> can't get an answer from NS2 at all? Or you are saying that's what
> you get if NS2 isn't running and you query NS1 regarding test.com
> without the stale config options?

It would be the latter: I removed the stale configs from NS1 and shut down NS2, 
then verified that NS1's cache still has an entry for test.com, and queried NS1 
for test.com.
Basically, NS1 does not use its own cache at all!

Regards
Hamid Maadani


DF-Flag on UDP-based sockets?

2022-11-29 Thread Tom

Hi list

Regarding ARM 9.18.9 
(https://bind9.readthedocs.io/en/v9_18_9/reference.html#namedconf-statement-edns-udp-size):

"The named now sets the DON’T FRAGMENT flag on outgoing UDP packets."

Testing with BIND 9.18.9, I didn't see any UDP packets where the "DF" flag 
was set in the IP header (it is set for TCP, but I never saw it for UDP).


Under which circumstances, or for which queries, does BIND 9 set the "DF" 
flag on outgoing UDP packets?

Any hints?

Thanks a lot.
Tom
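For what it's worth, the usual mechanism for this on Linux is the path-MTU-discovery socket option rather than a per-packet flag; once it is set, the kernel marks every datagram the socket sends with DF. A minimal Python sketch of that kernel API (an illustration of the OS mechanism, not BIND's actual code):

```python
import socket

def enable_df(sock: socket.socket) -> bool:
    """Ask the kernel to set DF on outgoing datagrams (Linux-specific
    IP_MTU_DISCOVER / IP_PMTUDISC_DO path-MTU API).
    Returns True if the option round-trips, False if unsupported here."""
    if not hasattr(socket, "IP_MTU_DISCOVER"):
        return False  # non-Linux platform: constant not exposed
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                    socket.IP_PMTUDISC_DO)
    return sock.getsockopt(socket.IPPROTO_IP,
                           socket.IP_MTU_DISCOVER) == socket.IP_PMTUDISC_DO

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
applied = enable_df(s)
s.close()
```

With the option applied, a capture such as `tcpdump -v -n udp port 53` would show "DF" in the IP flags of the socket's outgoing datagrams.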