Make dig and nslookup DNSSEC aware?

2024-05-22 Thread Robert Wagner
Sorry if this has already been hashed through, but I cannot find anything in 
the archive.  Is there any chance someone can make dig and nslookup DNSSEC 
aware and force them to use only the DoT or DoH ports (TCP 853 or 443)?

RW


Re: Make dig and nslookup DNSSEC aware?

2024-05-22 Thread Robert Wagner
https://www.isc.org/blogs/bind-doh-update-2021/
(Blog post: "BIND DoH Update", on the status of DNS-over-HTTPS support in BIND 9 
as of March 2021.)

It looks like +https was added in version 9.17. I just need to get Red Hat to 
start using it.
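For reference, a hedged example of what that usage can look like with a newer 
dig (9.18 or later); the resolver name below is only a placeholder:

# DNS-over-TLS (defaults to TCP 853), asking for DNSSEC records
dig +tls +dnssec @dns.example.net isc.org A

# DNS-over-HTTPS (defaults to TCP 443)
dig +https +dnssec @dns.example.net isc.org A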


RW


From: bind-users  on behalf of Havard Eidnes 
via bind-users 
Sent: Wednesday, May 22, 2024 11:47 AM
To: don.frie...@gov.bc.ca 
Cc: ond...@isc.org ; bind-users@lists.isc.org 

Subject: Re: Make dig and nslookup DNSSEC aware?


>   Doesn't dig already offer DoT using +tls and DoH using +https?

You're right, it does.

I need to sort out my $PATH...

Regards,

- Håvard


Re: Question about ISC BIND COPR repositories for 9.16->9.18 ESV transition

2024-06-17 Thread Robert Wagner
If 9.16 went EOL at the end of April (https://kb.isc.org/docs/bind-9-end-of-life-dates), 
help me understand why ESV wasn't rolled to 9.18 at that time, or earlier in 
2023 when 9.18 was marked as ESV.

It is difficult to explain to leadership why something that was marked as EOL 
is still active in the pipeline.

The rollover plan and the graphic in ISC's Software Support Policy and Version 
Numbering <https://kb.isc.org/v1/docs/aa-00896> do not seem to match.

Robert Wagner



From: bind-users  on behalf of John Thurston 

Sent: Monday, June 17, 2024 11:19 AM
To: bind-users@lists.isc.org 
Subject: Re: Question about ISC BIND COPR repositories for 9.16->9.18 ESV 
transition


Have you considered scheduling the change in the version published in each COPR 
repository so it does not coincide with the release of a new version of BIND?

I have some hosts tied to the COPR for BIND-ESV, and some tied to BIND. I hit a 
stumbling block during the last "roll over" event, and it took me a bit to 
figure out if it was due to the switch of BIND-ESV from 9.11 -> 9.16 in the 
repository, or the switch from 9.16.x -> 9.16.y in the code release.

If we could have the version published in the BIND-ESV repository advance to 
the same version most recently published in the BIND repository (i.e. ship 
9.18.x in BIND; a couple of weeks later roll BIND-ESV to 9.18.x and BIND to 
9.20.x; and a couple of weeks later release 9.18.y and 9.20.y), then problems 
with the COPR "roll over" would be a little more obvious.

--
Do things because you should, not just because you can.

John Thurston    907-465-8591
john.thurs...@alaska.gov<mailto:john.thurs...@alaska.gov>
Department of Administration
State of Alaska

On 6/17/2024 2:32 AM, Michał Kępień wrote:

While I don't have a specific date for you, we plan to do such a
"rollover" again when BIND 9.20.1 or 9.20.2 gets released, i.e. in about
2-3 months from now.  We will definitely roll all three repositories at
the same time, i.e.:

  - "bind-esv" will move from 9.16 to 9.18,
  - "bind" will move from 9.18 to 9.20,
  - "bind-dev" will move from 9.19/9.20 to 9.21.


Re: netstat showing multiple lines for each listening socket

2024-07-08 Thread Robert Wagner

Some diagnostics are needed.  When you reboot, does it still show multiple 
binds to the same port?  Can you run netstat with the -p option (e.g. netstat 
-tulnp) to identify the process IDs (are they the same or different)?  There 
may also be other options that provide more diagnostics.

I am trying to determine whether you are really binding the service four times 
to the same port, or whether this is just a ghost in the netstat output...  
Most systems are designed to prevent binding multiple applications to the same 
IP/port, but a service can spawn multiple threads on the same IP/port.  You may 
be seeing the threads rather than unique service instances.

Looking at the process ID, you may be able to track back to the root process 
and determine whether these are just service threads.
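For example (a hedged sketch, assuming a Linux host with the usual procps and 
iproute2 tools), something along these lines shows the owning PID and the 
thread count:

# listening sockets with the owning PID/program (root needed to see all processes)
sudo netstat -tulnp | grep named
sudo ss -tulnp | grep named

# one named process, many threads (NLWP = number of threads)
ps -o pid,nlwp,cmd -C named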


Robert Wagner


From: bind-users  on behalf of Thomas 
Hungenberg via bind-users 
Sent: Monday, July 8, 2024 4:52 AM
To: bind-users@lists.isc.org 
Subject: netstat showing multiple lines for each listening socket


Hello,

we have been running some BIND nameservers on Debian-based systems for many 
years.

Up to and including Debian 10 with BIND 9.11.5, netstat always showed only one 
line per listening socket, e.g.

tcp        0      0 10.x.x.x:53      0.0.0.0:*    LISTEN   1234/named
tcp        0      0 127.0.0.1:53     0.0.0.0:*    LISTEN   1234/named
udp        0      0 10.x.x.x:53      0.0.0.0:*             1234/named
udp        0      0 127.0.0.1:53     0.0.0.0:*             1234/named


We noticed that with Debian 11 and 12 (BIND 9.16.48 / 9.18.24), netstat instead
shows multiple (on some systems four, on others up to 20) completely identical 
lines
for each listening socket, like this:

tcp        0      0 10.x.x.x:53      0.0.0.0:*    LISTEN   1234/named
tcp        0      0 10.x.x.x:53      0.0.0.0:*    LISTEN   1234/named
tcp        0      0 10.x.x.x:53      0.0.0.0:*    LISTEN   1234/named
tcp        0      0 10.x.x.x:53      0.0.0.0:*    LISTEN   1234/named
tcp        0      0 127.0.0.1:53     0.0.0.0:*    LISTEN   1234/named
tcp        0      0 127.0.0.1:53     0.0.0.0:*    LISTEN   1234/named
tcp        0      0 127.0.0.1:53     0.0.0.0:*    LISTEN   1234/named
tcp        0      0 127.0.0.1:53     0.0.0.0:*    LISTEN   1234/named
udp        0      0 10.x.x.x:53      0.0.0.0:*             1234/named
udp        0      0 10.x.x.x:53      0.0.0.0:*             1234/named
udp        0      0 10.x.x.x:53      0.0.0.0:*             1234/named
udp        0      0 10.x.x.x:53      0.0.0.0:*             1234/named
udp        0      0 127.0.0.1:53     0.0.0.0:*             1234/named
udp        0      0 127.0.0.1:53     0.0.0.0:*             1234/named
udp        0      0 127.0.0.1:53     0.0.0.0:*             1234/named
udp        0      0 127.0.0.1:53     0.0.0.0:*             1234/named


We wonder what is causing this and if this is intended behaviour?


- Thomas



Re: DS digest type(s)

2024-10-16 Thread Robert Wagner
Correct. The RFC is a bit behind the whole post-quantum crypto effort, but I 
would expect it to get updated with both new hashes and lattice-based crypto in 
the upcoming years. This is more of a 'here's where we will need to go over the 
next decade' point than an issue with not following the existing standard.
With that in mind, it may be more useful for an experimental release than for a 
production one (as DNS clients may not be able to understand the 
communications).

Hopefully, the cryptographic modules in BIND are flexible enough that adding 
new hashes or cipher suites is a minor configuration issue rather than an 
overhaul.

RW



From: bind-users  on behalf of Danilo Godec 
via bind-users 
Sent: Wednesday, October 16, 2024 8:21 AM
To: bind-users@lists.isc.org 
Subject: Re: DS digest type(s)


I've been looking at RFC8624 and there is no mention of SHA-512 - just this:


   ++-+---+---+
   | Number | Mnemonics   | DNSSEC Delegation | DNSSEC Validation |
   ++-+---+---+
   | 0  | NULL (CDS only) | MUST NOT [*]  | MUST NOT [*]  |
   | 1  | SHA-1   | MUST NOT  | MUST  |
   | 2  | SHA-256 | MUST  | MUST  |
   | 3  | GOST R 34.11-94 | MUST NOT  | MAY   |
   | 4  | SHA-384 | MAY   | RECOMMENDED   |
   ++-+---+---+


Are there any newer RFCs or guidelines regarding DNSSEC algorithms?


   Danilo




On 16. 10. 24 14:15, Robert Wagner wrote:
Our preference would be to at least allow SHA-384 and SHA-512 per the CNSA 2.0 
requirements: CSA_CNSA_2.0_ALGORITHMS_.PDF 
(defense.gov)<https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF>


My understanding is this will be the base requirement for all US Government 
cryptography.


RW


From: bind-users 
<mailto:bind-users-boun...@lists.isc.org> on 
behalf of Danilo Godec via bind-users 
<mailto:bind-users@lists.isc.org>
Sent: Wednesday, October 16, 2024 8:00 AM
To: bind-users@lists.isc.org<mailto:bind-users@lists.isc.org> 
<mailto:bind-users@lists.isc.org>
Subject: DS digest type(s)


Hi,


I've been doing some more reading into DNSSEC and if I understand
correctly, it is allowed to have multiple DS records for one KSK - with
different digest types. Apparently, SHA-1 is deprecated and shouldn't be
used anymore, while SHA-256 is mandatory and has to exist.

That leaves SHA-384, which is optional and I can generate manually with
'dnssec-dsfromkey'. Since I have to ask my registrar to add DS records
to parent zones (.eu in this case), I can just send them both records,
right?
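For illustration, generating both digests from the same KSK could look like 
this (the key file name below is made up):

dnssec-dsfromkey -a SHA-256 Kexample.eu.+013+12345.key
dnssec-dsfromkey -a SHA-384 Kexample.eu.+013+12345.key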


Is it also possible to have dnssec-policy to generate both digest types
as CDS records?


 Regards,

 Danilo




Lep pozdrav / Best regards,
--
Danilo Godec | Sistemska podpora / System Administration
AGENDA d.o.o. | Ul. Pohorskega bataljona 49, Sl-2000 Maribor
E: danilo.go...@agenda.si <mailto:danilo.go...@agenda.si> | T: +386 (0)2 421 61 
31


Re: DS digest type(s)

2024-10-16 Thread Robert Wagner
Our preference would be to at least allow SHA-384 and SHA-512 per the CNSA 2.0 
requirements: CSA_CNSA_2.0_ALGORITHMS_.PDF 
(defense.gov)


My understanding is this will be the base requirement for all US Government 
cryptography.


RW


From: bind-users  on behalf of Danilo Godec 
via bind-users 
Sent: Wednesday, October 16, 2024 8:00 AM
To: bind-users@lists.isc.org 
Subject: DS digest type(s)


Hi,


I've been doing some more reading into DNSSEC and if I understand
correctly, it is allowed to have multiple DS records for one KSK - with
different digest types. Apparently, SHA-1 is deprecated and shouldn't be
used anymore, while SHA-256 is mandatory and has to exist.

That leaves SHA-384, which is optional and I can generate manually with
'dnssec-dsfromkey'. Since I have to ask my registrar to add DS records
to parent zones (.eu in this case), I can just send them both records,
right?


Is it also possible to have dnssec-policy to generate both digest types
as CDS records?


 Regards,

 Danilo




Re: DNSSEC algo rollover fails to delete old keys

2024-10-16 Thread Robert Wagner
Can you provide instructions on how to follow the upcoming post-quantum 
cryptography requirements?

CSA_CNSA_2.0_ALGORITHMS_.PDF 
(defense.gov)

It would be extremely helpful. If the crypto is not ready yet, then please keep 
these standards in mind as a future direction when it becomes available.


RW



From: bind-users  on behalf of Matthijs 
Mekking 
Sent: Wednesday, October 16, 2024 4:03 AM
To: bind-users@lists.isc.org 
Subject: Re: DNSSEC algo rollover fails to delete old keys


If you provide the output of `rndc dnssec -status` it might give a hint
why the keys are still published.

I suspect that BIND needs to be told that the DS has been withdrawn for
the parent zone (assuming you don't have parental-agents set up).
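A hedged sketch of what that could look like, reusing the zone name and the 
algorithm-8 key tag (16000) that appear in the log excerpt quoted below:

rndc dnssec -status somedomain.com

# tell named the old DS has been withdrawn from the parent zone
rndc dnssec -checkds -key 16000 withdrawn somedomain.com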

For future algorithm rollovers: You can just change from "algo8" to
"algo13", no need to have an intermittent "algo8-13" policy.

Best regards,

Matthijs

On 10/16/24 02:54, Arnold DECHAMPS wrote:
> Hello everyone,
>
> I made a algo rollover in DNSSEC from algo 8 to algo 13.
>
> Software version : 9.18.28-1~deb12u2-Debian
>
> My zone configuration refers to policies :
>
> ==
>
> dnssec-policy "algo8" {
>  keys {
>  ksk lifetime unlimited algorithm rsasha256;
>  zsk lifetime 30d algorithm rsasha256;
>  };
>  max-zone-ttl 1d;
>  signatures-validity 14d;
>  signatures-refresh 7d;
> };
>
> dnssec-policy "algo13" {
>  keys {
>  ksk lifetime unlimited algorithm 13;
>  zsk lifetime 30d algorithm 13;
>  };
>  max-zone-ttl 1d;
>  signatures-validity 14d;
>  signatures-refresh 7d;
> };
>
> dnssec-policy "algo8-13" {
>  keys {
>  ksk lifetime unlimited algorithm rsasha256;// Old Algo
>  zsk lifetime 30d algorithm rsasha256;// Old Algo
>  ksk lifetime unlimited algorithm 13;// New Algo
>  zsk lifetime 30d algorithm 13;// New Algo
>  };
>  max-zone-ttl 1d;
>  signatures-validity 14d;
>  signatures-refresh 7d;
> };
>
> ==
>
> The zone config looks like :
>
> ==
>
> zone "somedomain.com"{
>  ...
>  inline-signing yes;
>  dnssec-policy "algo13";
>  key-directory "/etc/bind/keys";
> };
>
> ==
>
>
> The initial idea was to switch the config of the domains that had to be
> rolled over to algo8-13 and temporarily have both keys in the zone
> waiting for the TTL of the DS records to expire. This was successful and
> algo 13 is now in use. I then switched to the algo13 policy and deleted
> the algo 8 keys of my keys directory.
>
> At this point, Bind sees that all the algo 8 keys are expired. It also
> sees that it can't find the files anymore (which prevents me from using
> dnssec-settime as far as I know).
>
> ==
> dns_dnssec_keylistfromrdataset: error reading
> /etc/bind/keys/Ksomedomain.com.+008+16000.private: file not found
> dns_dnssec_findzonekeys2: error reading
> /etc/bind/keys/Ksomedomain.com.+008+16000.private: file not found
> ==
>
> It stills publishes the DNSKEY in the signed zone. I would like to
> ideally correct this by forcing bind to discard the old keys. Is this
> possible to do? And if yes, how?
>
> Regards,
>
> Arnold


Re: DNSSEC, OpenDNS and www.cdc.gov - DNS Compliance checker?

2024-11-04 Thread Robert Wagner
Any chance someone from the BIND group knows of an open-source DNS compliance 
validation tool that can analyze and check configuration settings?
I hate to say it, but there are a lot of people managing DNS servers as part of 
other responsibilities. If the server responds with an IP, they consider it 
working/functional and assume nothing needs to be done. Having a tool that 
reviews your configuration and points out issues would help us advocate for 
proper configuration.  Kind of an SSL checker for DNS...

Thanks in advance for any thoughts you can provide.


Robert Wagner


From: bind-users  on behalf of Robert Edmonds 

Sent: Friday, November 1, 2024 4:16 PM
To: Robert Mankowski 
Cc: bind-users@lists.isc.org 
Subject: Re: DNSSEC, OpenDNS and www.cdc.gov


This is a problem with the operational configuration of the cdc.gov
nameservers.

The gov nameservers publish the following NS records for cdc.gov:

cdc.gov.        10800   IN      NS      auth00.ns.uu.net.
cdc.gov.        10800   IN      NS      auth100.ns.uu.net.
cdc.gov.        10800   IN      NS      ns1.cdc.gov.
cdc.gov.        10800   IN      NS      ns2.cdc.gov.
cdc.gov.        10800   IN      NS      ns3.cdc.gov.

The cdc.gov nameservers publish the following NS records for cdc.gov:

cdc.gov.        3600    IN      NS      ns1.cdc.gov.
cdc.gov.        3600    IN      NS      ns2.cdc.gov.
cdc.gov.        3600    IN      NS      ns3.cdc.gov.

This NS RRset from the cdc.gov nameservers (the NS RRset directly above
which has three .cdc.gov NS records) is the authoritative NS RRset for
cdc.gov and is used by standards conforming resolver implementation in
preference to the non-authoritative NS RRset in the referral response
from the .gov nameservers (the NS RRset at the top of this email with
five NS records). See RFC 2181, section 5.4.1.

The domain name www.cdc.gov is a CNAME to www.akam.cdc.gov:

www.cdc.gov.            300     IN      CNAME   www.akam.cdc.gov.

The ".cdc.gov" nameservers all generate RCODE "REFUSED" answers for
queries for www.akam.cdc.gov<http://www.akam.cdc.gov> (as well as akam.cdc.gov):

; <<>> DiG 9.20.2-1-Debian <<>> +norec @ns1.cdc.gov 
www.akam.cdc.gov<http://www.akam.cdc.gov>
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 27267
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.akam.cdc.gov.  IN  A

;; Query time: 4 msec
;; SERVER: 198.246.96.61#53(ns1.cdc.gov) (UDP)
;; WHEN: Fri Nov 01 15:40:44 EDT 2024
;; MSG SIZE  rcvd: 45

This is a standards conformance problem with the .cdc.gov nameservers.
The .cdc.gov nameservers should either return a response containing
records that answer the query, or a referral response to the nameservers
that do. See RFC 1034, section 4.3.2(3)(b).

The reason that some DNS resolver services are able to resolve this
name is because they are probably also querying the .uu.net nameservers
included in the delegation NS RRset for cdc.gov, and those nameservers
are able to return a delegation for the akam.cdc.gov zone:

; <<>> DiG 9.20.2-1-Debian <<>> +norec @auth00.ns.uu.net www.akam.cdc.gov
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51056
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 6, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; NSID: 56 65 72 69 7a 6f 6e ("Verizon")
;; QUESTION SECTION:
;www.akam.cdc.gov.  IN  A

;; AUTHORITY SECTION:
akam.cdc.gov.   86400   IN  NS  a1-43.akam.net.
akam.cdc.gov.   86400   IN  NS  a5-66.akam.net.
akam.cdc.gov.   86400   IN  NS  a2-64.akam.net.
akam.cdc.gov.   86400   IN  NS  a8-67.akam.net.
akam.cdc.gov.   86400   IN  NS  a9-64.akam.net.
akam.cdc.gov.   86400   IN  NS  a28-65.akam.net.

;; Query time: 16 msec
;; SERVER: 198.6.1.65#53(auth00.ns.uu.net) (UDP)
;; WHEN: Fri Nov 01 15:44:13 EDT 2024
;; MSG SIZE  rcvd: 185

To be clear, though, this is unambiguously a problem with the cdc.gov
nameservers and not a fault in resolver implementations that utilize the
authoritative NS RRset for cdc.gov which does not include the .uu.net
nameservers.

Var

Re: SIG(0) "request has invalid signature: not verified yet (NOERROR)"

2024-11-05 Thread Robert Wagner
Crypto question - You mention using RSASHA512, but the record shows ed25519 
(elliptic curve) crypto. Any chance you can standardize on one or the other 
(RSA or ECC)? This may not be an issue, but it seems odd.



Robert Wagner


From: bind-users  on behalf of Malcolm Scott 

Sent: Tuesday, November 5, 2024 10:08 AM
To: bind-users@lists.isc.org 
Subject: SIG(0) "request has invalid signature: not verified yet (NOERROR)"


Dear all,

I've been using SIG(0) successfully for some years to deal with Lets Encrypt 
dns-01 challenge/response.  Clients use dnssec-keygen to make themselves a 
RSASHA512 key pair; I manually add that once during setup as a KEY record to 
the zone using local nsupdate on the primary NS; then clients can add/remove 
TXT records (etc.) as needed for certificate issuance/renewal using nsupdate 
authenticated with that key, care of a "grant * selfsub * ..." update-policy.

This was working well until I upgraded to BIND 9.20.  Now my clients sometimes 
get SERVFAIL responses when they try to add or remove TXT records.  When that 
happens, BIND logs:

client @0x766a56300800 [elided]#52749: request has invalid signature: not 
verified yet (NOERROR)

Generally it seems that a particular key either consistently works or 
consistently doesn't work (_sometimes_ I can work around this by clearing out 
the KEY records and provisioning new keys, though quite often the replacement 
key also fails), though at least once I have seen a client manage to add a TXT 
record and then fail to delete it again a moment later, despite being 
authenticated using the same key both times.  This feels a bit like it could be 
a race condition, and a regression as everything was reliable prior to 9.20.

This seems to (mainly?) affect names which have more than one KEY record 
(useful because these FQDNs correspond to services hosted on multiple machines, 
each of which needs to go through Lets Encrypt validation in order to get its 
own certificate for the shared FQDN).

My BIND is 9.20.3 from the deb.sury.org package (specifically, package versions 
1:9.20.3-1+ubuntu22.04.1+deb.sury.org+1 and 
1:9.20.3-1+ubuntu24.04.1+deb.sury.org+1 -- I see this on two independent name 
servers, one Ubuntu 22.04 and one Ubuntu 24.04).

Any pointers or suggestions welcome; thanks.

Configuration snippet below.  You can see some of the failing KEYs for real in 
e.g. 'dig key _acme-challenge.gpu-pool0-list.caelumdns.cl.cam.ac.uk.'; the one 
starting "AwEAAdC/34L2C" is consistently failing right now, whereas others 
there are working.

Malcolm


dnssec-policy "simple" {
keys {
csk key-directory lifetime unlimited algorithm ed25519;
};
};
zone "caelumdns.cl.cam.ac.uk" {
type master;
file "/var/lib/bind/caelumdns.cl.cam.ac.uk";
allow-transfer { ... };
dnssec-policy "simple";
update-policy {
grant local-ddns zonesub ANY;
grant * selfsub * TXT PTR A  MX CNAME SSHFP;
};
};




Executive Order 14144 - encrypted DNS

2025-01-27 Thread Robert Wagner

FYI - EO 14144 has the following provision related to encrypting DNS:

(c) Encrypting Domain Name System (DNS) traffic in transit is a critical step 
to protecting both the confidentiality of the information being transmitted to, 
and the integrity of the communication with, the DNS resolver.
  (i) Within 90 days of the date of this order, the Secretary of Homeland 
Security, acting through the Director of CISA, shall publish template contract 
language requiring that any product that acts as a DNS resolver (whether client 
or server) for the Federal Government support encrypted DNS and shall recommend 
that language to the FAR Council. Within 120 days of receiving the recommended 
language, the FAR Council shall review it, and, as appropriate and consistent 
with applicable law, the agency members of the FAR Council shall jointly take 
steps to amend the FAR. (ii) Within 180 days of the date of this order, FCEB 
agencies shall enable encrypted DNS protocols wherever their existing clients 
and servers support those protocols. FCEB agencies shall also enable such 
protocols within 180 days of any additional clients and servers supporting such 
protocols.


Full text (Executive Order 14144): https://federalregister.gov/d/2025-01470

If codified in the FAR, then I believe all contractors will be required to 
encrypt DNS as well.

Should be interesting...

RW



Re: Survey on the impact of software regulation on DNS systems

2025-01-29 Thread Robert Wagner
This is not a good survey...

  1.
The 2025 US Executive Order links point to dead pages. Use the Federal Register 
link instead, as it should remain available long-term: 
https://federalregister.gov/d/2025-01470 (Executive Order 14144)

  2.
How can one determine the impact of regulations that are not yet known?

FYI - If the EU took it upon themselves to analyze every bit of software and 
provide a free rating, that may have one outcome.  However, if everyone 
producing open-source software were required to pay some large sum to get their 
software tested (and face fines if they didn't), that would have a different 
outcome.

Regulations can be a carrot or stick approach.

Software can be buggy but still be very useful/helpful.  Malicious software can 
be well written (no obvious bugs).

RW



From: bind-users  on behalf of Marc 

Sent: Tuesday, January 28, 2025 3:27 PM
To: Victoria Risk ; BIND Users ; 
'cnect...@ec.europa.eu' 
Subject: RE: Survey on the impact of software regulation on DNS systems


>
> Did you know that there is significant momentum building to regulate
> software, including open source, in at least Europe and the US (and
> possibly elsewhere as well), in order to improve cybersecurity? Do you
> think this regulation will improve cybersecurity for your operations?
> What are the opportunities and pitfalls you can envision?
>
>

What about regulating standards? What is the point of regulating open source 
when companies like Apple and Microsoft sabotage third-party 
software/connectivity by not implementing software according to standards? 
Their upgrades miraculously only break third parties' implementations and not 
their own.
Think e.g. of auto-provisioning.




Re: localhost name lookup

2025-01-14 Thread Robert Wagner
All,
I wanted to better understand the use case of having a DNS server provide 
localhost lookups. I think every OS has a hosts file with localhost set to 
127.0.0.1. That gives an instantaneous resolution for localhost, rather than 
going through the process of setting up a network connection, or worse (a TCP 
socket with TLS).
Offhand, having a DNS server resolve this seems like unnecessary traffic.
I would be interested in the timing difference between having curl.localhost in 
the hosts file versus on your DNS server.
Keeping it in the hosts file may also allow your localhost resolution and 
services to continue should something prevent you from reaching the DNS server 
(or cause network delays), thus improving uptime.
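For anyone curious, a rough comparison could be as simple as the following 
(the resolver address is whatever the host actually uses):

time getent hosts curl.localhost
time dig +short curl.localhost @127.0.0.1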

From: bind-users  on behalf of Eric 

Sent: Sunday, January 12, 2025 9:39 PM
To: Lee 
Cc: bind-users@lists.isc.org 
Subject: Re: localhost name lookup


I did, but my thought would be it's up to the dns admin to define those zone 
configurations as you have done. I may be wrong though.



Jan 12, 2025 6:36:03 PM Lee :

> On Sun, Jan 12, 2025 at 5:15 PM Eric wrote:
>>
>> That means the 'domain' is reserved and can be used locally. It doesn't 
>> specify that all records in that namespace / domain will resolve to 
>> 127.0.0.1.
>>
>> Think of it like .com
>>
>> If you want every A record in *.localhost to resolve to 127.0.0.1 what you 
>> did will do that.
>
> Did you look at the RFC?
>
>4.  Caching DNS servers SHOULD recognize localhost names as special
>and SHOULD NOT attempt to look up NS records for them, or
>otherwise query authoritative DNS servers in an attempt to
>resolve localhost names.  Instead, caching DNS servers SHOULD,
>for all such address queries, generate an immediate positive
>response giving the IP loopback address...
>
>5.  Authoritative DNS servers SHOULD recognize localhost names as
>special and handle them as described above for caching DNS
>servers.
>
> So OK.. SHOULD isn't the same as MUST so bind as configured isn't
> violating that RFC.  But is there a _good_ reason to not follow the
> SHOULD recommendation?
>
> Thanks,
> Lee
>
>>
>> Jan 12, 2025 4:38:09 PM Lee:
>>
>>> Excuse my ignorance, but
>>>
>>> https://datatracker.ietf.org/doc/html/rfc6761#section-6.3
>>>
>>>The domain "localhost." and any names falling within ".localhost."
>>>are special in the following ways:
>>>
>>> sure seems to mean that if I lookup curlmachine.localhost I should get
>>> a 127.0.0.1 or ::1 address returned.  Correct?
>>>
>>> I had to change my db.local file to
>>>
>>> $ cat db.local
>>> ;
>>> ; BIND data file for local loopback interface
>>> ;
>>> $TTL    604800
>>> @   IN  SOA localhost. root.localhost. (
>>>   3 ; Serial
>>>  604800 ; Refresh
>>>   86400 ; Retry
>>> 2419200 ; Expire
>>>  604800 )   ; Negative Cache TTL
>>> ;
>>> @   IN  NS  localhost.
>>> @   IN  A   127.0.0.1
>>> @   IN  AAAA    ::1
>>>
>>> *   IN  A   127.0.0.1
>>>     IN  AAAA    ::1
>>>
>>>
>>> to make localhost and curl.localhost work.
>>>
>>> Is this wrong?  and if so, why?
>>>
>>> TIA,
>>> Lee


Re: localhost name lookup

2025-01-14 Thread Robert Wagner
Looking at a Rocky 9 box...
ping localhost
ping squirrel.localhost
ping curl.localhost

all resolve to 127.0.0.1, with average responses of 0.043-0.047 ms each.  
Pinging another IP is around 10-20 times slower.
The /etc/hosts file contains:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

I don't know if that is unique to the Red Hat line of systems, or to version 9.  
It seems to have the wildcard behaviour built in.


RW


From: Lee 
Sent: Tuesday, January 14, 2025 10:48 AM
To: Robert Wagner 
Cc: bind-users@lists.isc.org 
Subject: Re: localhost name lookup


On Tue, Jan 14, 2025 at 6:56 AM Robert Wagner wrote:
>
> All,
> I wanted to better understand the use-case of having a DNS server provide 
> localhost lookup. I think every OS has a hosts file with localhost set for 
> 127.0.0.1. This is an instantaneous resolution for localhost, rather than 
> going through the process of setting of a network connection or worse (TCP 
> socket with TLS).
> Offhand, having a DNS server resolve this seems like unnecessary traffic.

Yes, it is.  But it happens sometimes.  What does your machine do with
a "ping zippy.localhost" ?

> I would be interested in the timing difference between having curl.localhost 
> in the hosts file versus your DNS server.
> This may also allow your localhost resolution and services to continue should 
> something prevent you from reaching the DNS server (or network delays) - thus 
> improving uptime.

I don't care about how long it takes .. all that much :)  I'm more
concerned with Doing The Right Thing and answering with a localhost
address for foo.bar.bax.localhost seems to be the right thing to do
(and isn't possible in the general case for /etc/hosts - or does it
allow wildcards now?)

The question came up here:
https://lists.privoxy.org/pipermail/privoxy-devel/2025-January/000801.html

It'd be nice to avoid things like

= > On my systems hostnames ending in .localhost resolve to 127.0.0.1 and ::1.
=
= On my system this isn't the case.  I first had to install
= systemd-resolved and point DNS to 127.0.0.53 instead of using the
= locally installed bind on 127.0.0.1.

Thanks
Lee


> 
> From: bind-users  on behalf of Eric 
> 
> Sent: Sunday, January 12, 2025 9:39 PM
> To: Lee 
> Cc: bind-users@lists.isc.org 
> Subject: Re: localhost name lookup
>
>
> I did, but my thought would be it's up to the dns admin to define those zone 
> configurations as you have done. I may be wrong though.
>
>
>
> Jan 12, 2025 6:36:03 PM Lee :
>
> > On Sun, Jan 12, 2025 at 5:15 PM Eric wrote:
> >>
> >> That is means that the 'domain' is reserved and can be used locally. It 
> >> doesn't specify all records in that namespace / domain will resolve to 
> >> 127.0.01.
> >>
> >> Think of it like .com
> >>
> >> If you want every A record in *.localhost to resolve to 127.0.0.1 what you 
> >> did will do that.
> >
> > Did you look at the RFC?
> >
> >4.  Caching DNS servers SHOULD recognize localhost names as special
> >and SHOULD NOT attempt to look up NS records for them, or
> >otherwise query authoritative DNS servers in an attempt to
> >resolve localhost names.  Instead, caching DNS servers SHOULD,
> >for all such address queries, generate an immediate positive
> >response giving the IP loopback address...
> >
> >5.  Authoritative DNS servers SHOULD recognize localhost names as
> >special and handle them as described above for caching DNS
> >servers.
> >
> > So OK.. SHOULD isn't the same as MUST so bind as configured isn't
> > violating that RFC.  But is there a _good_ reason to not follow the
> > SHOULD recommendation?
> >
> > Thanks,
> > Lee
> >
> >>
> >> Jan 12, 2025 4:38:09 PM Lee:
> >>
> >>> Excuse my ignorance, but
> >>>
> >>> https://datatracker.ietf.org/doc/html/rfc6761#section-6.3
> >>>
> >>>The domain "localhost." and any names falling within ".localhost."
> >>>are special in the following ways:
> >>>
> >>> sure seems to mean that if I lookup cu

Re: Bind and DHCP

2025-01-09 Thread Robert Wagner
I am not sure this was clear, but are you talking about DNS/DHCP for internal 
computers only, or DNS for both internal and external clients with DHCP for 
internal?  As mentioned below, your load (QPS) will probably determine whether 
you can support a single server.  For a small network serving fewer than a 
couple of hundred internal hosts it would be fine.  I assume at least a primary 
and a secondary for each service.

I don't think anyone will recommend serving external DNS and internal services 
like DHCP on the same box... That is just an accident waiting to happen.

Also think about the Confidentiality, Integrity and Availability triad.  A 
large network may also have separation of duties, and you may have different 
admins for each service (they don't want to reboot each other's services).  A 
DNS server may require high uptime, but a DHCP server should be able to sustain 
a little downtime.

Good luck,
RW

From: bind-users  on behalf of Fred Morris 

Sent: Wednesday, January 8, 2025 2:11 PM
To: Bind-users 
Subject: Re: Bind and DHCP


Good operational network design calls for network segmentation; proper
segmentation implies the functions of DDI to be technically (as opposed to
organizationally) managed by segment. This would include actual recursing
resolvers and DHCP services, not forwarders, at the segment edge.

A lot of people are invested in solutionism via centralization so this is
inherently controversial.

On Wed, 8 Jan 2025, Karol Nowicki via bind-users wrote:
> Does a good practice recommend to split running ISC Bind and DHCP into
> two different machines or make DNS+DHCP running on same server is
> allowed ?

What allows you do to the best job with logging, according to your
policies on observability?

--

Fred Morris



Re: Question about post-quantum X25519Kyber768

2025-01-02 Thread Robert Wagner

From my poke a few months back: stuff like PQC and NSA's Commercial Solutions 
for Classified settings needs to go through the RFC process, since both the DNS 
server and the DNS client need to be on the same page as to which cipher suites 
they agree on.

Around 10/16:

Robert, if you'd like to propose standardizing SHA-512 for use in DS records 
please propose this in an Internet Draft — there is a helpful page here: 
https://authors.ietf.org/en/home .

W



Robert Wagner



From: bind-users  on behalf of Carlos 
Horowicz via bind-users 
Sent: Thursday, January 2, 2025 7:32 AM
To: bind-users@lists.isc.org 
Subject: Question about post-quantum X25519Kyber768


Hi there,

does anyone know if the bind9 developers are thinking of incorporating 
post-quantum cryptography into bind9, like Cloudflare did with X25519Kyber768 
on BoringSSL?

I'm just curious about if there are thoughts or ongoing work, or if this
is in the near roadmap at all.

Thank you,

Carlos Horowicz
Avascloud/Planisys



Re: Just a suspicion for now: Memory leak in 9.20.4?

2025-02-13 Thread Robert Wagner
I am not sure we have a good howto on watching for memory leaks, but you could 
run something like "sudo pmap [pid]" and watch it over time (several days).  
Expect some fluctuation with load.  You may find a dependent library that has 
an issue.
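A minimal sketch of that kind of watch, assuming a Linux host where named runs 
as a single process:

# log RSS/VSZ and the pmap total once an hour; review after a few days
while true; do
    date
    ps -o rss=,vsz= -p "$(pidof named)"
    sudo pmap -x "$(pidof named)" | tail -n 1
    sleep 3600
done >> named-mem.log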

Others may have better tools that are commonly found on Linux.

Some additional tools you can download:
https://www.baeldung.com/linux/memory-leak-active-process


Robert Wagner


From: bind-users  on behalf of Ondřej Surý 

Sent: Thursday, February 13, 2025 4:33 AM
To: Borja Marcos 
Cc: bind-users 
Subject: Re: Just a suspicion for now: Memory leak in 9.20.4?


The increase could be for various reasons. The query pattern is different, the 
underlying database is different, the other data structures are different. 
Unless there’s unbounded growth (in the stats), or the cache memory goes over 
configured limit, there’s nothing to worry about.

Sometimes it is possible to have smaller and faster, sometimes the smaller even 
means faster, but there are times where faster means larger.

Ondrej
--
Ondřej Surý — ISC (He/Him)

My working hours and your working hours may be different. Please do not feel 
obligated to reply outside your normal working hours.

> On 13. 2. 2025, at 10:16, Borja Marcos via bind-users 
>  wrote:
>
> Hi,
>
> I am running 9.18.32 and 9.20.4 on FreeBSD. I have noticed that 9.20.4 is 
> using much more memory 24 hours since restarting them, despite the fact that 
> the 9.18.32 has a higher query load.
>
> Nothing substantial now, but I would like to confirm (or not) whether someone 
> else has observed something similar.
>
> Cheers,
>
>
>
>
> Borja.
>
>


Re: XoT Testing: TLS peer certificate verification failed

2025-02-27 Thread Robert Wagner

When validating a certificate, be sure to test in the context of the DNS 
service. So if your service runs under the BIND user, you may need to su to 
that user to test. This may help flush out issues where the permissions on the 
ca.crt file are set so that BIND cannot read it.

I don't know exactly what happens when you set TLS to strict, but I would think 
there is a way of trusting a self-signed certificate by adding it as a CA file.

You could expand your openssl commands to create a self-signed CA, then sign 
each certificate with that CA.  See easy-rsa as a way of testing this; you only 
need to add a few more openssl commands to your list.
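A minimal sketch with plain openssl, continuing the recipe quoted below (the 
"bind" user name and file locations are assumptions):

# can the service account actually read the files?
sudo -u bind head -n 1 /etc/bind/ca.crt

# create a small private CA
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -subj "/CN=xot-test-ca" \
    -keyout ca.key -out ca.crt

# sign the existing request.csr with that CA, keeping the SAN that strict
# verification needs (per the later messages in this thread)
openssl x509 -req -days 3650 -in request.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out certificate.crt \
    -extfile <(printf "subjectAltName=DNS:xot-test-primary.ops.nic.at")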

RW




From: bind-users  on behalf of Klaus Darilion 
via bind-users 
Sent: Thursday, February 27, 2025 11:10 AM
To: Greg Choules via bind-users 
Subject: XoT Testing: TLS peer certificate verification failed


Hi! I want to test XoT between Bind9.20.6 primary and secondary.



On the primary I created a self-signed certificate with 
CN=xot-test-primary.ops.nic.at and configured bind:



# Create a 10years valid self-signed certificate:

#   openssl genpkey -algorithm RSA -out private.key -pkeyopt 
rsa_keygen_bits:2048

#   openssl req -new -key private.key -out request.csr -subj 
"/CN=xot-test-primary.ops.nic.at"

#   openssl x509 -req -days 3650 -in request.csr -signkey private.key -out 
certificate.crt

#   openssl x509 -text -noout -in certificate.crt

#   chmod g+r private.key

#

# Create DH-params file to enable Diffie-Hellman Perfect Forward Secrecy:

#   openssl dhparam -out dhparam.pem 4096

#

# https://bind9.readthedocs.io/en/v9.20.6/reference.html#namedconf-statement-tls

tls xot-test {

cert-file "/etc/bind/certificate.crt";

dhparam-file "/etc/bind/dhparam.pem";

key-file  "/etc/bind/private.key";

};



options {

listen-on  { 193.46.106.51; };

listen-on-v6   { 2a02:850:1:4::51; };

listen-ontls xot-test  { 193.46.106.51; };

listen-on-v6 tls xot-test  { 2a02:850:1:4::51; };

};



That seems to work fine. Then I configured the secondary similar:

# Create a 10years valid self-signed certificate:

#   openssl genpkey -algorithm RSA -out private.key -pkeyopt 
rsa_keygen_bits:2048

#   openssl req -new -key private.key -out request.csr -subj 
"/CN=xot-test-secondary.ops.nic.at"

#   openssl x509 -req -days 3650 -in request.csr -signkey private.key -out 
certificate.crt

#   openssl x509 -text -noout -in certificate.crt

#   chmod g+r private.key

#

# Create DH-params file to enable Diffie-Hellman Perfect Forward Secrecy:

#   openssl dhparam -out dhparam.pem 4096

#

# https://bind9.readthedocs.io/en/v9.20.6/reference.html#namedconf-statement-tls

tls xot-test {

#ca-file   "/etc/bind/ca.crt";  # Activating ca-file force 
client-certificates for incoming TLS connections

cert-file "/etc/bind/certificate.crt";

dhparam-file "/etc/bind/dhparam.pem";

key-file  "/etc/bind/private.key";

#remote-hostname "xot-test-primary.ops.nic.at";

}; // may occur multiple times



zone "test.klaus" {

type secondary;

file "/var/cache/bind/test.klaus";  // Path to your zone file



primaries  {

  193.46.106.51key "tsig-key" tls xot-test;

  2a02:850:1:4::51 key "tsig-key" tls xot-test;

};



I copied the primary’s certificate.crt to the secondary as ca.crt.



Using opportunistic TLS, zone transfer works fine.



But if I enable strict TLS, either by uncommenting ‘ca-file’ or 
‘remote-hostname’ option, the TLS verification fails:



   transfer of 'test.klaus/IN' from 193.46.106.51#853: failed to connect: TLS 
peer certificate verification failed



But the setup on the primary looks fine. I can successfully open a TLS 
connection when using curl:

# curl -v https://xot-test-primary.ops.nic.at:853 --cacert ca.crt

* Host xot-test-primary.ops.nic.at:853 was resolved.

* IPv6: (none)

* IPv4: 193.46.106.51

*   Trying 193.46.106.51:853...

* Connected to xot-test-primary.ops.nic.at (193.46.106.51) port 853

* ALPN: curl offers h2,http/1.1

* TLSv1.3 (OUT), TLS handshake, Client hello (1):

*  CAfile: ca.crt

*  CApath: /etc/ssl/certs

* TLSv1.3 (IN), TLS handshake, Server hello (2):

* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):

* TLSv1.3 (IN), TLS handshake, Certificate (11):

* TLSv1.3 (IN), TLS handshake, CERT verify (15):

* TLSv1.3 (IN), TLS handshake, Finished (20):

* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):

* TLSv1.3 (OUT), TLS handshake, Finished (20):

* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS

* ALPN: server did not agree on a protocol. Uses default.

* Server certificate:

*  subject: CN=xot-test-primary.ops.nic.at

*  start date: Feb 27 14:02:56 2025 GMT

*  expire date: Feb 25 14:02:56 2035 

Re: XoT Testing: TLS peer certificate verification failed

2025-03-04 Thread Robert Wagner

I see this note and some examples on this page that include the DNS: option:

http://wiki.cacert.org/FAQ/subjectAltName

FAQ/subjectAltName (SAN)
What is subjectAltName ?
subjectAltName specifies additional subject identities, but for host names (and 
everything else defined for subjectAltName) :
subjectAltName must always be used (RFC 3280 4.2.1.7, 1. paragraph). CN is only 
evaluated if subjectAltName is not present and only for compatibility with old, 
non-compliant software. So if you set subjectAltName, you have to use it for 
all host names, email addresses, etc., not just the "additional" ones.
According to the standards commonName will be ignored if you supply a 
subjectAltName in the certificates, verified to be working in both the latest 
version of MS IE and Firefox (as of 2005/05/12)...


RW



From: bind-users  on behalf of Klaus Darilion 
via bind-users 
Sent: Tuesday, March 4, 2025 8:55 AM
To: Klaus Darilion ; Ondřej Surý 
Cc: bind-us...@isc.org 
Subject: RE: XoT Testing: TLS peer certificate verification failed


I think I have solved the mystery: BIND (or openssl, whoever does the 
validation) requires a Subject Alternative Name. Regardless of whether the 
hostname or the IP address is used, it must be in the subject alternative name. 
When using self-signed certificates, it is probably best to put both in the 
SAN. Using the following certificate on the server, the validation in dig works 
fine, regardless of whether the hostname or the IP address is used.



If somebody wants to test XoT, that might help bootstrapping:

openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes -keyout 
san-private.key -out san-certificate.crt -subj 
"/CN=xot-test-primary.ops.nic.at" -addext 
"subjectAltName=DNS:xot-test-primary.ops.nic.at,IP:193.46.106.51"



regards

Klaus





From: bind-users  On Behalf Of Klaus Darilion 
via bind-users
Sent: Tuesday, March 4, 2025 11:31 AM
To: Ondřej Surý 
Cc: bind-us...@isc.org
Subject: RE: XoT Testing: TLS peer certificate verification failed



In my case it should not be SNI relevant, as the server only has 1 certificate 
to present. Anyways, I will now test with a certificate that uses the IP 
address in the Subject CN.



Regards

Klaus



--

Klaus Darilion, Head of Operations

nic.at GmbH, Jakob-Haringer-Straße 8/V

5020 Salzburg, Austria



From: Ondřej Surý mailto:ond...@isc.org>>
Sent: Tuesday, March 4, 2025 10:05 AM
To: Klaus Darilion mailto:klaus.daril...@nic.at>>
Cc: bind-us...@isc.org
Subject: Re: XoT Testing: TLS peer certificate verification failed



Sounds like this: https://gitlab.isc.org/isc-projects/bind9/-/issues/3896

--

Ondřej Surý — ISC (He/Him)



My working hours and your working hours may be different. Please do not feel 
obligated to reply outside your normal working hours.



On 4. 3. 2025, at 10:01, Klaus Darilion via bind-users 
mailto:bind-users@lists.isc.org>> wrote:



Could it be that the validation is just broken? Even when using dig, and 
explicitly using the hostname of the primary (which uses its hostname in its 
certificate) in @... and tls-hostname, the verification fails due to a hostname 
mismatch:



# dig @xot-test-primary.ops.nic.at test.klaus +tls axfr +tls-ca=ca.crt 
+tls-hostname=xot-test-primary.ops.nic.at +tls-certfile=certificate.crt 
+tls-keyfile=private.key

;; TLS peer certificate verification for 193.46.106.51#853 failed: hostname 
mismatch





Regards

Klaus





From: Klaus Darilion
Sent: Thursday, February 27, 2025 5:11 PM
To: Greg Choules via bind-users 
mailto:bind-users@lists.isc.org>>
Subject: XoT Testing: TLS peer certificate verification failed



Hi! I want to test XoT between Bind9.20.6 primary and secondary.



On the primary I created a self-signed certificate with 
CN=xot-test-primary.ops.nic.at and configured bind:



# Create a 10years valid self-signed certificate:

#   openssl genpkey -algorithm RSA -out private.key -pkeyopt 
rsa_keygen_bits:2048

#   openssl req -new -key private.key -out request.csr -subj 
"/CN=xot-test-primary.ops.nic.at"

#   openssl x509 -req -days 3650 -in request.csr -signkey private.key -out 
certificate.crt

#   openssl x509 -text -noout -in certificate.crt

#   chmod g+r private.key

#

# Create DH-params file to enable Diffie-Hellman Perfect Forward Secrecy:

#   openssl dhparam -out dhparam.pem 4096

#

# https://bind9.readthedocs.io/en/