Make sure (sub)domains served exclusively by dnsmasq are marked with local=/tier1.internal/. That prevents dnsmasq from forwarding queries for those names to the upstream nameserver, which very likely does not know them. As it stands, sshgw.tier1.internal has only an A record, so the AAAA query is forwarded upstream and times out there. In fact, make sure the whole .internal domain is stopped somewhere at your border and not forwarded to your ISP. IPv4 works better because those names are defined in dnsmasq, so it answers them itself; the AAAA record is not defined and is therefore forwarded. The timeout is also a problem on the ISP's side: it should respond with NXDOMAIN or REFUSED, but it should at least respond with something.
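A minimal sketch of the relevant dnsmasq.conf line, using the domain already in your config:

```
# Answer authoritatively for this zone and never forward it upstream.
# Names under tier1.internal that dnsmasq does not know get NXDOMAIN
# immediately, so AAAA queries no longer hang waiting for the ISP.
local=/tier1.internal/
```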

Try it with the "dig -t AAAA sshgw.tier1.internal" command.

You can do that with local=/internal/, or with auth-server=internal on recent versions. If you have multiple devices serving internal subdomains, make sure the one closest to the ISP's nameserver stops queries for them and answers authoritatively when a record does not exist.
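On the nameserver nearest the ISP, something like this (a sketch; adjust the domain, and check the dnsmasq man page for the exact auth-server/auth-zone syntax on your version) keeps all of .internal inside your network:

```
# Claim the whole .internal tree: answer what we know locally and
# return NXDOMAIN for the rest instead of forwarding to the ISP.
local=/internal/

# Alternative on recent dnsmasq: act as an authoritative server for
# the zone (auth-server usually goes together with auth-zone).
#auth-server=internal
#auth-zone=internal
```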

I think the general problem is that the client sends AAAA queries even when no IPv6 route exists on that machine. But that should be fixed in glibc.
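To illustrate the client side: Python's socket.getaddrinfo() is a thin wrapper over glibc's getaddrinfo() here, and the AI_ADDRCONFIG flag is what tells the resolver to skip address families the machine has no address for. A small sketch:

```python
import socket

# Without AI_ADDRCONFIG, glibc's getaddrinfo() queries both A and AAAA,
# even on a host with no IPv6 route at all.
both = socket.getaddrinfo("localhost", None, type=socket.SOCK_STREAM)
print({res[0] for res in both})

# With AI_ADDRCONFIG, address families with no configured address on
# this host are skipped, so an IPv4-only client would not ask for AAAA.
configured = socket.getaddrinfo(
    "localhost", None,
    type=socket.SOCK_STREAM,
    flags=socket.AI_ADDRCONFIG,
)
print({res[0] for res in configured})
```

Applications (or resolver libraries) that pass AI_ADDRCONFIG avoid the useless AAAA lookup; many do not, which is why the server side still needs to answer NXDOMAIN quickly.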

On 05/08/2024 23:25, Klaus Vink Slott via Dnsmasq-discuss wrote:
Hi. I am new to dnsmasq and do not really care about IPv6, as our ISP does not support it. I am trying to replace the built-in dhcp/dns in pfSense with dnsmasq on a separate machine. Currently there are 3 Linux hosts on this vlan, one with dnsmasq.

I have set up everything as I think it should work, but I am confused about how to configure the IPv6 part. For IPv4 everything seems fine: hosts get a fixed or dynamic IP address, and testing with the dig command on all hosts works perfectly:

localadm@dhcpdns:~> dig sshgw.tier1.internal +short
192.168.80.8
localadm@dhcpdns:~> dig -x 192.168.80.8 +short
sshgw.tier1.internal.

But when I try to use any internal address, everything takes ages. A test with the host command reveals:

localadm@dhcpdns:~> host sshgw.tier1.internal
sshgw.tier1.internal has address 192.168.80.8
;; communications error to 127.0.0.1#53: timed out
;; communications error to 127.0.0.1#53: timed out
;; no servers could be reached

;; communications error to 127.0.0.1#53: timed out
;; communications error to 127.0.0.1#53: timed out
;; no servers could be reached
host by default does A, AAAA and MX queries for the name, unless you use -t A explicitly. It probably means dig -t AAAA and dig -t MX time out; only -t A works as expected. You just need to ensure something answers authoritatively that such a record does not exist.

It seems that the Linux host is not satisfied with the first result and continues to look up an IPv6 address. I have tried different setups and would like dnsmasq to return some kind of "f... off - no ipv6 here". But if I could get it to return the real local ipv6 address for the target, that would be all right too.

But I have no clue why this happens with the current settings:

localadm@dhcpdns:~> grep -v '^#' /etc/dnsmasq.conf | sed '/^$/d'
domain-needed
bogus-priv
resolv-file=/etc/dnsmasq.d/dnsmasq.forward
server=/busene.dk/192.168.225.1
server=/rstd.internal/192.168.225.1
expand-hosts
domain=tier1.internal
dhcp-range=set:direct,192.168.80.36,192.168.80.131,12h
dhcp-range=::f,::ff,constructor:eth0
dhcp-host=00:50:56:b5:ee:27,dhcpdns,192.168.80.4
dhcp-host=00:50:56:b5:e5:7a,sshgw,192.168.80.8
dhcp-option=tag:direct,option:router,192.168.80.1
dhcp-option=tag:direct,option:ntp-server,192.168.80.1
dhcp-option=tag:direct,option:dns-server,192.168.80.4
dhcp-authoritative
conf-dir=/etc/dnsmasq.d/,*.conf
Unless the upstream nameservers in dnsmasq.forward know .internal, add local=/internal/ and that should fix it. It instructs dnsmasq to answer that anything under .internal it does not know about does not exist.

localadm@dhcpdns:~> cat /etc/dnsmasq.d/dnsmasq.forward
search tier1.internal
nameserver 80.71.82.83
nameserver 80.71.82.82

I have tried different IPv6-related settings for dhcp-range= but it does not seem to make any difference.

Hosts interface:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b5:ee:27 brd ff:ff:ff:ff:ff:ff
    altname enp11s0
    altname ens192
    inet 192.168.80.4/24 brd 192.168.80.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:feb5:ee27/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

I guess it is most likely down to the setup on the clients (openSUSE). But as I plan to roll out a lot of clients, I would like to be able to keep the default setup. And when I was using the built-in DNS in pfSense I had no problems like this.

Any ideas?

--
Petr Menšík
Software Engineer, RHEL
Red Hat, https://www.redhat.com/
PGP: DFCF908DB7C87E8E529925BC4931CA5B6C9FC5CB


_______________________________________________
Dnsmasq-discuss mailing list
Dnsmasq-discuss@lists.thekelleys.org.uk
https://lists.thekelleys.org.uk/cgi-bin/mailman/listinfo/dnsmasq-discuss
