On Tuesday, September 05, 2017 20:44:49 Andrey V. Elsukov wrote:
> On 05.09.2017 20:09, Andrey V. Elsukov wrote:
> $ ping6 fe80:::4013:23::2%lagg0
> ping6: UDP connect: Network is unreachable
> >>>
> >>> Hmm. Can you show the second word of the address in this example?
> >>> Is it not zero? I.e., is fe80:: correct, or did you miss the '::' part?
> >>>
> >> Correct, ne
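Andrey's question goes to the heart of it: `fe80:::4013:23::2` contains a triple colon as well as a second `::`, and RFC 4291 permits `::` at most once in an address literal, so ping6 fails while resolving the destination, before any packet is sent. A quick portable check (the helper name is mine, not from the thread):

```shell
# Count "::" runs in an IPv6 literal. RFC 4291 permits "::" at most once,
# so a count above one means the address can never be valid.
count_double_colons() {
    printf '%s\n' "$1" | grep -o '::' | wc -l | tr -d ' \t'
}

addr='fe80:::4013:23::2'        # the address from the ping6 above
if [ "$(count_double_colons "$addr")" -gt 1 ]; then
    echo "invalid: '$addr' uses '::' more than once"
fi
```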
On 05.09.2017 18:25, Greg Rivers wrote:
> dtrace saw nothing, yet tcpdump recorded what one would expect. Apparently
> the inbound RAs and NSs are not making it through to the IPv6 stack. At this
> point I suspect a bug in the Emulex oce(4) driver, or a bad interaction
> between oce(4) and lagg(4).
On Tuesday, September 05, 2017 13:51:32 Andrey V. Elsukov wrote:
> You can try to use dtrace to detect that RA is received by IPv6 stack.
> # kldload dtraceall
> # dtrace -n 'fbt::nd6_ra_input:entry {m = (struct mbuf *)arg0; ip6 =
> (struct ip6_hdr *)m->m_data; printf("RA from %s received on %s",
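Andrey's one-liner is cut off above. A plausible completion of the probe body, assuming FreeBSD DTrace's inet_ntoa6() subroutine and the mbuf's receive-interface pointer (this is a reconstruction, not the original text):

```
# dtrace -n 'fbt::nd6_ra_input:entry {
    m = (struct mbuf *)arg0;
    ip6 = (struct ip6_hdr *)m->m_data;
    printf("RA from %s received on %s",
        inet_ntoa6(&ip6->ip6_src),
        stringof(m->m_pkthdr.rcvif->if_xname));
}'
```

Run as root on FreeBSD; if the probe fires, the RA is reaching nd6_ra_input() in the IPv6 stack.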
On 05.09.2017 00:20, Greg Rivers wrote:
> Thanks. Trying your same experiment, I do get output for duplicate detection,
> though it doesn't include the interface identifier or a check for the auto
> generated link-local address (maybe you're running -CURRENT?):
Yes, it is CURRENT, but auto gener
On Monday, September 04, 2017 13:22:16 Andrey V. Elsukov wrote:
> On 03.09.2017 09:20, Greg Rivers wrote:
> > Aside from ruling out the MTU option in the RAs as the cause, I've
> > made little progress on finding the problem.
> >
> > Can anyone explain the use of net.inet6.icmp6.nd6_debug=1 for NDP
> > debugging? I get no log output at all.
> I just tried:
> # sysctl net.ine
Aside from ruling out the MTU option in the RAs as the cause, I've made little
progress on finding the problem.
Can anyone explain the use of net.inet6.icmp6.nd6_debug=1 for NDP debugging? I
get no log output at all.
--
Greg Rivers
https://lists.freebsd.org/pipermail/freebsd-stable/2017-Augu
On Wednesday, August 09, 2017 17:41:47 Hiroki Sato wrote:
> Greg Rivers wrote
> in <2045487.fzlpjxt...@flake.tharned.org>:
>
> gc> > 2. What is shown by the command "ping6 ff02::1%lagg0" and
> gc> > "rtsol -dD lagg0"?
> gc> >
> gc> $ ping6 -c 2 ff02::1%lagg0
> gc> PING6(56=40+8+8 bytes) fe80::ae16:2dff:fe1e:b880%lagg0 --> ff02::1%lagg0
> gc> 16 bytes from fe80::ae16:2dff:f
On Wednesday, August 09, 2017 16:26:50 Hiroki Sato wrote:
> The configuration looks correct to me, but two questions:
>
> 1. Does "sysctl net.inet6.ip6.forwarding" command show "0"?
>
$ sysctl net.inet6.ip6.forwarding
net.inet6.ip6.forwarding: 0
> 2. What is shown by the command "ping6 ff02::1
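Hiroki's first question matters because a FreeBSD machine with net.inet6.ip6.forwarding=1 acts as a router, and routers by default do not perform SLAAC from received RAs. A toy decision helper capturing that rule (the function and its wording are mine, not from the thread):

```shell
# Host vs. router behaviour for RA processing, keyed on the sysctl value.
slaac_expected() {
    # $1 = value of net.inet6.ip6.forwarding
    if [ "$1" -eq 0 ]; then
        echo "host mode: RAs may be accepted"
    else
        echo "router mode: RAs ignored by default"
    fi
}

slaac_expected 0                # the value Greg reports above
```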
Greg Rivers wrote
in <1557648.bebeymq...@flake.tharned.org>:

gc> On Monday, August 07, 2017 15:57:04 Andrey V. Elsukov wrote:
gc> > So, set net.inet6.icmp6.nd6_debug=1 and show what you have in the
gc> > ndp -p
gc> > ndp -r
gc> > ndp -i lagg0
gc> >
gc> # sysctl net.inet6.icmp6.nd6_debug=1
gc> net.inet6.icmp6.nd6_debug: 0 -> 1
gc> # suspend
gc> [1] + Stopped (SIGSTOP)        su -
gc> $ ndp -p
gc> fe80::
On 07.08.2017 15:30, Andrey V. Elsukov wrote:
> On 06.08.2017 06:35, Greg Rivers wrote:
>> The running interface looks like this:
>> lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>> options=507bb
>> ether ac:16:2d:1e:b8:80
>> inet xxx.xxx.217.100 netmask 0xffffff80 broadcast xxx.xxx.217.127
>> inet6 fe80::ae16:2dff:fe1e:b880%lagg0 pr
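As a sanity check on the ifconfig output above: a /25 netmask (0xffffff80) on host .100 does put the broadcast at .127. The last-octet arithmetic (the helper name is mine):

```shell
# Broadcast = (address AND mask) OR (NOT mask), computed per octet.
bcast_last_octet() {
    # $1 = last octet of the address, $2 = last octet of the netmask
    echo $(( ($1 & $2) | (~$2 & 255) ))
}

bcast_last_octet 100 128        # 128 = 0x80, the low octet of a /25 mask
```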
On Saturday, August 05, 2017 21:37:35 Ultima wrote:
> Never tested SLAAC on lagg, but your configuration looks correct to me. One
> flag that I notice that looks questionable is the MTU option being 9216
> while the lagg interface is set to 1500. I doubt this would cause SLAAC to
> fail, but it may be worth investigating.
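Ultima's instinct matches RFC 4861 §6.3.4: a host only adopts an advertised MTU that is at least the IPv6 minimum (1280) and no larger than what the link can carry, so a 9216 option on a 1500-byte lagg should simply be ignored rather than break SLAAC. A sketch of that acceptance test (the function name is mine):

```shell
# RFC 4861 §6.3.4: ignore an RA MTU option below the IPv6 minimum (1280)
# or above the interface MTU.
ra_mtu_usable() {
    # $1 = MTU from the RA option, $2 = interface MTU
    if [ "$1" -ge 1280 ] && [ "$1" -le "$2" ]; then
        echo yes
    else
        echo no
    fi
}

ra_mtu_usable 9216 1500         # the values from this thread
```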
On Saturday, August 05, 2017 20:44:56 Ultima wrote:
> Do you have pf or ipfw running? Are they accepting ICMPv6 types 128, 129,
> 135, and 136? The first two are for ping (echo request and reply); the last
> two are for Neighbor Solicitation/Advertisement.
>
Ah, good question. Neither host is running a firewall; I should have mention
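For reference, the four ICMPv6 types Ultima names, per RFC 4443 (echo) and RFC 4861 (Neighbor Discovery), sketched as a lookup (the helper name is mine):

```shell
# ICMPv6 type numbers -> names, per RFC 4443 (echo) and RFC 4861 (ND).
icmp6_type_name() {
    case "$1" in
        128) echo "echo request" ;;
        129) echo "echo reply" ;;
        135) echo "neighbor solicitation" ;;
        136) echo "neighbor advertisement" ;;
        *)   echo "unknown" ;;
    esac
}

for t in 128 129 135 136; do
    printf '%s: %s\n' "$t" "$(icmp6_type_name "$t")"
done
```

Had a firewall been in play, ipfw can pass this traffic with its icmp6types keyword, e.g. `allow ipv6-icmp from any to any icmp6types 135,136` (an illustrative rule, not taken from this thread).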
On Sat, Aug 5, 2017 at 8:35 PM, Greg Rivers wrote:
I have a couple of hosts on different networks running 11.1-RELEASE amd64.
Neither host will auto-configure its IPv6 address, even though valid router
advertisements[1] are present. Both hosts have two oce(4) interfaces aggregated
in fail-over mode via lagg(4). The lagg interface is configured t
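For context, a fail-over lagg with SLAAC on FreeBSD is typically set up along these lines (a hypothetical rc.conf sketch; the interface names and the use of DHCP for IPv4 are assumptions, not Greg's actual configuration):

```
ifconfig_oce0="up"
ifconfig_oce1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport oce0 laggport oce1 DHCP"
ifconfig_lagg0_ipv6="inet6 accept_rtadv"
```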