Good afternoon,

Thank you for your analysis, and my apologies for the slow reply.  I have some 
questions about your results, and then some additional information from my 
environment.

Your first packet capture (icmp.pkt) contained packet-too-big messages like 
this:

> 10.188.210.10 > 10.188.211.123: icmp: echo request (DF)
> 10.188.211.123 > 10.188.210.10: icmp: \
>     10.188.211.123 unreachable - need to frag (mtu 1480) (DF)

The source of the packet-too-big is the target of the ping (10.188.211.123) and 
not the PF router (10.188.210.50; not explicitly stated, but taken from your 
topology document).  So while a packet-too-big was received, it was not 
generated by the router itself.  (Is the link MTU between PF and the test box 
greater than 1500?)
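
If it helps, the link MTU is easy to confirm from either end; something along 
these lines (the interface name here is just an example):

    # configured MTU on the interface facing PF
    ifconfig em0 | grep mtu

    # 1472 data + 8 ICMP + 20 IP header = 1500 bytes on the wire, with DF
    # set; if sizes above this still get through un-fragmented, the link
    # MTU is greater than 1500
    ping -D -s 1472 10.188.211.123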

Your second packet capture (icmp-eco.pkt) contained packet-too-big messages 
like this:

> 10.188.210.10 > 10.188.212.52: icmp: echo request (DF)
> 10.188.211.51 > 10.188.210.10: icmp: \
>     10.188.212.52 unreachable - need to frag (mtu 1300)

Here again we have received a packet-too-big message, but it wasn't generated 
by the PF box but rather by RT (which has the 1300 MTU).  I'm interested in the 
case where the PF box is where the MTU shift occurs (due to the larger headers 
of IPv6), and so it must generate the error rather than just forwarding one 
from upstream.
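
(For reference: the IPv6 header is 20 bytes larger than the IPv4 header, so 
with a 1500-byte MTU on the IPv6 side the largest IPv4 packet that can be 
translated without fragmenting is 1500 - 20 = 1480 bytes.  That is why I would 
expect the translator itself to answer an oversized DF ping with "need to frag 
(mtu 1480)".)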

Thus, I'm not sure if PF is actually generating packet-too-big messages, as 
your tests don't show it.  This is consistent with my testing, as I am not 
seeing any messages from PF, and there are no other hosts that could generate 
them since the packets are too big to put on the wire.



Independent of this, I wanted to provide some additional information about my 
environment, as it is not as simple as the test environment.  Our setup makes 
use of rdomains, which I did not mention in the original ticket, but which I 
now realize makes for a different setup.  I'll try to define the topology:

PF box has three physical interfaces in use:

em0 (member of trunk0)
em1 (member of trunk0)
em2 (management interface)

em0/em1 are bonded using trunk(4) into interface trunk0

trunk0 is connected to an upstream switch with tagged VLANs enabled, so we 
create vlan(4) interfaces on top of trunk0

For the purposes of this bug, we will deal with a single vlan, vlan42

vlan42 has both an IPv4 and IPv6 address assigned to it.  Our intent is to use 
it as a kind of "hairpin CLAT"; IPv4 packets are received on vlan42's IPv4 
interface, af-translated by PF, and emitted back on vlan42 as IPv6 packets.  
The default router on vlan42 (not managed by OpenBSD) will forward the packets 
to our NAT64 box for eventual delivery.
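
In rough terms, the translation rule is of this shape (the addresses below are 
placeholders, not our real ones; 64:ff9b::/96 stands in for whatever prefix 
the NAT64 box answers for):

    # hairpin CLAT: IPv4 arrives on vlan42, is af-translated, and the
    # resulting IPv6 packet goes back out on vlan42 toward the default
    # router
    pass in on vlan42 inet from 192.0.2.0/24 to any \
        af-to inet6 from 2001:db8:42::2 to 64:ff9b::/96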

To isolate VLANs from each other, each vlan interface is put into its own 
rtable (42, in this case).
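
Concretely, the interface configuration looks roughly like this (addresses and 
the trunk protocol are placeholders; the real values differ):

    # /etc/hostname.trunk0
    trunkproto lacp
    trunkport em0
    trunkport em1
    up

    # /etc/hostname.vlan42  (rdomain set before the addresses)
    rdomain 42
    parent trunk0
    vnetid 42
    inet 192.0.2.1 255.255.255.0
    inet6 2001:db8:42::1 64
    up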

Under this setup, we do not see any ICMPv4 packet-too-big messages.  We have 
attempted packet captures on both vlan42 and on em2 (management), but have not 
seen any ICMP errors at all.  We even went so far as to add an IPv4 address 
and default route to em2, in case PF was sending them via the default rtable 
instead of the one assigned to the incoming interface, but saw nothing there 
either.
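
The captures were along these lines (the exact filter spelling may have 
differed):

    # watch for any ICMP unreachable on the vlan itself ...
    tcpdump -n -i vlan42 'icmp[icmptype] = icmp-unreach'

    # ... and on the management interface, in case the errors were being
    # routed via the default rtable
    tcpdump -n -i em2 'icmp[icmptype] = icmp-unreach'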

We have a "pass out" default rule in our pf.conf, so I do not believe we are 
preventing any generated packets from leaving the box.

I can provide full PF rules and network topology if necessary, but I think we 
should debug your test network case first to see whether PF will actually 
generate a packet-too-big message before we move on to anything more 
complicated.

Thanks,

Jason

-- 
.   Jason Healy                               Director of Technology
.   Suffield Academy                     Opatrny Chair in Technology
.
.                http://web.suffieldacademy.org/~jhealy
