Stuart Henderson wrote:
> Currently iked (and isakmpd) use flows, not routes. These use messages
> on the PF_KEY socket not the route socket. (If I watch route -nv monitor
> while iked starts and brings up tunnels, I don't see any messages).
>
> IIUC the parts you found which currently exist are for the "iface"
> option which tells an iked initiator (a machine, typically but not
> necessarily mobile) to use mode-config to fetch an address, and
> configure it on an interface.
>
> There is a diff for route-based IPsec which does use the route table
> (https://marc.info/?l=openbsd-tech&m=168844868110327&w=2), but I think
> if you were using that, you could just look for messages relating to
> sec(4) interfaces anyway and not have to worry about labels?

Oops, you are correct. I was doing my initial testing on the client
(since it's my primary desktop) and didn't look closely at the
routes that were getting set up.

The good news, though, is that now I can get away without route labels
even before the sec(4) code gets merged. Since the flows visible on
the PF_KEY socket describe only the IPsec tunnels, I can apply the
following reasoning: for this use case, any flow (not route) for which
one side is a single host address and the other side is "::" (meaning
"any") is one whose address needs to be proxied. So now my monitoring
program needs to watch two sockets instead of one, but I don't have to
make any changes to iked to support it.
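
To convince myself this is workable, here is the rough shape of the
PF_KEY half of the monitor. It is an untested sketch that has to run
as root; it assumes pfkeyv2(4)'s SADB_X_PROMISC toggle and the
SADB_X_ADDFLOW/SADB_X_DELFLOW message types behave the way I think
they do (this is more or less what "ipsecctl -m" relies on), and it
only prints what a real version would act on:

/*
 * Untested sketch: watch the PF_KEY socket for flow changes.
 * Assumes the kernel copies other processes' PF_KEY traffic to us
 * once SADB_X_PROMISC is enabled, and that iked's flows show up as
 * SADB_X_ADDFLOW/SADB_X_DELFLOW messages.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <net/pfkeyv2.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
        union {
                struct sadb_msg hdr;
                uint8_t         data[8192];
        } buf;
        struct sadb_msg promisc, *msg;
        ssize_t n;
        int fd;

        if ((fd = socket(PF_KEY, SOCK_RAW, PF_KEY_V2)) == -1)
                err(1, "socket");

        /* Ask the kernel to copy every PF_KEY message to this socket. */
        memset(&promisc, 0, sizeof(promisc));
        promisc.sadb_msg_version = PF_KEY_V2;
        promisc.sadb_msg_type = SADB_X_PROMISC;
        promisc.sadb_msg_satype = 1;                 /* nonzero = enable */
        promisc.sadb_msg_len = sizeof(promisc) / 8;  /* 64-bit words */
        promisc.sadb_msg_pid = getpid();
        if (write(fd, &promisc, sizeof(promisc)) != sizeof(promisc))
                err(1, "SADB_X_PROMISC");

        for (;;) {
                if ((n = read(fd, &buf, sizeof(buf))) == -1)
                        err(1, "read");
                if ((size_t)n < sizeof(*msg))
                        continue;
                msg = &buf.hdr;
                /* Forwarded messages may be wrapped in a promisc header. */
                if (msg->sadb_msg_type == SADB_X_PROMISC &&
                    (size_t)n >= 2 * sizeof(*msg))
                        msg++;
                if (msg->sadb_msg_type != SADB_X_ADDFLOW &&
                    msg->sadb_msg_type != SADB_X_DELFLOW)
                        continue;
                /*
                 * A real version would walk the sadb_ext chain that
                 * follows the header, pull out the SADB_X_EXT_SRC_FLOW,
                 * SADB_X_EXT_DST_FLOW and matching *_MASK addresses, and
                 * only act when one side is a single host and the other
                 * side is "::".
                 */
                printf("%s flow, %zd bytes\n",
                    msg->sadb_msg_type == SADB_X_ADDFLOW ? "add" : "delete",
                    n);
        }
}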


> If you're on a low end VPS that is not already doing filtering like
> this, you may find that later on they change to implement that, when
> they figure out why the L3 switch they're using as a router is
> running out of steam when handling junk traffic...

As Zack may have figured out earlier, I am in fact using Vultr. Their
IPv6 guide for OpenBSD says they use routeradv, neighbrsol, and
neighbradv ICMPv6 messages to identify the active IP addresses, and it
explicitly permits adding extra IPv6 addresses to the same interface.
Source:

https://www.vultr.com/docs/configuring-ipv6-on-your-vps/#IPv6_on_OpenBSD

The consensus in this thread seems to be that Vultr is doing it wrong
and their bad decisions will cause problems both for them (because
their gateway router is susceptible to NDP cache exhaustion from
external scans even if I don't do anything creative with my host
configuration) and for me (because I've spent the last week in a
fifteen-email exchange figuring out how to make do with a /64 instead
of using that time to figure out how to configure dhcpcd and rad to
make use of the /56 that is my IETF birthright). I won't try to defend
Vultr's IPv6 choices here; I can only say that since they're
explicitly telling me to use NDP to request IPv6 addresses, they won't
change that behavior behind my back without telling me.


>> Since my proposal to have iked enable NDP proxying itself failed to
>> gain traction I looked into other options (that don't involve
>> requesting a larger subnet from my ISP and VPS providers).
>
> I think you're thinking of having a subnet routed across as something
> that is going to cause more pain for the upstream provider.
>
> The reality is the opposite.
>
> If the provider doesn't understand this, there are probably a few
> other things they don't understand too, it would be a bit of a red
> flag for me.
>
> Yes there are a huge number of addresses in a /64, but really a /64
> is what providers are expected to assign where they would assign an
> individual address for IPv4.
>
> For a situation where you'd have a couple of addresses with v4,
> with v6 it's really normal to have a /56 or /48.

In Vultr's defense, they did only assign me a single IPv4 address. You
would probably counter, "Yes, but if you want your VPN connection to
support IPv4 when the server has a single address then you have to use
NAT, which you can also do with IPv6. If you want to avoid using NAT,
then for IPv4 you should request another address and for IPv6 you
should request a larger subnet." And to *that* I will admit that Vultr
has an option to buy additional IPv4 addresses but doesn't have an
option to increase the size of your IPv6 subnet. Presumably this is
where you and Zack would both say, "Exactly, that's the problem."

So OK, the final conclusions here seem to be:

- Vultr is bad for not offering a way to allocate and statically route
  a larger subnet. (Note that it would also work for me if they
  statically routed the entire /64 to my VPS instead of filtering it in
  their gateway router, but they don't offer a way to do that either.)

- The NDP proxy trick is too hacky to justify making changes to iked to
  make it easier to implement.

- I can still automate NDP proxying by creating a second process that
  monitors the PF_KEY socket for changes to the "flow table" to get the
  tunnel addresses so it can enable and disable NDP proxying when
  clients connect and disconnect. Even better, I should be able to do
  this without any changes to iked. It'll be a clumsy affair (in my
  heart I still think an "ndp-proxy" keyword in iked would be less
  clumsy though I understand why the iked maintainers don't want to
  get stuck having to support it forever) but it seems fundamentally
  doable, and I think I know enough now to do it in C instead of as a
  bulky shell script that runs continuously in the background.
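
To make that last point a bit more concrete: the part that actually
toggles the proxy entries could start out as dumb as shelling out to
ndp(8) with the same arguments that work by hand, and grow something
smarter (talking to the routing socket directly) later. Another
untested sketch, where the MAC address is a placeholder for the server
interface's address:

/* Untested sketch: toggle an NDP entry for one client tunnel address. */
#include <sys/types.h>
#include <sys/wait.h>

#include <err.h>
#include <stdlib.h>
#include <unistd.h>

#define SERVER_MAC "00:00:00:00:00:00"   /* placeholder: the vio0 MAC */

static void
set_ndp_proxy(const char *addr6, int enable)
{
        char *argv_add[] = { "ndp", "-s", (char *)addr6, SERVER_MAC, NULL };
        char *argv_del[] = { "ndp", "-d", (char *)addr6, NULL };
        char **argv = enable ? argv_add : argv_del;
        int status;
        pid_t pid;

        switch ((pid = fork())) {
        case -1:
                err(1, "fork");
        case 0:
                execvp("ndp", argv);
                err(1, "execvp");
        default:
                if (waitpid(pid, &status, 0) == -1)
                        err(1, "waitpid");
                if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
                        warnx("ndp %s %s failed",
                            enable ? "-s" : "-d", addr6);
        }
}

int
main(int argc, char *argv[])
{
        if (argc != 2)
                errx(1, "usage: %s client-ipv6-address", getprogname());
        /* The real monitor would call this on ADDFLOW and DELFLOW. */
        set_ndp_proxy(argv[1], 1);
        return 0;
}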


Thanks, everyone, for all of the useful information---especially Tobias
for quickly spotting my missing enc interface in rdomain 1, and Stuart
for giving me the name "NDP proxying" for what I was trying to do, and
for pointing me at the PF_KEY socket which provides the flow
information I need. I also do appreciate the warnings that this is the
wrong approach to take, even though architecturally it seems
straightforward enough. (I stand by my arguments earlier that this
isn't *that* hacky, but maybe in the future when I switch to a VPS
that gives me more subnets I will look back on this exercise as a
whole bunch of wasted effort.)

For anyone who is reading this thread in the future and wondering if
there is an easier way to tunnel a single host into a network that has
only a /64 subnet to work with and a router that learns which IP
addresses are in use only through NDP, there are two other options I
could have gone with here:

- Make iked assign a static IP address to the connecting client
  (which is feasible because I only have a few computers that will
  ever connect as clients) and then add

        !ndp -s ${STATIC_IP} ${SERVER_MAC_ADDRESS}

  to the end of /etc/hostname.vio0 on the server. This approach frees
  me from having to write a separate process, but has the disadvantage
  that I won't get a new IP address if I reset my VPN connection,
  which feels like it partially defeats the privacy-related purpose of
  using a VPN in the first place. (A rough iked.conf sketch of the
  static assignment appears after this list.)

- Set up a layer-2 tunnel instead of a layer-3 one (that is, tunnel
  Ethernet frames instead of IP packets). One downside here would be
  additional network overhead, because the tunneled frames would all
  have Ethernet headers. Another downside is that there would be more
  potential compatibility issues if I tried to connect with non-OpenBSD
  clients, since there are more protocols at work. But the upsides are
  that it's architecturally clean (being logically equivalent to just
  plugging the client computer's Ethernet cable into the server's
  Ethernet network) and there would be no NDP issues because the NDP
  messages would travel up and down the tunnel so the client could
  handle them automatically. This approach appears to be documented in
  the etherip(4) man page, but I haven't actually tried to get it
  working.
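
For what it's worth, here is roughly how I imagine the per-client
static assignment from the first option would look on the server's
iked.conf side. The policy name, IDs, and addresses are placeholders,
and I haven't actually tested this:

        ikev2 "laptop" passive esp \
                from ::/0 to dynamic \
                peer any \
                srcid server.example.org dstid laptop.example.org \
                config address 2001:db8:aaaa:bbbb::100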
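
And for the second option, my reading of etherip(4) and bridge(4)
suggests the server side would look something like the following
(also untested, with placeholder tunnel endpoints; the EtherIP traffic
itself would still need to be protected by an IPsec flow, or
explicitly allowed with the net.inet.etherip.allow sysctl):

        ifconfig etherip0 create
        ifconfig etherip0 tunnel 192.0.2.1 198.51.100.2 up
        ifconfig bridge0 create
        ifconfig bridge0 add vio0 add etherip0 up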

Thanks again, everyone, and I hope you have a nice week.

Anthony Coulter
