On 29 July 2018 at 15:35, Samuel Thibault <samuel.thiba...@ens-lyon.org> wrote:
> From: Andrew Oates <aoa...@google.com>
>
> On Linux, SOCK_DGRAM+IPPROTO_ICMP sockets give only the ICMP packet when
> read from.  On macOS, however, the socket acts like a SOCK_RAW socket
> and includes the IP header as well.
>
> This change strips the extra IP header from the received packet on macOS
> before sending it to the guest.
>
> Signed-off-by: Andrew Oates <aoa...@google.com>
> Signed-off-by: Samuel Thibault <samuel.thiba...@ens-lyon.org>
> ---
>  slirp/ip_icmp.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/slirp/ip_icmp.c b/slirp/ip_icmp.c
> index 0b667a429a..6316427ed3 100644
> --- a/slirp/ip_icmp.c
> +++ b/slirp/ip_icmp.c
> @@ -420,7 +420,21 @@ void icmp_receive(struct socket *so)
>      icp = mtod(m, struct icmp *);
>
>      id = icp->icmp_id;
> -    len = qemu_recv(so->s, icp, m->m_len, 0);
> +    len = qemu_recv(so->s, icp, M_ROOM(m), 0);
> +#ifdef CONFIG_DARWIN
> +    if (len >= sizeof(struct ip)) {
> +        /* Skip the IP header that OS X (unlike Linux) includes. */
> +        struct ip *inner_ip = mtod(m, struct ip *);
> +        int inner_hlen = inner_ip->ip_hl << 2;
> +        if (inner_hlen > len) {
> +            len = -1;
> +            errno = -EINVAL;
> +        } else {
> +            len -= inner_hlen;
> +            memmove(icp, (unsigned char *)icp + inner_hlen, len);
> +        }
> +    }
> +#endif
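[Editor's note: outside the patch context, the header-stripping logic above can be sketched as a standalone helper. This is a simplified illustration, not the qemu/slirp code; the function name is hypothetical, and it uses a plain `errno = EINVAL` (positive, per convention) where the patch as posted assigns `errno = -EINVAL`.]

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Strip a leading IPv4 header from buf in place, as the patch does for
 * macOS.  The low nibble of the first byte of an IPv4 header is the
 * header length in 32-bit words.  Returns the new payload length, or
 * -1 with errno set if the claimed header length exceeds the packet.
 * Hypothetical helper, for illustration only. */
static int strip_ip_header(unsigned char *buf, size_t len)
{
    size_t hlen;

    if (len < 20) {
        /* Shorter than a minimal IPv4 header: nothing to strip,
         * mirroring the patch's len >= sizeof(struct ip) guard. */
        return (int)len;
    }
    hlen = (size_t)(buf[0] & 0x0f) << 2;
    if (hlen > len) {
        errno = EINVAL;  /* note: positive errno, unlike the patch */
        return -1;
    }
    memmove(buf, buf + hlen, len - hlen);
    return (int)(len - hlen);
}
```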
I think it's generally preferable to avoid per-OS ifdefs -- is this
really OSX specific and not (for instance) also applicable to the
other BSDs?  Is there some other (configure or runtime) check we
could do to identify whether this is required?

For instance the FreeBSD manpage for icmp(4)
https://www.freebsd.org/cgi/man.cgi?query=icmp&apropos=0&sektion=0&manpath=FreeBSD+11.2-RELEASE&arch=default&format=html
says "incoming packets are received with the IP header and options
intact", and I would be unsurprised to find that all the BSDs behave
the same way here.

thanks
-- PMM
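[Editor's note: one runtime alternative to an ifdef, in the spirit of the question above, would be a heuristic check on the received buffer itself. This is a hypothetical sketch the thread did not settle on: it relies on the fact that an IPv4 header starts with version nibble 4 (first byte 0x45-0x4f), while an ICMP echo reply starts with type 0, so the first byte distinguishes the two cases for this socket type.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Heuristic: does this buffer begin with a plausible IPv4 header
 * rather than an ICMP message?  For an ICMP echo reply the first byte
 * is the type field (0), which can never match the 0x4X first byte of
 * an IPv4 header.  Hypothetical helper, for illustration only. */
static bool starts_with_ipv4_header(const unsigned char *buf, size_t len)
{
    size_t hlen;

    if (len < 20) {
        return false;           /* too short for a minimal IPv4 header */
    }
    if ((buf[0] >> 4) != 4) {
        return false;           /* IP version nibble is not 4 */
    }
    hlen = (size_t)(buf[0] & 0x0f) << 2;
    return hlen >= 20 && hlen <= len;
}
```

Whether such a check is robust enough for all ICMP message types the socket can deliver is exactly the kind of question a configure-time probe would avoid, which may be why the review asks about both options.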