On Thu, Sep 12, 2024 at 06:16:18PM +0100, Sad Clouds wrote:
> Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
> single physical network interface, so I followed instructions for
> networking vnet jails via epair and bridge, e.g.
(snip)
> The issue is that bulk TCP throughput between this jail and the host
> is quite poor, with one CPU spinning at 100% in the kernel and the
> others sitting mostly idle.
> It seems there is some lock contention somewhere, but I'm not sure if
> this is around the vnet, epair or bridge subsystems. Are there
> other alternatives for vnet jails? Can anyone recommend specific
> deployment scenarios? I've seen references to netgraph, which could be
> used with jails. Does it have better performance and scalability, and
> could it replace the epair and bridge combination?
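For reference, I'm assuming your setup is roughly the usual bridge-plus-epair
arrangement, something like the sketch below (interface and jail names here
are placeholders, not your actual config):

  # on the host: bridge the physical NIC ("em0" here) with the jail's epair end
  ifconfig bridge0 create
  ifconfig bridge0 addm em0 up
  ifconfig epair0 create
  ifconfig bridge0 addm epair0a
  ifconfig epair0a up

  # /etc/jail.conf fragment for the vnet jail
  myjail {
      vnet;
      vnet.interface = "epair0b";
      ...
  }

If yours differs much from that, some of what follows may not apply.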
I've noticed bandwidth problems in virtualised adapters, too.
I ran some simple tests and put the results here:
http://void.f-m.fm.user.fm/bhyve-virtio-testing.html
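For anyone who wants to reproduce something similar, the kind of test I mean
is a simple bulk-transfer run between the two ends, e.g. with iperf3 (just one
way to do it, and the flags below are only an example):

  # on the receiving side (host, jail or vm)
  iperf3 -s
  # on the sending side
  iperf3 -c <server-ip> -t 30 -P 4

Trying both a single stream and -P 4 is worth it, since it shows whether the
bottleneck is per-connection or global.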
My own context here is bhyve vms. Linux guests greatly out-perform FreeBSD ones,
and I'm trying to find out why: whether it's a tunable that needs adjusting, or
a fault with bge0, and how it could be fixed. It's interesting to me that you see
similar effects in quite a different context. I'm using bridge and tap interfaces,
and within the (FreeBSD) vms the interface is vtnet0. So maybe there's something
amiss, or something that needs tuning, on these virtual interfaces? The bhyve host
gets line speed after accounting for tcp/ip overhead, as expected. It's just the vms.
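If it is a tuning issue, the first thing I'd poke at is the offload settings on
the virtual interfaces. This is guesswork on my part rather than a known fix,
but it's cheap to test:

  # inside the FreeBSD guest: turn off checksum/TSO/LRO offloads on vtnet0
  ifconfig vtnet0 -rxcsum -txcsum -tso -lro

  # or persistently, via loader tunables in the guest's /boot/loader.conf
  hw.vtnet.tso_disable=1
  hw.vtnet.lro_disable=1

If disabling offloads changes the numbers noticeably in either direction, that
at least narrows down where to look.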
I've read that Linux uses "epoll" or something like that, but I don't know much
of anything about Linux. Clearly it's doing something different with its own
virtualised adapter, internally.