On Fri, Oct 18, 2024 at 07:28:49AM -0400, Cheng Cui wrote:
The patch is a TCP congestion control algorithm improvement. So to
be clear, it only impacts a TCP data sender. These hosts are just traffic
forwarders, not TCP senders/receivers.
I can send you a patch for FreeBSD stable/14 to test.
On Thu, Oct 17, 2024 at 11:17 AM void wrote:
> On Thu, Oct 17, 2024 at 11:08:27AM -0400, Cheng Cui wrote:
> >The patch has no effect at the host if the host is a data receiver.
>
> In this context, the VMs being tested are on a bhyve host.
>
> Is the host a data sender, because the tap interfaces are bridged with bge0
> on the host?
On Thu, Oct 17, 2024 at 11:08:27AM -0400, Cheng Cui wrote:
The patch has no effect at the host if the host is a data receiver.
In this context, the VMs being tested are on a bhyve host.
Is the host a data sender, because the tap interfaces are bridged with bge0
on the host? Or is the host a data receiver?
The patch has no effect at the host if the host is a data receiver.
Also, the patch is for FreeBSD main (15-CURRENT, in development).
There is no plan to merge the commit into prior releases, given that the code
base has been branched for quite some time.
cc
On Thu, Oct 17, 2024 at 5:49 AM void wrote:
On Thu, Oct 17, 2024 at 05:05:41AM -0400, Cheng Cui wrote:
My commit is inside the FreeBSD kernel, so you just rebuild the `kernel`,
and you don't need to rebuild the `world`.
OK, thanks. Would building it the same way on the bhyve *host* have an effect?
I'm asking because you earlier mentioned it's
My commit is inside the FreeBSD kernel, so you just rebuild the `kernel`,
and you don't need to rebuild the `world`.
cc
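For reference, the kernel-only rebuild described above might look roughly like
this on a source tree that already contains the commit (KERNCONF=GENERIC is an
assumption; substitute your own kernel config):

cd /usr/src
make -j$(sysctl -n hw.ncpu) buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
shutdown -r now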
On Wed, Oct 16, 2024 at 1:39 PM void wrote:
> Hi,
>
> On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
> > I am not sure if you are using FreeBSD 15-CURRENT for testing in VMs.
Thanks for your testing!
From your VM test result in the link, if I understand correctly, the CUBIC
in the base stack has +24.5% better performance than the CUBIC in the rack stack,
and it is better than old releases and OpenBSD. That sounds like an
improvement, although it still needs to be improved to
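For anyone reproducing that comparison, the TCP stack and the congestion
control algorithm can be switched per test run with sysctls along these lines
(a sketch; it assumes cc_cubic and tcp_rack are loaded or compiled in):

# use the base TCP stack ("freebsd") or the RACK stack ("rack") for new connections
sysctl net.inet.tcp.functions_default=freebsd
# select CUBIC as the congestion control algorithm
sysctl net.inet.tcp.cc.algorithm=cubic
# list what the running kernel actually offers
sysctl net.inet.tcp.functions_available net.inet.tcp.cc.available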
Hi,
On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
I am not sure if you are using FreeBSD 15-CURRENT for testing in VMs.
But given that your iperf3 test result has retransmissions, if you can, try a
recent VM-friendly improvement to the CUBIC TCP congestion control.
I did some fur
Hi,
On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
I am not sure if you are using FreeBSD 15-CURRENT for testing in VMs.
But given that your iperf3 test result has retransmissions, if you can, try a
recent VM-friendly improvement to the CUBIC TCP congestion control.
commit ee450610
Hi,
On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
I am not sure if you are using FreeBSD 15-CURRENT for testing in VMs.
the test VM right now is main-n272915-c87b3f0006be, built earlier today.
the bhyve host is n271832-04262ed78d23, built Sept 8th.
the iperf3 listener is stable/14-n26
I am not sure if you are using FreeBSD 15-CURRENT for testing in VMs.
But given that your iperf3 test result has retransmissions, if you can, try a
recent VM-friendly improvement to the CUBIC TCP congestion control.
commit ee45061051715be4704ba22d2fcd1c373e29079d
Author: Cheng Cui
Date: Thu
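As an aside, whether a source tree already contains that commit can be checked
with git, and it could in principle be cherry-picked onto a stable branch for
local testing (a sketch; it assumes /usr/src is a git checkout):

# does the tree already have the CUBIC change?
git -C /usr/src log --oneline | grep ee450610
# apply it locally to a stable/14 checkout (unsupported, for testing only)
git -C /usr/src cherry-pick ee45061051715be4704ba22d2fcd1c373e29079d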
On Tue, Oct 15, 2024 at 03:48:56AM +0100, void wrote:
(snip)
main-n272915-c87b3f0006be GENERIC-NODEBUG with
tcp_rack_load="YES" in /boot/loader.conf and in /etc/sysctl.conf:
#
# network
net.inet.tcp.functions_default=rack
net.pf.request_maxcount=40
net.local.stream.recvspace=65536
net.local
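For context, iperf3 retransmission counts like the ones discussed in this
thread come from runs of roughly this shape (hostname and durations are
placeholders):

# VM as the data sender towards the listener
iperf3 -c listener.example -t 30 -i 5
# reverse mode: the listener becomes the data sender
iperf3 -c listener.example -t 30 -i 5 -R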
On Thu, Sep 12, 2024 at 06:16:18PM +0100, Sad Clouds wrote:
Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
single physical network interface, so I followed instructions for
networking vnet jails via epair and bridge, e.g.
(snip)
The issue is bulk TCP performance throug
> On Sep 16, 2024, at 10:47 PM, Aleksandr Fedorov wrote:
>
> If we are talking about local traffic between jails and/or host, then in
> terms of TCP throughput we have a room to improve, for example:
Without the RSS option enabled, if_epair will only use one thread to move packets
between the pair of interfaces.
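A minimal sketch of a custom kernel configuration that turns on the RSS
option; the file name RSS-GENERIC is an assumption, and it is built with the
usual buildkernel/installkernel steps using KERNCONF=RSS-GENERIC:

# /usr/src/sys/amd64/conf/RSS-GENERIC
include GENERIC
ident   RSS-GENERIC
options RSS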
On 2024-09-16 07:32, Miroslav Lachman wrote:
On 15/09/2024 19:56, Sad Clouds wrote:
On Sun, 15 Sep 2024 18:01:07 +0100
Doug Rabson wrote:
I just did a throughput test with iperf3 client on a FreeBSD 14.1 host with
an intel 10GB nic connecting to an iperf3 server running in a vnet jail on
a truenas host (13.something) also with an intel 10GB nic and I get full
10GB throughput in this setup.
On 15/09/2024 19:56, Sad Clouds wrote:
On Sun, 15 Sep 2024 18:01:07 +0100
Doug Rabson wrote:
I just did a throughput test with iperf3 client on a FreeBSD 14.1 host with
an intel 10GB nic connecting to an iperf3 server running in a vnet jail on
a truenas host (13.something) also with an intel 10GB nic and I get full
10GB throughput in this setup.
On Sun, 15 Sept 2024 at 18:56, Sad Clouds wrote:
> On Sun, 15 Sep 2024 18:01:07 +0100
> Doug Rabson wrote:
>
> > I just did a throughput test with iperf3 client on a FreeBSD 14.1 host with
> > an intel 10GB nic connecting to an iperf3 server running in a vnet jail on
> > a truenas host (13.something) also with an intel 10GB nic and I get full
> > 10GB throughput in this setup.
On Sun, 15 Sep 2024 18:01:07 +0100
Doug Rabson wrote:
> I just did a throughput test with iperf3 client on a FreeBSD 14.1 host with
> an intel 10GB nic connecting to an iperf3 server running in a vnet jail on
> a truenas host (13.something) also with an intel 10GB nic and I get full
> 10GB throughput in this setup.
On Thu, Sep 12, 2024 at 06:16:18PM +0100, Sad Clouds wrote:
> Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
> single physical network interface, so I followed instructions for
> networking vnet jails via epair and bridge, e.g.
>
> devel
> {
> vnet;
>
I just did a throughput test with iperf3 client on a FreeBSD 14.1 host with
an intel 10GB nic connecting to an iperf3 server running in a vnet jail on
a truenas host (13.something) also with an intel 10GB nic and I get full
10GB throughput in this setup. In the past, I had to disable LRO on the
truenas host.
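For reference, disabling LRO (and, if needed, TSO) is just an ifconfig toggle;
ix0 below is a placeholder for the 10GB NIC's name:

# turn off LRO and TSO on the NIC for the current boot
ifconfig ix0 -lro -tso
# to persist it, append the flags to the interface line in /etc/rc.conf, e.g.
# ifconfig_ix0="DHCP -lro -tso"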
On Sat, 14 Sep 2024 10:45:03 +0800
Zhenlei Huang wrote:
> The overhead of a vnet jail should be negligible, compared to a legacy jail
> or no jail. Bear in mind that when the VIMAGE option is enabled, there is a default
> vnet 0. It is not visible via jls and cannot be destroyed. So when you see
> bottlenecks
> On Sep 13, 2024, at 10:54 PM, Sad Clouds wrote:
>
> On Fri, 13 Sep 2024 08:08:02 -0400
> Mark Saad wrote:
>
>> Sad
>> Can you go back a bit? You mentioned there is an RPi in the mix. Some of
>> the raspberries have their NIC USB-attached under the covers, which will
>> kill the total speed of things.
On Fri, 13 Sep 2024 08:08:02 -0400
Mark Saad wrote:
> Sad
> Can you go back a bit? You mentioned there is an RPi in the mix. Some of
> the raspberries have their NIC USB-attached under the covers, which will
> kill the total speed of things.
>
> Can you cobble together a diagram of what y
>
> On Sep 13, 2024, at 5:09 AM, Sad Clouds wrote:
>
> Tried both, kernel built with "options RSS" and the following
> in /boot/loader.conf
>
> net.isr.maxthreads=-1
> net.isr.bindthreads=1
> net.isr.dispatch=deferred
>
> Not sure if there are race conditions with these implementations, but
> after a few short tests the epair_task_0 thread got stuck at 100% CPU and
> stayed there.
Tried both, kernel built with "options RSS" and the following
in /boot/loader.conf
net.isr.maxthreads=-1
net.isr.bindthreads=1
net.isr.dispatch=deferred
Not sure if there are race conditions with these implementations, but
after a few short tests the epair_task_0 thread got stuck at 100% CPU and
stayed there.
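Whether those loader.conf settings actually took effect after reboot can be
checked with something like this (a sketch):

# confirm the netisr tunables the kernel is running with
sysctl net.isr.maxthreads net.isr.bindthreads net.isr.dispatch net.isr.numthreads
# per-protocol netisr queue and drop statistics
netstat -Q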
I built a new kernel with "options RSS", however TCP throughput performance
has now decreased from 128 MiB/sec to 106 MiB/sec.
Looks like the problem has shifted from epair to netisr.
  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   12 root        -56    -     0B   272K CPU3
There seems to be an open bug describing similar issue:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=272944
On Thu, 12 Sep 2024 13:25:32 -0400
Paul Procacci wrote:
> You need to define `poor'.
> You need to show `top -SH` while the `problem' occurs.
>
> My guess is packets are getting shuttled between a global taskqueue thread.
> This is the default, or at least I'm not aware of this default being
> changed.
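For the record, that kind of snapshot can also be captured non-interactively,
e.g. (a sketch using standard FreeBSD top options):

# interactive view of system/kernel threads
top -SH
# or one batch snapshot of the 20 busiest threads, sorted by CPU
top -SH -b -d 1 -o cpu 20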
On 12/09/2024 19:16, Sad Clouds wrote:
Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
single physical network interface, so I followed instructions for
networking vnet jails via epair and bridge, e.g.
devel
{
vnet;
vnet.interface = "e0b_devel";
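For completeness, a fuller sketch of the kind of epair + bridge jail.conf
being quoted here; the interface names, bridge0, and the paths are assumptions
(bridge0 is assumed to already exist with the physical NIC as a member), not
the poster's actual configuration:

devel
{
    vnet;
    vnet.interface = "e0b_devel";
    # create the epair and give both ends recognizable names
    exec.prestart  = "ifconfig epair0 create";
    exec.prestart += "ifconfig epair0a name e0a_devel";
    exec.prestart += "ifconfig epair0b name e0b_devel";
    # the host side of the pair joins the existing bridge and comes up
    exec.prestart += "ifconfig bridge0 addm e0a_devel";
    exec.prestart += "ifconfig e0a_devel up";
    # destroying one end of an epair removes both ends
    exec.poststop  = "ifconfig e0a_devel destroy";
    host.hostname  = "devel";
    path           = "/jails/devel";
    mount.devfs;
    exec.start     = "/bin/sh /etc/rc";
    exec.stop      = "/bin/sh /etc/rc.shutdown";
}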
On Thu, Sep 12, 2024 at 1:16 PM Sad Clouds wrote:
> Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
> single physical network interface, so I followed instructions for
> networking vnet jails via epair and bridge, e.g.
>
> devel
> {
> vnet;
> vnet.interface = "e0b_devel";