Hello net@,
tl;dr: is there anything specific to FreeBSD that needs to be set
for OpenVPN to perform well? And what buffer space is ping
complaining about?
Context: recent 13.2-STABLE, on amd64, running as a bhyve guest.
The OpenVPN client uses UDP, on tun0.
The problem is that when the c
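(As an aside: the buffer-space complaint from ping is typically the ENOBUFS
"No buffer space available" error, which often points at mbuf or socket-buffer
exhaustion. A first look with only standard tools might be, as a sketch:

  netstat -m                     # mbuf/cluster usage, requests denied or delayed
  sysctl kern.ipc.nmbclusters    # mbuf cluster limit
  sysctl net.inet.udp.recvspace  # default UDP receive buffer, relevant for UDP over tun0

Whether any of these is the actual bottleneck here is an assumption, not a diagnosis.)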
Hi,
On Thu, Sep 21, 2023 at 09:25:38PM +0200, M. Mader wrote:
I run OpenVPN without any such problems. My sysctl.conf as
well as my loader.conf are pretty much default.
I'd try what happens without vm.vnode_pbufs="10240"
The problem is worse without it. I don't mean (direct)
throughput, th
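(A simple way to run that comparison is to comment the tunable out in
/boot/loader.conf and reboot before re-running the same workload, e.g.:

  # vm.vnode_pbufs="10240"

The value shown is the one from this thread.)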
Hello,
My objective is to protect services on a bhyve host while allowing traffic
to the bhyve guests to pass to them unprocessed, as the guests each have pf and
their own firewall policies. The host is running an up-to-date 13-stable.
I know ipfw can process both layer 2 and layer 3 traffic, but pf
On Sun, Oct 15, 2023 at 10:46:57AM -0700, Paul Vixie wrote:
You don't need L2 for this. The firewall pattern when your bare metal
host has an address in the vlan you use for guests is:
Allow the specific things you want the bare metal host to do;
Deny all else involving the bare metal host;
A
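(A minimal pf.conf sketch of that pattern, where igb0 and 192.0.2.10 are
placeholders for the shared interface and the bare metal host's address:

  ext_if = "igb0"
  host_ip = "192.0.2.10"
  # allow the specific things the bare metal host should do
  pass in quick on $ext_if proto tcp to $host_ip port { 22 } keep state
  # deny all else involving the bare metal host
  block in quick on $ext_if to $host_ip
  # everything else, i.e. the guests' traffic, passes unprocessed
  pass in quick on $ext_if
  pass out quick on $ext_if

The guests then enforce their own policies with their own pf.)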
(originally posted to hackers@ but on second thoughts this ML is
more relevant)
Hi,
Is it sufficient to
EITHER
1. # sysctl net.inet.tcp.drop_synfin=1
OR
2. # sysrc tcp_drop_synfin=YES
OR
3. must one do both?
--
Hi, thank you for your response
On Sun, Aug 11, 2024 at 09:47:28AM -0400, Michael Sierchio wrote:
sysrc is for editing rc files, and that's not what you want to do.
You may manually set the MIB with sysctl net.inet.tcp.drop_synfin=1, or you
can put this line in /etc/sysctl.conf:
net.inet.tcp.drop_synfin=1
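(Both routes end up setting the same MIB: with the stock rc scripts the
tcp_drop_synfin rc.conf knob is applied at boot, so either persistent form
alone should suffice. As a sketch:

  sysctl net.inet.tcp.drop_synfin=1                      # immediate, lost on reboot
  echo 'net.inet.tcp.drop_synfin=1' >> /etc/sysctl.conf  # applied at boot
  sysrc tcp_drop_synfin=YES                              # rc.conf route, same MIB at boot

Doing more than one is harmless but redundant.)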
On Thu, Dec 20, 2001 at 12:50:39AM +0300, Yar Tikhiy wrote:
>
> Source routing itself is a Bad Thing, as is TELNET or rlogin.
Telnet with Kerberos or other security options can be a fine thing.
--
Ben
"An art scene of delight
I created this to be ..." -- Sun Ra
On Tue, Oct 15, 2024 at 03:48:56AM +0100, void wrote:
(snip)
main-n272915-c87b3f0006be GENERIC-NODEBUG with
tcp_rack_load="YES" in /boot/loader.conf and in /etc/sysctl.conf:
#
# network
net.inet.tcp.functions_default=rack
net.pf.request_maxcount=40
net.local.stream.recvs
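(To confirm the RACK stack was actually loaded and made the default, two
standard sysctls can be checked; the exact list of function blocks varies
by version:

  sysctl net.inet.tcp.functions_available
  sysctl net.inet.tcp.functions_default
)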
On Thu, Sep 12, 2024 at 06:16:18PM +0100, Sad Clouds wrote:
Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
single physical network interface, so I followed instructions for
networking vnet jails via epair and bridge, e.g.
(snip)
The issue is bulk TCP performance throug
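(For context, a minimal sketch of that wiring, assuming em0 is the single
physical NIC; the epair unit numbers are whatever ifconfig hands out:

  ifconfig bridge0 create
  ifconfig epair0 create
  ifconfig bridge0 addm em0 addm epair0a up
  ifconfig epair0a up
  # epair0b is then given to the vnet jail, e.g. in jail.conf:
  #   vnet;
  #   vnet.interface = "epair0b";
)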
On Fri, Oct 18, 2024 at 07:28:49AM -0400, Cheng Cui wrote:
The patch is a TCP congestion control algorithm improvement. So to
be clear, it only impacts a TCP data sender. These hosts are just traffic
forwarders, not TCP senders or receivers.
I can send you a patch for the FreeBSD 14/stable to test p
On Mon, Oct 21, 2024 at 10:42:49AM -0400, Cheng Cui wrote:
Changing the subject to `Performance test for CUBIC in stable/14` (was `Re:
Performance issues with vnet jails + epair + bridge`).
I actually prepared two patches, one depends on the other:
https://reviews.freebsd.org/D47218 << apply t
On Thu, Oct 17, 2024 at 05:05:41AM -0400, Cheng Cui wrote:
My commit is inside the FreeBSD kernel, so you just rebuild the `kernel`,
and you don't need to rebuild the `world`.
OK thanks. Would building it the same on the bhyve *host* have an effect?
I'm asking because you earlier mentioned it's
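(For reference, the usual kernel-only rebuild, assuming sources in /usr/src
and the GENERIC-NODEBUG config mentioned elsewhere in the thread:

  cd /usr/src
  make -j$(sysctl -n hw.ncpu) buildkernel KERNCONF=GENERIC-NODEBUG
  make installkernel KERNCONF=GENERIC-NODEBUG
  shutdown -r now
)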
Hi,
On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
I am not sure if you are using FreeBSD 15-CURRENT for testing in VMs.
But given that your iperf3 test result has retransmissions, if you can, there
is a recent VM-friendly improvement to the CUBIC TCP congestion control worth trying.
I did some fur
Hi,
On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
I am not sure if you are using FreeBSD 15-CURRENT for testing in VMs.
But given that your iperf3 test result has retransmissions, if you can, there
is a recent VM-friendly improvement to the CUBIC TCP congestion control worth trying.
commit ee450610
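(Checking which congestion control modules are present, and selecting one,
is just:

  sysctl net.inet.tcp.cc.available
  sysctl net.inet.tcp.cc.algorithm=cubic   # assumes cc_cubic is loaded or compiled in
)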
On Tue, Oct 22, 2024 at 03:57:42PM -0400, Cheng Cui wrote:
What is the output from `ping` (latency) between these VMs?
That test wasn't between VMs. It was from the VM with the patches to a
workstation on the same switch.
ping from the VM to the workstation:
--- 192.168.1.232 ping statistics
On Tue, Oct 22, 2024 at 10:59:28AM -0400, Cheng Cui wrote:
Please re-organize your test results in before/after patch order, so that I
can understand and compare them.
Sure.
Before:
[ ID] Interval            Transfer     Bandwidth
[  1] 0.00-60.02 sec      5.16 GBytes  738 Mbits/sec
After:
[ ID] In
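(For anyone reproducing this: a 60-second run like the above would be
roughly the following; the address and duration are taken from the thread,
any other flags used are unknown:

  iperf3 -s                      # on the workstation (receiver)
  iperf3 -c 192.168.1.232 -t 60  # on the VM (sender)
)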
On Wed, Jul 17, 2024 at 02:00:31PM -0600, Alan Somers wrote:
So I benchmarked all available congestion control algorithms for
single download streams. The results are summarized in the table
below.
Sorry for resurrecting an old thread, but I note your testing was with
single streams. What ar
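(With iperf3 the single- versus multi-stream comparison is just the -P flag;
8 below is an arbitrary choice:

  iperf3 -c <server> -t 60       # one stream
  iperf3 -c <server> -t 60 -P 8  # eight parallel streams
)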
On Wed, Oct 23, 2024 at 08:28:01AM -0400, Cheng Cui wrote:
The latency does not sound like a problem to me. What is the performance of
the `newreno` TCP congestion control algorithm?
In case you need to load `newreno` first.
cc@n1:~ % sudo kldload newreno
cc@n1:~ % sudo sysctl net.inet.tcp.cc.algorithm=newreno
On Fri, 25 Oct 2024, at 13:13, Cheng Cui wrote:
> Here is my example. I am using two 6-core/12-thread desktops for my
> bhyve servers.
> CPU: AMD Ryzen 5 5560U with Radeon Graphics (2295.75-MHz
> K8-class CPU)
>
> You can find test results on VMs from my wiki:
> https://wiki.freebsd.or
On Wed, Oct 23, 2024 at 03:14:08PM -0400, Cheng Cui wrote:
I see. The result of `newreno` vs. `cubic` shows non-constant/infrequent
packet retransmission. So TCP congestion control has little impact on
improving the performance.
The performance bottleneck may come from somewhere else. For exampl
Surprisingly, FreeBSD guest performance is about 1/3 of the line speed.
Do some sysctls need to be tuned in FreeBSD specifically for a guest context?
The host is 15.0-CURRENT (GENERIC-NODEBUG) #1 n271832-04262ed78d23,
a Xeon E5-2690 @ 2.90GHz with 128GB RAM, and the guests are all o
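(One commonly suggested experiment for bridged virtio guests, assuming the
guest NIC is vtnet0, which is itself an assumption here, is to disable LRO
and TSO and re-test:

  ifconfig vtnet0 -lro -tso

Whether that applies to this particular setup is unverified.)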
On Sun, Sep 29, 2024 at 08:12:23AM +0100, Lexi Winter wrote:
On 29/09/2024 07:58, void wrote:
Surprisingly, FreeBSD guest performance is about 1/3 of the line speed.
Do some sysctls need to be tuned in FreeBSD specifically for a guest context?
I tested this here and cannot
Hi,
On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
I am not sure if you are using FreeBSD 15-CURRENT for testing in VMs.
The test VM right now is main-n272915-c87b3f0006be, built earlier today.
The bhyve host is n271832-04262ed78d23, built Sept 8th.
The iperf3 listener is stable/14-n26
On Thu, Oct 17, 2024 at 11:08:27AM -0400, Cheng Cui wrote:
The patch has no effect at the host if the host is a data receiver.
In this context, the VMs being tested are on a bhyve host.
Is the host a data sender, given that the tap interfaces are bridged with bge0
on the host? Or is the host a da
Hi,
On Fri, 29 Nov 2024, at 15:00, Tom Jones wrote:
> Hi hackers,
>
> I have written a 10th Network status report, you can find it here:
> https://adventurist.me/posts/00337
>
> And all previous posts are collected at this url:
> https://adventurist.me/tag/networkstatus
>
> One thing we would like
Hi,
(originally posted on the forums)
My objective is to protect services on a bhyve host while allowing traffic
to and from the bhyve guests to pass unprocessed, as the guests each have pf and
their own firewall policies. The host is running recent -current.
I know ipfw can process both la
Hi Ronald, thank you for your reply.
On Sun, Mar 23, 2025 at 08:21:21PM +0100, Ronald Klop wrote:
I assume that in your setup igb0 is the host interface as well as a bridge member.
That's correct.
That makes the setup a bit hard to reason about. IMHO you now have a virtual setup
which you wo
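(The usual way to simplify that is to move the host's address off the member
NIC and onto the bridge itself; a sketch, with 192.0.2.10/24 as a placeholder:

  ifconfig igb0 inet 192.0.2.10 -alias   # remove the address from the member
  ifconfig bridge0 addm igb0
  ifconfig bridge0 inet 192.0.2.10/24 up

Made persistent with the matching ifconfig_* lines in rc.conf.)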