openvpn and no buffer space available (13.2-stable)

2023-09-21 Thread void
Hello @net, tl;dr : is there anything specific to freebsd that needs to be set in order for openvpn to perform well? What buffer space is ping complaining about? context is recent 13.2 stable, on amd64, and it's a bhyve guest. The openvpn client uses UDP, on tun0. The problem is that when the c
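
A first diagnostic pass for the ENOBUFS ("No buffer space available") that ping reports could look like this (a sketch; the commands are standard FreeBSD tools, but the interpretation is an assumption about this setup):

```sh
# ENOBUFS from ping usually means an mbuf pool or an interface
# send queue is exhausted, not an OpenVPN-level problem.
netstat -m                 # look for "requests for mbufs denied"
vmstat -z | grep -i mbuf   # per-zone allocation failures for mbuf/cluster zones
netstat -idn               # per-interface errors and drop counters
ifconfig tun0              # confirm the tunnel is up and check its MTU
```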

Re: openvpn and no buffer space available (13.2-stable)

2023-09-21 Thread void
Hi, On Thu, Sep 21, 2023 at 09:25:38PM +0200, M. Mader wrote: I run OpenVPN without any such problems. My sysctl.conf as well as my loader.conf are pretty much default. I'd try what happens without vm.vnode_pbufs="10240" The problem is worse without it. I don't mean (direct) throughput, th

ipfw firewalling for bhyve host, bypassing bhyve guests

2023-10-15 Thread void
Hello, My objective is to protect services on a bhyve host, while allowing traffic to the bhyve guests to pass to them unprocessed, as these each have pf and their own firewall policies. The host is running an up-to-date 13-stable. I know ipfw can process both layer 2 and layer 3 traffic, but pf

Re: ipfw firewalling for bhyve host, bypassing bhyve guests

2023-10-15 Thread void
On Sun, Oct 15, 2023 at 10:46:57AM -0700, Paul Vixie wrote: You don't need L2 for this. The firewall pattern when your bare metal host has an address in the vlan you use for guests is: Allow the specific things you want the bare metal host to do; Deny all else involving the bare metal host; A
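
The pattern described above might be sketched in ipfw rules like this (rule numbers, the ssh example, and the use of `me` are illustrative assumptions, not the poster's actual ruleset):

```sh
# 1. allow the specific things the bare-metal host itself should do
ipfw add 100 allow tcp from any to me 22 in         # e.g. ssh to the host
ipfw add 110 allow ip from me to any out keep-state # host-initiated traffic
# 2. deny everything else involving the bare-metal host
ipfw add 200 deny ip from any to me
ipfw add 210 deny ip from me to any
# 3. allow all else: bridged guest traffic passes untouched,
#    left to each guest's own pf policy
ipfw add 65000 allow ip from any to any
```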

drop synfin

2024-08-11 Thread void
(originally posted to hackers@ but on second thoughts this ML is more relevant) Hi, Is it sufficient to EITHER 1. # sysctl net.inet.tcp.drop_synfin=1 OR 2. # sysrc tcp_drop_synfin=YES OR 3. must one do both? --

Re: drop synfin

2024-08-11 Thread void
Hi, thank you for your response On Sun, Aug 11, 2024 at 09:47:28AM -0400, Michael Sierchio wrote: sysrc is for editing rc files, and that's not what you want to do. you may manually set the MIB with sysctl net.inet.tcp.drop_synfin=1 or you can put this line in /etc/sysctl.conf net.inet.tcp.dro
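
Putting the advice together, the three mechanisms look like this (a sketch; any one persistent method is enough, and combining the immediate sysctl with one persistent method covers both now and after reboot):

```sh
# set the MIB immediately, without a reboot:
sysctl net.inet.tcp.drop_synfin=1
# persist it across reboots via /etc/sysctl.conf:
echo 'net.inet.tcp.drop_synfin=1' >> /etc/sysctl.conf
# or via the rc.conf knob, which rc applies at boot:
sysrc tcp_drop_synfin=YES
```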

Re: Processing IP options reveals IPSTEALH router

2001-12-19 Thread void
On Thu, Dec 20, 2001 at 12:50:39AM +0300, Yar Tikhiy wrote: > > Source routing itself is a Bad Thing, as is TELNET or rlogin. Telnet with Kerberos or other security options can be a fine thing. -- Ben "An art scene of delight I created this to be ..." -- Sun Ra

Re: Performance issues with vnet jails + epair + bridge

2024-10-14 Thread void
On Tue, Oct 15, 2024 at 03:48:56AM +0100, void wrote: (snip) main-n272915-c87b3f0006be GENERIC-NODEBUG with tcp_rack_load="YES" in /boot/loader.conf and in /etc/sysctl.conf: # # network net.inet.tcp.functions_default=rack net.pf.request_maxcount=40 net.local.stream.recvs
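
Laid out as the files they live in, the quoted configuration amounts to the following (the preview's truncated sysctl lines are left out rather than guessed at):

```sh
# /boot/loader.conf -- load the RACK TCP stack at boot
tcp_rack_load="YES"

# /etc/sysctl.conf -- make RACK the default TCP functions block
net.inet.tcp.functions_default=rack

# verify at runtime which stacks are available and selected:
sysctl net.inet.tcp.functions_available
```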

Re: Performance issues with vnet jails + epair + bridge

2024-10-14 Thread void
On Thu, Sep 12, 2024 at 06:16:18PM +0100, Sad Clouds wrote: Hi, I'm using FreeBSD-14.1 and on this particular system I only have a single physical network interface, so I followed instructions for networking vnet jails via epair and bridge, e.g. (snip) The issue is bulk TCP performance throug
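
The epair-plus-bridge wiring being described is conventionally set up along these lines (interface and jail names are illustrative; the message's own commands are truncated in the archive):

```sh
ifconfig bridge0 create
ifconfig bridge0 addm em0 up    # em0: the single physical NIC (assumption)
ifconfig epair0 create          # creates the pair epair0a / epair0b
ifconfig bridge0 addm epair0a up
# epair0b goes into the jail's vnet, e.g. in jail.conf:
#   myjail {
#       vnet;
#       vnet.interface = "epair0b";
#   }
```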

Re: Performance issues with vnet jails + epair + bridge

2024-10-18 Thread void
On Fri, Oct 18, 2024 at 07:28:49AM -0400, Cheng Cui wrote: The patch is a TCP congestion control algorithm improvement. So to be clear, it only impacts a TCP data sender. These hosts are just traffic forwarders, not TCP sender/receiver. I can send you a patch for the FreeBSD 14/stable to test p

Re: Performance test for CUBIC in stable/14

2024-10-21 Thread void
On Mon, Oct 21, 2024 at 10:42:49AM -0400, Cheng Cui wrote: Change the subject to `Performance test for CUBIC in stable/14`, was `Re: Performance issues with vnet jails + epair + bridge`. I actually prepared two patches, one depends on the other: https://reviews.freebsd.org/D47218 << apply t
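
Phabricator reviews such as the D47218 mentioned above can be fetched as raw diffs using the site's standard `?download=true` form (a sketch, assuming a stock /usr/src git checkout):

```sh
cd /usr/src
fetch -o D47218.diff 'https://reviews.freebsd.org/D47218?download=true'
git apply D47218.diff   # diffs generated from git carry a/ b/ prefixes;
                        # patch -p1 < D47218.diff also works
```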

Re: Performance issues with vnet jails + epair + bridge

2024-10-17 Thread void
On Thu, Oct 17, 2024 at 05:05:41AM -0400, Cheng Cui wrote: My commit is inside the FreeBSD kernel, so you just rebuild the `kernel`, and you don't need to rebuild the `world`. OK thanks. Would building it the same on the bhyve *host* have an effect? I'm asking because you earlier mentioned it's
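
Rebuilding only the kernel after a kernel-side change goes like this (GENERIC assumed; substitute your own KERNCONF):

```sh
cd /usr/src
make -j"$(sysctl -n hw.ncpu)" buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
shutdown -r now
```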

Re: Performance issues with vnet jails + epair + bridge

2024-10-16 Thread void
Hi, On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote: I am not sure if you are using FreeBSD15-CURRENT for testing in VMs. But given your iperf3 test result has retransmissions, if you can try, there is a recent VM friendly improvement from TCP congestion control CUBIC. I did some fur

Re: Performance issues with vnet jails + epair + bridge

2024-10-16 Thread void
Hi, On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote: I am not sure if you are using FreeBSD15-CURRENT for testing in VMs. But given your iperf3 test result has retransmissions, if you can try, there is a recent VM friendly improvement from TCP congestion control CUBIC. commit ee450610

Re: Performance test for CUBIC in stable/14

2024-10-22 Thread void
On Tue, Oct 22, 2024 at 03:57:42PM -0400, Cheng Cui wrote: What is the output from `ping` (latency) between these VMs? That test wasn't between VMs. It was from the vm with the patches to a workstation on the same switch. ping from the vm to the workstation: --- 192.168.1.232 ping statistics

Re: Performance test for CUBIC in stable/14

2024-10-22 Thread void
On Tue, Oct 22, 2024 at 10:59:28AM -0400, Cheng Cui wrote: Please re-organize your test result in before/after patch order. So that I can understand and compare them. Sure. Before: [ ID] Interval Transfer Bandwidth [ 1] 0.00-60.02 sec 5.16 GBytes 738 Mbits/sec After: [ ID] In
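
Numbers of this shape come from runs like the following (the address and duration are taken from the thread; the exact flags used are an assumption):

```sh
iperf3 -s                        # on the receiving host
iperf3 -c 192.168.1.232 -t 60    # on the sender: 60 s run, reports
                                 # transfer and bandwidth per interval
```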

Re: TCP Success Story (was Re: TCP_RACK, TCP_BBR, and firewalls)

2024-10-23 Thread void
On Wed, Jul 17, 2024 at 02:00:31PM -0600, Alan Somers wrote: So I benchmarked all available congestion control algorithms for single download streams. The results are summarized in the table below. Sorry for resurrecting an old thread, but I note your testing was with single streams. What ar
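
For the single-stream versus multi-stream question, iperf3's -P flag runs parallel streams (a sketch; the address is illustrative):

```sh
iperf3 -c 192.168.1.232 -t 60         # one TCP stream
iperf3 -c 192.168.1.232 -t 60 -P 8    # eight parallel streams,
                                      # summed in the final report
```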

Re: Performance test for CUBIC in stable/14

2024-10-23 Thread void
On Wed, Oct 23, 2024 at 08:28:01AM -0400, Cheng Cui wrote: The latency does not sound a problem to me. What is the performance of TCP congestion control algorithm `newreno`? In case you need to load `newreno` first. cc@n1:~ % sudo kldload newreno cc@n1:~ % sudo sysctl net.inet.tcp.cc.algorithm
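
Note that the congestion-control kernel modules ship as cc_<algorithm>.ko, so the conventional form of the load-and-select step is (a sketch):

```sh
kldload cc_newreno                         # module files are cc_<algo>.ko
sysctl net.inet.tcp.cc.available           # list the loaded algorithms
sysctl net.inet.tcp.cc.algorithm=newreno   # make newreno the system default
```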

Re: Performance test for CUBIC in stable/14

2024-10-27 Thread void
On Fri, 25 Oct 2024, at 13:13, Cheng Cui wrote: > Here is my example. I am using two 6-core/12-threads desktops for my > bhyve servers. > CPU: AMD Ryzen 5 5560U with Radeon Graphics (2295.75-MHz > K8-class CPU) > > You can find test results on VMs from my wiki: > https://wiki.freebsd.or

Re: Performance test for CUBIC in stable/14

2024-10-23 Thread void
On Wed, Oct 23, 2024 at 03:14:08PM -0400, Cheng Cui wrote: I see. The result of `newreno` vs. `cubic` shows non-constant/infrequent packet retransmission. So TCP congestion control has little impact on improving the performance. The performance bottleneck may come from somewhere else. For exampl

slow network performance in bhyve with freebsd guests compared with any other guest os

2024-09-28 Thread void
Surprisingly, freebsd guest performance is about 1/3rd of the line speed. Do some sysctls need to be tuned in freebsd specifically for when it is in a guest context? The host is 15.0-CURRENT (GENERIC-NODEBUG) #1 n271832-04262ed78d23 Xeon E5-2690 @ 2.90GHz with 128GB RAM and the guests are all o
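
A common first experiment for a slow vtnet(4) guest is to rule out offload interactions between the virtual NIC and the host (a diagnostic step under assumption, not a production recommendation):

```sh
# disable checksum offload, TSO and LRO on the guest NIC, then re-test:
ifconfig vtnet0 -rxcsum -txcsum -tso -lro
# if it helps, persist it via rc.conf, e.g.:
#   ifconfig_vtnet0="DHCP -rxcsum -txcsum -tso -lro"
```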

Re: slow network performance in bhyve with freebsd guests compared with any other guest os

2024-09-29 Thread void
On Sun, Sep 29, 2024 at 08:12:23AM +0100, Lexi Winter wrote: On 29/09/2024 07:58, void wrote: Surprisingly, freebsd guest performance is about 1/3rd of the line speed. Do some sysctls need to be tuned in freebsd specifically for when it is in a guest context? i tested this here and cannot

Re: Performance issues with vnet jails + epair + bridge

2024-10-15 Thread void
Hi, On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote: I am not sure if you are using FreeBSD15-CURRENT for testing in VMs. the test vm right now is main-n272915-c87b3f0006be built earlier today. the bhyve host is n271832-04262ed78d23 built Sept 8th the iperf3 listener is stable/14-n26

Re: Performance issues with vnet jails + epair + bridge

2024-10-17 Thread void
On Thu, Oct 17, 2024 at 11:08:27AM -0400, Cheng Cui wrote: The patch has no effect at the host if the host is a data receiver. In this context, the vms being tested are on a bhyve host. Is the host a data sender, because the tap interfaces are bridged with bge0 on the host. Or is the host a da

Re: FreeBSD Network Status Report for Week 48 2024

2024-11-30 Thread void
Hi, On Fri, 29 Nov 2024, at 15:00, Tom Jones wrote: > Hi hackers, > > I have written a 10th Network status report, you can find it here: > https://adventurist.me/posts/00337 > > And all previous posts are collected at this url: > https://adventurist.me/tag/networkstatus > > One thing we would like

ipfw layer2+3 firewalling question

2025-03-23 Thread void
Hi, (originally posted on the forums) My objective is to protect services on a bhyve host, while allowing traffic to the bhyve guests to pass to and from them unprocessed, as these each have pf and their own firewall policies. The host is running recent -current. I know ipfw can process both la
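
For the layer-2/layer-3 split on a bridge, the relevant knobs are the bridge pfil sysctls (the names are standard if_bridge(4) MIBs; which values fit depends on the policy being built):

```sh
sysctl net.link.bridge.pfil_bridge   # run pfil hooks on the bridge itself
sysctl net.link.bridge.pfil_member   # run pfil hooks on each member interface
sysctl net.link.bridge.pfil_onlyip   # if 1, only IP packets are filtered
sysctl net.link.bridge.ipfw          # enable layer-2 ipfw on the bridge
```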

Re: ipfw layer2+3 firewalling question

2025-03-25 Thread void
Hi Ronald, thank you for your reply. On Sun, Mar 23, 2025 at 08:21:21PM +0100, Ronald Klop wrote: I assume that in your setup igb0 is the host interface as well as bridge member. That's correct. That makes the setup a bit hard to reason about. IMHO you now have a virtual setup which you wo