Re: HyStart availability in FreeBSD stack

2025-03-24 Thread Cheng Cui
Hi Jaeyong, Sorry for the delayed response. There is a patch to enable HyStart++ for the default TCP stack. If you can help test it, I think it would be very helpful in speeding up its acceptance. https://reviews.freebsd.org/D46425 Best Regards, Cheng Cui
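For anyone who wants to try the review locally, a minimal sketch is below. The source tree path /usr/src is an assumption, and `arc patch D46425` is an equivalent way to pull in the change; the raw-diff URL form is the usual Phabricator download link.

```
# Sketch: fetch the raw diff of review D46425 and apply it to a source tree.
# /usr/src is an assumption; `arc patch D46425` is an equivalent alternative.
cd /usr/src
fetch -o D46425.diff 'https://reviews.freebsd.org/D46425?download=true'
git apply --check D46425.diff && git apply D46425.diff
# ...then rebuild and install the kernel as usual and reboot.
```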

Re: Sending empty segment upon receiving partial ACK

2025-03-06 Thread Cheng Cui
> …did not receive any packets, hence nothing to ACK. That said, those retransmission packets exist solely for the purpose of retransmission, yet their segment length is zero, which seems like pure overhead. Hope this makes it clear. Thanks, Jaeyong. On March 5, 2025 (Wed) at 12:30 PM,

Re: Sending empty segment upon receiving partial ACK

2025-03-05 Thread Cheng Cui
…are the same, and there is no possibility of a challenge ACK either). Isn't the loss recovery phase used for data retransmission? There should be retransmitted packets. But what do you mean by "no packets going out"? Best Regards, Cheng Cui

Re: Sending empty segment upon receiving partial ACK

2025-02-26 Thread Cheng Cui
…experiencing congestion or packet drops, if you checked the right code. If yes, that was the problem to look at in the first place. If not, the zero-length TCP packet is a pure ACK, which in a different code path has its own purpose. Best Regards, Cheng Cui
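To tell whether such zero-length segments are pure ACKs or something emitted on the retransmission path, a quick check like the sketch below can help; the interface (em0) and port (5001) are placeholders, not values from this thread.

```
# Stack-level view: how many retransmissions the TCP stack itself reports.
netstat -s -p tcp | grep -i retrans

# Wire-level view: capture only segments whose TCP payload length is zero
# (pure ACKs). Interface em0 and port 5001 are placeholders.
tcpdump -ni em0 'tcp port 5001 and
  (ip[2:2] - ((ip[0]&0xf)<<2) - ((tcp[12]&0xf0)>>2)) == 0'
```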

Re: struct ifnet is now hidden

2024-11-14 Thread Cheng Cui
> …and will likely need lots of review and editing. Again, thanks for everyone's help. I hope the road forward with this is not too bumpy. - Justin -- Best Regards, Cheng Cui

Re: Performance test for CUBIC in stable/14

2024-10-25 Thread Cheng Cui
, especially for this throughput of over 900 Mb/s. cc On Wed, Oct 23, 2024 at 5:43 PM void wrote: > On Wed, Oct 23, 2024 at 03:14:08PM -0400, Cheng Cui wrote: > > I see. The result of `newreno` vs. `cubic` shows non-constant/infrequent packet retransmission. So TCP c…

Re: Performance test for CUBIC in stable/14

2024-10-23 Thread Cheng Cui
any way to reduce CPU usage? cc On Wed, Oct 23, 2024 at 11:04 AM void wrote: > On Wed, Oct 23, 2024 at 08:28:01AM -0400, Cheng Cui wrote: > > The latency does not sound like a problem to me. What is the performance of the TCP congestion control algorithm `newreno`?

Re: Performance test for CUBIC in stable/14

2024-10-23 Thread Cheng Cui
…And let me know the result of `newreno` vs. `cubic`, for example: iperf3 -B ${src} --cport ${tcp_port} -c ${dst} -l 1M -t 20 -i 2 -VC newreno cc On Tue, Oct 22, 2024 at 4:13 PM void wrote: > On Tue, Oct 22, 2024 at 03:57:42PM -0400, Cheng Cui wrote: > > What is the outpu…
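Spelled out, the back-to-back comparison would look like the sketch below; ${src}, ${dst}, and ${tcp_port} are the placeholders from the command above, and `-C` (combined as `-VC` above) selects the congestion control algorithm per run.

```
# Sketch: run the identical 20-second test once per congestion control
# algorithm; ${src}, ${dst}, ${tcp_port} are the thread's placeholders.
iperf3 -B ${src} --cport ${tcp_port} -c ${dst} -l 1M -t 20 -i 2 -V -C newreno
iperf3 -B ${src} --cport ${tcp_port} -c ${dst} -l 1M -t 20 -i 2 -V -C cubic
```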

Re: Performance test for CUBIC in stable/14

2024-10-22 Thread Cheng Cui
What is the output from `ping` (latency) between these VMs? cc On Tue, Oct 22, 2024 at 11:31 AM void wrote: > On Tue, Oct 22, 2024 at 10:59:28AM -0400, Cheng Cui wrote: > > Please re-organize your test results in before/after-patch order, so that I can unders…

Re: Performance test for CUBIC in stable/14

2024-10-22 Thread Cheng Cui
On Mon, Oct 21, 2024 at 2:25 PM void wrote: > On Mon, Oct 21, 2024 at 10:42:49AM -0400, Cheng Cui wrote: > >Change the subject to `Performance test for CUBIC in stable/14`, was `Re: > >Performance issues with vnet jails + epair + bridge`. > > > >I actually prepared

Re: Performance test for CUBIC in stable/14

2024-10-21 Thread Cheng Cui
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval            Transfer     Bitrate         Retr
[  5]   0.00-10.07  sec   1.08 GBytes   918 Mbits/sec  8320   sender
[  5]   0.00-10.11  sec   1.08 GBytes   915 Mbits/sec          receiver
iperf Done.
cc On Fri, Oct 18, 2024 at 9:13 AM void wrote: > On Fri, Oct 18, 2024 at 07:28:49AM -…

Re: Performance issues with vnet jails + epair + bridge

2024-10-18 Thread Cheng Cui
On Thu, Oct 17, 2024 at 11:17 AM void wrote: > On Thu, Oct 17, 2024 at 11:08:27AM -0400, Cheng Cui wrote: > >The patch has no effect at the host if the host is a data receiver. > > In this context, the vms being tested are on a bhyve host. > > Is the host a data se

Re: Performance issues with vnet jails + epair + bridge

2024-10-17 Thread Cheng Cui
wrote: > On Thu, Oct 17, 2024 at 05:05:41AM -0400, Cheng Cui wrote: > >My commit is inside the FreeBSD kernel, so you just rebuild the `kernel`, > >and you don't need to rebuild the `world`. > > OK thanks. Would building it the same on the bhyve *host* have an effect

Re: Performance issues with vnet jails + epair + bridge

2024-10-17 Thread Cheng Cui
My commit is inside the FreeBSD kernel, so you just rebuild the `kernel`, and you don't need to rebuild the `world`. cc On Wed, Oct 16, 2024 at 1:39 PM void wrote: > Hi, > > On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote: > >I am not sure if you are using F
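For reference, a kernel-only rebuild is sketched below; /usr/src and KERNCONF=GENERIC are assumptions, adjust to your own tree and config.

```
# Sketch: kernel-only rebuild; no buildworld needed for a kernel-side change.
# /usr/src and KERNCONF=GENERIC are assumptions; adjust to your setup.
cd /usr/src
make -j"$(sysctl -n hw.ncpu)" buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
shutdown -r now
```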

Re: Performance issues with vnet jails + epair + bridge

2024-10-17 Thread Cheng Cui
7.5: 676 Mbits/sec Linux 6.6.52-0-virt: 941 Mbits/sec cc On Wed, Oct 16, 2024 at 8:16 PM void wrote: > Hi, > > On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote: > >I am not sure if you are using FreeBSD15-CURRENT for testing in VMs. > >But given you

Re: Performance issues with vnet jails + epair + bridge

2024-10-15 Thread Cheng Cui
I am not sure if you are using FreeBSD 15-CURRENT for testing in the VMs. But given that your iperf3 test result has retransmissions, if you can try it, there is a recent VM-friendly improvement in the TCP congestion control CUBIC: commit ee45061051715be4704ba22d2fcd1c373e29079d Author: Cheng Cui Date: Thu
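To confirm which congestion control algorithm a test machine is actually using, a quick check like the following should do; loading the module is only needed when cc_cubic is not already compiled into the kernel.

```
# Sketch: make CUBIC available and select it as the default algorithm.
kldload cc_cubic 2>/dev/null || true    # no-op if already in the kernel
sysctl net.inet.tcp.cc.available        # algorithms currently loaded
sysctl net.inet.tcp.cc.algorithm=cubic  # switch the system default
```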

Re: Network starvation question

2023-11-03 Thread Cheng Cui
…application's responsiveness may be different. Best Regards, Cheng Cui On Fri, Nov 3, 2023 at 12:42 AM Yuri wrote: > Hi, > I've encountered a situation where application A was using 100% of the outbound bandwidth, which is approximately 3.5 MBps of UDP traffic.

tcp and udp traffic over IPv6 does not work from the latest e1000 git change 918c25677d

2023-07-26 Thread Cheng Cui
…(default)
[  1] local fd00::3 port 5001 connected with fd00::2 port 64612
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  1] 0.00-0.01 sec  3.42 KBytes  2.38 Mbits/sec  0.011 ms  0/3 (0%)
Best Regards, Cheng Cui
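The quoted output is iperf2-style (port 5001, Jitter and Lost/Total columns), so a sketch for reproducing the UDP-over-IPv6 test would look like the following, reusing the fd00::2/fd00::3 addresses from the output; the duration is an arbitrary example.

```
# Sketch: UDP over IPv6 with iperf2 (-V selects IPv6, -u selects UDP).
iperf -s -u -V -B fd00::3        # on the receiver
iperf -c fd00::3 -u -V -t 10     # on the sender
```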

Re: CFT: lem(4), em(4) e1000 Ethernet TSO testing

2023-07-26 Thread Cheng Cui
f5d irq 107 at device 4.1 on pci9 em4: port 0xacc0-0xacff mem 0xdf3e-0xdf3f irq 101 at device 3.0 on pci10 em5: port 0xac80-0xacbf mem 0xdf3c-0xdf3d irq 102 at device 3.1 on pci10 Best Regards, Cheng Cui On Tue, Jul 25, 2023 at 10:38 PM Kevin Bowling wrote: > Hi, >

Re: -current dropping ssh connections

2023-06-21 Thread Cheng Cui
> There don't seem to be any error messages on the console at all; the client session simply reports: client_loop: send disconnect: Broken pipe
Have you tried SSH keepalive? https://stackoverflow.com/questions/25084288/keep-ssh-session-alive Best Regards, Cheng Cui
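A minimal client-side keepalive sketch, per the linked suggestion; the interval and count values are examples, not recommendations from this thread.

```
# Sketch: enable client-side SSH keepalives in ~/.ssh/config.
cat >> ~/.ssh/config <<'EOF'
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
EOF
```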

Re: how to increase the vnet speed?

2023-05-24 Thread Cheng Cui
…uname -a FreeBSD fbsd.cc.home 13.1-RELEASE FreeBSD 13.1-RELEASE releng/13.1-n250148-fc952ac2212 GENERIC amd64 cc@fbsd ~$ Best Regards, Cheng Cui On Wed, May 24, 2023 at 2:19 AM Benoit Chesneau wrote: > Sorry, I thought I posted it, but it's a bridge: > ``` > vlan200: flags=8943 metric …

Re: missing hw.em.msix support in FreeBSD 14.0-CURRENT?

2023-05-08 Thread Cheng Cui
On Mon, May 8, 2023 at 9:50 AM Yuri wrote: > Cheng Cui wrote: > > A follow-up question: Shouldn't the man page for em(4) be updated accordingly, as the sysctls are different now? > > https://man.freebsd.org/cgi/man.cgi?em(4)

Re: missing hw.em.msix support in FreeBSD 14.0-CURRENT?

2023-05-07 Thread Cheng Cui
Found the RSS support: it is only enabled if the hardware is ">= e1000_82571", per if_em.c: if (hw->mac.type >= em_mac_min) /* #define em_mac_min e1000_82571 */ { ... scctx->isc_txrx = &em_txrx; ... } Best Regards, Cheng Cui On Sun, May 7

Re: missing hw.em.msix support in FreeBSD 14.0-CURRENT?

2023-05-07 Thread Cheng Cui
…dev.em.0.iflib.override_qs_enable: 0
dev.em.0.iflib.override_nrxqs: 0
dev.em.0.iflib.override_ntxqs: 0
dev.em.0.iflib.driver_version: 7.7.8-fbsd
root@s1:~ #
Best Regards, Cheng Cui On Sun, May 7, 2023 at 4:18 PM Yuri wrote: > Cheng Cui wrote: > > Hello, > > I am using this em(4) driver for some te…

missing hw.em.msix support in FreeBSD 14.0-CURRENT?

2023-05-07 Thread Cheng Cui
…hw.em.eee_setting: 1
hw.em.rx_process_limit: 100
hw.em.sbp: 0
hw.em.smart_pwr_down: 0
hw.em.rx_abs_int_delay: 66
hw.em.tx_abs_int_delay: 66
hw.em.rx_int_delay: 0
hw.em.tx_int_delay: 66
hw.em.disable_crc_stripping: 0
root@s1:~ #
Best Regards, Cheng Cui
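To check whether MSI-X is actually in use on a given em(4) port (rather than the old hw.em loader tunables), a few standard commands can help; em0 is a placeholder device name.

```
# Sketch: inspect MSI-X usage for an em(4) device (em0 is a placeholder).
sysctl dev.em.0.iflib          # per-device iflib knobs (queue overrides, etc.)
pciconf -lc em0 | grep -i msi  # MSI/MSI-X capabilities advertised by the NIC
vmstat -ia | grep em0          # interrupt vectors actually allocated
```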