On Fri, 25 Oct 2024, at 13:13, Cheng Cui wrote:
Here is my example. I am using two 6-core/12-thread desktops for my bhyve
servers.
CPU: AMD Ryzen 5 5560U with Radeon Graphics (2295.75-MHz K8-class CPU)
You can find test results on VMs from my wiki:
https://wiki.freebsd.org/chengcui/testD46046
All the CPU utilization results are low,
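
For reference, a minimal sketch of how host-side CPU utilization can be
watched on FreeBSD while such a test runs (the tools below are standard base
utilities; the flags and sampling interval are one reasonable choice, not
taken from the thread):

# per-CPU load plus per-thread usage; useful for spotting a saturated
# bhyve vCPU or worker thread
top -SHP

# coarse system-wide view, one sample per second
vmstat 1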

On Wed, Oct 23, 2024 at 03:14:08PM -0400, Cheng Cui wrote:
I see. The result of `newreno` vs. `cubic` shows non-constant/infrequent
packet retransmission, so TCP congestion control has little impact on
improving the performance.
The performance bottleneck may come from somewhere else. For example, the
sender CPU shows 97.7% utilization. Would there be any
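
A hedged sketch of how the retransmission observation above is usually
confirmed on FreeBSD (counter wording varies slightly between releases, and
the temporary file names are arbitrary):

# snapshot the TCP retransmit counters, run the benchmark, then compare;
# a small delta means congestion control is not the limiting factor
netstat -s -p tcp | grep -i retrans > /tmp/retrans.before
# (run the iperf test here)
netstat -s -p tcp | grep -i retrans > /tmp/retrans.after
diff /tmp/retrans.before /tmp/retrans.after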

On Wed, Oct 23, 2024 at 08:28:01AM -0400, Cheng Cui wrote:
The latency does not sound like a problem to me. What is the performance of
the TCP congestion control algorithm `newreno`?
In case you need to load `newreno` first:
cc@n1:~ % sudo kldload newreno
cc@n1:~ % sudo sysctl net.inet.tcp.cc.algorithm=newreno
net.inet.tcp.cc.algorithm: cubic -> newreno
cc@n1:
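
A related sketch for switching congestion control algorithms. Note this is an
assumption about module naming, not a correction of the quoted transcript: on
recent FreeBSD the loadable modules are named cc_<algorithm> (cc_newreno,
cc_cubic, and so on):

# list the algorithms the running kernel already provides
sysctl net.inet.tcp.cc.available

# load the module only if the algorithm is not listed above
kldload cc_newreno

# make new TCP connections use it
sysctl net.inet.tcp.cc.algorithm=newreno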

On Tue, Oct 22, 2024 at 03:57:42PM -0400, Cheng Cui wrote:
> What is the output from `ping` (latency) between these VMs?

That test wasn't between VMs. It was from the vm with the patches to a
workstation on the same switch.
ping from the vm to the workstation:
--- 192.168.1.232 ping statistics
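
For completeness, a typical way to collect such latency numbers (the address
is the workstation mentioned above; the probe count is arbitrary):

# 20 probes, then read the final "round-trip min/avg/max/stddev" summary line
ping -c 20 192.168.1.232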

On Tue, Oct 22, 2024 at 10:59:28AM -0400, Cheng Cui wrote:
> Please re-organize your test result in before/after patch order, so that I
> can understand and compare them.

Sure.
Before:
[ ID] Interval        Transfer     Bandwidth
[  1] 0.00-60.02 sec  5.16 GBytes  738 Mbits/sec
After:
[ ID] Interval        Transfer     Bandwidth
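
The table format above looks like a 60-second iperf TCP run; a minimal sketch
of how such a before/after pair is usually produced (the choice of iperf vs.
iperf3 and the receiver address are assumptions, not stated in the thread;
iperf3 labels the column "Bitrate" rather than "Bandwidth"):

# on the receiver
iperf -s

# on the sender: one TCP stream for 60 seconds
iperf -c <receiver-ip> -t 60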

On Mon, Oct 21, 2024 at 10:42:49AM -0400, Cheng Cui wrote:
Change the subject to `Performance test for CUBIC in stable/14` (was `Re:
Performance issues with vnet jails + epair + bridge`).
I actually prepared two patches, one depends on the other:
https://reviews.freebsd.org/D47218 << apply this patch first
https://reviews.freebsd.org/
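
For anyone reproducing the test, a hedged sketch of applying the reviews in
the stated order on a stable/14 source tree (the path and the git-apply route
are assumptions; `arc patch D47218` via Arcanist is the other common way, and
the second review's ID is truncated above, so it is left as a placeholder):

cd /usr/src && git checkout stable/14
# save each review's "Download Raw Diff" locally, then apply in order
git apply D47218.diff
git apply <second-review>.diff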