Re: Performance issues with vnet jails + epair + bridge

2024-09-13 Thread Sad Clouds
I built a new kernel with "options RSS", however TCP throughput
has now decreased from 128 MiB/sec to 106 MiB/sec.

Looks like the bottleneck has shifted from epair to netisr:

  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   12 root        -56    -     0B   272K CPU3     3   3:45 100.00% intr{swi1: netisr 0}
   11 root        187 ki31     0B    64K RUN      0   9:00  62.41% idle{idle: cpu0}
   11 root        187 ki31     0B    64K CPU2     2   9:36  61.23% idle{idle: cpu2}
   11 root        187 ki31     0B    64K RUN      1   8:24  55.03% idle{idle: cpu1}
    0 root        -64    -     0B   656K -        2   0:50  21.50% kernel{epair_task_2}

Are there any other tricks I can try to distribute the load over
multiple CPU cores?

I found this document which describes various vnet examples:
https://freebsdfoundation.org/wp-content/uploads/2020/03/Jail-vnet-by-Examples.pdf

The problem is, with a Raspberry Pi I cannot add any PCIe network cards
with multiple ports or SR-IOV features. I was hoping to use some
software-based virtual device, but if the scalability is this bad, then
I may have to discard vnet completely and configure jails to use the
host network stack.
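
For reference, the fallback would look roughly like this in jail.conf
(a minimal sketch only - the jail name, path, interface and addresses
below are placeholders, not my actual configuration):

# Jail on the shared host network stack (no vnet); names are placeholders
web {
    path = "/usr/local/jails/web";
    host.hostname = "web.example.local";
    interface = "genet0";          # host NIC (genet0 on an RPi4, if memory serves)
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}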

Thanks.



Re: Ethernet device with shared mdio

2024-09-13 Thread Milan Obuch
On Fri, 6 Sep 2024 18:03:39 +
Mike Belanger  wrote:

> The following device tree specifies a shared mdio.
> The ffec driver uses miibus.
> When there is a shared mdio, one of the device instances will not be
> able to properly configure the PHY, as it needs to use the other
> device's resources to read/write the PHY.
> 
> &fec1 {
>     pinctrl-names = "default";
>     pinctrl-0 = <&pinctrl_fec1>;
>     phy-mode = "rgmii-id";
>     phy-handle = <&ethphy0>;
>     fsl,magic-packet;
>     status = "okay";
>
>     mdio {
>         #address-cells = <1>;
>         #size-cells = <0>;
>
>         ethphy0: ethernet-phy@0 {
>             compatible = "ethernet-phy-ieee802.3-c22";
>             reg = <0>;
>         };
>
>         ethphy1: ethernet-phy@1 {
>             compatible = "ethernet-phy-ieee802.3-c22";
>             reg = <1>;
>         };
>     };
> };
>
> &fec2 {
>     pinctrl-names = "default";
>     pinctrl-0 = <&pinctrl_fec2>;
>     phy-mode = "rgmii-txid";
>     phy-handle = <&ethphy1>;
>     phy-supply = <&reg_fec2_supply>;
>     nvmem-cells = <&fec_mac1>;
>     nvmem-cell-names = "mac-address";
>     rx-internal-delay-ps = <2000>;
>     fsl,magic-packet;
>     status = "okay";
> };
> 
> Does FreeBSD have any plans for supporting hardware that specifies a
> shared mdio in the dtb? Just knowing the general approach being
> considered would be helpful.
> 

I can't speak for the FreeBSD project, I can only share my experience
with a similar case. It is described in my post to the hackers mailing
list (see
https://lists.freebsd.org/archives/freebsd-hackers/2021-December/000649.html
for details); unfortunately, no response was received. A week later I
made another attempt to get some attention on the net mailing list, see
https://lists.freebsd.org/archives/freebsd-net/2021-December/001114.html
for the post, again with no response.

As you can see, my case was similar, except the mdio block was attached
to the second controller. This makes it a bit more problematic -
naturally, you can't use the mdio controller before it has been
initialized.

I was not able to use the miiproxy approach, as noted in my post to the
hackers mailing list; additionally, miiproxy was removed from the tree
together with the MIPS arch some time later. I resolved the issue by
modifying the cgem driver and the mii layer. This was just a proof of
concept with some hacks, but I was able to use both ports with proper
link state change detection. I did not continue the work because the
vendor changed the hardware design and there was no shared mdio anymore.
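
Conceptually the change boils down to something like this little
standalone sketch (nothing from the actual patch, all names are made
up): the MII side only talks to an MDIO bus handle, and both MACs may
point at the same handle, owned by whichever controller has the
working MDIO block.

/* sketch only - decoupling MDIO access from the MAC/MII side */
#include <stdint.h>
#include <stdio.h>

struct mdio_bus {
	void *priv;                               /* owning controller */
	int  (*read)(void *priv, int phy, int reg);
};

struct mac {
	const char      *name;
	struct mdio_bus *mdio;   /* may be owned by the other controller */
	int              phy_addr;
};

/* stand-in for the register-level access done by the MDIO owner */
static int
owner_mdio_read(void *priv, int phy, int reg)
{
	(void)priv;
	printf("owner MDIO: read phy %d reg 0x%02x\n", phy, reg);
	return (0x796d);                          /* fake register value */
}

static int
mac_phy_read(struct mac *m, int reg)
{
	/* the MII layer goes through the handle, not "its own" MAC */
	return (m->mdio->read(m->mdio->priv, m->phy_addr, reg));
}

int
main(void)
{
	struct mdio_bus shared = { NULL, owner_mdio_read };
	struct mac mac0 = { "cgem0", &shared, 0 };
	struct mac mac1 = { "cgem1", &shared, 1 };  /* PHY sits on cgem0's MDIO */

	mac_phy_read(&mac0, 1);
	mac_phy_read(&mac1, 1);
	return (0);
}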

If you are interested I can dig up the sources; a big part of my
changes would not be necessary, but the idea of decoupling the MDIO and
MII interfaces still applies, I think. By the way, which board are you
working on? Is it accessible to a general audience?

Regards,
Milan



Re: Performance issues with vnet jails + epair + bridge

2024-09-13 Thread Sad Clouds
I tried both: a kernel built with "options RSS" and the following
in /boot/loader.conf:

net.isr.maxthreads=-1
net.isr.bindthreads=1
net.isr.dispatch=deferred
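
For reference, the effective values can be checked after boot with
something like the following (listed from memory):

sysctl net.isr.maxthreads net.isr.numthreads net.isr.bindthreads net.isr.dispatch
netstat -Q    # per-protocol netisr queue and dispatch statistics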

Not sure if there are race conditions in these implementations, but
after a few short tests epair_task_0 got stuck at 100% CPU and stayed
in this state indefinitely, until I rebooted.

  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
    0 root        -64    -     0B   672K CPU0     0   6:24 100.00% kernel{epair_task_0}

Is RSS considered too experimental, hence not enabled by default?



Re: Performance issues with vnet jails + epair + bridge

2024-09-13 Thread Mark Saad



> 
> On Sep 13, 2024, at 5:09 AM, Sad Clouds  wrote:
> 
> Tried both, kernel built with "options RSS" and the following
> in /boot/loader.conf
> 
> net.isr.maxthreads=-1
> net.isr.bindthreads=1
> net.isr.dispatch=deferred
> 
> Not sure if there are race conditions with these implementations, but
> after a few short tests, the epair_task_0 got stuck 100% on CPU and
> stayed there in this state indefinitely, until I rebooted.
> 
>   PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
>     0 root        -64    -     0B   672K CPU0     0   6:24 100.00% kernel{epair_task_0}
> 
> Is RSS considered too experimental, hence not enabled by default?

Sad
   Can you go back a bit - you mentioned there is an RPi in the mix? Some of the
Raspberries have their NIC attached via USB under the covers, which will kill
the total speed of things.

Can you cobble together a diagram of what you have on either end?


---
Mark Saad | nones...@longcount.org


Re: Ethernet device with shared mdio

2024-09-13 Thread Mike Belanger
Thank you for the response and for sharing your scenario.

We’ve also hacked up the cgem and the ffec driver to support a shared mdio.
That was not too difficult, but we have a new scenario where the mdio is now 
being shared between two different devices that use different drivers (ffec and 
eqos).
This presents a few extra challenges.
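
In device tree terms the new case looks roughly like this (a hedged
sketch, not the actual Variscite dtb - node and label names are
illustrative only):

&eqos {
    phy-mode = "rgmii-id";
    phy-handle = <&ethphy1>;   /* ethphy1 sits under the ffec controller's mdio node */
    status = "okay";
};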

I was hoping that FreeBSD might have considered supporting a shared mdio. We can
come up with something, but if there is an existing architecture/approach in
the works, we would like to use a consistent approach. At first glance,
miiproxy did not seem like a fit.

I do not have the hardware.  I am trying to help somebody else with this.  I 
have seen the dtb.
It’s a Variscite DAR-MX8M-PLUS.

Regards,
Mike.


Re: Ethernet device with shared mdio

2024-09-13 Thread Milan Obuch
On Fri, 13 Sep 2024 13:40:55 +
Mike Belanger  wrote:

> Thank you for the response and for sharing your scenario.
> 
> We’ve also hacked up the cgem and the ffec driver to support a shared
> mdio. That was not too difficult, but we have a new scenario where
> the mdio is now being shared between two different devices that use
> different drivers (ffec and eqos). This presents a few extra
> challenges.

Could you elaborate a bit more? Are your hacks published somewhere, or
could you share them?

> I was hoping that FreeBSD may have considered supporting a shared
> mdio. We can come up with something, but if there is an existing
> architecture/approach in the works…we would like to use a consistent
> approach. At first glance, miiproxy did not seem like a fit.

I don't know anything specific. I think I saw some DTBs with shared
MDIO, but I did not analyse the details. And miiproxy was just a
possible solution for the case where the MDIO controller is initialised
after the MII controller (which was my case), but I did use some hacks
for the proof of concept.

> I do not have the hardware. I am trying to help somebody else with
> this. I have seen the dtb. It’s a Variscite DAR-MX8M-PLUS.

OK, this makes the development a bit slower :)

Regards,
Milan



Re: Performance issues with vnet jails + epair + bridge

2024-09-13 Thread Sad Clouds
On Fri, 13 Sep 2024 08:08:02 -0400
Mark Saad  wrote:

> Sad
>Can you go back a bit you mentioned there is a RPi in the mix ? Some of 
> the raspberries have their nic usb attached under the covers . Which will 
> kill the total speed of things. 
>  
> Can you cobble together a diagram of what you have on either end ?

Hello, I'm not sending data across the network, only between the host
and the jails. I'm trying to evaluate how FreeBSD handles TCP data
locally within a single host.

I understand that vnet jails will have more overhead, compared to a
shared TCP/IP stack via localhost. So I'm trying to measure it and see
where the bottlenecks are.

The Raspberry Pi 4 host has a single vnet jail, exchanging data with
the host via epair(4) and if_bridge(4) interfaces. I don't really know
what topology FreeBSD uses to represent all this, so I can't draw any
diagrams, but I think all data flows through the kernel internally and
never leaves via the physical network interface.
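
Roughly, the plumbing was set up along these lines (interface names,
jail name and addresses here are illustrative, not copied verbatim from
my setup):

ifconfig bridge0 create up
ifconfig epair0 create                      # creates epair0a and epair0b
ifconfig bridge0 addm epair0a
ifconfig epair0a up
ifconfig epair0b vnet testjail              # move one end into the vnet jail
jexec testjail ifconfig epair0b inet 10.0.0.2/24 up
ifconfig bridge0 inet 10.0.0.1/24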



[Bug 281460] if_ovpn doesn't work with crypto.ko

2024-09-13 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=281460

Mark Linimon  changed:

   What|Removed |Added

   Assignee|b...@freebsd.org|n...@freebsd.org

--- Comment #2 from Mark Linimon  ---
^Triage: appears to be a problem with loadable modules.



[Bug 281452] Realtek 'if_re' module crashes on module load

2024-09-13 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=281452

Mark Linimon  changed:

   What|Removed |Added

   Assignee|b...@freebsd.org|n...@freebsd.org

--- Comment #3 from Mark Linimon  ---
^Triage: appears to be a problem only on kldload.



Re: Performance issues with vnet jails + epair + bridge

2024-09-13 Thread Zhenlei Huang



> On Sep 13, 2024, at 10:54 PM, Sad Clouds  wrote:
> 
> On Fri, 13 Sep 2024 08:08:02 -0400
> Mark Saad  wrote:
> 
>> Sad
>>   Can you go back a bit you mentioned there is a RPi in the mix ? Some of 
>> the raspberries have their nic usb attached under the covers . Which will 
>> kill the total speed of things. 
>> 
>> Can you cobble together a diagram of what you have on either end ?
> 
> Hello, I'm not sending data across the network, only between the host
> and the jails. I'm trying to evaluate how FreeBSD handles TCP data
> locally within a single host.

When you take vnet into account, **local** traffic should stay within a
single vnet jail. If you want traffic across vnet jails, if_epair or
netgraph hooks have to be employed, and that of course introduces some
overhead.

> 
> I understand that vnet jails will have more overhead, compared to a
> shared TCP/IP stack via localhost. So I'm trying to measure it and see
> where the bottlenecks are.

The overhead of a vnet jail should be negligible compared to a legacy
jail or no jail. Bear in mind that when the VIMAGE option is enabled,
there is a default vnet 0. It is not visible via jls and cannot be
destroyed. So when you see bottlenecks, as in this case, they are mostly
caused by other components such as if_epair, not by the vnet jail itself.

> 
> The Raspberry Pi 4 host has a single vnet jail, exchanging data with
> the host via epair(4) and if_bridge(4) interfaces. I don't really know
> what topology FreeBSD is using to represent all this so can't draw any
> diagrams, but I think all data flows through the kernel internally and
> never leaves the physical network interface.

For vnet jails, when you describe the network topology you can simply
treat each of them as a VM or a physical box.
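
For example, a vnet jail declaration in jail.conf looks pretty much
like describing a small box of its own (a sketch only - names and paths
are illustrative):

dhcp {
    path = "/jails/dhcp";
    vnet;                              # the jail gets its own network stack
    vnet.interface = "epair1b";        # moved into the jail when it starts
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    persist;
}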

I have one box with dozens of vnet jails. Each of them has a single
responsibility, e.g. DHCP, LDAP, pf firewall, OOB access. The topology
is quite clear and easy to maintain. The only overhead is the extra
hops between the vnet jail instances. For my use case the performance
is not critical and it has worked great for years.

> 

Best regards,
Zhenlei




[Bug 281452] Realtek 'if_re' module crashes on module load

2024-09-13 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=281452

--- Comment #4 from Zhenlei Huang  ---
@Mithun

It seems this has been fixed by Kristof via commit 43387b4e5740 (if: guard
against if_ioctl being NULL). The commit has been MFCed to stable/14 with
commit 9a8a26aefb36.

Could you please have a try with 9a8a26aefb36?

See also https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=275920 .
