https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=286689
--- Comment #3 from Ivan ---
(In reply to Mark Johnston from comment #1)
(In reply to Aleksandr Fedorov from comment #2)
Sorry, I can't test it right now; it is a production server and I've already
removed the NIC from the bridge to
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=286689
Aleksandr Fedorov changed:
What|Removed |Added
CC||afedo...@freebsd.org
--- Comme
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=287150
Lexi Winter changed:
What|Removed |Added
CC||i...@freebsd.org
Assignee|
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=287150
Mark Linimon changed:
What|Removed |Added
Keywords||regression
Assignee|b...@
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=286689
Mark Johnston changed:
What|Removed |Added
CC||ma...@freebsd.org
Stat
On Thu, May 22, 2025 at 08:09:13PM +0100, Lexi Winter wrote:
as kp said, this is broken, but you can trivially fix it by moving the
IP addresses to the bridge interface instead. note that for SLAAC to
work on a bridge, you have to set ‘accept_rtadv’ and ‘auto_linklocal’ on
the bridge interface
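for reference, a minimal rc.conf sketch of that layout (em0 as the member NIC is an assumption; adjust names to taste):

```
# /etc/rc.conf -- addresses live on bridge0, not on the member em0
cloned_interfaces="bridge0"
ifconfig_em0="up"
ifconfig_bridge0="addm em0 up"
# SLAAC only works on the bridge with these two flags set:
ifconfig_bridge0_ipv6="inet6 auto_linklocal accept_rtadv"
```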
void:
> On Mon, May 19, 2025 at 11:33:50AM +0100, Lexi Winter wrote:
> > in short, following this commit...
> >
> > b61850c4e6f "bridge(4): default net.link.bridge.member_ifaddrs to false"
> > https://cgit.freebsd.org/src/commit/?id=b61850c4e6f6b0f21b36da723
in short, following this commit...
b61850c4e6f "bridge(4): default net.link.bridge.member_ifaddrs to false"
https://cgit.freebsd.org/src/commit/?id=b61850c4e6f6b0f21b36da7238db969d9090309e
...it is now impossible to use a network interface which has an IP
address assigned to it as a bridg
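(the commit only flips a default; as a sketch, the old behaviour can be restored via the sysctl named in the commit title:)

```
# /etc/sysctl.conf -- allow IP addresses on bridge member interfaces again
net.link.bridge.member_ifaddrs=1
```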
Paul Vixie:
> If we move all member ifaddrs to the bridge itself, then will arp
> requests always have to be broadcast on all member interfaces? If so
> this is intolerable from a security perspective, a complete
> nonstarter.
i believe Patrick Hausen already answered your original q
On Monday, May 19, 2025 6:09:08 PM UTC Patrick M. Hausen wrote:
> Hi all,
>
> > On 19.05.2025 at 19:28, Paul Vixie wrote:
> >
> > If we move all member ifaddrs to the bridge itself, then will arp requests
> > always have to be broadcast on all member interfaces? If so this is
> > intolerable from a security perspective, a complete nonstarter.
>
> I am n
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=286689
Mark Linimon changed:
What|Removed |Added
Assignee|b...@freebsd.org|n...@freebsd.org
Keywords
Lexi Winter:
> bridge_input() also does a second list walk in GRAB_OUR_PACKETS to find
> traffic destined for the local host, which we could avoid with a sysctl
> to ignore Ethernet traffic for MAC addresses other than the bridge
> itself. this would break configurations where IP a
Matthew Grooms:
> > over the last few days i have been doing a bit of work on VLAN filtering
> > for bridge(4), which i thought i'd mention here in case anyone is
> > interested. the purpose of this is to extend the existing bridge VLAN
> > support to make it more
On 4/4/25 13:47, Lexi Winter wrote:
hello,
over the last few days i have been doing a bit of work on VLAN filtering
for bridge(4), which i thought i'd mention here in case anyone is
interested. the purpose of this is to extend the existing bridge VLAN
support to make it more generally useful.
the full changeset / di
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222842
Mark Linimon changed:
What|Removed |Added
Resolution|--- |Overcome By Events
Stat
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=192756
Mark Linimon changed:
What|Removed |Added
Resolution|--- |Overcome By Events
Stat
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237660
Mark Linimon changed:
What|Removed |Added
Assignee|n...@freebsd.org |bugmeis...@freebsd.org
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=263846
Mark Linimon changed:
What|Removed |Added
Resolution|--- |Feedback Timeout
Status
from vlad ---
Hi there. Are there any solutions for the reported issue?
I ran into the same tagged-VLAN bridge-related bug on the FreeBSD 14.2
release.
A brief netflow topology of my vm-bhyve setup config:
vm-guest -> tap_if -> bridge -> lagg0.101 -> lagg0 -> switch link
aggrega
On Fri, Oct 18, 2024 at 07:28:49AM -0400, Cheng Cui wrote:
The patch is a TCP congestion control algorithm improvement. So to
be clear, it only impacts a TCP data sender. These hosts are just traffic
forwarders, not TCP sender/receiver.
I can send you a patch for the FreeBSD 14/stable to test p
On Thu, Oct 17, 2024 at 11:17 AM void wrote:
> On Thu, Oct 17, 2024 at 11:08:27AM -0400, Cheng Cui wrote:
> >The patch has no effect at the host if the host is a data receiver.
>
> In this context, the vms being tested are on a bhyve host.
>
> Is the host a data sender, because the tap interfaces
On Thu, Oct 17, 2024 at 11:08:27AM -0400, Cheng Cui wrote:
The patch has no effect at the host if the host is a data receiver.
In this context, the vms being tested are on a bhyve host.
Is the host a data sender, because the tap interfaces are bridged with bge0
on the host. Or is the host a da
The patch has no effect at the host if the host is a data receiver.
Also the patch is for the FreeBSD main (15-CURRENT in development).
There is no plan to merge the commit into prior releases, given the code
base has been branched for quite some time.
cc
On Thu, Oct 17, 2024 at 5:49 AM void wr
On Thu, Oct 17, 2024 at 05:05:41AM -0400, Cheng Cui wrote:
My commit is inside the FreeBSD kernel, so you just rebuild the `kernel`,
and you don't need to rebuild the `world`.
OK thanks. Would building it the same on the bhyve *host* have an effect?
I'm asking because you earlier mentioned it's
My commit is inside the FreeBSD kernel, so you just rebuild the `kernel`,
and
you don't need to rebuild the `world`.
cc
On Wed, Oct 16, 2024 at 1:39 PM void wrote:
> Hi,
>
> On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
> >I am not sure if you are using FreeBSD15-CURRENT for testin
Thanks for your testing!
From your VM test result in the link, if I understand correctly, the CUBIC
in base stack has +24.5% better performance than the CUBIC in rack stack,
and it is better than old releases and OpenBSD. That sounds like an
improvement, although it still needs to be improved to
Hi,
On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
I am not sure if you are using FreeBSD15-CURRENT for testing in VMs.
But given your iperf3 test result has retransmissions, if you can try, there
is a recent VM friendly improvement from TCP congestion control CUBIC.
I did some fur
Hi,
On Tue, Oct 15, 2024 at 09:48:58AM -0400, Cheng Cui wrote:
I am not sure if you are using FreeBSD15-CURRENT for testing in VMs.
the test vm right now is main-n272915-c87b3f0006be built earlier today.
the bhyve host is n271832-04262ed78d23 built Sept 8th
the iperf3 listener is stable/14-n26
I am not sure if you are using FreeBSD15-CURRENT for testing in VMs.
But given your iperf3 test result has retransmissions, if you can try, there
is a recent VM friendly improvement from TCP congestion control CUBIC.
commit ee45061051715be4704ba22d2fcd1c373e29079d
Author: Cheng Cui
Date: Thu
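to actually run with this CUBIC on a test machine, the usual knobs are (a sketch; on recent branches cc_cubic may already be compiled into the kernel):

```
# /boot/loader.conf
cc_cubic_load="YES"

# /etc/sysctl.conf -- make CUBIC the system-wide default congestion control
net.inet.tcp.cc.algorithm=cubic
```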
On Tue, Oct 15, 2024 at 03:48:56AM +0100, void wrote:
(snip)
main-n272915-c87b3f0006be GENERIC-NODEBUG with
tcp_rack_load="YES" in /boot/loader.conf and in /etc/sysctl.conf:
#
# network
net.inet.tcp.functions_default=rack
net.pf.request_maxcount=40
net.local.stream.recvspace=65536
net.local
On Thu, Sep 12, 2024 at 06:16:18PM +0100, Sad Clouds wrote:
Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
single physical network interface, so I followed instructions for
networking vnet jails via epair and bridge, e.g.
(snip)
The issue is bulk TCP perfor
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191700
--- Comment #7 from Palle Girgensohn ---
(In reply to Patrick M. Hausen from comment #6)
Ah, yes, thanks. I see now this example is with a tap interface, I assume that
is different.
--
You are receiving this mail because:
You are the assignee for the bug.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191700
--- Comment #6 from Patrick M. Hausen ---
An epair has two ends. The one end on the host which is in most cases a member
of some bridge does not have an IP address. The other end which is inside the
jail does.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191700
--- Comment #5 from Palle Girgensohn ---
(In reply to Patrick M. Hausen from comment #4)
Is that really correct? Even for netgraph? For jails, there is just one bridge,
and multiple epairs or netgrapch interfaces are connected to that
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238198
Mark Linimon changed:
What|Removed |Added
Flags|mfc-stable12? |
--- Comment #16 from Mark Linimon
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=219901
Mark Linimon changed:
What|Removed |Added
Flags|mfc-stable12? |
--- Comment #6 from Mark Linimon
#4 from Patrick M. Hausen ---
You are not supposed to use any form of layer 3 addressing and communication on
an interface that is a bridge member.
All IP address configuration must go on the bridge interface instead. It's
documented in the handbook.
I am confident CARP will work too, once yo
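in rc.conf terms, "all IP configuration on the bridge" looks like this sketch (igb0 and the address are placeholders, not from the report):

```
# /etc/rc.conf -- the member NIC carries no address; the bridge does
cloned_interfaces="bridge0"
ifconfig_igb0="up"
ifconfig_bridge0="addm igb0 inet 192.0.2.10/24 up"
```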
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=263846
Mark Linimon changed:
What|Removed |Added
CC|don...@freebsd.org |bugmeis...@freebsd.org
As
> On Sep 16, 2024, at 10:47 PM, Aleksandr Fedorov wrote:
>
> If we are talking about local traffic between jails and/or host, then in
> terms of TCP throughput we have a room to improve, for example:
Without RSS option enabled, if_epair will only use one thread to move packets
between the p
On 2024-09-16 07:32, Miroslav Lachman wrote:
On 15/09/2024 19:56, Sad Clouds wrote:
On Sun, 15 Sep 2024 18:01:07 +0100
Doug Rabson wrote:
I just did a throughput test with iperf3 client on a FreeBSD 14.1 host
with
an intel 10GB nic connecting to an iperf3 server running in a vnet jail on
a t
On Sun, 15 Sep 2024 18:01:07 +0100
Doug Rabson wrote:
> I just did a throughput test with iperf3 client on a FreeBSD 14.1 host with
> an intel 10GB nic connecting to an iperf3 server running in a vnet jail on
> a truenas host (13.something) also with an intel 10GB nic and I get full
> 10GB throug
On Thu, Sep 12, 2024 at 06:16:18PM +0100, Sad Clouds wrote:
> Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
> single physical network interface, so I followed instructions for
> networking vnet jails via epair and bridge, e.g.
>
> deve
I just did a throughput test with iperf3 client on a FreeBSD 14.1 host with
an intel 10GB nic connecting to an iperf3 server running in a vnet jail on
a truenas host (13.something) also with an intel 10GB nic and I get full
10GB throughput in this setup. In the past, I had to disable LRO on the
tru
On Sat, 14 Sep 2024 10:45:03 +0800
Zhenlei Huang wrote:
> The overhead of a vnet jail should be negligible compared to a legacy jail
> or no jail. Bear in mind that when the VIMAGE option is enabled, there is a
> default vnet 0. It is not visible via jls and cannot be destroyed. So when
> you see bottlenec
> On Sep 13, 2024, at 10:54 PM, Sad Clouds wrote:
>
> On Fri, 13 Sep 2024 08:08:02 -0400
> Mark Saad wrote:
>
>> Sad
>> Can you go back a bit you mentioned there is a RPi in the mix ? Some of
>> the raspberries have their nic usb attached under the covers . Which will
>> kill the total s
On Fri, 13 Sep 2024 08:08:02 -0400
Mark Saad wrote:
> Sad
>    Can you go back a bit - you mentioned there is an RPi in the mix? Some of
> the raspberries have their NIC attached via USB under the covers, which will
> kill the total speed of things.
>
> Can you cobble together a diagram of what y
>
> On Sep 13, 2024, at 5:09 AM, Sad Clouds wrote:
>
> Tried both, kernel built with "options RSS" and the following
> in /boot/loader.conf
>
> net.isr.maxthreads=-1
> net.isr.bindthreads=1
> net.isr.dispatch=deferred
>
> Not sure if there are race conditions with these implementations, bu
Tried both, kernel built with "options RSS" and the following
in /boot/loader.conf
net.isr.maxthreads=-1
net.isr.bindthreads=1
net.isr.dispatch=deferred
Not sure if there are race conditions with these implementations, but
after a few short tests, the epair_task_0 got stuck 100% on CPU and
stayed
I built new kernel with "options RSS" however TCP throughput performance
now decreased from 128 MiB/sec to 106 MiB/sec.
Looks like the problem has shifted from epair to netisr
  PID USERNAME   PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   12 root       -56    -     0B   272K CPU3
There seems to be an open bug describing similar issue:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=272944
On Thu, 12 Sep 2024 13:25:32 -0400
Paul Procacci wrote:
> You need to define `poor'.
> You need to show `top -SH` while the `problem' occurs.
>
> My guess is packets are getting shuttled between a global taskqueue thread.
> This is the default, or at least I'm not aware of this default being
> c
On 12/09/2024 19:16, Sad Clouds wrote:
Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
single physical network interface, so I followed instructions for
networking vnet jails via epair and bridge, e.g.
devel
{
vnet;
vnet.interface = "
On Thu, Sep 12, 2024 at 1:16 PM Sad Clouds
wrote:
> Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
> single physical network interface, so I followed instructions for
> networking vnet jails via epair and bridge, e.g.
>
> devel
> {
> vnet
Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
single physical network interface, so I followed instructions for
networking vnet jails via epair and bridge, e.g.
devel
{
vnet;
vnet.interface = "e0b_devel";
exec.prestart += "/
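the quoted config is cut off; a self-contained jail.conf sketch of the usual epair-plus-bridge pattern looks roughly like this (all names here are assumptions for illustration, not the poster's actual settings):

```
# /etc/jail.conf -- hypothetical vnet jail wired into bridge0 via an epair
devel {
    vnet;
    vnet.interface = "e0b_devel";
    path = "/jails/devel";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    # create the epair, rename both ends, and add the host end to the bridge
    exec.prestart += "ifconfig epair0 create";
    exec.prestart += "ifconfig epair0a name e0a_devel up";
    exec.prestart += "ifconfig epair0b name e0b_devel";
    exec.prestart += "ifconfig bridge0 addm e0a_devel";
    # destroying one end of an epair removes both
    exec.poststop += "ifconfig e0a_devel destroy";
}
```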
/FreeBSD a few months ago:
https://new.reddit.com/r/freebsd/comments/1bfvof2/em0_disconnects_when_added_toremoved_from_bridge/
Consensus there was that the problem was "solved" by disabling RX/TX checksum
offloading on the interface, but I don't know if anyone noticed that modifying
the br
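(the usual form of that workaround, as a sketch - em0 being the interface from that thread:)

```
# /etc/rc.conf -- disable checksum offload on the bridge member
ifconfig_em0="up -rxcsum -txcsum"
```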
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=280074
Mark Linimon changed:
What|Removed |Added
Assignee|b...@freebsd.org|n...@freebsd.org
Keywords
> > > > device: "mlx5_core0";
> > > > num_vfs: 8;
> > > > }
> > > >
> > > > DEFAULT {
> > > > passthrough: true;
> > > > }
> > > >
> > > > VF-0 {
> > > > mac-addr: "02:01:02:02:01:00";
> > DEFAULT {
> > passthrough: true;
> > }
> >
> > VF-0 {
> > mac-addr: "02:01:02:02:01:00";
> > }
> >
> > VF-1 {
> > mac-addr: "02:01:02:02:01:01";
> > }
> >
> > VF-2 {
> > passthrough: false;
> > }
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=276936
Mark Linimon changed:
What|Removed |Added
Keywords||IntelNetworking
See Also|
}
VF-2 {
passthrough: false;
}
With one VF in the vm answering to a specific vlan, and one jail on the host
using vnet with the PF in a bridge and a epair interface added to this bridge.
When I am pinging from the jail to the VF in the vm, the vm receives the
ping or arp requests, but the jai
Hi all,
we have that single host with the largest number of jails - about 250 active.
All jails are bridged to the external interfaces (lagg + vlan) and also
have a private bridge not connected to any hardware or network.
The jails run a CMS and the database server is centralised.
Today the
> All jails are bridged to the external interface of the host - renamed to
> "inet0" in our environment - via if_bridge(4) and all managed with iocage.
>
> root@ph003:~ # grep inet0 /iocage/jails/vpro*/config.json|wc -l
>      255
>
> Of these 251 also have a second epair interface connected to a private
> bridge named "priv1". These are used for connections to the central
> database server which should not be exposed to the Internet.
from Zhenlei Huang ---
(In reply to mfbott from comment #0)
> I've solved this issue by replacing "pause" with "DELAY" inside the e1000
> driver
> (see attachment).
> This certainly works for me, but could point to a deeper problem within
> anything
>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264549
Mark Linimon changed:
What|Removed |Added
Attachment #234558|file_264549.txt |e1000_osdep.h.patch
filen
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264549
Graham Perrin changed:
What|Removed |Added
See Also||https://bugs.freebsd.org/bu
Keywords||crash
Summary|e1000/bridge: "sleep on wchan ... with sleeping prohibited"|HardenedBSD: panic: e1000/bridge: "sleep on wchan ..
On 10/8/23 11:00, Benoit Chesneau wrote:
Before I do some performance tests myself, has anyone compared netgraph
ng_bridge vs if_bridge with the recent multi-thread additions in netgraph?
The advantage I see in using netgraph is removing the need for tap
interfaces and instead using netgraph sockets, which seems to be more
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243554
Graham Perrin changed:
What|Removed |Added
Severity|Affects Only Me |Affects Some People
St
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243554
--- Comment #12 from Kristof Provost ---
(In reply to Patrick M. Hausen from comment #11)
I'm inclined to leave this open. While it's a well documented configuration
issue there's also a case to be made that this should either just work, or
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243554
--- Comment #11 from Patrick M. Hausen ---
(In reply to Kristof Provost from comment #10)
Another closing candidate?
Status|New |Closed
--- Comment #29 from Kristof Provost ---
Given that there's been no further reports and that all versions mentioned here
are long since out of support, yeah, we should just close this.
The locking in the bridge code has also been completely change
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=217606
--- Comment #28 from punkt.de Hosting Team ---
I guess someone with the necessary superpowers should close this. Correct?
Kind regards,
Patrick
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=263831
Xin LI changed:
What|Removed |Added
CC||delp...@freebsd.org
Status|In
 VNET(restore_caps)
+SYSCTL_INT(_net_link_bridge, OID_AUTO, restore_caps,
+    CTLFLAG_RWTUN | CTLFLAG_VNET, &VNET_NAME(restore_caps), 0,
+    "Restore member interface flags on reinit");
+
 /* share MAC with first bridge member */
 VNET_DEFINE_STATIC(int, bridge_inherit_mac);
 #d
eebsd.org/src/commit/sys/net/if_bridge.c?id=ec29c623005ca6a32d44fb59bc2a759a96dc75e4
You can see a variable "bif_savedcaps" was added so that the bridge now tracks
what the original interface flags were.
Then when a member is removed, it looks like all of a bridge's interfaces
interface(s) in rc.conf. This requires knowing what
you need to disable to make sure your NIC and epair have equal capabilities so
that when the epair interface is added to the bridge, there's no need to reinit
the NIC to make the capabilities match, and therefore, no connectivity loss.
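as a sketch of that idea, assuming an ixl NIC (the exact capability names vary by driver; check what `ifconfig -m` lists on your hardware):

```
# /etc/rc.conf -- strip NIC capabilities that epair lacks up front, so
# adding the epair to the bridge never forces the NIC to reinitialize
ifconfig_ixl0="up -rxcsum -txcsum -rxcsum6 -txcsum6 -tso -lro"
```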
-ixl-bridge
1c1
< options=4e503bb
---
> options=4a500b9
3c3
< options=4e503bb
---
> options=4a500b9
from sp...@bway.net ---
I burned a few hours on this last night, first thinking something was amiss
with iocage (fair assumption, as it seems to be another abandoned project).
Then while troubleshooting, I started running the bridge creation and interface
additions by hand and noticed my prompt was
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234550
Marius Strobl changed:
What|Removed |Added
Resolution|--- |FIXED
Status|Open
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=254343
Graham Perrin changed:
What|Removed |Added
Assignee|n...@freebsd.org |grahamper...@freebsd.org
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=254343
Zhenlei Huang changed:
What|Removed |Added
CC||z...@freebsd.org
--- Comment #13 f
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=254343
Mina Galić changed:
What|Removed |Added
Assignee|b...@freebsd.org|n...@freebsd.org
Status|N
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=271869
Mina Galić changed:
What|Removed |Added
CC||free...@igalic.co
--- Comment #6 from
Status|New |Closed
--- Comment #5 from ronald.heggenber...@docoscope.com ---
Alright... I figured out that I was running networkmgr - which caused the
bridge interface to be destroyed. I uninstalled networkmgr and now it works as
intended.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=271869
Mark Linimon changed:
What|Removed |Added
Assignee|b...@freebsd.org|n...@freebsd.org
Hi, folks!
I've got an unusual problem/behavior, when configuring a bridge
interface with tap interface members (the bridge interface disappears
after the "ifconfig bridge0 addm tap0" command)...
see https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=271869
Has anyone encoun
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=205706
Piotr Kubaj changed:
What|Removed |Added
CC||pku...@freebsd.org
Resolutio
I have a -CURRENT system where the networking is kind of convoluted.
The native networking on the machine itself is across 4 cxl(4)
interfaces + lagg(4) + if_bridge(4). The host itself and the few bhyve
VMs on that bridge are still working fine.
There is also a 2 igb(4) interfaces + lagg(4
i just managed to get the dump.
(kgdb) bt
#0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
#1  dump_savectx () at /usr/src/sys/kern/kern_shutdown.c:394
#2  0x80c01138 in dumpsys (di=0x0) at /usr/src/sys/x86/include/dump.h:87
#3  doadump (textdump=) at /usr/src/sys/kern/kern_shutdow