On 17.07.2020, at 00:16, Daniel Dettlaff wrote:
Hello, is this a known issue?
I built a recent system from the stable/12 branch, and either `ifconfig bridge0 create` or `cloned_interfaces="bridge0"` + `ifconfig_bridge0_aliases="inet 172.16.7.1/16"` in rc.conf.local causes an instant kernel panic and a system reboot.
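For reference, the rc.conf.local route is just this (a minimal sketch of my setup; the address range is simply the one I happen to use):

    # /etc/rc.conf.local -- minimal sketch; cloning the bridge is enough to trigger it
    cloned_interfaces="bridge0"
    ifconfig_bridge0_aliases="inet 172.16.7.1/16"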
Hello.
I have my second kernel panic, related to loading the MAC_PORTACL kernel module on CURRENT.
All it takes is putting mac_portacl_load="YES" in loader.conf and booting the machine.
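Concretely, the only non-default bit is this single line in /boot/loader.conf (presumably kldload mac_portacl at runtime would hit the same path, but the loader route is what I tested):

    # /boot/loader.conf -- this alone triggers the panic on boot
    mac_portacl_load="YES"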
I built the kernel using this config:
https://github.com/VerKnowSys/ServeD-OS/blob/master/kernel/VERKNOWSYS-11.0
Hello.
I have interesting verbose output with a backtrace (not a panic) from one of my VMs: http://s.verknowsys.com/f0d457ce9420399baaf531012c33eb81.png
It is triggered by auto-starting a jail on a bridged vlan interface (no VNET feature enabled).
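Roughly, the setup looks like the sketch below; the physical NIC, vlan tag, jail name and addresses are placeholders, not my exact production values:

    # /etc/rc.conf -- vlan bridged, jail auto-started at boot
    cloned_interfaces="vlan7 bridge1"
    ifconfig_vlan7="vlan 7 vlandev em0 up"
    ifconfig_bridge1="addm vlan7 up"
    jail_enable="YES"
    jail_list="webjail"

    # /etc/jail.conf -- plain (non-VNET) jail bound to the bridge
    webjail {
        path = "/jails/webjail";
        host.hostname = "webjail.example";
        interface = "bridge1";
        ip4.addr = "172.16.7.10/16";
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
    }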
I built the kernel using this config:
https://github.com/VerK
Hello.
I have my first kernel panic, probably related to pf/pflog in CURRENT.
I built the kernel using this config:
https://github.com/VerKnowSys/ServeD-OS/blob/master/kernel/VERKNOWSYS-11.0
My make.conf: https://github.com/VerKnowSys/ServeD-OS/blob/master/etc/make.conf
My src.conf: https://github.c
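For completeness, pf and pflog are enabled in the standard way; the ruleset below is only a trimmed placeholder, the real one is larger:

    # /etc/rc.conf
    pf_enable="YES"
    pflog_enable="YES"
    pflog_logfile="/var/log/pflog"

    # /etc/pf.conf -- placeholder ruleset
    set skip on lo0
    pass all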
Hi, I have a custom kernel config:
https://gist.github.com/dmilith/234b3e6b65b6fa606e27
If I uncomment VIMAGE and epair on lines 10-11, each time I try to launch any jail with vnet it panics the kernel. (Also, the HBSD options can be omitted - they change nothing in this case; panics happened also w
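For reference, these are the two kernel config lines in question, plus a minimal vnet jail sketch of the kind that panics the box here; the jail name, path and epair numbering are just examples:

    # custom kernel config -- the two lines that get uncommented
    options VIMAGE
    device  epair

    # /etc/jail.conf -- minimal vnet jail; starting it brings the kernel down
    vnetjail {
        path = "/jails/vnetjail";
        vnet;
        vnet.interface = "epair0b";
        exec.prestart = "ifconfig epair0 create";
        exec.start = "/bin/sh /etc/rc";
        exec.poststop = "ifconfig epair0a destroy";
    }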
Hi!
I saw posts from 2009 on this mailing list about a boot issue with ZFS.
I have an IBM server with a HW RAID10 made of 36 disks.
I'm using FreeBSD 9.2-RELEASE.
From the system's point of view, the RAID is seen as a 65 TiB drive at /dev/mfid0.
From ZFS's point of view, it's a plain stripe as one zvol (on mfid0p2).
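For the record, these are the standard commands I use to inspect that layout (output omitted to keep this short):

    gpart show mfid0      # partition table on the 65 TiB RAID volume
    zpool status          # the pool built on mfid0p2
    zpool get bootfs      # dataset the loader is supposed to boot from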