>Synopsis: arm64: dwge(4) ifconfig panic
>Category: kernel panic
>Environment:
System : OpenBSD 6.6
Details : OpenBSD 6.6-current (GENERIC.MP) #236: Fri Nov 8
14:46:36 EST 2019
t...@rocky.intricatesoftware.com:/sys/arch/arm64/compile/GE
>Synopsis: arm64: syscall_return panic while booting
>Category: kernel panic
>Environment:
System : OpenBSD 6.6
Details : OpenBSD 6.6-current (GENERIC.MP) #236: Fri Nov 8
14:46:36 EST 2019
t...@rocky.intricatesoftware.com:/sys/arch/arm6
>Synopsis: panic after ahci0: log page read failed, slot 31 was still
>active
>Category: kernel
>Environment:
System : OpenBSD 6.6
Details : OpenBSD 6.6-beta (GENERIC.MP) #245: Sat Sep 28 20:43:51
MDT 2019
dera...@arm64.openbsd.org:/usr
I have a new install on a Rock5b (Rockchip RK3588 based) where the disk
subsystem appears to be unstable and locking up. Under heavy load using
dpb(1) building ports, I have had the storage appear to get stuck on inode
locks. It takes about an hour or two to reproduce but is reproducible.
When it
With the recent improvements to witness I can now get a better
report of the lock order reversal I can reproduce on arm64 on
my rock5b.
May 12 13:39:19 rock5b /bsd: witness: lock order reversal:
May 12 13:39:19 rock5b /bsd: 1st 0xff8001200700 /sys/dev/rnd.c:321
(/sys/dev/rnd.c:321)
May 12 13
>Synopsis: kernel data fault in data_access_fault
>Category: sparc64
>Environment:
System : OpenBSD 6.9
Details : OpenBSD 6.9-current (GENERIC.MP) #3: Thu Jul 1 18:00:32
EDT 2021
t...@oracle.intricatesoftware.com:/sys/arch/sparc64/compi
l Atom
and not the Soekris board) for a week or so.
-Kurt
specific to TPM 1.2 mode.
I'm happy to twiddle more BIOS settings if you come up with a patch
for TPM 1.2 on the x250.
--Kurt Mosiejczuk
sg buffer kept the old boot.
There does seem to be some noise in the dmesg. (Also some witness "lock
order reversal" stuff).
I'll try and tease apart the multiple boots.
--Kurt
thread
1 273837 0 0 3 0x82 wait init
0 0 -1 0 3 0x10200 scheduler swapper
Attempting to use mach ddbcpu 0 just hung forever.
--Kurt
On Thu, Oct 22, 2020 at 01:38:02PM -0400, Kurt Mosiejczuk wrote:
> With both yesterday's and today's snapshots for sparc64, my LDOMs panic
> if they run pfctl. If I get in single user mode and chmod 000 /sbin/pfctl,
> then the machine finishes boot into multi-user.
This seems
packages completing properly. I'm guessing it may have
been causing your problems with X also. Can you try installing that?
BUILDINFO is:
Build date: 1580311749 - Wed Jan 29 15:29:09 UTC 2020
--Kurt
On Wed, Jan 29, 2020 at 11:41:46PM -0600, David Savolainen wrote:
> Kurt,
> I installed today's snapshot with no change. I don't think this is a mesa
> issue. On the surface, X seems to be configured and running correctly.
> Except that video is not being directed t
Both sparc64-1c and sparc64-2c panicked, apparently during startup
before any processes were run. (I had cu running in a tmux for sparc64-1c).
Repeating the upgrade manually via the bsd.rd restored those ldoms to
working order.
--Kurt
sparc64-2c:
ddb{0}> show panic
trap type 0
On Thu, Apr 09, 2020 at 12:57:44PM -0600, Theo de Raadt wrote:
> Yes, clang bsd.rd doesn't work.
Just to clarify: the upgrades completed. It was the boot off of bsd
that panicked.
--Kurt
> Kurt Mosiejczuk wrote:
>
> > Both sparc64-1c and sparc64-2c panicked. Appare
e weekly. The only thing it does not do is nice
itself.
Anyone have an argument to keep all the duplicate logic in weekly?
I know I may be missing something.
--Kurt
> Index: usr.bin/locate/locate/updatedb.sh
> ===
> RCS f
On Tue, Sep 10, 2019 at 04:56:42PM +0100, Raf Czlonka wrote:
> How about ditching dirname(1)?
File may not exist yet, in which case -w will be false, even if we
have permissions to create it.
--Kurt
> Index: usr.bin/locate/locate/updat
On Tue, Sep 10, 2019 at 05:03:54PM +0100, Raf Czlonka wrote:
> On Tue, Sep 10, 2019 at 04:58:19PM BST, Kurt Mosiejczuk wrote:
> > On Tue, Sep 10, 2019 at 04:56:42PM +0100, Raf Czlonka wrote:
> > > How about ditching dirname(1)?
> > File may not exist yet, in which case -
es/V3_chap02.html#tag_18_06_02
In that case, I'd go with your version of it.
OK kmos
--Kurt
On Sun, 2019-10-06 at 09:20 +1000, Jonathan Matthew wrote:
> On Fri, Oct 04, 2019 at 06:24:16PM -0400, k...@intricatesoftware.com wrote:
> >
> > >
> > > Synopsis: panic after ahci0: log page read failed, slot 31 was still
> > > active
> > > Category: kernel
> > > Environment:
> > System
, 0x1100/0x2f02, error 2
> I believe your bsd.rd has no ramdisk filesystem inside it.
>
You are correct, thank you. bsd.mp booted fine. I'll make
a proper bsd.rd and report back results of testing install
with it.
-Kurt
On Sun, 2019-10-06 at 20:01 -0400, Kurt Miller wrote:
> On Sun, 2019-10-06 at 17:47 -0600, Theo de Raadt wrote:
> >
> > >
> > >
> > > I applied your diff and built RAMDISK to test it. However, the
> > > boot failed with this (full boot message late
On Sun, 2019-10-06 at 19:23 -0600, Theo de Raadt wrote:
> Kurt Miller wrote:
>
> >
> > On Sun, 2019-10-06 at 20:01 -0400, Kurt Miller wrote:
> > >
> > > On Sun, 2019-10-06 at 17:47 -0600, Theo de Raadt wrote:
> > > >
> > > >
> &
On Mon, 2019-10-07 at 13:15 -0400, Kurt Miller wrote:
> On Sun, 2019-10-06 at 19:23 -0600, Theo de Raadt wrote:
> >
> > Kurt Miller wrote:
> >
> > >
> > >
> > > On Sun, 2019-10-06 at 20:01 -0400, Kurt Miller wrote:
> > > >
>
On Fri, 2019-10-11 at 10:21 +1000, Jonathan Matthew wrote:
> On Mon, Oct 07, 2019 at 01:30:52PM -0400, Kurt Miller wrote:
> >
> > >
> > > I hit the issue again using the latest snapshot which
> > > includes the work-around.
> > >
> > >
On Fri, 2019-10-11 at 12:43 -0400, Kurt Miller wrote:
> On Fri, 2019-10-11 at 10:21 +1000, Jonathan Matthew wrote:
> >
> > On Mon, Oct 07, 2019 at 01:30:52PM -0400, Kurt Miller wrote:
> > >
> > >
> > > >
> > > >
> > > > I
On Fri, 2019-10-11 at 10:21 +1000, Jonathan Matthew wrote:
> On Mon, Oct 07, 2019 at 01:30:52PM -0400, Kurt Miller wrote:
> >
> > >
> > > I hit the issue again using the latest snapshot which
> > > includes the work-around.
> > >
> > >
I’ve been hunting an intermittent jdk crash on sparc64 for some time now.
Since egdb has not been up to the task, I created a small c program which
reproduces the problem. This partially mimics the jdk startup where a number
of detached threads are created. When each thread is created the main thre
The test program as an attachment as my mua mangles inline
code - sorry. Also on cvs:~kurt/startup.c
startup.c
Description: Binary data
> On Aug 13, 2023, at 11:38 PM, Kurt Miller wrote:
>
> The test program as an attachment as my mua mangles inline
> code - sorry. Also on cvs:~kurt/startup.c
>
>
The attachment had NTHREADS set to 400. That was for a one off test.
All my testing has been at 40 threads.
Al
> On Aug 14, 2023, at 5:42 PM, Theo Buehler wrote:
>
> On Mon, Aug 14, 2023 at 08:47:22PM +, Miod Vallat wrote:
>> For what it's worth, I couldn't get your test to fail on a dual-cpu
>> sun4u. Either it's a sun4v-specific issue or it needs many more cpus to
>> trigger.
>
> I can reproduce th
On Sep 2, 2023, at 7:35 AM, Martin Pieuchot wrote:
>
> On 28/06/23(Wed) 20:07, Kurt Miller wrote:
>> On Jun 28, 2023, at 7:16 AM, Martin Pieuchot wrote:
>>>
>>> On 28/06/23(Wed) 08:58, Claudio Jeker wrote:
>>>>
>>>> I doubt this is a
On Aug 16, 2023, at 4:14 PM, Kurt Miller wrote:
>
>> On Aug 14, 2023, at 5:42 PM, Theo Buehler wrote:
>>
>> On Mon, Aug 14, 2023 at 08:47:22PM +, Miod Vallat wrote:
>>> For what it's worth, I couldn't get your test to fail on a dual-cpu
>>>
I experimented with adding a nanosleep after pthread_create() to
see if that would resolve the segfault issue - it does, but it
also exposed a new failure mode on -current. Every so often
the test program would not exit now. Thinking it may be related
to the detached threads I reworked the test pro
On Oct 25, 2023, at 4:26 AM, Claudio Jeker wrote:
>
> On Mon, Oct 23, 2023 at 11:06:53PM +, Kurt Miller wrote:
>> I experimented with adding a nanosleep after pthread_create() to
>> see if that would resolve the segfault issue - it does, but it
>> also exposed a new
While hunting for why the jdk's stack overflow exceptions were randomly
not working, I determined the root cause was that pthread_main_np(3)
randomly fails to work properly. Reproduced on sparc64, amd64, and
aarch64. On i386 the standalone test program doesn't reproduce the
problem. However, I obs
On Dec 8, 2023, at 10:25 AM, Miod Vallat wrote:
>
> How about that diff.
>
Tested on amd64 and it does indeed fix the problem. This is clearly
the cause. okay kurt@
> Index: include/tib.h
> ===
> RCS file: /O
few hours as well. It has not reproduced the
problem. I’ve been using miod’s version of the diff.
Thank you Claudio for finding the root cause. I’m sure this
will help more than the jdk on sparc64. Okay kurt@ as well.
-Kurt
Testing now with my third power-supply and usb-c cable. This time
I caught a new lock order reversal in uvm and a kernel panic:
uvm_fault failed.
witness: lock order reversal:
1st 0xff8001246998 /sys/dev/rnd.c:321 (/sys/dev/rnd.c:321)
2nd 0xff80012081d0 /sys/kern/kern_timeout.c:57
(/sys/
I made some changes and have a different panic and kernel diagnostic
assertion on the rock5b. I am now running with u-boot 2024-04 and
swapped out the nvme I previously was using with a lower power and
low heat one; SK Hynix Gold P31.
*cpu0: uvm_fault failed: ff800085c1bc esr 9647 far
While building devel/jdk/1.8 on May 3rd snapshot I noticed the build freezing
and processes like ps getting stuck. After enabling ddb.console I was able to
reproduce the livelock and capture cpu traces. Dmesg at the end.
Let me know if more information is needed as this appears to be rather
reprodu
On May 12, 2023, at 10:26 AM, Martin Pieuchot wrote:
>
> On 09/05/23(Tue) 20:02, Kurt Miller wrote:
>> While building devel/jdk/1.8 on May 3rd snapshot I noticed the build freezing
>> and processes getting stuck like ps. After enabling ddb.console I was able to
>> re
On May 13, 2023, at 9:16 AM, Kurt Miller wrote:
>
> On May 12, 2023, at 10:26 AM, Martin Pieuchot wrote:
>>
>> On 09/05/23(Tue) 20:02, Kurt Miller wrote:
>>> While building devel/jdk/1.8 on May 3rd snapshot I noticed the build
>>> freezing
>>&g
On May 13, 2023, at 9:16 AM, Kurt Miller wrote:
>
> On May 12, 2023, at 10:26 AM, Martin Pieuchot wrote:
>>
>> On 09/05/23(Tue) 20:02, Kurt Miller wrote:
>>> While building devel/jdk/1.8 on May 3rd snapshot I noticed the build
>>> freezing
>>&g
On May 13, 2023, at 3:07 PM, Kurt Miller wrote:
>
> On May 13, 2023, at 9:16 AM, Kurt Miller wrote:
>>
>> On May 12, 2023, at 10:26 AM, Martin Pieuchot wrote:
>>>
>>> On 09/05/23(Tue) 20:02, Kurt Miller wrote:
>>>> While building devel
> On May 17, 2023, at 2:39 PM, Kurt Miller wrote:
>
> On May 13, 2023, at 3:07 PM, Kurt Miller wrote:
>>
>> On May 13, 2023, at 9:16 AM, Kurt Miller wrote:
>>>
>>> On May 12, 2023, at 10:26 AM, Martin Pi
On May 22, 2023, at 2:27 AM, Claudio Jeker wrote:
> I have seen these WITNESS warnings on other systems as well. I doubt this
> is the problem. IIRC this warning is because sys_mount() is doing it wrong
> but it is not really an issue since sys_mount is not called often.
Yup. I see that now that
On Jun 14, 2023, at 12:51 PM, Vitaliy Makkoveev wrote:
>
> On Tue, May 30, 2023 at 01:31:08PM +0200, Martin Pieuchot wrote:
>> So it seems the java process is holding the `sysctl_lock' for too long
>> and block all other sysctl(2). This seems wrong to me. We should come
>> up with a clever way
On Jun 27, 2023, at 1:52 PM, Kurt Miller wrote:
>
> On Jun 14, 2023, at 12:51 PM, Vitaliy Makkoveev wrote:
>>
>> On Tue, May 30, 2023 at 01:31:08PM +0200, Martin Pieuchot wrote:
>>> So it seems the java process is holding the `sysctl_lock' for too long
>>
ed at cpu_idle_cycle+0x44: and %g1, -0x3, %g1
>> sched_idle(400e0698360, 40015160ad0, 0, 0, 0, 0) at sched_idle+0x158
>> proc_trampoline(0, 0, 0, 0, 0, 0) at proc_trampoline+0x14
>> ddb{8}> machine ddbcpu 9
>> Stopped at cpu_idle_cycle+0x44: and %g1, -0x3, %g1
>> sched_idle(400e06a8360, 40015160820, 0, 0, 0, 0) at sched_idle+0x158
>> proc_trampoline(0, 0, 0, 0, 0, 0) at proc_trampoline+0x14
>> ...
>> ddb{62}> machine ddbcpu 0x3f
>> Stopped at cpu_idle_cycle+0x44: and %g1, -0x3, %g1
>> sched_idle(400e0a08360, 40015142b20, 0, 0, 0, 0) at sched_idle+0x158
>> proc_trampoline(0, 0, 0, 0, 0, 0) at proc_trampoline+0x14
>>
>> /sys/arch/sparc64/sparc64/trap.c:869
>> 2f0: 84 10 a0 00 mov %g2, %g2
>> 2f0: R_SPARC_M44 uvmexp
>>
>> /sys/arch/sparc64/sparc64/trap.c:869
>>uvmexp.traps++;
>
> I fear there's nothing in there that provides a useful hint what went
> wrong.
Ok. Thanks for looking it over.
-Kurt
>Synopsis: onboard SCSI throws errors on Ultra 2 (and Ultra 1E)
>Category: sparc64
>Environment:
System : OpenBSD 5.0
Details : OpenBSD 5.0 (GENERIC) #36: Wed Aug 17 10:13:34
MDT 2011
dera...@sparc64.openbsd.org:/usr/src/sys/arch/sparc64/c
ompile/GENERIC