Hi Finn,
On 23/04/23 19:46, Finn Thain wrote:
On Wed, 19 Apr 2023, Michael Schmitz wrote:
I wonder what we'd see if we patched the kernel to log every user data
write fault caused by a MOVEM instruction. I'll try to code that up.
If these instructions did always cause stack corruption on 030, I think we would have noticed ...
On Wed, 19 Apr 2023, Michael Schmitz wrote:
> > I wonder what we'd see if we patched the kernel to log every user data
> > write fault caused by a MOVEM instruction. I'll try to code that up.
>
> If these instructions did always cause stack corruption on 030, I think
> > we would have noticed long ago.
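A minimal sketch of the logging proposed above, assuming it is called from the m68k fault path (e.g. do_page_fault() in arch/m68k/mm/fault.c) on user write faults; the function name and placement are illustrative, not from any posted patch:

#include <linux/kernel.h>
#include <linux/uaccess.h>
#include <asm/ptrace.h>

/* MOVEM encodes as 0100 1d00 1sxx xxxx (d = direction, s = size), so
 * (opcode & 0xfb80) == 0x4880 matches MOVEM.W and MOVEM.L in either
 * direction. Note that the exception-frame PC may point past the
 * opcode on some 680x0 frame formats, so a miss here is inconclusive. */
static void log_movem_write_fault(struct pt_regs *regs, unsigned long addr)
{
	unsigned short opcode;

	if (!user_mode(regs))
		return;
	if (get_user(opcode, (unsigned short __user *)regs->pc))
		return;	/* instruction page not readable, give up */
	if ((opcode & 0xfb80) == 0x4880)
		pr_info("MOVEM user write fault: pc=%#lx addr=%#lx op=%#x\n",
			regs->pc, addr, opcode);
}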
Hi Finn,
Am 19.04.2023 um 13:50 schrieb Finn Thain:
I would have expected to see a different signal trampoline (for
sys_rt_sigreturn) ...
Well, this seems to be the trampoline from setup_frame() and not
setup_rt_frame().
According to the manpages I've seen, glibc ought to pick rt signals if
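For anyone comparing stack dumps: the two trampolines are easy to tell apart. This is my reading of setup_frame()/setup_rt_frame() in arch/m68k/kernel/signal.c, with the opcode words spelled out (constants unverified against any particular tree):

/* setup_frame():    moveq #119,%d0     ; __NR_sigreturn
 *                   trap  #0
 * setup_rt_frame(): moveq #82,%d0      ; 173 ^ 0xff
 *                   not.b %d0          ; %d0 = __NR_rt_sigreturn (173)
 *                   trap  #0
 * rt_sigreturn's number doesn't fit moveq's signed 8-bit immediate,
 * hence the not.b trick. The word sequence visible on the signal
 * stack therefore identifies which setup path delivered the signal. */
static const unsigned short sigreturn_code[]    = { 0x7077, 0x4e40 };
static const unsigned short rt_sigreturn_code[] = { 0x7052, 0x4600, 0x4e40 };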
On Tue, 18 Apr 2023, Michael Schmitz wrote:
> Am 18.04.2023 um 14:04 schrieb Finn Thain:
> > On Tue, 18 Apr 2023, Michael Schmitz wrote:
> >> Am 16.04.2023 um 18:44 schrieb Finn Thain:
> >>
> >>> 0xe750: 0xc01a saved $a5 == libc .got
> >>> 0xe74c: 0xc0023e8
Hi Finn,
Am 18.04.2023 um 14:04 schrieb Finn Thain:
On Tue, 18 Apr 2023, Michael Schmitz wrote:
Am 16.04.2023 um 18:44 schrieb Finn Thain:
The backtrace confirms that this signal was delivered during execution
of __wait3(). (Delivery can happen during execution of __libc_fork()
but I just repeat the test until I get these ...
On Tue, 18 Apr 2023, Michael Schmitz wrote:
> Am 16.04.2023 um 18:44 schrieb Finn Thain:
>
> >
> > The backtrace confirms that this signal was delivered during execution
> > of __wait3(). (Delivery can happen during execution of __libc_fork()
> > but I just repeat the test until I get these duc
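For anyone wanting to repeat this outside dash, a self-contained sketch of the pattern being tested - SIGCHLD delivered while wait3() is in flight, as in dash's waitproc(); build with -fstack-protector-strong, and note all names here are mine, not dash's:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t gotsigchld;

static void onsig(int sig) { gotsigchld = 1; }

int main(void)
{
	struct sigaction sa = { .sa_handler = onsig };	/* no SA_RESTART */

	sigaction(SIGCHLD, &sa, NULL);
	for (int i = 0; i < 1000; i++) {	/* the failure is intermittent */
		pid_t pid = fork();
		int status;

		if (pid == 0)
			_exit(0);		/* child exit raises SIGCHLD */
		gotsigchld = 0;
		while (wait3(&status, 0, NULL) < 0 && errno == EINTR)
			;			/* handler may run inside wait3() */
	}
	puts("no stack smashing detected");
	return 0;
}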
Hi Finn,
Am 16.04.2023 um 18:44 schrieb Finn Thain:
As I understand this, the call to sys_sigreturn() removes both this code
(signal trampoline IIRC) and the signal stack...
I don't see that stuff getting removed when I run dash under gdb under
QEMU. With breakpoints at the head of onsig() and
On Sun, 16 Apr 2023, Michael Schmitz wrote:
> Am 14.04.2023 um 21:30 schrieb Finn Thain:
> > Would signal delivery erase any of the memory immediately below the
> > USP? If so, it would erase those old stack frames, which would give
> > some indication of the timing of signal delivery.
>
> The signal stack is set up immediately below USP ...
Hi Finn,
Am 14.04.2023 um 21:30 schrieb Finn Thain:
Would signal delivery erase any of the memory immediately below the USP?
If so, it would erase those old stack frames, which would give some
indication of the timing of signal delivery.
The signal stack is set up immediately below USP, from m
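For concreteness, the placement described here amounts to the following (a paraphrase of get_sigframe() in arch/m68k/kernel/signal.c, from memory rather than a quoted tree):

/* The frame sits immediately below the user stack pointer, rounded
 * down to an 8-byte boundary; sigsp() substitutes the sigaltstack
 * top when SA_ONSTACK applies. */
static inline void __user *get_sigframe(struct ksignal *ksig, size_t frame_size)
{
	unsigned long usp = sigsp(rdusp(), ksig);

	return (void __user *)((usp - frame_size) & -8UL);
}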
On Wed, 5 Apr 2023, I wrote:
>
> I don't care that much what dash does as long as it isn't corrupting
> it's own stack, which is a real possibility, and one which gdb's data
> watch point would normally resolve. And yet I have no way to tackle
> that.
>
> I've been running gdb under QEMU, where ...
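For reference, the watchpoint that would normally settle this looks something like the sketch below - the canary slot address has to be taken from the live frame, and software watchpoints through QEMU's gdb stub can be unusably slow, which may well be the obstacle:

(gdb) break __wait3
(gdb) run
(gdb) watch -location *(long *)($fp - 4)
(gdb) continue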
On Tue, 4 Apr 2023, I wrote:
> On Tue, 4 Apr 2023, I wrote:
>
> >
> > The actual corruption might offer a clue here. I believe the saved %a3
> > was clobbered with the value 0xefee1068 which seems to be a pointer into
> > some stack frame that would have come into existence shortly after
> > __GI___wait4_time64 was called.
Hi Geert,
Am 08.04.2023 um 00:06 schrieb Geert Uytterhoeven:
Hi Michael,
On Fri, Apr 7, 2023 at 3:58 AM Michael Schmitz wrote:
The easiest way to do that is to log all wait and signal syscalls, as
well as process exit. That might alter timing if these log messages go
to the serial console though. Is that what you have in mind?
Hi Michael,
On Fri, Apr 7, 2023 at 3:58 AM Michael Schmitz wrote:
> The easiest way to do that is to log all wait and signal syscalls, as
> well as process exit. That might alter timing if these log messages go
> to the serial console though. Is that what you have in mind?
Store to RAM, retrieve
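A minimal sketch of that suggestion as I read it - events recorded to a RAM ring buffer instead of the console, dumped after the fact; every name below is made up for illustration:

#include <linux/jiffies.h>

#define TRACE_ENTRIES 1024	/* power of two keeps the index cheap */

struct trace_ent {
	unsigned long when;	/* jiffies at the event */
	int pid;
	int event;		/* syscall nr, signal nr, exit status, ... */
};

static struct trace_ent trace_buf[TRACE_ENTRIES];
static unsigned int trace_head;

/* Callable from the wait/signal/exit paths. No locking is needed on a
 * uniprocessor m68k provided callers don't nest; otherwise bracket the
 * update with local_irq_save()/local_irq_restore(). */
static void trace_event(int pid, int event)
{
	struct trace_ent *e = &trace_buf[trace_head++ & (TRACE_ENTRIES - 1)];

	e->when = jiffies;
	e->pid = pid;
	e->event = event;
}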
Hi Finn,
Am 05.04.2023 um 14:00 schrieb Finn Thain:
On Wed, 5 Apr 2023, Michael Schmitz wrote:
On 4/04/23 12:13, Finn Thain wrote:
It looks like I messed up. waitproc() appears to have been invoked
twice, which is why wait3 was invoked twice...
GNU gdb (Debian 13.1-2) 13.1
...
(gdb) set osabi GNU/Linux
On Wed, 5 Apr 2023, Michael Schmitz wrote:
> On 4/04/23 12:13, Finn Thain wrote:
> > It looks like I messed up. waitproc() appears to have been invoked
> > twice, which is why wait3 was invoked twice...
> >
> > GNU gdb (Debian 13.1-2) 13.1
> > ...
> > (gdb) set osabi GNU/Linux
> > (gdb) file /bin/dash
Hi Finn,
On 4/04/23 12:13, Finn Thain wrote:
It looks like I messed up. waitproc() appears to have been invoked
twice, which is why wait3 was invoked twice...
GNU gdb (Debian 13.1-2) 13.1
...
(gdb) set osabi GNU/Linux
(gdb) file /bin/dash
Reading symbols from /bin/dash...
Reading symbols from
On Tue, 4 Apr 2023, I wrote:
>
> The actual corruption might offer a clue here. I believe the saved %a3
> was clobbered with the value 0xefee1068 which seems to be a pointer into
> some stack frame that would have come into existence shortly after
> __GI___wait4_time64 was called.
Wrong... it
> > When running dash from gdb in QEMU, there's only one signal (SIGCHLD)
> > and it gets handled before __wait3() returns. (Of course, the "stack
> > smashing detected" failure never shows up in QEMU.)
>
> Might be a clue that we need multiple signals to force the stack
> smashing error. And we might not get that in QEMU, d
On Mon, 3 Apr 2023, Michael Schmitz wrote:
> On 2/04/23 22:46, Finn Thain wrote:
>
> > This is odd:
> >
> > https://sources.debian.org/src/dash/0.5.12-2/src/jobs.c/?hl=1165#L1165
> >
> > 1176 do {
> > 1177     gotsigchld = 0;
> > 1178     do
> > 1179 ...
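For context, the loop continues roughly as follows - an illustrative reconstruction of dash's waitproc() loop, not verbatim 0.5.12 source (see the link above for the real thing). The point is that SIGCHLD can arrive at any point inside it:

do {
	gotsigchld = 0;
	do
		err = wait3(status, flags, NULL);	/* handler can fire in here */
	while (err < 0 && errno == EINTR);
	/* ... if blocking: mask signals, sigsuspend() until SIGCHLD ... */
} while (gotsigchld);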
... quite long enough on that).
Maybe an interaction between (multiple?) signals and syscall return...
When running dash from gdb in QEMU, there's only one signal (SIGCHLD) and
it gets handled before __wait3() returns. (Of course, the "stack smashing
detected" failure never shows up
Hi Finn,
On 2/04/23 22:46, Finn Thain wrote:
On Sat, 1 Apr 2023, Andreas Schwab wrote:
On Apr 01 2023, Finn Thain wrote:
So, in summary, the canary validation failed in this case not because
the canary got clobbered but because %a3 got clobbered, somewhere
between __wait3+24 and __wait3+70 (below).
On Sat, 1 Apr 2023, Andreas Schwab wrote:
> On Apr 01 2023, Finn Thain wrote:
>
> > So, in summary, the canary validation failed in this case not because
> > the canary got clobbered but because %a3 got clobbered, somewhere
> > between __wait3+24 and __wait3+70 (below).
> >
> > The call to __GI___wait4_time64 causes %a3 to be saved to and restored ...
> > ... without having first restored the saved registers?
>
> Maybe an interaction between (multiple?) signals and syscall return...
When running dash from gdb in QEMU, there's only one signal (SIGCHLD) and
it gets handled before __wait3() returns. (Of course, the "stack smashing
detected" failure never shows up in QEMU.)
Hi Finn,
nice work!
Am 01.04.2023 um 22:27 schrieb Finn Thain:
So, in summary, the canary validation failed in this case not because the
canary got clobbered but because %a3 got clobbered, somewhere between
__wait3+24 and __wait3+70 (below).
The call to __GI___wait4_time64 causes %a3 to be saved to and restored ...
On Apr 01 2023, Finn Thain wrote:
> So, in summary, the canary validation failed in this case not because the
> canary got clobbered but because %a3 got clobbered, somewhere between
> __wait3+24 and __wait3+70 (below).
>
> The call to __GI___wait4_time64 causes %a3 to be saved to and restored
>
On 31 Mar 2023, I wrote,
> ...
>
> So %a3 was a pointer into stack frame 6??
>
> (gdb) x/z $a3
> 0xefee1068: 0xc00e0172
>
> Clearly 0xd000c38e != 0xc00e0172 (that is, %fp@(-4) != %a3@) but did the
> canary value change? It rather looks like the canary pointer is wrong...
>
So the question ...
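Restating that finding in C terms - a sketch of what the gcc-generated __wait3 epilogue effectively does, with the %a3 role taken from the disassembly discussed in this thread rather than from any ABI document:

extern void __stack_chk_fail(void);	/* glibc: prints the error, raises SIGABRT */

static void epilogue_check(const unsigned char *fp, const unsigned long *a3)
{
	unsigned long stack_copy = *(const unsigned long *)(fp - 4);	/* %fp@(-4) */
	unsigned long guard = *a3;					/* %a3@ */

	/* If %a3 was clobbered, 'guard' is garbage and this fires even
	 * though the canary on the stack was never touched. */
	if (stack_copy != guard)
		__stack_chk_fail();
}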
... at pthread_kill.c:89
#3 0xc0064c22 in __GI_raise (sig=6) at ../sysdeps/posix/raise.c:26
#4 0xc0052faa in __GI_abort () at abort.c:79
#5 0xc009b328 in __libc_message (action=<optimized out>, fmt=<optimized out>)
at ../sysdeps/posix/libc_fatal.c:155
#6 0xc012a3c2 in __GI___fortify_fail (
msg=0xc0182c5e "stack
On Sat, 18 Feb 2023, I wrote:
> On Fri, 17 Feb 2023, Stan Johnson wrote:
>
> >
> > That's not to say a SIGABRT is ignored, it just doesn't kill PID 1.
> >
>
> I doubt that /sbin/init is generating the "stack smashing detected"
> error but you may need to modify it to find out. ...
Hi Andreas,
Am 18.02.2023 um 20:59 schrieb Andreas Schwab:
On Feb 18 2023, Michael Schmitz wrote:
Not sure it's wise to allow init to abort though.
When PID 1 exits, the kernel panics.
I might have been a mite subtle there ...
Anyway - Finn's intention was to log signals caused by the stack protector ...
On Feb 18 2023, Michael Schmitz wrote:
> Not sure it's wise to allow init to abort though.
When PID 1 exits, the kernel panics.
--
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
On Fri, 17 Feb 2023, Stan Johnson wrote:
> On 2/17/23 4:24 PM, Finn Thain wrote:
> >
> > On Fri, 17 Feb 2023, Stan Johnson wrote:
> >
> >>
> >> The error could have been exposed in any package where
> >> "-fstack-protector-strong" was recently added.
> >>
> >
> > And if you find the last good userland binary, what then? ...
> ... I'll check
> this weekend to see whether QEMU is still not seeing any problem with my
> latest kernel builds together with the latest Debian SID.
Confirmed, there is still no stack smashing in QEMU 7.2.0 with kernel
6.1.12 and the current Debian SID.
The client reports this:
root@m68k:~# c
Finn,
On 2/17/23 4:24 PM, Finn Thain wrote:
>
> On Fri, 17 Feb 2023, Stan Johnson wrote:
>
>>
>> The error could have been exposed in any package where
>> "-fstack-protector-strong" was recently added.
>>
>
> And if you find the last good userland binary, what then? Fix the bad
> userland binary? That's wonderful but it doesn't explain why the bad ...
On Sat, 18 Feb 2023, Michael Schmitz wrote:
> Am 18.02.2023 um 12:49 schrieb Finn Thain:
> > On Sat, 18 Feb 2023, Andreas Schwab wrote:
> >
> >> On Feb 18 2023, Finn Thain wrote:
> >>
> >>> Why do you say init ignores SIGABRT?
> >>
> >> PID 1 is special, it never receives signals it doesn't handle
On Fri, 17 Feb 2023, Stan Johnson wrote:
>
> What's interesting to me is that although there are stack smashing
> errors and "aborted" messages during boot, nothing seems to have
> actually failed or aborted (note that no processes are named as
> "terminated" or "aborted" in the messages).
Hi Finn,
Am 18.02.2023 um 12:49 schrieb Finn Thain:
On Sat, 18 Feb 2023, Andreas Schwab wrote:
On Feb 18 2023, Finn Thain wrote:
Why do you say init ignores SIGABRT?
PID 1 is special, it never receives signals it doesn't handle.
I see. I wonder if there is some way to configure the kernel so that PID 1 could be aborted ...
Finn,
On 2/17/23 4:49 PM, Finn Thain wrote:
> On Sat, 18 Feb 2023, Andreas Schwab wrote:
>
>> On Feb 18 2023, Finn Thain wrote:
>>
>>> Why do you say init ignores SIGABRT?
>>
>> PID 1 is special, it never receives signals it doesn't handle.
>>
>
> I see. I wonder if there is some way to configure the kernel so that PID 1 could be aborted ...
On Sat, 18 Feb 2023, Andreas Schwab wrote:
> On Feb 18 2023, Finn Thain wrote:
>
> > Why do you say init ignores SIGABRT?
>
> PID 1 is special, it never receives signals it doesn't handle.
>
I see. I wonder if there is some way to configure the kernel so that PID 1
could be aborted for fstack-protector ...
On Fri, 17 Feb 2023, Stan Johnson wrote:
>
> The error could have been exposed in any package where
> "-fstack-protector-strong" was recently added.
>
And if you find the last good userland binary, what then? Fix the bad
userland binary? That's wonderful but it doesn't explain why the bad
On Feb 18 2023, Finn Thain wrote:
> Why do you say init ignores SIGABRT?
PID 1 is special, it never receives signals it doesn't handle.
--
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
On 2/17/23 3:38 PM, Finn Thain wrote:
> On Fri, 17 Feb 2023, Stan Johnson wrote:
>
>>
>> That's not to say a SIGABRT is ignored, it just doesn't kill PID 1.
>>
>
> I doubt that /sbin/init is generating the "stack smashing detected" error
> but you may need to modify it to find out. ...
On Fri, 17 Feb 2023, Stan Johnson wrote:
>
> That's not to say a SIGABRT is ignored, it just doesn't kill PID 1.
>
I doubt that /sbin/init is generating the "stack smashing detected" error
but you may need to modify it to find out. If you can't figure out
On 2/17/23 2:54 PM, Finn Thain wrote:
>
> On Fri, 17 Feb 2023, Stan Johnson wrote:
>
>> I noticed that /sbin/init seems to ignore SIGABRT, so I thought that
>> might mean that init itself was somehow triggering the stack smashing
>> but nothing was really aborting, but I could be wrong about that.
On Fri, 17 Feb 2023, Stan Johnson wrote:
> I noticed that /sbin/init seems to ignore SIGABRT, so I thought that
> might mean that init itself was somehow triggering the stack smashing
> but nothing was really aborting, but I could be wrong about that.
Why do you say init ignores SIGABRT? I co
Hi Michael,
On 2/17/23 1:31 PM, Michael Schmitz wrote:
> ...
>
> 'dpkg hold' should mark packages so apt won't consider them for
> upgrading. You may want to check what other packages depend on newer
> versions of sysvinit before attempting that (search for 'sysvinit-utils
> (>=' in the Packages file).
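For the record, either form of hold works on current Debian (a sketch - substitute whichever package is actually being pinned):

# apt-mark hold sysvinit-utils
# echo "sysvinit-utils hold" | dpkg --set-selections
# apt-mark showhold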
Hi Stan,
Am 18.02.2023 um 06:09 schrieb Stan Johnson:
Hi Michael,
On 2/16/23 4:10 PM, Michael Schmitz wrote:
...
'apt-get source sysvinit=3.06-2' will download and unpack that specific
version. That should unpack the source in sysvinit-3.06-2/.
That doesn't work for me:
# apt-get source sysvinit=3.06-2
On Fri, 2023-02-17 at 10:09 -0700, Stan Johnson wrote:
> Would downloading the source from an x86 repository be sufficient, ...?
Yes, there is no architecture-specific source code repository.
Use:
deb-src http://ftp.debian.org/debian/ unstable main
Adrian
--
.''`. John Paul Adrian Glaubitz
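Putting Adrian's pieces together, the full sequence would be something like:

# echo 'deb-src http://ftp.debian.org/debian/ unstable main' >> /etc/apt/sources.list
# apt-get update
# apt-get source sysvinit=3.06-2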
Hi Michael,
On 2/16/23 4:10 PM, Michael Schmitz wrote:
> ...
> 'apt-get source sysvinit=3.06-2' will download and unpack that specific
> version. That should unpack the source in sysvinit-3.06-2/.
That doesn't work for me:
# apt-get source sysvinit=3.06-2
Reading package lists... Done
E: You must put some 'deb-src' URIs in your sources.list
Hi Stan,
On 16/02/23 15:44, Stan Johnson wrote:
I could re-install the previous sysvinit over the version in the current
Debian SID and see if the stack smashing is still gone. I don't know how
to do that, but if someone has instructions, I'll try. I'm guessing I
need to download the previous .deb ...
Hi Adrian,
On 2/16/23 3:07 AM, John Paul Adrian Glaubitz wrote:
> Hi Stan!
>
> On Wed, 2023-02-15 at 19:44 -0700, Stan Johnson wrote:
>> Going from 15 min to about 4 min seems worth the effort on old hardware.
>> As always, YMMV. Developers who use QEMU or other emulators likely don't
>> always realize how long it takes to boot real hardware.
On 2/15/23 9:16 PM, Finn Thain wrote:
> On Wed, 15 Feb 2023, Stan Johnson wrote:
>
>>
>> 1) Default Debian kernel (vmlinux-6.1.0-4-m68k, initrd-6.1.0-4-m68k)
>>Default Debian sysvinit scripts
>>Boot time (ABC... to login prompt): 15 min 3 sec
>>NIC not detected, no stack smashing
>>
>
Hi Stan!
On Wed, 2023-02-15 at 19:44 -0700, Stan Johnson wrote:
> Going from 15 min to about 4 min seems worth the effort on old hardware.
> As always, YMMV. Developers who use QEMU or other emulators likely don't
> always realize how long it takes to boot real hardware.
I am not sure what makes
On Wed, 15 Feb 2023, Stan Johnson wrote:
>
> 1) Default Debian kernel (vmlinux-6.1.0-4-m68k, initrd-6.1.0-4-m68k)
>Default Debian sysvinit scripts
>Boot time (ABC... to login prompt): 15 min 3 sec
>NIC not detected, no stack smashing
>
Did you check whether the NIC module (mac8390)
To wrap up my stack smashing tests, I ran a few tests on a Mac IIfx (128
MiB, external SCSI2SD). Serial console logs are attached.
1) Default Debian kernel (vmlinux-6.1.0-4-m68k, initrd-6.1.0-4-m68k)
Default Debian sysvinit scripts
Boot time (ABC... to login prompt): 15 min 3 sec
NIC not detected, no stack smashing ...
On Sun, 12 Feb 2023, Finn Thain wrote:
>
> On Sat, 11 Feb 2023, Stan Johnson wrote:
>
> > v5.1 x SCSI2SD crashes, goes offline with activity LED on,
> > rootfs corrupted, needed to be restored from backups, SCSI2SD SD card
> > needed to have the Apple driver updated to boot MacOS
>
On Sat, 11 Feb 2023, Stan Johnson wrote:
> I think if there were hardware problems with the SCSI2SD board or the SD
> card, then I would be seeing lots of errors in MacOS and with v6.x
> kernels.
>
That's not true in general. But I can agree that certain SCSI device
faults will show up in any
On 2/11/23 3:11 PM, Finn Thain wrote:
>
> On Sat, 11 Feb 2023, Stan Johnson wrote:
>
>> v5.1 x SCSI2SD crashes, goes offline with activity LED on,
>> rootfs corrupted, needed to be restored from backups, SCSI2SD SD card
>> needed to have the Apple driver updated to boot MacOS
>>
>> v4.
On Sat, 11 Feb 2023, Stan Johnson wrote:
> v5.1 x SCSI2SD crashes, goes offline with activity LED on,
> rootfs corrupted, needed to be restored from backups, SCSI2SD SD card
> needed to have the Apple driver updated to boot MacOS
>
> v4.20 bad stack smashing on first boot, co
Michael,
On 2/10/23 12:55 AM, Michael Schmitz wrote:
> ...
>
> Without Al's patch, I doubt even in case a uaccess fault happens with
> signal pending we'd return -1 from send_fault_sig() (the no_context path
> isn't taken and do_page_fault() returns without error). No kernel
> messages expected in that case ...
Hi Stan,
Am 10.02.2023 um 13:24 schrieb Stan Johnson:
Hi Michael,
On 2/8/23 8:41 PM, Michael Schmitz wrote:
...
Following the 040 code a bit further, I suspect that happens in the 040
writeback handler, so this may be a red herring.
I'll try and log such accesses caught by exception tables on 030 to see if they are rare enough to ...
Hi Michael,
On 2/8/23 8:41 PM, Michael Schmitz wrote:
> ...
>
> Following the 040 code a bit further, I suspect that happens in the 040
> writeback handler, so this may be a red herring.
>
>> I'll try and log such accesses caught by exception tables on 030 to see
>> if they are rare enough to allow ...
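A sketch of the logging Michael describes, assuming it hooks in where the m68k fault code consults the exception tables (send_fault_sig()); search_exception_tables() is the stock kernel helper, the rest is illustrative:

#include <linux/extable.h>
#include <linux/printk.h>
#include <linux/types.h>
#include <asm/ptrace.h>

static bool log_extable_fixup(struct pt_regs *regs, unsigned long addr)
{
	const struct exception_table_entry *fixup =
		search_exception_tables(regs->pc);

	if (fixup)
		pr_info("uaccess fault fixed up: pc=%#lx addr=%#lx\n",
			regs->pc, addr);
	return fixup != NULL;
}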
On 2/9/23 2:22 AM, John Paul Adrian Glaubitz wrote:
> On Wed, 2023-02-08 at 10:39 -0700, Stan Johnson wrote:
>> On 2/7/23 4:20 PM, Finn Thain wrote:
>>>
>>> On Tue, 7 Feb 2023, Stan Johnson wrote:
>>> ...
>>> Preventing pointless key generation would be beneficial for all Macs,
>>> Amigas, Ataris, emulators etc. ...
On Wed, 2023-02-08 at 10:20 +1100, Finn Thain wrote:
> Preventing pointless key generation would be beneficial for all Macs,
> Amigas, Ataris, emulators etc. and measuring the performance of one model
> of Mac versus that of another model seems a bit irrelevant to me.
Feel free to suggest a kernel ...
On Wed, 2023-02-08 at 10:39 -0700, Stan Johnson wrote:
> On 2/7/23 4:20 PM, Finn Thain wrote:
> >
> > On Tue, 7 Feb 2023, Stan Johnson wrote:
> > ...
> > Preventing pointless key generation would be beneficial for all Macs,
> > Amigas, Ataris, emulators etc. and measuring the performance of one model of Mac versus that of another model seems a bit irrelevant to me.
Hi Stan,
Am 08.02.2023 um 11:58 schrieb Michael Schmitz:
Thanks Stan,
On 8/02/23 08:37, Stan Johnson wrote:
Hi Michael,
On 2/5/23 3:19 PM, Michael Schmitz wrote:
...
Seeing Finn's report that Al Viro's VM_FAULT_RETRY fix may have solved
his task corruption troubles on 040, I just noticed that I probably misunderstood how Al's patch works.
On Wed, 8 Feb 2023, Stan Johnson wrote:
> On 2/7/23 4:20 PM, Finn Thain wrote:
> >
> > On Tue, 7 Feb 2023, Stan Johnson wrote:
> > ...
> > Preventing pointless key generation would be beneficial for all Macs,
> > Amigas, Ataris, emulators etc. and measuring the performance of one model
> > of Mac versus that of another model seems a bit irrelevant to me.
Hi,
On 8.2.2023 19.39, Stan Johnson wrote:
The stack smashing appears to be intermittent. And it doesn't show up
while booting the kernel; it only shows up while sysvinit scripts are
running (I haven't tested using systemd, since that would be too painful
on any 68030 slower than about 40 MHz).
If anyone knows of a 68030 emulator (maybe Basilisk?) that can boot
Linux, then I might be able to use that for faster testing.
I've played around with NetBSD on FS-UAE. I'd use it more, except for the
fact that the emulation of the Commodore 2065 ethernet card gives very
flakey networking.
On 2/7/23 4:20 PM, Finn Thain wrote:
>
> On Tue, 7 Feb 2023, Stan Johnson wrote:
> ...
> Preventing pointless key generation would be beneficial for all Macs,
> Amigas, Ataris, emulators etc. and measuring the performance of one model
of Mac versus that of another model seems a bit irrelevant to me.
On 2/7/23 3:14 PM, Brad Boyer wrote:
> On Tue, Feb 07, 2023 at 12:41:52PM -0700, Stan Johnson wrote:
>> Yes, I do have the L2 cache card installed in the IIci.
>>
>> If you think it would be useful, I don't mind letting the SE/30 run
>> overnight to see if it eventually boots.
>
> I don't know if it would be useful in any practical sense, but ...
On Tue, 7 Feb 2023, Stan Johnson wrote:
> If you think it would be useful, I don't mind letting the SE/30 run
> overnight to see if it eventually boots.
>
Preventing pointless key generation would be beneficial for all Macs,
Amigas, Ataris, emulators etc. and measuring the performance of one
Thanks Stan,
On 8/02/23 08:37, Stan Johnson wrote:
Hi Michael,
On 2/5/23 3:19 PM, Michael Schmitz wrote:
...
Seeing Finn's report that Al Viro's VM_FAULT_RETRY fix may have solved
his task corruption troubles on 040, I just noticed that I probably
misunderstood how Al's patch works.
Botching up a fault retry and carrying on may well ...
On Tue, Feb 07, 2023 at 12:41:52PM -0700, Stan Johnson wrote:
> Yes, I do have the L2 cache card installed in the IIci.
>
> If you think it would be useful, I don't mind letting the SE/30 run
> overnight to see if it eventually boots.
I don't know if it would be useful in any practical sense, but
On Tue, Feb 07, 2023 at 12:31:17AM -0700, Stan Johnson wrote:
> On 2/6/23 8:25 PM, Finn Thain wrote:
> >
> > These systems are too slow for needless key generation so a bug report
> > may be needed.
> >
>
> The Mac IIci (25 MHz) is only about 50% faster than the SE/30 (16 MHz).
> The Debian kernel ...
Hi Brad,
On 2/7/23 12:28 PM, Brad Boyer wrote:
> On Tue, Feb 07, 2023 at 12:31:17AM -0700, Stan Johnson wrote:
>> On 2/6/23 8:25 PM, Finn Thain wrote:
>>>
>>> These systems are too slow for needless key generation so a bug report
>>> may be needed.
>>>
>>
>> The Mac IIci (25 MHz) is only about 50% faster than the SE/30 (16 MHz).
Hi Michael,
On 2/5/23 3:19 PM, Michael Schmitz wrote:
> ...
>
> Seeing Finn's report that Al Viro's VM_FAULT_RETRY fix may have solved
> his task corruption troubles on 040, I just noticed that I probably
> misunderstood how Al's patch works.
>
> Botching up a fault retry and carrying on may well ...
Hi Finn,
On 2/6/23 8:25 PM, Finn Thain wrote:
>
> On Mon, 6 Feb 2023, Stan Johnson wrote:
>
>>> Thanks, so the kernel does start, but hangs later.
>>> Adding "initcall_debug" to the kernel command line may reveal more..
>>> ...
>>
>> Please see attached.
>>
>> ...
>>
>> [ 34.44] calling key_proc_init+0x0/0x5e @ 1
On Mon, 6 Feb 2023, Stan Johnson wrote:
> > Thanks, so the kernel does start, but hangs later.
> > Adding "initcall_debug" to the kernel command line may reveal more..
> > ...
>
> Please see attached.
>
> ...
>
> [ 34.44] calling key_proc_init+0x0/0x5e @ 1
> [ 34.47] initcall key_proc_init+0x0/0x5e returned ...
Hi Geert,
On 2/6/23 1:36 PM, Geert Uytterhoeven wrote:
> ...
>
> Thanks, so the kernel does start, but hangs later.
> Adding "initcall_debug" to the kernel command line may reveal more..
> ...
Please see attached.
thanks
-Stan
Hi Stan,
On Mon, Feb 6, 2023 at 9:31 PM Stan Johnson wrote:
> On 2/6/23 12:52 AM, Geert Uytterhoeven wrote:
> > On Mon, Feb 6, 2023 at 4:42 AM Stan Johnson wrote:
> >> On an SE/30 with 128 MiB memory, the latest Debian SID kernel
> >> (vmlinux-6.1.0-2-m68k), using Debian SID modules, and with
>
Hi Geert,
On 2/6/23 12:52 AM, Geert Uytterhoeven wrote:
> Hi Stan,
>
> On Mon, Feb 6, 2023 at 4:42 AM Stan Johnson wrote:
>> On an SE/30 with 128 MiB memory, the latest Debian SID kernel
>> (vmlinux-6.1.0-2-m68k), using Debian SID modules, and with
>> initrd-6.1.0-2-m68k built on the SE/30, hangs after the initial "ABCFGHIJK" ...
Hi Stan,
On Mon, Feb 6, 2023 at 4:42 AM Stan Johnson wrote:
> On an SE/30 with 128 MiB memory, the latest Debian SID kernel
> (vmlinux-6.1.0-2-m68k), using Debian SID modules, and with
> initrd-6.1.0-2-m68k built on the SE/30, hangs after the initial
> "ABCFGHIJK" (I tried it twice).
If you get
On 2/5/23 4:18 PM, John Paul Adrian Glaubitz wrote:
> On Mon, 2023-02-06 at 09:29 +1100, Finn Thain wrote:
>> ...
>
>> When you need to generate a valid Debian/m68k vmlinux/initrd combination
>> that is also current, then you'll need a Debian/m68k system that is
>> current. The quickest way to get ...
On Mon, 2023-02-06 at 09:29 +1100, Finn Thain wrote:
> On Sun, 5 Feb 2023, Stan Johnson wrote:
>
> > >
> > > To save time, I recommend using QEMU and an up-to-date Debian/m68k SID
> > > virtual machine to produce the vmlinux and initrd files needed for use
> > > with Penguin on your slower machines.
On Sun, 5 Feb 2023, Stan Johnson wrote:
> >
> > To save time, I recommend using QEMU and an up-to-date Debian/m68k SID
> > virtual machine to produce the vmlinux and initrd files needed for use
> > with Penguin on your slower machines.
> >
>
> AFAIK, the initrd must be created on the system that ...
Hi Stan,
Am 02.02.2023 um 07:51 schrieb Michael Schmitz:
But since we're touching on copy_to_user() here - the 'remove set_fs'
patch set by Christoph Hellwig refactored the m68k inline helpers around
July 2021. Can you test a kernel prior to those patches (5.15-rc2)?
That's a lot of work on a 030 Mac ...
On 2/3/23 4:39 PM, Finn Thain wrote:
Hi Finn,
> On Wed, 1 Feb 2023, Stan Johnson wrote:
>
>>
>> After logging the start and end of each script, I see that the "stack
>> smashing detected" error often happens while running
>> "/etc/rcS.d/S01mount
On Wed, 1 Feb 2023, Stan Johnson wrote:
>
> After logging the start and end of each script, I see that the "stack
> smashing detected" error often happens while running
> "/etc/rcS.d/S01mountkernfs.sh" (/etc/init.d/mountkernfs.sh). I'll try to
> isolate ...
Hi Stan,
Am 03.02.2023 um 12:16 schrieb Stan Johnson:
On 2/1/23 11:51 AM, Michael Schmitz wrote:
...
The stack canary mechanism pushes a token on the stack at function
entry, and compares against that token's value at function exit. This is
all code generated by gcc in the user binary.
The kernel is not involved in function calls other ...
On 2/1/23 11:51 AM, Michael Schmitz wrote:
> ...
>
> The stack canary mechanism pushes a token on the stack at function
> entry, and compares against that token's value at function exit. This is
> all code generated by gcc in the user binary.
>
> The kernel is not involved in function calls other
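Rendered as C, the mechanism described above is roughly what gcc emits for -fstack-protector-strong; __stack_chk_guard and __stack_chk_fail() come from glibc, and the whole check lives in the user binary:

extern unsigned long __stack_chk_guard;
extern void __stack_chk_fail(void);	/* "*** stack smashing detected ***", then abort() */

void f(void)
{
	unsigned long canary = __stack_chk_guard;	/* prologue: store the token */
	char buf[64];

	buf[0] = '\0';	/* ... body: an overflow of buf would run into 'canary' ... */

	if (canary != __stack_chk_guard)		/* epilogue: recheck the token */
		__stack_chk_fail();			/* SIGABRT -> "Aborted" */
}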
> >>> I am seeing anywhere from zero to four of the following errors while
> >>> booting Linux on 68030 systems and using sysvinit startup scripts:
> >>>
> >>> *** stack smashing detected ***: terminated
> >>> Aborted
> >>>
> >>> I usually (but not always) see three of the errors while init is running ...
*** stack smashing detected ***: terminated
Aborted
I usually (but not always) see three of the errors while init is running
the rcS.d scripts, and one while running the rc2.d scripts. The stack
smashing messages appear only on the system console (nothing is logged
in an error log or dmesg). Despite the
On 1/30/23 8:05 PM, Michael Schmitz wrote:
> ...
> Am 30.01.2023 um 17:00 schrieb Stan Johnson:
>> Hello,
>>
>> I am seeing anywhere from zero to four of the following errors while
>> booting Linux on 68030 systems and using sysvinit startup scripts:
>
Hi,
On 31.1.2023 5.05, Michael Schmitz wrote:
That's a lot of work on a 030 Mac - have you tried to reproduce this on
any kind of emulator?
I suppose one difference between your 030 and 040 Macs might be the
amount of RAM available. I wonder if this bug results from a combination
of 030 MMU
Hi Stan,
Am 30.01.2023 um 17:00 schrieb Stan Johnson:
Hello,
I am seeing anywhere from zero to four of the following errors while
booting Linux on 68030 systems and using sysvinit startup scripts:
*** stack smashing detected ***: terminated
Aborted
I usually (but not always) see three of the errors while init is running the rcS.d scripts, and one while running the rc2.d scripts. ...
CC linux-m68k
On Mon, Jan 30, 2023 at 5:01 AM Stan Johnson wrote:
>
> Hello,
>
> I am seeing anywhere from zero to four of the following errors while
> booting Linux on 68030 systems and using sysvinit startup scripts:
>
> *** stack smashing detected ***: terminated
> Aborted
Hello,
I am seeing anywhere from zero to four of the following errors while
booting Linux on 68030 systems and using sysvinit startup scripts:
*** stack smashing detected ***: terminated
Aborted
I usually (but not always) see three of the errors while init is running
the rcS.d scripts, and one while running the rc2.d scripts. ...