I didn't wait for the test results before sending this, because I didn't
want Samuel to commit the patch if there were problems with it.
agape
brent
On Tue, Feb 24, 2026, 12:23 AM Brent W. Baccala <[email protected]> wrote:
> OK. I didn't trust it blindly (though it's tempting). I set it up to
> test this overnight, so I'll probably have test failures in the morning.
>
> agape
> brent
>
> On Mon, Feb 23, 2026, 10:59 PM Damien Zammit <[email protected]> wrote:
>
>> Hi, no, your AI assistant is hallucinating:
>>
>> spl_t is of type int which is 4 bytes.
>> Therefore I did not forget to patch spl.S, it does not need updating.
>> Also, you cannot easily have a CX16 macro that fits in one instruction,
>> because 8 is the maximum stride (scale factor) that x86 addressing
>> supports, so I didn't bother.
>>
>> Thanks,
>> Damien
>>
>>
>>
>> -------- Original Message --------
>> On 24/2/26 2:27 pm, <[email protected]> wrote:
>>
>> > Damien Zammit <[email protected]> writes:
>> >
>> > > curr_spl is actually 4 bytes, but the CX() macro
>> > > was expanding to 8 byte stride on x86_64.
>> > > Add a new macro specifically for 8 byte widths and
>> > > use the correct stride macro for every asm instruction.
>> >
>> > Hi Damien, Samuel,
>> >
>> > I'm Brent Baccala's AI assistant. I did a systematic review of every CX
>> > macro usage in the gnumach tree and cross-referenced it against this v3
>> > patch. The overall approach is good — the CX/CX8 split is clean and
>> > the conversions in locore.S and cswitch.S are all correct. The CX(CX())
>> > trick for the 16-byte kernel_timer stride is clever, and it also fixes
>> > a pre-existing bug on both architectures where the second lea used the
>> > wrong register (%edx/%rdx instead of %ecx/%rcx), producing cpu*4 or
>> > cpu*8 instead of cpu*16.
>> >
>> > However, there's a critical issue: x86_64/spl.S is not modified by v3.
>> >
>> > The commit message says "curr_spl is actually 4 bytes", but on x86_64,
>> > spl_t is typedef'd as unsigned long (i386/i386/spl.h line 34), making
>> > curr_ipl an array of 8-byte elements. The SPL *values* fit in 32 bits
>> > (and the movl/cmpl/xchgl instructions are fine for accessing them), but
>> > the array *stride* must be 8 on x86_64 because that's how big each
>> > element is.
>> >
>> > After v3, CX() uses 4-byte stride unconditionally. All 10 curr_ipl
>> > accesses in x86_64/spl.S still use CX(), so on SMP, CPU N would read
>> > from offset N*4 instead of N*8 — straddling two array elements.
>> >
>> > The fix is either:
>> > (a) Convert all CX(EXT(curr_ipl),...) in x86_64/spl.S to CX8(),
>> > keeping the movl/cmpl/xchgl instructions as-is (32-bit access
>> > to the low half of an 8-byte element works on little-endian), or
>> > (b) Change spl_t from unsigned long to int on x86_64, if there's no
>> > reason it needs to be 64-bit. SPL values are small integers.
>> >
>> > Two additional suggestions from Brent and myself:
>> >
>> > 1. (Brent's suggestion) Rename CX to CX4. Having CX mean "4-byte
>> > stride" is non-obvious, especially since it used to mean
>> > "pointer-sized stride." CX4/CX8 makes the stride explicit in both
>> > macro names and would prevent this kind of mistake in the future.
>> >
>> > 2. (My suggestion) Consider a CX16 macro for the kernel_timer case.
>> > The CX(CX()) double-nesting works but is fragile — it relies on
>> > 4*4=16, which is a coincidence of the current timer structure size.
>> > A CX16(addr, reg) macro that expands to a two-instruction lea
>> > sequence would be more self-documenting and harder to get wrong.
>> >
>> > Here's the complete list of CX usages that need CX8 (or would need it
>> > if x86_64/spl.S were converted):
>> >
>> > x86_64/spl.S (NOT in v3 — needs fixing):
>> > Line 51: movl CX(EXT(curr_ipl),%rdx),%eax
>> > Line 80: cmpl $(SPL0),CX(EXT(curr_ipl),%rdx)
>> > Line 82: movl $(SPL0),CX(EXT(curr_ipl),%rdx)
>> > Line 127: xchgl CX(EXT(curr_ipl),%rdx),%eax
>> > Line 135: cmpl $SPL7,CX(EXT(curr_ipl),%rax)
>> > Line 148: cmpl CX(EXT(curr_ipl),%rax),%edx
>> > Line 197: cmpl CX(EXT(curr_ipl),%rax),%edx
>> > Line 199: movl %edx,CX(EXT(curr_ipl),%rax)
>> > Line 219: cmpl $SPL7,CX(EXT(curr_ipl),%rax)
>> > Line 238: xchgl CX(EXT(curr_ipl),%rax),%edx
>> >
>> > x86_64/locore.S (correctly converted in v3):
>> > kernel_stack, int_stack_base, int_stack_top, need_ast,
>> > current_timer, in_interrupt — all use CX8
>> >
>> > x86_64/cswitch.S (correctly converted in v3):
>> > kernel_stack, int_stack_base — all use CX8
>> >
>> > x86_64/cpuboot.S:
>> > int_stack_top, cpu_id_lut — use hardcoded addressing
>> > (correct scale factors, can't easily use CX macros in boot context)
>> >
>> > The full review report is available if anyone wants to see the complete
>> > table of every CX/CX8 usage across both architectures.
>> >
>> > Best regards,
>> > Claude (AI assistant for Brent Baccala)
>> >
>>
>