On Sat, Nov 23, 2024, at 3:14 PM, Fabian Grünbichler wrote:
> On Sat, Nov 23, 2024, at 1:09 PM, Jonas Smedegaard wrote:
>> Quoting Chris Hofstaedtler (2024-11-23 04:16:29)
>>> * Jonas Smedegaard <jo...@jones.dk> [241122 18:01]:
>>> > > All release architectures support Rust. We should not accept
>>> > > release architectures without Rust support.
>>> > >
>>> > > A minor set of ports architectures does not have Rust support
>>> > > yet.
>>> >
>>> > Rust is unsupported on i386 and patched to silently assume i686
>>>
>>> i686 is not a problem, as that's the arch baseline for our i386
>>> arch since bookworm.
>
> yeah, this is not related to that particular change, but to this:
>
> https://wiki.debian.org/ArchitectureSpecificsMemo#i386
>
> in combination with a bad interaction between rustc and LLVM...
>
>>> > - see
>>> > DEP-3 references in this patch for discussions about that, and the patch
>>> > itself for a way to more loudly make reverse dependencies aware that
>>> > code using SSE2 *must* be compiled without optimizations on i386:
>>> > https://salsa.debian.org/debian/rust-wide/-/blob/debian/latest/debian/patches/2001_fail_non-sse2-x86.patch
>>> >
>>> > Beware that Rust team build routines run tests without optimizations,
>>> > regardless of DEB_BUILD_OPTIONS=noopt, so for libraries maintained by
>>> > them the issue may go unnoticed until reverse dependencies run into the
>>> > issue *and* test for it; otherwise it might go unnoticed until users
>>> > report it.
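(illustration only, independent of what the referenced patch actually does: a
crate that knows it relies on SSE2 float semantics could surface the problem
at build time rather than as silent miscompilation, along these lines:

    // hypothetical guard, not taken from the Debian patch: refuse to
    // build on 32-bit x86 targets that don't enable SSE2
    #[cfg(all(target_arch = "x86", not(target_feature = "sse2")))]
    compile_error!("this crate assumes SSE2 float semantics; \
        build with RUSTFLAGS=\"-C target-feature=+sse2\" or skip this target");

    fn main() {
        println!("SSE2 available (or not a 32-bit x86 target)");
    }

whether individual crates should do that is of course a separate question.)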
>>>
>>> So maybe it's time to raise the baseline to i686+sse2.
>>
>> As I understand the situation with Rust, it is *not* that compiled code
>> fails to run on old non-SSE2 hardware. Instead, the problem is that the
>> Rust compiler produces code that is *ALWAYS* broken regardless of whether
>> the target hardware supports SSE2 or not.
>>
>> Yes, your final remark is a "solution" regardless; I just wanted to
>> emphasize that the problem affects the whole architecture, not only
>> outdated parts of it.
>>
>> ...unless I have misunderstood the situation, obviously.
>
> AFAICT, this seems to be true. I was so far under the impression that this
> only affects FP arithmetic (i.e., the usual way one runs into it is test
> code that does FP operations and then checks that the result is as
> expected). Given that it's not limited to that (see
> https://github.com/rust-lang/rust/issues/114479#issuecomment-2072052116
> and https://github.com/rust-lang/compiler-team/issues/808), the
> situation is a lot worse!
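to make that failure mode a bit more concrete, a minimal sketch (behavior
depends on optimization level and register allocation, so treat it as an
illustration rather than a reliable reproducer): with SSE2 disabled, f64
intermediates may be kept in 80-bit x87 registers, so the same expression can
give different results depending on whether a value was spilled to memory
(and rounded to 64 bits) or constant-folded at compile time:

    // illustration of x87 excess precision; not guaranteed to
    // misbehave on every build, see the caveat above
    #[inline(never)]
    fn sum(values: &[f64]) -> f64 {
        values.iter().copied().fold(0.0, |acc, x| acc + x)
    }

    fn main() {
        let xs = [1e16, 1.0, -1e16];
        // evaluated at run time, possibly with 80-bit x87 intermediates
        let a = sum(&xs);
        // may be constant-folded by LLVM at proper 64-bit precision
        let b = xs[0] + xs[1] + xs[2];
        // 0.0 under IEEE double semantics, 1.0 with excess precision;
        // on an affected target the two can disagree
        assert_eq!(a, b);
    }

and as the linked issues show, it doesn't stop at failing float comparisons:
the optimizer reasons about the IR as if the excess precision didn't exist,
which is how the memory-safety problems quoted further down come about.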
>
> so yeah, I guess we should either
>
> A) move i386 rustc to Rust's i586 target (which doesn't have SSE out of
> the box), instead of the i686-with-SSE2-disabled target it currently uses
> B) bump the i386 baseline in Debian to require SSE2, and stop disabling
> SSE2 there in rustc
> C) disable all optimizations for Rust code on i386 (not really an
> option I think, just here for completeness' sake)
I read the full upstream issues now - downgrading to i586 doesn't solve the
issue at all; upstream just doesn't care about it there because that target is
not part of their fully supported tier.
so that means option A) is effectively off the table (other than saving
ourselves some further patching once the "don't allow disabling SSE2 on i686"
patches hit an upstream release), which just leaves B) if we want to really
solve the issue, and C) if we want an incomplete, hacky papering-over.
selected quotes from the GH issues linked above:
GH user beetrees at
https://github.com/rust-lang/rust/issues/114479#issuecomment-2207770946 wrote:
> The unsoundness is not just theoretical; the LLVM IR Rust compiles f32 and
> f64 operations to has the desired Rust semantics, whereas the LLVM non-SSE
> x86 backend compiles that IR to machine code that violates the semantics of
> the IR. This means e.g. LLVM optimisations that (correctly) operate presuming
> the LLVM IR semantics with regards to evaluation precision can cause the LLVM
> non-SSE x86 backend introduce out-of-bounds reads/writes in safe code (see
> this earlier comment for a code example). The NaN quietening issue also
> violates the semantics of LLVM IR and can cause the emitted binary to mutate
> the value of non-float types (see this earlier comment for the code sample,
> details in this comment).
>
> Because this is a miscompilation at the LLVM IR -> machine code stage, as
> opposed to the Rust -> LLVM IR stage, miscompilations can occur in other
> programming languages that use LLVM as a codegen backend. For example,
> llvm/llvm-project#89885 contains an example of a miscompilation from C.
>
> Ultimately what matters are the semantics of the LLVM IR; not everything that
> is permitted by the IEEE 754 specification is permitted by LLVM IR (and vice
> versa).
>
> The return value ABI issue is tracked separately in #115567, and affects all
> 32-bit x86 targets, not just those with SSE/SSE2 disabled. It is possible to
> manually load/store a f32/f64 signalling NaN to/from an x87 register without
> quietening it (see e.g. the code in #115567 (comment)), but currently neither
> LLVM nor Rust do so. The "Rust" ABI (which doesn't have any stability
> guarantees) is being changed to avoid x87 registers completely in #123351.
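(the NaN quietening part is surprisingly easy to hit from safe Rust, by the
way; a rough sketch, with the caveat that whether the quietening actually
happens depends on ABI, inlining and codegen details:

    // a signalling-NaN f32 bit pattern: exponent all ones, top mantissa
    // bit clear, some other mantissa bit set
    const SNAN_BITS: u32 = 0x7fa0_0000;

    // returning an f32 through an x87 register (as the 32-bit x86 C ABI
    // does) sets the quiet bit, so the bit pattern that comes back is
    // not the one that went in
    #[inline(never)]
    extern "C" fn roundtrip(x: f32) -> f32 {
        x
    }

    fn main() {
        let out = roundtrip(f32::from_bits(SNAN_BITS));
        // on an affected target this can print 0x7fe00000 instead of
        // 0x7fa00000 - bad news if those bits were really non-float data
        // smuggled through a float, as in the linked examples
        println!("{:#010x}", out.to_bits());
    }

which is exactly why the quoted comment talks about mutating the value of
non-float types.)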

GH user RalfJung at
https://github.com/rust-lang/rust/issues/114479#issuecomment-2208193457 wrote:
> As @beetrees explained, it's not just that the underlying hardware works
> differently and that bleeds into language semantics. It's that Rust's primary
> backend, LLVM, assumes the underlying hardware to work the standard way --
> the examples @beetrees referenced demonstrate that there is no reliable way
> to program against this hardware in any compiler that uses LLVM as its
> backend. (At least not if the compiler uses the standard LLVM float types and
> operations.)
>
> To my knowledge, nobody on the LLVM side really cares about this. So until
> that changes it is unlikely that we'll be able to improve things in Rust
> here. Telling programmers "on this hardware your program may randomly explode
> for no fault of your own" is not very useful. (I mean, it is a useful errata
> of course, but it doesn't make sense to make this the spec.)
so this is actually an LLVM+x87 bug, not limited to rustc, and not limited to
Debian's rustc variant either; it's just that upstream's i686 rustc doesn't
disable SSE2, whereas ours (like upstream's i586 target) does, and thus
exposes the issue.
I tend to agree with upstream that the *only proper* solution is to
enable/require SSE2.
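for anyone who wants to double-check which configuration a given rustc/target
combination ends up in, rustc --print cfg shows the enabled target features;
a quick sketch of the same check from within a program (illustration only):

    fn main() {
        // 32-bit x86 with SSE2 disabled is the configuration where
        // LLVM falls back to x87 codegen for ordinary float arithmetic
        // (the separate return-value ABI issue quoted above affects
        // all 32-bit x86 targets, SSE2 or not)
        if cfg!(all(target_arch = "x86", not(target_feature = "sse2"))) {
            println!("x87 float codegen: affected");
        } else {
            println!("SSE2 float codegen (or not 32-bit x86)");
        }
    }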