[Bug libstdc++/87744] New: Some valid instantiations of linear_congruential_engine produce compiler errors when __int128 isn't available

2018-10-24 Thread lrflew.coll at gmail dot com
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87744

Bug ID: 87744
   Summary: Some valid instantiations of
linear_congruential_engine produce compiler errors
when __int128 isn't available
   Product: gcc
   Version: 7.3.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: libstdc++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: lrflew.coll at gmail dot com
  Target Milestone: ---

This issue occurs when the UIntType parameter is 64 bits, the target platform
can't use 128-bit integers (for example, 32-bit x86), and the LCG parameters
are chosen to meet certain conditions. An example of code that hits this error:

#include <cstdint>
#include <random>

int main() {
    // Illustrative parameters (the original template arguments were
    // stripped by the mail archive): A is chosen close to M, so that
    // A * (M - 1) overflows 64 bits and Schrage's method doesn't apply.
    std::linear_congruential_engine<std::uint64_t,
        2305843009213693950ULL, 0, 2305843009213693951ULL> gen;
    gen();
}

This compiles fine when __int128 is present, and fails when it isn't. When I
compile this with the -m32 flag, the compiler prints a long stream of errors.

To summarize the problem, the error occurs when determining which template
specialization of _Select_uint_least_t to use in random.h. When defining
operator() for the engine, the library determines that the A value is large
enough that A * (M - 1) overflows the 64-bit result type, and that the
precondition for Schrage's method (M % A < M / A) isn't met. Because of this,
it tries to find a larger integer type to use for the computation via
_Select_uint_least_t. Since the result type is already 64 bits, and the 128-bit
integer isn't available, no large enough integer type exists, and compilation
hits the static assert in the unspecialized variant of _Select_uint_least_t.

This isn't a simple issue to resolve. The static assert that gets hit even says
"sorry, would be too much trouble for a slow result". However, as far as I can
tell, this instantiation of linear_congruential_engine is valid per the
standard, so it shouldn't result in an error. It produced no compile errors
when I tried it with MSVC's, Boost's, and libc++'s implementations of
linear_congruential_engine (though libc++ incorrectly uses Schrage's method).
While a conforming solution would be slow, it's probably needed for full
standards compliance.

[Bug libstdc++/87744] Some valid instantiations of linear_congruential_engine produce compiler errors when __int128 isn't available

2024-02-07 Thread lrflew.coll at gmail dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87744

--- Comment #12 from Lewis Fox  ---
(In reply to Jonathan Wakely from comment #2)

My original comment about libc++ was in reference to the LLVM bugzilla report
#27839: https://bugs.llvm.org/show_bug.cgi?id=27839

It looks like the issue you discovered is LLVM bugzilla report #34206:
https://bugs.llvm.org/show_bug.cgi?id=34206

Since I made that comment here, libc++ has been updated to fix the misuse of
Schrage's algorithm (though, looking at the current source code, it still looks
wrong to me), so my initial comment is somewhat out of date.

Either way, though, this issue wasn't about a comparison with libc++, but about
libstdc++ appearing to contradict the C++ standard. For reference, MSVC doesn't
have a native 128-bit integer type, but it still handles these cases correctly
using 64-bit integer arithmetic (though MSVC could still optimize its
implementation for x86_64 using intrinsics if it wanted to).

This is a bit of an edge case that I don't think most users will encounter, so
performance is probably less important here than correctness. I'd personally
prioritize minimizing branches (i.e. keeping the code simple) over optimizing
the operand sizes for performance, but that's just my opinion.