https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87744
--- Comment #12 from Lewis Fox <lrflew.coll at gmail dot com> ---
(In reply to Jonathan Wakely from comment #2)

My original comment about libc++ was in reference to LLVM bugzilla report #27839: https://bugs.llvm.org/show_bug.cgi?id=27839

It looks like the issue you discovered is LLVM bugzilla report #34206: https://bugs.llvm.org/show_bug.cgi?id=34206

It seems that since I made that comment here, libc++ has been updated to fix the misuse of Schrage's algorithm (though, looking at the current source code, it still looks wrong to me), so my initial comment is a little out of date. Either way, this issue isn't a comparison with libc++; it's that libstdc++ appears to contradict the C++ standard.

For reference, MSVC doesn't have a native 128-bit integer type, but it still handles these cases correctly using 64-bit integer arithmetic (though MSVC could still optimize its implementation for x86_64 using intrinsics if it wanted to).

This is an edge case that I don't think most users will encounter, so performance probably matters less here than accuracy. I'd personally prioritize minimizing branches (i.e., keeping the code simple) over optimizing the operand sizes for performance, but that's just my opinion.
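For anyone following along, here is a minimal sketch of Schrage's decomposition for computing (a * x) mod m in pure 64-bit unsigned arithmetic. The function name schrage_mulmod and the assert-based precondition checks are my own illustration; this is not the libstdc++, libc++, or MSVC implementation.

#include <cstdint>
#include <cassert>

// Compute (a * x) % m without overflow via Schrage's decomposition:
// write m = a*q + r with q = m / a and r = m % a. The method is valid
// when r <= q, which holds for the classic minstd parameters
// (m = 2147483647, a = 16807 or 48271) but not for arbitrary engines.
std::uint64_t schrage_mulmod(std::uint64_t a, std::uint64_t x,
                             std::uint64_t m)
{
    assert(0 < a && a < m && x < m);
    const std::uint64_t q = m / a;
    const std::uint64_t r = m % a;
    assert(r <= q); // precondition for Schrage's method

    // With r <= q, both products below are strictly less than m, so
    // neither multiplication overflows as long as m fits in 64 bits.
    const std::uint64_t t1 = a * (x % q); // a * (x mod q) < m
    const std::uint64_t t2 = r * (x / q); // r * (x div q) < m

    // (a * x) mod m == t1 - t2 (mod m); fold the difference back into
    // [0, m) without resorting to signed arithmetic.
    return (t1 >= t2) ? t1 - t2 : m - (t2 - t1);
}

E.g., one minstd_rand0 step is x = schrage_mulmod(16807, x, 2147483647). The point of the sketch is the precondition: when r <= q fails for a given (a, m) pair, this decomposition alone is not sufficient, which is exactly the kind of parameter-dependent subtlety the reports above are about.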