Hi, GCC (and other C compilers too) support the widening multiplication of i386/AMD64 processors, but DON'T support their narrowing division:
--- demo.c ---
unsigned long long product(unsigned long multiplicand, unsigned long multiplier) {
    return (unsigned long long) multiplicand * multiplier;
}

unsigned long long quotient(unsigned long long dividend, unsigned long divisor, unsigned long *remainder) {
    *remainder = dividend % divisor;
    return dividend / divisor;
}
--- EOF ---

GCC 12.2: gcc -m32 -O2 demo.c  # https://godbolt.org/z/1M9dohMcE

product(unsigned long, unsigned long):
        mov     eax, DWORD PTR [esp+8]
        mul     DWORD PTR [esp+4]
        ret
quotient(unsigned long long, unsigned long, unsigned long*):
        push    ebx
        xor     edx, edx
        sub     esp, 24
        mov     eax, DWORD PTR [esp+40]
        lea     ecx, [esp+8]
        sub     esp, 12
        push    ecx
        push    edx
        push    eax
        push    DWORD PTR [esp+60]
        push    DWORD PTR [esp+60]
        call    __udivmoddi4
        mov     ebx, DWORD PTR [esp+40]
        mov     ecx, DWORD PTR [esp+76]
        mov     DWORD PTR [ecx], ebx
        add     esp, 56
        pop     ebx
        ret

### Diversion ###

Even worse, and completely BRAINDEAD: another compiler calls __udivdi3() and wastes a multiplication to compute the remainder, ignoring the fact that __udivdi3() itself calls __udivmoddi4(), which already returns both quotient and remainder:

clang 15.0.0: clang -m32 -O2 demo.c  # https://godbolt.org/z/rv1sTe7xv

product(unsigned long, unsigned long):
        mov     eax, dword ptr [esp + 8]
        mul     dword ptr [esp + 4]
        ret
quotient(unsigned long long, unsigned long, unsigned long*):
        push    ebp
        push    ebx
        push    edi
        push    esi
        sub     esp, 12
        call    .L1$pb
.L1$pb:
        pop     ebx
.Ltmp2:
        add     ebx, offset _GLOBAL_OFFSET_TABLE_+(.Ltmp2-.L1$pb)
        mov     esi, dword ptr [esp + 44]
        mov     edi, dword ptr [esp + 32]
        mov     ebp, dword ptr [esp + 40]
        push    0
        push    ebp
        push    dword ptr [esp + 44]
        push    edi
        call    __udivdi3@PLT
        add     esp, 16
        imul    ebp, eax
        sub     edi, ebp
        mov     dword ptr [esi], edi
        add     esp, 12
        pop     esi
        pop     edi
        pop     ebx
        pop     ebp
        ret

### end of diversion ###

Both compilers miss the fact that the i386 processor has a narrowing integer division and can therefore divide 64-bit by 32-bit numbers, for example with the well-known "long" alias "schoolbook" division, returning a 64-bit quotient and a 32-bit
remainder:

.arch generic32
.code32
.intel_syntax noprefix
.text
quotient:
        mov     ecx, [esp+12]   # ecx = divisor
.if 0
        xor     edx, edx
        mov     eax, [esp+8]    # edx:eax = high dword of dividend
        cmp     eax, edx
        je      0f              # high dword of dividend = 0?
.else
        xor     eax, eax
        mov     edx, [esp+8]    # eax:edx = high dword of dividend
        cmp     edx, ecx
        jb      0f              # high dword of dividend < divisor?
                                # quotient < 2**32?
        xchg    eax, edx
.endif
        div     ecx             # eax = high dword of quotient,
0:                              # edx = high dword of dividend'
        push    eax
        mov     eax, [esp+8]    # edx:eax = dividend'
        div     ecx             # eax = low dword of quotient,
                                # edx = remainder
        mov     ecx, [esp+20]   # ecx = address of remainder
        mov     [ecx], edx
        pop     edx             # edx:eax = quotient
        ret
.end

JFTR: depending on the magnitude of the numbers and the processor, it MIGHT be better to omit the comparison and branch: there is a trade-off between the latency of the (unpipelined) division instruction and the latency of the conditional branch in case of misprediction.

Stefan Kanthak
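For reference, the schoolbook division above can be sketched in portable C (the function name udiv64_32 is mine, not from any library). Note that the two division steps are written here as 64/32 divisions whose quotients are guaranteed to fit in 32 bits, which is exactly the precondition of the i386 DIV instruction; a C compiler targeting i386 will of course still lower them to the very library call criticized above, so this sketch demonstrates the algorithm's correctness, not the desired code generation:

```c
#include <stdint.h>

/* Schoolbook 64/32 division with 64-bit quotient and 32-bit
   remainder, mirroring the two DIV instructions above. */
uint64_t udiv64_32(uint64_t dividend, uint32_t divisor, uint32_t *remainder)
{
    uint32_t hi = (uint32_t)(dividend >> 32);
    uint32_t lo = (uint32_t)dividend;
    uint32_t q_hi, r;

    if (hi < divisor) {        /* quotient fits in 32 bits: skip first step */
        q_hi = 0;
        r = hi;
    } else {                   /* first step: divide the high dword */
        q_hi = hi / divisor;   /* high dword of quotient */
        r = hi % divisor;      /* carried into the second step */
    }
    /* second step: divide r:lo; since r < divisor, the quotient
       of this step is < 2**32, so DIV cannot fault */
    uint64_t rest = ((uint64_t)r << 32) | lo;
    *remainder = (uint32_t)(rest % divisor);
    return ((uint64_t)q_hi << 32) | (uint32_t)(rest / divisor);
}
```

The branch corresponds to the `jb 0f` above and can likewise be omitted, since q_hi = 0/divisor = hi/divisor whenever hi < divisor.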