> the latest xkcd on my internet is about a programming language that
> attempts to eliminate off-by-one errors via "every time an integer is
> stored or read, its value is adjusted up or down by a random amount
> between 40 and 50". The title text clarifies that this generates
> off-by-40-or-50 errors instead.

I'm looking a little into the cpython source with the goal of implementing 
something like this.
It's maybe not the best choice -- the interpreter is only used for one 
language, the language has other interpreters written for it, and i haven't yet 
found a single unified place to hook integer stores and loads -- but it's a 
language i've been using a lot recently, and i recently spent some time poking 
at its bytecode, which is quite rare for me.

it's 0613 ET. i'm looking for mentions of 'random' in the cpython source to 
figure out how to generate a random number :s

random.py -> imports _random, and imports urandom from os
os.py -> imports * from posix or nt depending on platform
Modules/_randommodule.c -> contains this line: #include "pycore_pylifecycle.h"   // _PyOS_URandomNonblock()

Here are some interesting floating point constants:
static PyObject *
_random_Random_random_impl(RandomObject *self)
/*[clinic end generated code: output=117ff99ee53d755c input=26492e52d26e8b7b]*/
{
    uint32_t a=genrand_uint32(self)>>5, b=genrand_uint32(self)>>6;
    return PyFloat_FromDouble((a*67108864.0+b)*(1.0/9007199254740992.0));
}

how much is one 9.007 quadrillionth? something to do with doubles or floats, 
67.1 million, and the 27- and 26-bit integers left by those shifts!
i guess the 67.1 million is there to combine them into one wider integer, and 
the 9.007 quadrillion is there to scale the result into [0, 1). funny at first 
to see large integer constants in a floating point division context, but it 
makes sense.

blurp

arright ummm

genrand_uint32 is a manual inline implementation that operates on a 
RandomObject :/ the comments say it's MT19937, from hiroshima-u.ac.jp. 9.007 
quadrillion is pow(2, 53), and 67.1 million is pow(2, 26). the cpython 
comments say "likely vain hope" about spelling the division as a 
multiply-by-reciprocal instead. the comment authors were maybe unaware that 
this optimization could double your frame rate in the 80s and 90s, and that 
word of such optimizations spread less once hardware advanced.
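
to convince myself of the bit math, here's a small standalone paraphrase (my 
own throwaway code, not anything from cpython) of how two 32-bit draws become 
one 53-bit double in [0, 1):

#include <stdint.h>
#include <stdio.h>

/* fake_uint32() is a placeholder standing in for genrand_uint32(self);
   it is not a real cpython function. */
static uint32_t fake_uint32(void) { return 0xDEADBEEFu; }

int main(void)
{
    uint32_t a = fake_uint32() >> 5;   /* keep the top 27 bits */
    uint32_t b = fake_uint32() >> 6;   /* keep the top 26 bits */
    /* a * 2^26 + b packs them into one 53-bit integer -- the width of a
       double's mantissa -- and dividing by 2^53 scales it into [0, 1). */
    double x = (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
    printf("%.17g\n", x);
    return 0;
}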

current options:
- initializing and using random state via this compiled mt19937 module
- using code from the nt or posix module to get randomness
- including my own random header or such
- dropping in a manual implementation of a generation algorithm (rough sketch 
after this list)
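
for that last option, a minimal sketch of the kind of thing i mean -- a tiny 
xorshift32 generator, my own throwaway code with nothing to do with cpython's 
sources:

#include <stdint.h>

/* Marsaglia's xorshift32, as an illustration of "a manual implementation of
   a generation algorithm". The state must be seeded to something nonzero. */
static uint32_t xorshift32_state = 2463534242u;

static uint32_t xorshift32(void)
{
    uint32_t x = xorshift32_state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    xorshift32_state = x;
    return x;
}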

i'm visiting Modules/posixmodule.c

it's notable that these functions have comments along the lines of "clinic end 
generated code" ... but i'm guessing it's the signatures that are generated, 
not the bodies
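
if that guess is right, the generated part would be roughly a parsing wrapper 
shaped like this (my reconstruction from memory of the Argument Clinic 
pattern, not a verbatim copy of the generated file, and it assumes the 
surrounding posixmodule.c context):

/* Roughly the shape of the clinic-generated wrapper: convert the Python-level
   argument, then call the hand-written os_urandom_impl. Reconstructed from
   memory; not verbatim. */
static PyObject *
os_urandom(PyObject *module, PyObject *arg)
{
    Py_ssize_t size = PyNumber_AsSsize_t(arg, PyExc_OverflowError);
    if (size == -1 && PyErr_Occurred()) {
        return NULL;
    }
    return os_urandom_impl(module, size);
}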

os_urandom_impl in posixmodule.c hands off to _PyOS_URandom

it looks like posixmodule.c defines either the "posix" or the "nt" module, 
depending on the platform

looks like both _PyOS_URandom and _PyOS_URandomNonblock are declared in 
pycore_pylifecycle.h

the definitions, though, are in Python/bootstrap_hash.c.

it defines a few different random number generators, which is strange to see 
after the inline generator from the japanese university code. usually i'd 
expect these two things to be unified.

notably, i'm not seeing these random number generators in the header file yet:
win32_urandom -> calls BCryptGenRandom, a windows call
py_getrandom -> calls getrandom(), a linux kernel syscall
py_getentropy -> calls getentropy()
dev_urandom -> reads /dev/urandom
lcg_urandom -> manual linear congruential generator
pyurandom -> hands off to the first four above, in order, as available; does 
not use lcg_urandom
_PyOS_URandom* -> hands off to pyurandom

so to generate a number between 40 and 50 i suppose i'd call a _PyOS_URandom 
function and read 1 byte. i could keep a cache of random bits if i were fancy, 
but it would likely make more sense to spend that time on ensuring integer 
stores and loads are properly hooked.
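
something like this, from inside the interpreter -- a hedged sketch assuming 
_PyOS_URandom keeps its current contract of returning 0 on success and -1 
with an exception set on failure:

#include "Python.h"
#include "pycore_pylifecycle.h"   // _PyOS_URandom()

/* Pick the comic's "random amount between 40 and 50" from one urandom byte.
   The % 11 carries a slight modulo bias toward 40..42, which hardly matters
   for this purpose. */
static int
random_40_to_50(void)
{
    unsigned char byte;
    if (_PyOS_URandom(&byte, 1) < 0) {
        return -1;   /* exception already set by _PyOS_URandom */
    }
    return 40 + (byte % 11);   /* 11 possible values, 40..50 inclusive */
}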
