I wrote a very simple function to test the random module:
    import random

    def test_random(length, multiplier=10000):
        # One counter per possible value; every counter should end up
        # close to multiplier if the generator is uniform.
        number_list = [0] * length
        for _ in range(length * multiplier):
            number_list[random.randint(0, length - 1)] += 1
        minimum = min(number_list)
        maximum = max(number_list)
        return (minimum, maximum, minimum / maximum)

When running:
    for i in range(1, 7):
        print(test_random(100, 10 ** i))
I get:
    (3, 17, 0.17647058823529413)
    (73, 127, 0.5748031496062992)
    (915, 1086, 0.8425414364640884)
    (9723, 10195, 0.9537027954879843)
    (99348, 100620, 0.9873583780560524)
    (997198, 1002496, 0.9947151908835546)

and when running:
    for i in range(1, 7):
        print(test_random(1000, 10 ** i))
I get:
    (2, 20, 0.1)
    (69, 138, 0.5)
    (908, 1098, 0.8269581056466302)
    (9684, 10268, 0.9431242695753799)
    (99046, 100979, 0.9808574059953059)
    (996923, 1003598, 0.9933489305478886)

It shows that the deviation grows when the first parameter (length)
increases and shrinks when the second parameter (multiplier)
increases. Exactly what you would expect. But what ranges would you
expect from a good random function? With known bounds, this could be
used to test a random function.
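With a good (uniform) generator, each counter is binomially
distributed: mean multiplier, standard deviation roughly
sqrt(multiplier). Among length counters the extremes typically land
about sqrt(2 * ln(length)) standard deviations from the mean, so you
can turn that into a rough expected (min, max) band. A sketch under
those approximations (the extreme-value constant is asymptotic, so
treat the band as indicative rather than a hard limit):

    import math

    def expected_range(length, multiplier=10000):
        n = length * multiplier             # total draws
        p = 1 / length                      # probability per bucket
        sigma = math.sqrt(n * p * (1 - p))  # std dev of one counter
        # Typical extreme deviation among length counters
        # (asymptotic approximation for the max of iid normals).
        z = math.sqrt(2 * math.log(length))
        return (multiplier - z * sigma, multiplier + z * sigma)

    for i in range(1, 7):
        print(expected_range(100, 10 ** i))

For length = 100, multiplier = 10000 this gives roughly 10000 +/- 300,
which brackets the (9723, 10195) you measured. The more standard way
to test a generator, though, is a chi-squared goodness-of-fit test on
the counts, e.g. scipy.stats.chisquare(number_list), which gives a
p-value instead of an ad-hoc min/max ratio.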

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof