On 28-09-13 12:31, Ned Batchelder wrote:
On 9/28/13 2:01 AM, Steven D'Aprano wrote:
On Fri, 27 Sep 2013 17:40:46 -0400, Ned Batchelder wrote:
On 9/27/13 12:10 PM, Denis McMahon wrote:
On Fri, 27 Sep 2013 12:08:33 +0000, Dave Angel wrote:
I recall
writing a shuffle function in C decades ago, which took an array of
(52) unique items and put them in random order.
Whenever I tried to write shuffles I came up against a fairly
fundamental limit:
52! > PRNG states
Granted, PRNGs seem to be better these days at importing entropy from
elsewhere.
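For illustration, a shuffle of that sort takes only a few lines of Python (a plain Fisher-Yates pass; the 32-bit state size below is just an assumed example of a small generator, not any specific one):

    import math
    import random

    def fisher_yates_shuffle(items):
        """Put the items of a list into random order, in place (Fisher-Yates)."""
        for i in range(len(items) - 1, 0, -1):
            j = random.randrange(i + 1)             # uniform position in items[0..i]
            items[i], items[j] = items[j], items[i]

    deck = list(range(52))
    fisher_yates_shuffle(deck)

    # The limit described above: a 52-card deck has more orderings than a
    # small generator has states.
    print(math.factorial(52))   # about 8.07e67 distinct orderings
    print(2 ** 32)              # states of a hypothetical 32-bit PRNG: far too few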
Python's PRNG holds plenty of state to handle card shuffling. Python's
random number generator has a period of 2**19937-1, so it has that many
states. That is much larger than 52!, about 10**5933 times larger.
(Unless I've botched the math...)
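That is easy to check directly (just a sanity check of the numbers above):

    import math

    period = 2 ** 19937 - 1          # Mersenne Twister period, i.e. number of states
    orderings = math.factorial(52)   # distinct orderings of a 52-card deck

    print(period > orderings)                      # True
    print(len(str(period)) - len(str(orderings)))  # 5934 digits apart, roughly 10**5933 times larger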
There's a warning about that in the docs:
http://docs.python.org/3/library/random.html#random.shuffle
[quote]
Note that for even rather small len(x), the total number of permutations
of x is larger than the period of most random number generators; this
implies that most permutations of a long sequence can never be generated.
[end quote]
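For anyone following along, the call in question looks like this (the deck representation is just a convenient choice for the example):

    import random

    deck = [rank + suit for suit in "SHDC" for rank in "A23456789TJQK"]
    random.shuffle(deck)    # in-place shuffle driven by the Mersenne Twister
    print(deck[:5])         # the first five cards dealt from this shuffle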
If I've done that maths right, it turns out that 2080 is the largest
number of items that a list can have for Python's default PRNG to
generate every possible shuffle. But does it really matter? For most
purposes, I'd say No. Even if you generated one trillion shuffles per
second, you would have to keep shuffling for more than a trillion years
before repeating yourself.
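That figure can be reproduced with a short brute-force loop (a back-of-the-envelope check, nothing more):

    period = 2 ** 19937 - 1      # number of distinct Mersenne Twister states

    n, permutations = 1, 1
    while permutations * (n + 1) <= period:   # grow n! until it would exceed the period
        n += 1
        permutations *= n
    print(n)    # 2080: the longest list for which every ordering is reachable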
To be pedantic: a *lot* more than a trillion years. Approximately
10**5970 trillion years. That's about a:
trillion trillion trillion trillion trillion trillion trillion trillion
trillion trillion trillion trillion trillion trillion trillion trillion
trillion trillion trillion trillion trillion trillion trillion trillion
... and so on for 59 more lines ... years.
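Roughly where that figure comes from (an order-of-magnitude estimate, assuming a trillion shuffles per second and letting the generator run through its entire period):

    period = 2 ** 19937 - 1          # shuffles before the generator's state repeats
    shuffles_per_second = 10 ** 12
    seconds_per_year = 60 * 60 * 24 * 365

    years = period // (shuffles_per_second * seconds_per_year)
    print(len(str(years)))   # about 5983 digits, i.e. on the order of 10**5970 trillion years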
So in practice it's not really relevant that some shuffles won't be
generated. They'll be randomly distributed throughout the space of all
possible shuffles, which is so large that you really won't notice the
missing ones.
I've thought that way about it too: there are so many shuffles anyway,
it won't be a problem. But think about it like this: if you shuffle a
deck of 52 cards with a default Python random object, then once you have
dealt out only 28 cards, the entire rest of the deck is completely
determined. That is, given the sequence of the first 28 cards, there's
only one choice for how the remaining 24 will be dealt. Depending on
what you need from your deck of cards, that could be a complete disaster.
I don't see it. Unless, given those 28 cards, you can actually predict
the other 24 cards, or at least devise some betting strategy that will
allow you to beat the odds, I don't see how this could lead to a
disaster.
Is it a problem for a card game simulation? No. Is it a problem for a
program that analyzes correlations between the start of the deck and the
end of the deck? Maybe.
Well, if a program actually did find a correlation between the start of
the deck and the end, that might indeed be cause for worry. But what
evidence is there for that possibility?
--
Antoon Pardon