Abdur-Rahmaan, I am sure various available modules have ready-made solutions, and I see others have already replied to your question. The usual disclaimers apply: this is an academic discussion, not a statement of the right or only way to do an abstract task.
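For instance, one such ready-made solution in the standard library (Python 3.6+) is random.choices, which accepts per-item weights directly; a minimal sketch, assuming the color example from your question:

    import random

    # random.choices handles the cumulative-weight bookkeeping internally;
    # it returns a list of k picks, so we take element [0].
    colors = ['green', 'red', 'blue']
    pick = random.choices(colors, weights=[0.4, 0.4, 0.2], k=1)[0]
    print(pick)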
So just a thought. You seem interested in a GENERAL situation where you have N outcomes, each with a specific probability, and the probabilities sum to 1.0. If that is right, you are creating a partition: you can make a data structure listing the N ordered items along with their individual probabilities, and from that derive a cumulative sum. In your example, the individual probabilities for ['green', 'red', 'blue'] are [0.4, 0.4, 0.2], but the use of lists is just for illustration. You might use a NumPy array, as NumPy has a cumulative sum function:

    import numpy as np

    np.cumsum([0.4, 0.4, 0.2])

Returns:

    array([0.4, 0.8, 1. ])

or, viewed vertically:

    np.cumsum([0.4, 0.4, 0.2]).reshape(3, 1)

    array([[0.4],
           [0.8],
           [1. ]])

Again, we are talking about a GENERAL solution, just using this example to illustrate. To make a weighted random choice, you use the random module (or anything else) to generate a random number between 0 and 1, then search the cumulative-sum data structure for the range it falls in. A random value below 0.4 directs you to green; above that but below 0.8, red; else, blue. To do this properly, decide what data structure makes that search easy. Maintaining three independent lists or arrays may not be optimal. The main idea is to find a way to segment your choices.

So consider a different common example: rolling a pair of standard six-sided dice. There are 6**2 = 36 possible outcomes. There is only one way to get a sum of 2 (a one and another one), so its probability is 1/36, or about 0.0278, and you can calculate the probabilities of all the other sums the same way, with 7 the most common roll at about 0.1667. Here your choices are the rolls 2 through 12, i.e. 11 choices, and the same logic applies: generate the 11 probabilities and their cumulative sum. Just as an illustration, here is code using a Pandas DataFrame:

    import numpy as np
    import pandas as pd

    diceSumText = np.array(["two", "three", "four", "five", "six", "seven",
                            "eight", "nine", "ten", "eleven", "twelve"])
    diceSumVal = np.array(range(2, 13))
    diceProbability = np.array([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]) / 36
    diceProbCumSum = np.cumsum(diceProbability)

Now combine those:

    mydata = pd.DataFrame({"Text": diceSumText,
                           "Value": diceSumVal,
                           "Prob": diceProbability,
                           "Cum": diceProbCumSum})
    print(mydata)

          Text  Value      Prob       Cum
    0      two      2  0.027778  0.027778
    1    three      3  0.055556  0.083333
    2     four      4  0.083333  0.166667
    3     five      5  0.111111  0.277778
    4      six      6  0.138889  0.416667
    5    seven      7  0.166667  0.583333
    6    eight      8  0.138889  0.722222
    7     nine      9  0.111111  0.833333
    8      ten     10  0.083333  0.916667
    9   eleven     11  0.055556  0.972222
    10  twelve     12  0.027778  1.000000

Again, you can do this any number of ways; this is just one. It lets you write an algorithm that finds the first index whose cumulative value exceeds the random number drawn, then selects the text or value belonging to that partition; two sketches of that lookup step follow below. There are many other ways, so feel free to innovate.
-----Original Message-----
From: Python-list <python-list-bounces+avigross=verizon....@python.org> On Behalf Of Abdur-Rahmaan Janhangeer
Sent: Friday, December 28, 2018 2:45 PM
To: Python <python-list@python.org>
Subject: Re: graded randomness

well i wanted that to improve the following code:
https://www.pythonmembers.club/2018/12/28/reviving-bertrand-russell-through-python/
that one i used the random list technique

Abdur-Rahmaan Janhangeer
http://www.pythonmembers.club | https://github.com/Abdur-rahmaanJ
Mauritius