pablobarhamal...@gmail.com wrote:
> Ok, I'm working on a predator/prey simulation, which evolves using genetic
> algorithms. At the moment, the creatures use a quite simple feed-forward
> neural network, which can change size over time. Each brain "tick" is
> performed by the following function (inside the Brain class):
>
> def tick(self):
>     input_num = self.input_num
>     hidden_num = self.hidden_num
>     output_num = self.output_num
>
>     hidden = [0]*hidden_num
>     output = [0]*output_num
>
>     inputs = self.input
>     h_weight = self.h_weight
>     o_weight = self.o_weight
>
>     e = math.e
>
>     count = -1
>     for x in range(hidden_num):
>         temp = 0
>         for y in range(input_num):
>             count += 1
>             temp += inputs[y] * h_weight[count]
>         hidden[x] = 1/(1+e**(-temp))
>
>     count = -1
>     for x in range(output_num):
>         temp = 0
>         for y in range(hidden_num):
>             count += 1
>             temp += hidden[y] * o_weight[count]
>         output[x] = 1/(1+e**(-temp))
>
>     self.output = output
>
> The function is actually quite fast (~0.040 seconds per 200 calls, using
> 10 input, 20 hidden and 3 output neurons), and used to be much slower
> until I fiddled about with it a bit to make it faster. However, it is
> still somewhat slow for what I need.
>
> My question to you is whether you can see any obvious (or not so obvious)
> way of making this faster. I've heard about numpy and have been reading
> about it, but I really can't see how it could be implemented here.
>
> Cheers!
Assuming every list is replaced with a numpy.array, with

h_weight.shape == (hidden_num, input_num)
o_weight.shape == (output_num, hidden_num)

and as untested as it gets:

def tick(self):
    # One matrix-vector product per layer replaces each pair of
    # nested loops; with the shapes above the weight matrix is the
    # first argument to numpy.dot. numpy.exp applies the sigmoid
    # element-wise to the whole layer at once.
    temp = numpy.dot(self.h_weight, self.input)
    hidden = 1/(1+numpy.exp(-temp))
    temp = numpy.dot(self.o_weight, hidden)
    self.output = 1/(1+numpy.exp(-temp))

My prediction: this is probably wrong, but if you can fix the code it will be stinkin' fast ;)
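For reference, here is a self-contained sketch comparing the two approaches outside the Brain class. The layer sizes come from the original post; the random weight values, and the function names tick_loops and tick_numpy, are made up for the demo. Note that the flat h_weight/o_weight lists from the original become 2-D arrays here, which is the same data in the same order, just reshaped.

import math
import numpy

# Sizes from the original post: 10 input, 20 hidden, 3 output neurons.
input_num, hidden_num, output_num = 10, 20, 3

# Arbitrary reproducible test data (the real weights come from the GA).
rng = numpy.random.RandomState(0)
inputs = rng.uniform(-1, 1, input_num)
h_weight = rng.uniform(-1, 1, (hidden_num, input_num))
o_weight = rng.uniform(-1, 1, (output_num, hidden_num))

def tick_loops(inputs, h_weight, o_weight):
    # The original nested-loop version; the running 'count' index into
    # a flat weight list becomes plain 2-D indexing.
    hidden = [0] * hidden_num
    output = [0] * output_num
    for x in range(hidden_num):
        temp = 0
        for y in range(input_num):
            temp += inputs[y] * h_weight[x, y]
        hidden[x] = 1 / (1 + math.e ** (-temp))
    for x in range(output_num):
        temp = 0
        for y in range(hidden_num):
            temp += hidden[y] * o_weight[x, y]
        output[x] = 1 / (1 + math.e ** (-temp))
    return output

def tick_numpy(inputs, h_weight, o_weight):
    # Vectorized version: one matrix-vector product per layer, then the
    # element-wise logistic function.
    hidden = 1 / (1 + numpy.exp(-numpy.dot(h_weight, inputs)))
    return 1 / (1 + numpy.exp(-numpy.dot(o_weight, hidden)))

# Both versions compute the same thing, to floating-point precision.
assert numpy.allclose(tick_loops(inputs, h_weight, o_weight),
                      tick_numpy(inputs, h_weight, o_weight))

The speedup comes from moving the inner arithmetic out of the Python interpreter: the loop version executes roughly input_num*hidden_num + hidden_num*output_num bytecode iterations per tick, while the numpy version does the same arithmetic inside two dot calls.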