Having a neuron as an object is exactly what I have in my implementation. Sounds exciting!
Share your code when ready! Eager to try it!

Alexandre
--
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.

> On Mar 21, 2017, at 7:30 AM, Oleksandr Zaytsev <olk.zayt...@gmail.com> wrote:
>
> I started by implementing some simple threshold neurons. The current goal is
> a multilayer perceptron (similar to the one in scikit-learn), and maybe other
> kinds of networks, such as self-organizing maps or radial basis networks.
>
> I could try to implement a deep learning algorithm, but the big issue with
> them is time complexity. It would probably require the use of a GPU or some
> advanced "tricks", so I should start with something smaller.
>
> Also, I want to try different kinds of design approaches, including some that
> are not based on highly optimized vector algebra (I know that it might not be
> the best idea, but I want to try it and see what happens). For example, a
> network where each neuron is an object (normally the whole network is
> represented as a collection of weight matrices). It might turn out to be very
> slow, but more object-friendly. For now it's just an idea, but to try
> something like that I would need a small network with 1-100 neurons.
>
> Yours sincerely,
> Oleksandr
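
For illustration, here is a minimal Python sketch of the "each neuron is an object" idea described above. The thread itself contains no code; the class and method names (ThresholdNeuron, Layer, fire) are made up for this example, and the actual implementation discussed may look quite different.

    class ThresholdNeuron:
        """A perceptron-style threshold neuron that owns its weights and bias."""

        def __init__(self, weights, bias):
            self.weights = list(weights)
            self.bias = bias

        def fire(self, inputs):
            # Weighted sum of inputs plus bias, passed through a step function.
            activation = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
            return 1 if activation >= 0 else 0


    class Layer:
        """A layer is simply a collection of neuron objects, not a weight matrix."""

        def __init__(self, neurons):
            self.neurons = neurons

        def forward(self, inputs):
            return [neuron.fire(inputs) for neuron in self.neurons]


    # Usage: an AND gate built from a single threshold neuron.
    and_neuron = ThresholdNeuron(weights=[1, 1], bias=-1.5)
    for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(pair, "->", and_neuron.fire(pair))

Compared to the usual weight-matrix representation, each neuron here carries its own state and behavior, which is slower but, as the message suggests, more object-friendly for small networks of roughly 1-100 neurons.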