Hi Oleksandr,

I'm working half-time on a team doing simulations of spiking neural
networks (in Scala). Since the topic is "new" to me, I started following a
MOOC on traditional machine learning, and I have been putting some of my code here:

https://github.com/guillep/neural-experiments

Also, I wanted to experiment with handwritten characters from the MNIST
dataset (yann.lecun.com/exdb/mnist/), so I wrote a reader for the IDX format
to load it:

https://github.com/guillep/idx-reader
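For reference, the IDX layout used by MNIST is simple: a 4-byte magic number (two zero bytes, a type code, and the number of dimensions), then each dimension size as a big-endian 32-bit integer, then the raw data. A minimal sketch of such a reader (in Python, not the code from the repository above) could look like this:

```python
import struct

def read_idx(path):
    """Read an IDX file (the container format used by MNIST).

    Layout: 4-byte magic number (two zero bytes, a type code, and
    the number of dimensions), then each dimension size as a
    big-endian 32-bit integer, then the raw data bytes.
    """
    with open(path, "rb") as f:
        zero, type_code, ndim = struct.unpack(">HBB", f.read(4))
        if zero != 0:
            raise ValueError("not an IDX file")
        dims = struct.unpack(">" + "I" * ndim, f.read(4 * ndim))
        # Type code 0x08 means unsigned byte, which MNIST uses for
        # both the image and label files.
        data = f.read()
    return dims, data
```

The MNIST image file would come back as dims `(60000, 28, 28)` with one byte per pixel; reshaping into per-image arrays is left to the caller.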

If you want, I can take a look and we can discuss further :)

Guille


On Tue, Mar 21, 2017 at 11:30 AM, Oleksandr Zaytsev <olk.zayt...@gmail.com>
wrote:

> I started by implementing some simple threshold neurons. The current goal
> is a multilayer perceptron (similar to the one in scikit-learn), and maybe
> other kinds of networks, such as self-organizing maps or radial basis
> networks.
>
> I could try to implement a deep learning algorithm, but the big issue
> there is time complexity: it would probably require a GPU, or some
> advanced "tricks", so I should start with something smaller.
>
> Also, I want to try different design approaches, including some that are
> not based on highly optimized vector algebra (I know it might not be the
> best idea, but I want to try it and see what happens). For example, a
> network where each neuron is an object (normally the whole network is
> represented as a collection of weight matrices). It might turn out to be
> very slow, but more object-friendly. For now it's just an idea, but to try
> something like that I would need a small network with 1-100 neurons.
>
> Yours sincerely,
> Oleksandr
>
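The neuron-as-object idea in the quoted message could be sketched roughly like this (a hypothetical illustration in Python, not code from either project; the class and method names are made up):

```python
class ThresholdNeuron:
    """A single threshold unit kept as its own object, instead of
    a row in a shared weight matrix."""

    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def fire(self, inputs):
        # Weighted sum of the inputs, then a hard threshold at zero.
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if activation + self.bias > 0 else 0
```

A single such neuron can already act as an AND gate: `ThresholdNeuron([1, 1], -1.5).fire([1, 1])` fires, while any input with a zero does not. The trade-off Oleksandr mentions is real: each neuron carries its own state and dispatch overhead, so this stays practical only for small networks (tens of neurons), but it keeps each unit inspectable and easy to specialize by subclassing.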
