I definitely agree with this. Performance-wise, I expect modeling each
individual neuron as an object to be terrible. The logical unit (IMHO)
should be a layer of neurons, with matrix weights, vector biases, and a
vector output.
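
Something like this, in plain Pharo with no matrix library (NeuronLayer
and #feed: are names I'm making up here, not from your code):

    Object subclass: #NeuronLayer
        instanceVariableNames: 'weights biases'
        classVariableNames: ''
        package: 'NeuralNetworks'

    "weights: an Array of rows, one row of weights per neuron;
     biases: an Array of Numbers, one per neuron"

    NeuronLayer >> feed: inputVector
        "Answer the vector of weighted sums, one element per neuron"
        ^ weights with: biases collect: [ :row :bias |
            (row with: inputVector collect: [ :w :x | w * x ]) sum + bias ]

A whole layer then activates with a single message send, and the inner
loops can later be swapped for a real linear-algebra backend without
changing the interface.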

Similarly, I think you'd be better off keeping the bias as a separate
value rather than concatenating a 1 to the input vector. I know that's
what they do when presenting the math, but it means you'll be allocating a
new vector on every pass.
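
Roughly the difference, for a single neuron (variable names are just
placeholders):

    "concatenating a 1: allocates a fresh vector on every call"
    augmented := #(1) , inputVector.
    sum := (weightsWithBias with: augmented collect: [ :w :x | w * x ]) sum.

    "separate bias: same result, no extra allocation"
    sum := (weights with: inputVector collect: [ :w :x | w * x ]) sum + bias.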

Finally, I suspect you'll eventually want to move the learning rate
(and maybe even the learn methods) out of the neuron and into a dedicated
"training"/"learning"/"optimizing" object. That's perhaps overkill for the
perceptron, but for a multilayer network you definitely want to control
the learning rate from the outside.
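
A rough sketch, assuming the neuron exposes #feed:, #weights/#weights:
and #bias/#bias: accessors (all of these names are my guesses, not your
actual API):

    Object subclass: #PerceptronTrainer
        instanceVariableNames: 'learningRate'
        classVariableNames: ''
        package: 'NeuralNetworks'

    PerceptronTrainer >> learningRate: aNumber
        learningRate := aNumber

    PerceptronTrainer >> train: aNeuron on: inputVector target: expected
        "One step of the classic perceptron learning rule, driven from outside"
        | error |
        error := expected - (aNeuron feed: inputVector).
        aNeuron weights: (aNeuron weights with: inputVector collect: [ :w :x |
            w + (learningRate * error * x) ]).
        aNeuron bias: aNeuron bias + (learningRate * error)

That way the learning rate (and later things like momentum or a decay
schedule) lives in one place instead of being duplicated across neurons.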

I've been working in TensorFlow, so my perceptions may be a bit colored by
that framework.

Cheers,
Johann

On Wed, Apr 12, 2017 at 10:45 AM Ben Coman <b...@openinworld.com> wrote:

>
>
> On Wed, Apr 5, 2017 at 8:27 AM, Oleksandr Zaytsev <olk.zayt...@gmail.com>
> wrote:
>
> Hello!
>
> Several weeks ago I announced my NeuralNetworks project. Thank you very
> much for your ideas and feedback. As suggested, I wrote examples for every
> class and tested my perceptron on linearly separable logical functions.
>
> I have just completed a post about my implementation of a single-layer
> perceptron:
> https://medium.com/@i.oleks/single-layer-perceptron-in-pharo-5b13246a041d.
> It has a detailed explanation of every part of the design and illustrates
> different approaches to implementation.
>
> Please tell me what you think.
>
> Are my class diagrams correct or did I mess something up?
> Is there a design pattern that I should consider?
> Do you think that I should do something differently?
> Should I improve the quality of my code?
>
> Yours sincerely,
> Oleksandr
>
>
> Hi Oleks,
>
> (Sorry for the delayed response. I saw your other post in pharo-dev and
> found this sitting in my drafts from last week.)
>
> Nice article and an interesting read. I only did neural networks in my
> undergrad 25 years ago, so my knowledge is a bit vague.
> I think your design reasoning is fine for this stage. Down the track you
> might consider that OO is about hiding implementation details. So, just a
> vague idea: you might have SLPerceptron store neuron data internally in
> arrays that a GPU can process efficiently, but when "SLPerceptron>>#at:" is
> asked for a neuron, it constructs a "real" neuron object whose methods
> forward to the arrays in the SLPerceptron.
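>
> Very roughly, with guessed names (NeuronView and #weightsForNeuronAt:
> are made up, not from your code):
>
>     SLPerceptron >> at: index
>         "Answer a lightweight neuron facade over the internal arrays"
>         ^ NeuronView new
>             perceptron: self;
>             index: index;
>             yourself
>
>     NeuronView >> weights
>         "Forward to the array storage in the owning perceptron"
>         ^ perceptron weightsForNeuronAt: index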
>
> cheers -ben
>
