On Wed, Apr 5, 2017 at 8:27 AM, Oleksandr Zaytsev <olk.zayt...@gmail.com>
wrote:

> Hello!
>
> Several weeks ago I announced my NeuralNetworks project. Thank you very
> much for your ideas and feedback. As suggested, I wrote examples for every
> class and tested my perceptron on linearly-separable logical functions.
>
> I have just completed a post about my implementation of a single-layer
> perceptron: https://medium.com/@i.oleks/single-layer-perceptron-in-pharo
> -5b13246a041d. It has a detailed explanation of every part of the design
> and illustrates different approaches to implementation.
>
> Please, tell me what you think.
>
> Are my class diagrams correct or did I mess something up?
> Is there a design pattern that I should consider?
> Do you think that I should do something differently?
> Should I improve the quality of my code?
>
> Yours sincerely,
> Oleksandr
>

Hi Oleks,

(Sorry for the delayed response. I saw your other post in pharo-dev and found
this sitting in my drafts from last week.)

Nice article and an interesting read. I only studied neural networks in my
undergrad 25 years ago, so my knowledge is a bit vague.
I think your design reasoning is fine for this stage. Down the track you
might consider that OO is about hiding implementation details. So, just a
vague idea: you might have SLPerceptron store neuron data internally in
arrays that a GPU can process efficiently, but when "SLPerceptron>>#at:" is
asked for a neuron, it constructs a "real" neuron object whose methods
forward to the arrays in the SLPerceptron.
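
A rough sketch of what I mean, in Pharo (all class, method, and accessor
names here are hypothetical, just to illustrate the forwarding idea, not
taken from your code):

```smalltalk
"SLPerceptron keeps weights in flat arrays, which a GPU (or SIMD code)
can process efficiently. Clients never see the arrays directly."
Object subclass: #SLPerceptron
	instanceVariableNames: 'weights biases'
	classVariableNames: ''
	package: 'NeuralNetworks'.

SLPerceptron >> at: index
	"Answer a lightweight neuron object viewing the internal arrays."
	^ SLNeuronView on: self index: index

SLPerceptron >> weightsAt: index
	"Internal accessor used by the neuron view."
	^ weights at: index

"The neuron view holds no state of its own beyond a reference and an
index; every message forwards back to the perceptron's arrays."
Object subclass: #SLNeuronView
	instanceVariableNames: 'perceptron index'
	classVariableNames: ''
	package: 'NeuralNetworks'.

SLNeuronView class >> on: aPerceptron index: anInteger
	^ self new setPerceptron: aPerceptron index: anInteger

SLNeuronView >> weights
	"Forward to the perceptron's internal storage."
	^ perceptron weightsAt: index
```

That way you keep the array representation free to change (or move to the
GPU) without breaking any client that works with neuron objects.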

cheers -ben
