Hello,

Thanks a lot for your advice! It was very helpful and educational (for
example, I thought we stored biases in the weight matrix and prepended
a 1 to the input to make things faster, but now I see why it is
actually slower that way).
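
To make that concrete: with a separate bias vector the forward pass is
one multiply plus one add, while the folded-in form has to build an
augmented copy of every input first. A rough sketch (the variable names
are mine, and I assume matrix and vector objects that understand #* and
#+):

    "Separate bias vector: multiply, then add."
    z := (weights * input) + biases.

    "Bias folded into the weight matrix: every forward pass must first
     allocate an augmented input with a 1 prepended."
    augmentedInput := #(1) , input.
    z := augmentedWeights * augmentedInput.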

I've implemented a multi-layer neural network as a linked list of
layers that propagate the input and the error from one to the next,
similar to the Chain of Responsibility pattern. Also, I now represent
biases as separate vectors.
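
The forward pass through the chain looks roughly like this (only a
sketch: the class, selector and variable names are placeholders, I
assume logistic sigmoid activation, and weights * anInput again
presumes matrix objects):

    NeuronLayer >> propagate: anInput
        "Compute this layer's activation and hand it on, Chain of
        Responsibility style; the last layer returns the result."
        | output |
        output := (weights * anInput + biases)
            collect: [ :z | 1 / (1 + z negated exp) ].
        ^ nextLayer
            ifNil: [ output ]
            ifNotNil: [ nextLayer propagate: output ]
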
The LearningAlgorithm is a separate class with Backpropagation as its
subclass (at this point the network can only learn through
backpropagation, but I'm planning to change that). I'm also trying to
figure out how the activation and cost functions should be connected:
for example, cross-entropy works best with logistic sigmoid activation.
I would like to give users the freedom to plug in whatever they like
and see what happens, but that can be very inefficient, because for
some pairings the time-consuming parts of the activation and cost
derivatives cancel each other out, and a naive implementation would
compute them anyway.
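
To make the cancellation explicit: with logistic sigmoid the activation
derivative is a * (1 - a), and the cross-entropy derivative with
respect to the activation is (a - y) / (a * (1 - a)), so the output
delta collapses to a - y. A sketch over plain arrays (the variable
names are made up):

    "Generic pairing: cost derivative times activation derivative."
    delta := activation with: target collect: [ :a :y |
        ((a - y) / (a * (1 - a)))    "cross-entropy dC/da"
            * (a * (1 - a)) ].       "sigmoid f'(z), expressed via a"

    "Specialised pairing: the a * (1 - a) factors cancel analytically."
    delta := activation with: target collect: [ :a :y | a - y ].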

Also, there is an interface for setting the learning rate for the
whole network, which can be used both to choose the learning rate
before training and to change it after each iteration. I am planning
to implement some optimization algorithms that would automate the
choice of learning rate (Adagrad, for example), but this will require
a somewhat different design (maybe I will implement the Optimizer, as
you suggested).
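
For Adagrad specifically, the per-parameter rule I have in mind looks
like this (a sketch over plain arrays; the class and selector names are
hypothetical, and the 1.0e-8 is just a guard against division by zero):

    Object subclass: #AdagradOptimizer
        instanceVariableNames: 'learningRate squaredGradients'
        classVariableNames: ''
        package: 'NeuralNetwork'

    AdagradOptimizer >> update: weights gradient: gradient
        "Accumulate squared gradients, then scale each step by
        learningRate / sqrt(accumulated sum + epsilon)."
        | steps |
        squaredGradients := squaredGradients
            with: gradient
            collect: [ :s :g | s + (g * g) ].
        steps := gradient with: squaredGradients collect: [ :g :s |
            learningRate * g / (s + 1.0e-8) sqrt ].
        ^ weights with: steps collect: [ :w :step | w - step ]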

I'm attaching two UML diagrams describing my current implementation;
could you please tell me what you think of this design? The first image
is a class diagram showing the whole architecture, and the second is a
sequence diagram of backpropagation.

mlnn.png <http://forum.world.st/file/n4943698/mlnn.png>  
backprop.png <http://forum.world.st/file/n4943698/backprop.png>  

Sincerely yours,
Oleksandr


