Hello,

I have finally added a Metacello configuration to the NeuralNetwork
project. You can now use this script to load it into your Pharo image:

Metacello new
  repository: 'http://smalltalkhub.com/mc/Oleks/NeuralNetwork/main';
  configuration: 'MLNeuralNetwork';
  version: #development;
  load.
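
The script loads the #development version; once a stable version is
tagged in the configuration, you can pass version: #stable to load that
instead.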

Sorry for the delay.

Oleks

On Tue, Apr 25, 2017 at 4:13 PM, francescoagati [via Smalltalk] <ml+s1294792n4944028...@n4.nabble.com> wrote:

> thanks ;-)
>
> 2017-04-25 15:09 GMT+02:00 Oleks <[hidden email]>:
>
>> Hello,
>>
>> There isn't one yet, but I will try to create one today. I will let
>> you know.
>>
>> Cheers,
>> Oleks
>>
>> On Apr 25, 2017 16:10, "francescoagati [via Smalltalk]" <[hidden email]> wrote:
>>
>>> Hi Oleks,
>>> is there a way to install the neural network via Metacello?
>>>
>>> 2017-04-25 13:00 GMT+02:00 Alexandre Bergel <[hidden email]>:
>>>
>>>> Keep pushing this topic, Oleks. You are on the right track!
>>>>
>>>> Alexandre
>>>>
>>>> > On Apr 24, 2017, at 1:43 AM, Oleks <[hidden email]> wrote:
>>>> >
>>>> > Hello,
>>>> >
>>>> > Thanks a lot for your advice! It was very helpful and educational
>>>> > (for example, I thought that we store biases in the weight matrix
>>>> > and prepend a 1 to the input to make it faster, but now I see why
>>>> > it's actually slower that way).
>>>> >
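>>>> > To make that concrete, a minimal sketch (illustrative names only,
>>>> > not the actual project code) contrasting the two approaches:
>>>> >
>>>> > outputWithSeparateBias
>>>> >     "One matrix-vector product plus a vector addition; nothing is
>>>> >     copied or reallocated on the forward pass."
>>>> >     ^ weights * input + bias
>>>> >
>>>> > outputWithAugmentedInput
>>>> >     "Bias folded into the weight matrix: a fresh (n+1)-element
>>>> >     input vector has to be allocated and filled on every pass."
>>>> >     | augmented |
>>>> >     augmented := (Array with: 1) , input asArray.
>>>> >     ^ augmentedWeights * augmented
>>>> >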
>>>> > I've implemented the multi-layer neural network as a linked list
>>>> > of layers that propagate the input and the error from one to the
>>>> > next, similar to the Chain of Responsibility pattern (see the
>>>> > sketch below). Also, I now represent biases as separate vectors.
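>>>> >
>>>> > For illustration, the forward pass of that chain could look
>>>> > roughly like this (hypothetical class and method names):
>>>> >
>>>> > MLLayer >> propagate: anInput
>>>> >     "Compute this layer's activation and pass it on; the last
>>>> >     layer in the chain returns the network output."
>>>> >     | output |
>>>> >     output := activationFunction value: weights * anInput + bias.
>>>> >     ^ nextLayer
>>>> >         ifNil: [ output ]
>>>> >         ifNotNil: [ nextLayer propagate: output ]
>>>> >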
>>>> > The LearningAlgorithm is a separate class with Backpropagation as
>>>> > its subclass (at this point the network can only learn through
>>>> > backpropagation, but I'm planning to change that). I'm also trying
>>>> > to figure out how the activation and cost functions should be
>>>> > connected. For example, cross-entropy works best with logistic
>>>> > sigmoid activation. I would like to give users the freedom to plug
>>>> > in whatever combination they like and see what happens, but that
>>>> > can be very inefficient, because some time-consuming parts of the
>>>> > activation and cost derivatives cancel each other out, as the note
>>>> > below shows.
>>>> >
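>>>> > Concretely, for the logistic sigmoid a = 1 / (1 + e^-z) with the
>>>> > cross-entropy cost, the a(1 - a) factor of the sigmoid derivative
>>>> > cancels against the denominator of the cost derivative, so the
>>>> > output-layer error reduces to just (a - y) and no sigmoid
>>>> > derivative needs to be evaluated at all.
>>>> >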
>>>> > Also, there is an interface for setting the learning rate for the
>>>> > whole network, which can be used to choose the learning rate prior
>>>> > to training as well as to change it after each iteration. I am
>>>> > planning to implement optimization algorithms that automate the
>>>> > choice of learning rate (AdaGrad, for example), but this would
>>>> > require a somewhat different design (maybe I will implement the
>>>> > Optimizer you suggested; see the sketch below).
>>>> >
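>>>> > Roughly, such an Optimizer could own the update rule. A minimal
>>>> > AdaGrad sketch (hypothetical names, element-wise vector arithmetic
>>>> > assumed):
>>>> >
>>>> > MLAdaGrad >> update: weights gradient: gradient
>>>> >     "Accumulate squared gradients; weights that have received
>>>> >     large updates so far get a smaller effective learning rate."
>>>> >     history := history + (gradient * gradient).
>>>> >     ^ weights - (gradient * learningRate / (history sqrt + epsilon))
>>>> >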
>>>> > I'm attaching two UML diagrams describing my current
>>>> > implementation. Could you please tell me what you think about this
>>>> > design? The first image is a class diagram that shows the whole
>>>> > architecture, and the second is a sequence diagram of
>>>> > backpropagation.
>>>> >
>>>> > mlnn.png <http://forum.world.st/file/n4943698/mlnn.png>
>>>> > backprop.png <http://forum.world.st/file/n4943698/backprop.png>
>>>> >
>>>> > Sincerely yours,
>>>> > Oleksandr
>>>> >
>>>>
>>>> --
>>>> _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
>>>> Alexandre Bergel  http://www.bergel.eu
>>>> ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.



