Yes, you are seeding the search and taking different paths through the problem
space. In the best case one could model the problem space with some priors and
do a minimally overlapping search of the feature space.
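
To make the "different paths" idea a bit more concrete, here is a rough sketch
(in Clojure, with made-up function names and constants) of drawing starting
weight vectors from a Gaussian prior while keeping them a minimum distance
apart, so the restarts don't overlap:

(defn gaussian-vec
  "Draw an n-dimensional weight vector from a zero-mean Gaussian prior."
  [^java.util.Random rng n sigma]
  (vec (repeatedly n #(* sigma (.nextGaussian rng)))))

(defn distance [a b]
  (Math/sqrt (reduce + (map (fn [x y] (let [d (- x y)] (* d d))) a b))))

(defn spread-out-starts
  "Pick k starting points that are at least min-dist apart, so each
   training run begins in a different region of weight space.
   Note: this loops forever if min-dist is too large for the given sigma."
  [k n sigma min-dist seed]
  (let [rng (java.util.Random. (long seed))]
    (loop [chosen []]
      (if (= k (count chosen))
        chosen
        (let [cand (gaussian-vec rng n sigma)]
          (recur (if (every? #(> (distance cand %) min-dist) chosen)
                   (conj chosen cand)
                   chosen)))))))

Each vector in the result would then be handed to a separate training run as
its initial weights.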

I find this one very interesting: http://metacog.org/main.pdf
Apparently it produces good results, and it seems theoretically sound to me.


On Mon, Nov 19, 2012 at 1:46 PM, nicolas.o...@gmail.com <
nicolas.o...@gmail.com> wrote:

> I am not a specialist at all, but I think I remember that back propagation
> benefits from training multiple times with the starting coefficients chosen
> at random (in order to find better local minima).
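>
> Roughly what I have in mind, as a sketch only (`train` and `network-error`
> are placeholders for whatever the real code exposes):
>
> (defn best-of-random-restarts
>   "Run the trainer from several random starting weight vectors and
>    keep the network whose error is lowest."
>   [train network-error n-restarts n-weights]
>   (->> (range n-restarts)
>        (map (fn [seed]
>               (let [rng (java.util.Random. (long seed))
>                     ;; initial coefficients drawn uniformly from [-1, 1]
>                     w0  (vec (repeatedly n-weights
>                                          #(- (* 2.0 (.nextDouble rng)) 1.0)))]
>                 (train w0))))
>        (apply min-key network-error)))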
>
>
>
>
> On Mon, Nov 19, 2012 at 3:29 AM, Timothy Washington <twash...@gmail.com> wrote:
>
>> Yes, agreed. The only reason I chose to begin with BackPropagation was to
>> first get a thorough understanding of gradient descent. The next two
>> approaches I have in mind are i) Resilient Propagation
>> <http://de.wikipedia.org/wiki/Resilient_Propagation> and ii) the
>> Levenberg–Marquardt algorithm
>> <http://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm>.
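>>
>> To get a feel for what the RPROP update looks like, here is a rough sketch
>> (it follows the simplified variant without weight backtracking; the 1.2 / 0.5
>> factors and the step-size bounds are the usual textbook constants, and the
>> function itself is purely illustrative):
>>
>> (defn rprop-step
>>   "Update a single weight. grad and prev-grad are the partial derivatives
>>    of the error w.r.t. this weight on the current and previous epoch;
>>    step is this weight's own step size."
>>   [{:keys [weight step prev-grad]} grad]
>>   (let [sign  (fn [x] (cond (pos? x) 1.0 (neg? x) -1.0 :else 0.0))
>>         step' (cond (pos? (* grad prev-grad)) (min (* step 1.2) 50.0) ; same sign: grow
>>                     (neg? (* grad prev-grad)) (max (* step 0.5) 1e-6) ; sign flipped: shrink
>>                     :else step)]
>>     {:weight    (- weight (* (sign grad) step'))
>>      :step      step'
>>      :prev-grad grad}))
>>
>> Mapping that over all weights once per epoch, with batch gradients, gives the
>> basic RPROP loop.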
>>
>> Now, by overtraining for the specific data, are you asking whether the
>> algorithm is skewed to accommodate it? That may be the case, and I have to
>> get more sample data sets. That is, in fact, one of the questions I have
>> with this post. More broadly, it would be good to have more eyes look at
>> the training algorithm and see if I got the main bits right, and then at
>> strategies for picking a network architecture, avoiding local minima, etc.
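>>
>> One cheap way to check the overtraining question, even before getting more
>> data sets, is to hold part of the current data back and compare errors. A
>> sketch, with `train` and `error-on` standing in for the real functions:
>>
>> (defn holdout-check
>>   "Train on a fraction of the data and report training vs. hold-out error.
>>    A training error far below the hold-out error suggests the network is
>>    memorising the samples rather than learning the general pattern."
>>   [train error-on data train-ratio]
>>   (let [shuffled  (shuffle data)
>>         n-train   (long (* train-ratio (count shuffled)))
>>         train-set (take n-train shuffled)
>>         test-set  (drop n-train shuffled)
>>         net       (train train-set)]
>>     {:train-error   (error-on net train-set)
>>      :holdout-error (error-on net test-set)}))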
>>
>> The next thing I want to do is set up a configuration so that *A)* one can
>> specify the training algorithm (i) BackPropagation, ii) ResilientPropagation,
>> iii) etc.), *B)* the network architecture (how many hidden and output
>> neurons, etc.) is configurable, and *C)* more and more types of training
>> data can be added.
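>>
>> As a first stab at what that configuration might look like (pure
>> illustration; none of these keys exist anywhere yet):
>>
>> (def example-config
>>   {:training-algorithm :backpropagation       ; or :resilient-propagation, :levenberg-marquardt
>>    :architecture       {:input-neurons  2
>>                         :hidden-layers  [4 3] ; neuron count per hidden layer
>>                         :output-neurons 1}
>>    :training           {:learning-rate 0.3
>>                         :max-epochs    5000}
>>    :data-sets          [:xor :sine]})         ; more data set types added over time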
>>
>>
>> Tim
>>
>>
>>
>>
>> On Sun, Nov 18, 2012 at 7:55 PM, Andreas Liljeqvist <bon...@gmail.com> wrote:
>>
>>> Well, machine learning is a complex area.
>>> Basically you have to widen the search when you get stuck in a local
>>> minimum.
>>>
>>> Another question is, are you overtraining for your specific data? Using
>>> too many neurons tends to learn the specific cases rather than the general
>>> pattern. Getting a perfect score is easy: just keep adding neurons...
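>>>
>>> A quick way to see that effect is to sweep the hidden-layer size and watch a
>>> held-out error next to the training error: the training error keeps falling
>>> as neurons are added, while the held-out error eventually turns back up.
>>> (`train-with-hidden` and `error-on` are made-up names, just to sketch the idea.)
>>>
>>> (defn hidden-size-sweep
>>>   "Train one network per hidden-layer size and report both errors."
>>>   [train-with-hidden error-on train-set test-set sizes]
>>>   (for [n sizes
>>>         :let [net (train-with-hidden n train-set)]]
>>>     {:hidden-neurons n
>>>      :train-error    (error-on net train-set)
>>>      :test-error     (error-on net test-set)}))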
>>>
>>> Standard backpropagation isn't really the state of the art nowadays.
>>> Go and look up the thousands of papers written in the area, and none of
>>> them has a definitive answer :P
>>>
>>>
>
>
>
> --
> Sent from an IBM Model M, 15 August 1989.
>
