If you have too many neurons or not a big enough dataset, you risk learning
the features of the training set rather than the generality of the problem.

;; each example is [e1 e2 answer]
(def dataset [[1 1 2] [2 2 4] [4 4 8]])

Are you learning addition here or a doubling function?
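
For instance, a minimal sketch (the two hypothesis functions are just
illustrative names) of how both readings fit that dataset equally well:

;; Two candidate hypotheses: each reproduces the training data exactly,
;; so the data alone cannot tell them apart.
(defn add-hypothesis    [[e1 e2]] (+ e1 e2))  ; learns addition
(defn double-hypothesis [[e1 _]]  (* 2 e1))   ; learns "double the first input"

(every? (fn [[e1 e2 answer]]
          (= answer
             (add-hypothesis [e1 e2])
             (double-hypothesis [e1 e2])))
        dataset)
;; => true for every example, so a perfect training score proves nothing here.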

If you have enough neurons, you could also more or less encode a lookup
function instead of the logic behind it.
e.g.
(defn func [x] ({[1 1] 2 [2 2] 4} x)) ; yeah, it would not look like this
                                       ; if it were encoded in a NN
instead of:
(defn func [x] (apply + x))

It will break down as soon as it encounters data that is not in the training set.
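
A quick sketch of that breakdown, giving the two versions above distinct
(hypothetical) names:

(defn lookup-func [x] ({[1 1] 2 [2 2] 4} x)) ; the "memorized" version
(defn sum-func    [x] (apply + x))           ; the "generalized" version

(lookup-func [2 2]) ;=> 4   , it was in the training set
(lookup-func [3 3]) ;=> nil , breaks down on unseen input
(sum-func    [3 3]) ;=> 6   , still works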

About validating your implementation:
try to find a textbook where they calculate a simple backprop network
by hand.
Steal their data and make unit tests from it.
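
For instance, a self-contained sketch of that idea (the single-neuron
forward pass and the numbers below are placeholders; swap in your own
functions and the textbook's hand-worked values):

(ns nn.backprop-test
  (:require [clojure.test :refer [deftest is]]))

(defn sigmoid [x] (/ 1.0 (+ 1.0 (Math/exp (- x)))))

;; Stand-in for the forward pass of your real implementation;
;; here just a single sigmoid neuron so the example runs on its own.
(defn forward-pass [weights inputs]
  (sigmoid (reduce + (map * weights inputs))))

(deftest forward-pass-matches-hand-calculation
  ;; By hand: net = 0.5*1 + 0.5*1 = 1.0, and sigmoid(1.0) ~ 0.7310586.
  ;; Replace these with the values worked out in the textbook.
  (is (< (Math/abs (- (forward-pass [0.5 0.5] [1 1]) 0.7310586))
         1e-6)))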


On Mon, Nov 19, 2012 at 4:29 AM, Timothy Washington <twash...@gmail.com> wrote:

> Yes agreed. The only reason I chose to begin with BackPropagation was to
> first get a thorough understanding of gradient descent. The next 2
> approaches I have in mind are i) Resilient Propagation
> <http://de.wikipedia.org/wiki/Resilient_Propagation> and ii) the
> Levenberg–Marquardt algorithm
> <http://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm>.
>
> Now, by overtraining for the specific data, are you wondering if the
> algorithm is skewed to accommodate it? That may be the case, and I have to
> get more sample data sets. That's, in fact, one of the questions I have
> with this post. More broadly, it would be good to have more eyes look at
> the training algorithm and see if I got the main bits right. Then
> strategies for picking network architecture, avoiding local minima, etc.
>
> The next things I want to do are to set up a configuration so that *A)* one
> can specify i) BackPropagation ii) ResilientPropagation iii) etc., *B)* have
> the network architecture (how many hidden and output neurons, etc.) be
> configurable, and *C)* add more and more types of training data.
>
>
> Tim
>
>
>
>
> On Sun, Nov 18, 2012 at 7:55 PM, Andreas Liljeqvist <bon...@gmail.com> wrote:
>
>> Well, machine learning is a complex area.
>> Basically you have to widen the search area when you get stuck in a local
>> minimum.
>>
>> Another question is, are you overtraining for your specific data? Using
>> too many neurons tends to learn the specific cases but not the generality.
>> Getting a perfect score is easy, just keep adding neurons...
>>
>> Standard backpropagation isn't really the state of the art nowadays.
>> Go and look up the thousands of papers written in the area, and none of
>> them have a definitive answer :P
>>
>>
