Re: A Simple FeedForward NeuralNetwork (using BackPropagation)

2012-11-21 Thread Anthony McClure
Brian Ripley wrote and maintains the nnet package for R, which is the basic neural net package there. He also wrote a book, "Pattern Recognition and Neural Networks", that is somewhat well known among statisticians. I have only read the first chapter, but it appears he works through simple examples. Also, the book

Re: A Simple FeedForward NeuralNetwork (using BackPropagation)

2012-11-20 Thread Timothy Washington
Nice. I'm gonna check out the Weka resources and the Russell and Norvig LISP programs. This is just what I'm looking for. And that Stanford resource I mentioned is part of the Coursera regimen. So it sounds like I'm on the right course.

Re: A Simple FeedForward NeuralNetwork (using BackPropagation)

2012-11-19 Thread Timothy Washington
Hey, thanks for the "*COMPETENT PROGRAM EVOLUTION*" paper. I'll take a closer look at that. And I too think using reference material is a good start. The only clear outline of the backprop algorithm I found was in this paper: page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf. I haven't found a pape
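For concreteness, here is a minimal Clojure sketch of the backprop update that chapter outlines, assuming a single hidden layer, sigmoid units, and squared error; the network representation and every function name here are illustrative, not code from this thread:

(defn sigmoid [x]
  (/ 1.0 (+ 1.0 (Math/exp (- x)))))

(defn dot [xs ws]
  (reduce + (map * xs ws)))

(defn layer-out
  "Activations of one layer; `weights` holds one weight vector per
  neuron, each with a trailing bias weight."
  [inputs weights]
  (mapv #(sigmoid (dot (conj inputs 1.0) %)) weights))

(defn forward
  "Returns [hidden-activations output-activations]."
  [inputs {:keys [hidden output]}]
  (let [h (layer-out inputs hidden)]
    [h (layer-out h output)]))

(defn backprop-step
  "One gradient-descent update for a single [inputs targets] pair;
  eta is the learning coefficient."
  [{:keys [hidden output] :as net} inputs targets eta]
  (let [[h o]  (forward inputs net)
        ;; output deltas: (o - t) * o * (1 - o)
        d-out  (mapv (fn [oj tj] (* (- oj tj) oj (- 1.0 oj))) o targets)
        ;; hidden deltas: push the output deltas back through the weights
        d-hid  (mapv (fn [k hk]
                       (* hk (- 1.0 hk)
                          (reduce + (map (fn [dj wj] (* dj (nth wj k)))
                                         d-out output))))
                     (range (count h)) h)
        adjust (fn [ws ds ins]
                 (mapv (fn [w d]
                         (mapv (fn [wi xi] (- wi (* eta d xi)))
                               w (conj ins 1.0)))
                       ws ds))]
    {:hidden (adjust hidden d-hid inputs)
     :output (adjust output d-out h)}))

Reducing backprop-step over the training pairs, epoch after epoch, is all that online gradient descent amounts to.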

Re: A Simple FeedForward NeuralNetwork (using BackPropagation)

2012-11-19 Thread Timothy Washington
Yes, that's exactly what I found. I did a lot of test runs, varying my learning coefficient each time. The learning coefficients I used ranged from 0.01 to 1.5, and 0.02 seems to be a good coefficient for the data set I've started out with. I'm going to play around with it a little more though - using
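The coefficient only scales each gradient step, w <- w - eta * dE/dw, so a toy error surface is enough to show why the two ends of that range behave so differently (purely illustrative, not the thread's data):

(defn descend
  "Gradient descent on E(w) = w^2 (gradient 2w), starting from w = 1.0."
  [eta steps]
  (nth (iterate (fn [w] (- w (* eta 2.0 w))) 1.0) steps))

(descend 0.02 50) ;=> ~0.13   - small steps, steadily approaching the minimum at 0
(descend 1.5  50) ;=> ~1.1e15 - each step overshoots the minimum and the iteration diverges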

Re: A Simple FeedForward NeuralNetwork (using BackPropagation)

2012-11-19 Thread Andreas Liljeqvist
If you have too many neurons or not a big enough dataset, you risk learning the features of the training set but not the generality of the problem. Take a dataset where each row is [e1 e2 answer]: (def dataset [[1 1 2] [2 2 4] [4 4 8]]). Are you learning addition here, or a doubling function? If you have enough neurons, you could al
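One way to make that ambiguity concrete (a hypothetical check, not from the thread) is to hold out a validation case on which the two hypotheses disagree:

(def training-set [[1 1 2] [2 2 4] [4 4 8]])
(def validation-set [[3 1 4]]) ; addition gives 4, doubling e1 gives 6

(defn fits-addition? [[e1 e2 answer]] (= answer (+ e1 e2)))
(defn fits-doubling? [[e1 _ answer]]  (= answer (* 2 e1)))

(every? fits-addition? training-set)   ;=> true
(every? fits-doubling? training-set)   ;=> true  - the training data can't tell them apart
(every? fits-doubling? validation-set) ;=> false - the held-out case breaks the tie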

Re: A Simple FeedForward NeuralNetwork (using BackPropagation)

2012-11-19 Thread Andreas Liljeqvist
Yes, you are seeding and taking different paths through the problem area. In the best case, one could model the problem area with some priors and do a minimally overlapping search of the features. I find this one very interesting: http://metacog.org/main.pdf Apparently it produces good results, and to me seems the

Re: A Simple FeedForward NeuralNetwork (using BackPropagation)

2012-11-18 Thread Timothy Washington
Yes, agreed. The only reason I chose to begin with BackPropagation was to first get a thorough understanding of gradient descent. The next two approaches I have in mind are i) Resilient Propagation and ii) the Levenberg–Marquardt algorithm
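For reference, the core of Resilient Propagation fits in a few lines. This is a simplified sketch, not the full algorithm (it omits the weight backtracking RPROP performs on a sign flip), with the usual published step-size bounds assumed:

(defn rprop-step
  "Update a single weight from the sign of its error gradient.
  Returns [new-weight new-step-size gradient-to-remember]."
  [w step grad prev-grad]
  (let [s    (* grad prev-grad)
        step (cond (pos? s) (min (* step 1.2) 50.0)  ; same sign: lengthen the stride
                   (neg? s) (max (* step 0.5) 1e-6)  ; sign flipped: back off
                   :else    step)]
    [(- w (* (Math/signum (double grad)) step)) step grad]))

Unlike plain backprop, only the sign of the gradient is used, so the learning-coefficient tuning discussed earlier in the thread largely goes away.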