Hi,

I am trying to construct an example of overfitting with a multi-layer perceptron.

I have written the following small example:

library(nnet)

set.seed(1)
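# 10 observations: two standard normal inputs and one standard normal target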
x <- matrix(rnorm(20),10,2)
z <- matrix(rnorm(10),10,1)
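# rescale inputs and target by their ranges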
rx <- max(x)-min(x)
rz <- max(z)-min(z)
x <- x/rx
z <- z/rz
erreur <- 10^9
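# run nnet 100 times from random starting weights and keep the fit
# with the smallest value of the fitting criterion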
for(i in 1:100){
  temp.mod <- nnet(x=x,y=z,size=10,rang=1,maxit=1000)
  if(temp.mod$value<erreur){
    res.mod <- temp.mod
    erreur <- res.mod$value
  }
}

cat("\nFinal error : ",res.mod$value,"\n")


Normally it is an easy task for an MLP with 10 hidden units to reduce the final error to almost 0 (although there is nothing meaningful to predict).
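For reference, the network has far more free parameters than observations, so interpolating the 10 points should be possible in principle; a quick check of the weight count on the fitted object from the loop above:

length(res.mod$wts)    # (2+1)*10 + (10+1)*1 = 41 weights for only 10 observations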

But the smallest error I get is 0.753895 (a very poor result).

Maybe this problem is already known?

Maybe the fault is mine, but I don't see where.

Joseph Rynkiewicz
