In developing a machine learner to classify sentences in plain-text sources of
scientific documents, I have been using the caret package and following the
procedures described in the vignettes.  What I am missing in the package -- but
quite possibly I am overlooking it! -- are functions that let me estimate
whether a model suffers from too much variance or too much bias.  In
particular, I am looking for something that helps me compare the training and
test errors as functions of training set size (i.e., a learning curve).
Clearly, I could program something in R myself, but if a function to do this
already exists in some package, I would of course prefer to use it.
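
For concreteness, below is a rough sketch of the kind of thing I have in mind,
built around caret::train().  The rpart model, the fixed cp value, the subset
fractions, and the iris data are only placeholders for my own problem, and the
resampling is deliberately switched off so each fit is a single plain model:

library(caret)

set.seed(1)
in_train <- createDataPartition(iris$Species, p = 0.7, list = FALSE)
training <- iris[in_train, ]
testing  <- iris[-in_train, ]

## Fit the same model on increasingly large subsets of the training data and
## record the apparent (training) and held-out (test) error rates.
fractions <- c(0.2, 0.4, 0.6, 0.8, 0.95)   # stop short of 1.0 for the stratified subsample
curve <- do.call(rbind, lapply(fractions, function(frac) {
  idx <- createDataPartition(training$Species, p = frac, list = FALSE)
  sub <- training[idx, ]
  fit <- train(Species ~ ., data = sub, method = "rpart",
               tuneGrid  = data.frame(cp = 0.01),         # placeholder tuning value
               trControl = trainControl(method = "none")) # no resampling, just one fit
  data.frame(n_train   = nrow(sub),
             train_err = mean(predict(fit, sub)     != sub$Species),
             test_err  = mean(predict(fit, testing) != testing$Species))
}))
print(curve)

## Plot both error curves against training set size: a large, persistent gap
## suggests high variance; two high curves that converge suggest high bias.
matplot(curve$n_train, as.matrix(curve[, c("train_err", "test_err")]),
        type = "b", pch = 1:2,
        xlab = "Training set size", ylab = "Error rate")
legend("topright", legend = c("training error", "test error"),
       col = 1:2, lty = 1:2, pch = 1:2)

Something like this is easy enough to write, but a ready-made, better-tested
version would be preferable.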
Regards,
Richard
