Hi Steve,

Thanks for the reply. I am a complete novice with R and with SVMs, so the terms I used may be non-standard (I am not sure of the correct technical names). In any case, the numbers of data points I mentioned are just instances. As I said, though, these are bound to grow perpetually, so I am looking for a way to manage the training data as effectively as possible without any information loss. I did a quick search on online SVMs and think that is what I am looking for. Thanks for the links.
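[For readers landing on this thread: the "online SVM" idea is that the model is updated one instance at a time instead of being refit on the full, ever-growing dataset. A minimal sketch of one such method, the Pegasos stochastic-gradient update for a linear SVM, is below. It is written in plain Python purely for illustration (base R has no stock online SVM); the toy data stream and all names here are invented for the example, not the original poster's setup.]

```python
import random

def pegasos_update(w, x, y, lam, t):
    """One online Pegasos step: hinge loss with L2 regularisation.

    w   -- current weight vector (list of floats)
    x   -- feature vector of the new instance
    y   -- label, +1 or -1
    lam -- regularisation strength
    t   -- 1-based count of instances seen so far
    """
    eta = 1.0 / (lam * t)  # step size shrinks as more data arrives
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    if margin < 1:
        # misclassified or inside the margin: shrink w and move toward y*x
        return [(1 - eta * lam) * wi + eta * y * xi for wi, xi in zip(w, x)]
    # correctly classified with margin: only the shrinkage term applies
    return [(1 - eta * lam) * wi for wi in w]

def train_online(stream, dim, lam=0.01):
    """Consume a stream of (x, y) pairs one at a time; no data is stored."""
    w = [0.0] * dim
    for t, (x, y) in enumerate(stream, start=1):
        w = pegasos_update(w, x, y, lam, t)
    return w

# Toy separable stream: the true label is the sign of the first coordinate.
random.seed(0)
stream = [([random.uniform(-1, 1), random.uniform(-1, 1)], 0) for _ in range(2000)]
stream = [(x, 1 if x[0] > 0 else -1) for x, _ in stream]

w = train_online(stream, dim=2)
pred = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
acc = sum(pred(x) == y for x, y in stream) / len(stream)
```

Because each instance is processed once and then discarded, memory stays constant no matter how long the stream runs, which is the property the poster is after.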
Divya

--
View this message in context: http://r.789695.n4.nabble.com/predictive-modeling-and-extremely-large-data-tp3795674p3798294.html
Sent from the R help mailing list archive at Nabble.com.

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.