Re: [R] GLM question

2011-08-23 Thread Andra Isan
... to do the prediction for the hold-out data. Is there a better way to do cross-validation in R, i.e. to learn a model on the training data and then test it on the hold-out test data? Thanks, Andra
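
A minimal sketch of what such a cross-validation loop might look like in R, assuming a data frame dat with a binary (0/1) outcome y and predictors x1 and x2 (all object names here are hypothetical, not taken from the thread):

## split the rows into k roughly equal folds
set.seed(1)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(dat)))

## for each fold: fit on the other folds, predict on the held-out fold
cv_error <- sapply(1:k, function(i) {
  train <- dat[folds != i, ]
  test  <- dat[folds == i, ]
  fit   <- glm(y ~ x1 + x2, family = binomial, data = train)
  p     <- predict(fit, newdata = test, type = "response")
  mean((test$y - p)^2)              # squared-error (Brier) score on the hold-out fold
})
mean(cv_error)                      # average hold-out error across folds

## cv.glm() in the boot package automates the same idea:
## library(boot)
## cv.glm(dat, glm(y ~ x1 + x2, family = binomial, data = dat), K = 5)$delta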

Re: [R] GLM question

2011-08-22 Thread Joshua Wiley
Hi Andra, there are several problems with what you are doing (by the way, I point them out so you can learn and improve, not to be harsh or rude). The good news is that there is a solution (#3) that is easier than what you are doing right now! 1) glm.fit() is a function, so it is a good idea not to use ...
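
A minimal sketch of the workflow this advice points toward, assuming hypothetical data frames train and test and a hypothetical binomial model (none of these names are from the thread):

fit <- glm(y ~ x1 + x2, family = binomial, data = train)  # fit with glm(), not the low-level glm.fit()
summary(fit)                                              # coefficients, deviances, etc.

## predictions for the hold-out data, on the probability scale
pred <- predict(fit, newdata = test, type = "response")
head(pred)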

Re: [R] GLM Question

2009-12-04 Thread Knut Krueger
Peter Flom wrote: > What do you mean by "better"? Dear Peter, thank you for your kind response as well. You are right, we are in constant debate about whether it makes sense to remove variables (no matter whether significant or not) from a total dataset which in itself has a certain meaning and may not ...

Re: [R] GLM Question

2009-12-03 Thread Peter Flom
Knut Krueger wrote:
> I think this is more of a general question about GLMs.
> The result was better in all prior GLMs when I admitted the non-significant
> factors, but this is the first time that the result is worse than before.
> What could be the reason for that?
> glm(data1 ~ data2 + data3 + data4 + data5 ...
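
One common way to check whether dropping a non-significant term is justified is to compare the nested models directly. A minimal sketch, mirroring the quoted (truncated) call and assuming the variables exist in the workspace, that the family is binomial, and that data5 is the term in question (all of these are assumptions, not from the thread):

full    <- glm(data1 ~ data2 + data3 + data4 + data5, family = binomial)
reduced <- update(full, . ~ . - data5)        # the same model without the term

anova(reduced, full, test = "Chisq")          # likelihood-ratio test for the dropped term
AIC(reduced, full)                            # lower AIC suggests a better fit/complexity trade-off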