to do the prediction for the hold-out data. Is there a better way to do
cross-validation in R, i.e. to learn a model on the training data and then
test it on the hold-out test data?
Thanks,
Andra
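(A minimal sketch of one way this is commonly done by hand is below; mydata, y, x1 and x2 are placeholder names, not objects from the original post, and the binomial family is only an assumption for the example. cv.glm() from the boot package is the ready-made alternative.)

## Hand-rolled 5-fold cross-validation of a GLM on placeholder data.
set.seed(1)
k     <- 5
folds <- sample(rep(1:k, length.out = nrow(mydata)))
err   <- numeric(k)
for (i in 1:k) {
  train  <- mydata[folds != i, ]               # training portion
  test   <- mydata[folds == i, ]               # hold-out portion
  fit    <- glm(y ~ x1 + x2, family = binomial, data = train)
  p      <- predict(fit, newdata = test, type = "response")
  err[i] <- mean((p > 0.5) != test$y)          # hold-out misclassification rate
}
mean(err)

## The boot package wraps the same idea for a fitted glm object:
library(boot)
full <- glm(y ~ x1 + x2, family = binomial, data = mydata)
cv.glm(mydata, full, K = 5)$delta              # cross-validated prediction error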
--- On Mon, 8/22/11, Joshua Wiley wrote:
> From: Joshua Wiley
> Subject: Re: [R] GLM question
> To: "And
Hi Andra,
There are several problems with what you are doing (by the way, I
point them out so you can learn and improve, not to be harsh or rude).
The good news is there is a solution (#3) that is easier than what
you are doing right now!
1) glm.fit() is a function, so it is a good idea not to use
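(As a generic illustration of the fit-on-training / predict-on-hold-out workflow discussed in this thread; train, test, y, x1 and x2 are placeholder names, not taken from the original code:)

## Fit on the training rows, then score the hold-out rows with predict()
## and newdata.  Avoid reusing the name glm.fit for the fitted object,
## since stats::glm.fit is already a function.
train_fit <- glm(y ~ x1 + x2, family = binomial, data = train)
holdout_p <- predict(train_fit, newdata = test, type = "response")
head(holdout_p)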
Peter Flom wrote:
What do you mean by "better"?
Dear Peter,
Thank you for your kind response as well. You are right, we are in
constant debate about whether it makes sense to remove variables (no matter
whether significant or not) from a total dataset which in itself has a
certain meaning and may not
Knut Krueger wrote
>
>I think this is more a general question about GLMs.
>
>The result was better in all prior GLMs when I included the non-significant
>factors, but this is the first time that the result is worse
>than before. What could be the reason for that?
>
>glm(data1~data2+data3+data4+data5
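(One way to examine this kind of question, sketched purely for illustration: data1 ... data5 stand for the poster's variables, assumed to be in the workspace as in the quoted call, and the family argument is left at its default since the call above is truncated.)

## Compare the full model against one with a candidate term dropped.
full    <- glm(data1 ~ data2 + data3 + data4 + data5)
reduced <- update(full, . ~ . - data5)   # drop one non-significant term
anova(reduced, full, test = "F")         # formal test of the dropped term
AIC(reduced, full)                       # lower AIC = better fit/complexity trade-off

For non-Gaussian families, test = "Chisq" in anova() is the usual choice instead of the F test.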