Is it possible that the feature dimension changed after filtering?
This may happen if you use the LIBSVM format but don't specify the
number of features. -Xiangrui
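A minimal sketch of loading the data with an explicit feature count, so that a
filtered subset keeps the same dimension as the original (the path and the
dimension below are placeholders):

import org.apache.spark.mllib.util.MLUtils

// sc is an existing SparkContext (e.g. the one from the Spark shell).
// Passing numFeatures explicitly pins the dimension, even if filtering
// later drops every example that uses the highest feature index.
val numFeatures = 1000  // assumed dimension; use the real count for your data
val data = MLUtils.loadLibSVMFile(sc, "data.libsvm", numFeatures)

// Without the third argument the dimension is inferred from the file, so a
// filtered subset can end up with fewer features than the full dataset.
val inferred = MLUtils.loadLibSVMFile(sc, "data.libsvm")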
On Tue, Dec 9, 2014 at 4:54 AM, Sameer Tilak wrote:
> Hi All,
>
>
> I was able to run LinearRegressionWithSGD for a larger dataset (> 2
>
>> Date: Tue, 7 Oct 2014 15:11:39 -0700
>> Subject: Re: MLLib Linear regression
>> From: men...@gmail.com
>> To: ssti...@live.com
>> CC: user@spark.apache.org
>
>>
>> Did you test different regularization parameters and step sizes? In
>> the combination that works, I don't see "A + D". Did you test that
>> combination? Are there any linear dependencies between A's columns
>> and D's columns? -Xiangrui
Did you test different regularization parameters and step sizes? In
the combination that works, I don't see "A + D". Did you test that
combination? Are there any linear dependencies between A's columns
and D's columns? -Xiangrui
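One rough way to check for such a dependency is the column-correlation matrix:
a pair of columns with |correlation| near 1 is effectively redundant. A sketch
against the RDD API, assuming features is an RDD[Vector] holding the combined
A and D columns (all names here are illustrative):

import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.rdd.RDD

// Report column pairs whose Pearson correlation is close to +/-1.
def nearDuplicateColumns(features: RDD[Vector]): Seq[(Int, Int, Double)] = {
  val corr = Statistics.corr(features, "pearson")
  val n = corr.numCols
  for {
    i <- 0 until n
    j <- (i + 1) until n
    c = corr(i, j)
    if math.abs(c) > 0.99
  } yield (i, j, c)
}

This only catches pairwise dependence; a column that is a combination of
several others would not show up, but it is a quick first check.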
On Tue, Oct 7, 2014 at 1:56 PM, Sameer Tilak wrote:
> BTW, one detail:
BTW, one detail:
When the number of iterations is 100, all weights are zero or below and the
indices are only from set A.
When the number of iterations is 150, I see 30+ non-zero weights (when sorted
by weight) and the indices are distributed across all sets. However, the MSE is
high (5.xxx) and the result does not
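For reference, the MSE quoted above is just the average squared error of the
model's predictions over the labeled points; a sketch against the RDD API,
with model and data as placeholders for the trained model and the training
set:

import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionModel}
import org.apache.spark.rdd.RDD

// Mean squared error of a trained linear model over a labeled dataset.
def meanSquaredError(model: LinearRegressionModel, data: RDD[LabeledPoint]): Double = {
  val labelsAndPreds = data.map { point =>
    (point.label, model.predict(point.features))
  }
  labelsAndPreds.map { case (label, pred) => math.pow(label - pred, 2) }.mean()
}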
Thanks, Burak. Step size 0.01 worked for b) and step size 0.0001 for c)!
Cheers
On Wed, Oct 1, 2014 at 3:00 PM, Burak Yavuz wrote:
> Hi,
>
> It appears that the step size is so high that the model is diverging with
> the added noise.
> Could you try setting the step size to 0.1 or 0.01?
>
Hi,
It appears that the step size is so high that the model is diverging with the
added noise.
Could you try setting the step size to 0.1 or 0.01?
Best,
Burak
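A small sweep over step sizes makes the divergence easy to spot, since the
training error blows up (or turns into NaN) when the step is too large. A
sketch with illustrative values, assuming data is an RDD[LabeledPoint]:

import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD}
import org.apache.spark.rdd.RDD

// Train with a few step sizes and print the training MSE for each;
// diverging runs show up as huge or NaN errors.
def sweepStepSizes(data: RDD[LabeledPoint]): Unit = {
  val numIterations = 100
  for (step <- Seq(1.0, 0.1, 0.01, 0.001, 0.0001)) {
    val model = LinearRegressionWithSGD.train(data, numIterations, step)
    val mse = data
      .map(p => math.pow(p.label - model.predict(p.features), 2))
      .mean()
    println(s"stepSize=$step  training MSE=$mse")
  }
}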
- Original Message -
From: "Krishna Sankar"
To: user@spark.apache.org
Sent: Wednesday, October 1, 2014 12:43:20 PM
Subj