Hi Yanbo,

As Xiangrui said, the feature scaling in the training step is transparent to users, and in theory, with or without feature scaling, the optimization should converge to the same solution after transforming back to the original space.
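To make this concrete, here is a minimal NumPy sketch (not Spark code) of the equivalence for least squares: fitting on features divided by their standard deviations and then dividing the weights by the same standard deviations recovers the original-space solution. The intercept is omitted and features are only scaled, not centered, for brevity; the data and coefficients are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three features with wildly different scales (badly conditioned for raw optimization).
X = rng.normal(size=(100, 3)) * np.array([1.0, 50.0, 0.01])
y = X @ np.array([2.0, -0.3, 400.0]) + rng.normal(scale=0.1, size=100)

# Fit directly in the original space (closed-form least squares).
w_orig = np.linalg.lstsq(X, y, rcond=None)[0]

# Fit in the scaled space, then transform the weights back: w = w_scaled / sigma.
sigma = X.std(axis=0)
w_scaled = np.linalg.lstsq(X / sigma, y, rcond=None)[0]
w_back = w_scaled / sigma

# Same solution in the original space, so the scaling is invisible to users.
assert np.allclose(w_orig, w_back)
```

For least squares the equivalence is exact; for iterative solvers like L-BFGS the scaled problem is simply better conditioned, so it converges to the same optimum faster.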
In short, we do the training in the scaled space and get the weights in the scaled space. Then we transform the weights back to the original space, so it's transparent to users. The GLMNET package in R does the same thing, and I think we should do it instead of asking users to do it with the pipeline API, since not all users know this stuff. Also, in the GLMNET package, there are different feature scaling strategies for linear regression and logistic regression; as a result, we don't want to naively make it a public API without addressing the different use cases.

Sincerely,

DB Tsai
-------------------------------------------------------
My Blog: https://www.dbtsai.com
LinkedIn: https://www.linkedin.com/in/dbtsai


On Wed, Nov 26, 2014 at 12:06 PM, Xiangrui Meng <men...@gmail.com> wrote:
> Hi Yanbo,
>
> We scale the model coefficients back after training, so scaling in
> prediction is not necessary.
>
> We had some discussion about this. I'd like to treat feature scaling
> as part of the feature transformation, and recommend that users apply
> feature scaling before training. It is a cleaner solution to me, and
> this is easy with the new pipeline API. DB (cc'ed) recommends
> embedding feature scaling in the linear methods, because it generally
> leads to better conditioning, which is also valid. Feel free to create a
> JIRA and we can have the discussion there.
>
> Best,
> Xiangrui
>
> On Wed, Nov 26, 2014 at 1:39 AM, Yanbo Liang <yanboha...@gmail.com> wrote:
>> Hi All,
>>
>> LogisticRegressionWithLBFGS sets useFeatureScaling to true by default, which can
>> improve convergence during optimization.
>> However, other training methods such as LogisticRegressionWithSGD do not
>> set useFeatureScaling to true by default, and the corresponding setter
>> is private to the mllib scope, so it cannot be set by users.
>>
>> This default configuration can cause a mismatch between training and prediction.
>> Suppose users prepare input data for the training set and the prediction set in
>> the same format, then run model training with LogisticRegressionWithLBFGS
>> followed by prediction.
>> They may not know that feature scaling is applied in the training step but
>> not in the prediction step.
>> At prediction time, the model is applied to a dataset whose scale
>> is not consistent with the training step.
>>
>> Should we make the setFeatureScaling function public and change its default value
>> to false?
>> I think it is clearer and more comprehensible to do feature scaling and
>> normalization in a preprocessing step of the machine learning pipeline.
>> If this proposal is OK, I will file a JIRA to track it.
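The mismatch Yanbo describes can be illustrated with a minimal NumPy sketch (not Spark code): weights learned in the scaled space give wrong predictions when applied to raw features, and are only correct after being transformed back to the original space. The data is synthetic and the model is plain least squares, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two features on very different scales; y is an exact linear function of X.
X = rng.normal(size=(200, 2)) * np.array([1.0, 100.0])
w_true = np.array([1.5, 0.02])
y = X @ w_true

# Train in the scaled space: these weights live in the scaled space.
sigma = X.std(axis=0)
w_scaled = np.linalg.lstsq(X / sigma, y, rcond=None)[0]

# Mismatch: applying scaled-space weights to raw features is wrong...
bad_pred = X @ w_scaled
# ...unless the weights are first transformed back to the original space.
good_pred = X @ (w_scaled / sigma)

assert not np.allclose(bad_pred, y)   # training/prediction scales disagree
assert np.allclose(good_pred, y)      # transparent once weights are mapped back
```

This is why scaling back the coefficients after training (as LBFGS does) keeps prediction correct without any user-visible scaling step.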