1) This is a miss, unfortunately ... We will add support for
regularization and intercept in the upcoming v1.1. (JIRA:
https://issues.apache.org/jira/browse/SPARK-2550)
2) It has overflow problems in Python but not in Scala. We can
stabilize the computation by ensuring exp only takes a negative value:
1 / (1 + e^x) = 1 - 1 / (1 + e^{-x}). (JIRA:
https://issues.apache.org/jira/browse/SPARK-2552)
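
For example, something like the following (an untested sketch; the
helper name stable_sigmoid is just illustrative, not part of MLlib):

    import math

    def stable_sigmoid(margin):
        # score = 1 / (1 + exp(-margin)), arranged so that exp() only
        # ever sees a non-positive argument and cannot overflow.
        if margin >= 0:
            return 1.0 / (1.0 + math.exp(-margin))
        else:
            # apply 1 / (1 + e^x) = 1 - 1 / (1 + e^{-x}) with x = -margin
            return 1.0 - 1.0 / (1.0 + math.exp(margin))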

-Xiangrui

On Wed, Jul 16, 2014 at 7:58 PM, Yanbo Liang <yanboha...@gmail.com> wrote:
> AFAIK for question 2, there is no built-in method to account for that
> problem.
> Right now, we can only perform one type of regularization.
> However, an elastic net implementation is already underway.
> You can refer to this JIRA for further discussion:
> https://issues.apache.org/jira/browse/SPARK-1543
>
>
> 2014-07-17 2:08 GMT+08:00 fjeg <francisco.gime...@gmail.com>:
>
>> 1) Okay, to clarify, there is *no* way to regularize logistic regression
>> in Python (sorry if I'm repeating your answer).
>>
>> 2) This method you described will have overflow errors when abs(margin) >
>> 750. Is there a built-in method to account for this? Otherwise, I will
>> probably have to implement something like this:
>>
>> http://lingpipe-blog.com/2012/02/16/howprevent-overflow-underflow-logistic-regression
>>
>> Also, another question about the Scala implementation:
>> Can we only do one type of regularization? Is there any way to perform
>> elastic net which is a combination of L1 and L2?
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/MLLib-Regularized-logistic-regression-in-python-tp9780p9963.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>
