-- Original Message --
>> From: "lisendong" <lisend...@163.com>
>> Date: Tuesday, March 31, 2015, 3:47 PM
>> To: "Xiangrui Meng" <men...@gmail.com>
>> Cc: "Xiangrui Meng" <m...@databricks.com>;
>> "user"
> artTime > 1) {
>   throw new Exception("automatically cleanup error")
>   }
> }
> }
>
>
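The quoted snippet above is cut off by the archive. Below is a minimal sketch of what such a check could look like, assuming the intent is to fail when shuffle files that predate the job's start time are still on disk; the directory layout, file-name prefix, and `startTime` parameter are my assumptions, not from the original mail:

```scala
import java.io.File

// Hypothetical check (assumed intent): scan a local shuffle directory and
// throw if any shuffle file predates `startTime`, i.e. the ContextCleaner
// did not remove it automatically.
def assertShuffleCleaned(shuffleDir: File, startTime: Long): Unit = {
  val stale = Option(shuffleDir.listFiles()).getOrElse(Array.empty[File])
    .filter(f => f.getName.startsWith("shuffle_") && f.lastModified < startTime)
  if (stale.nonEmpty) {
    throw new Exception("automatically cleanup error")
  }
}
```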
> -- Original Message --
> From: "lisendong" <lisend...@163.com>
> Date: Tuesday, March 31, 2015, 3:47 PM
> To: "Xiangrui Meng" <men...@gmail.com>
> Cc: "Xiangrui Meng" <m...@databricks.com>; "user" <user@spark.apache.org>;
> "Sean Owen" <so...@cloudera.com>; "GuoQiang Li" <wi...@qq.com>
> Subject: Re: different result from implicit ALS with explicit ALS
I have updated my Spark source code to 1.3.1.
The checkpoint works well.
BUT the shuffle data still could not be deleted automatically… the disk usage is
still 30TB…
I have set spark.cleaner.referenceTracking.blocking.shuffle to true.
Do you know how to solve my problem?
Sendong Li
> On 2
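For reference, a minimal sketch of the setup being described, assuming a standalone driver program; the application name and checkpoint path are placeholders, not from the original mail:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Enable blocking shuffle cleanup, as described above, so the ContextCleaner
// waits for shuffle files to actually be removed, and set a checkpoint
// directory so iterative jobs such as ALS can cut long lineage chains.
val conf = new SparkConf()
  .setAppName("als-cleanup-example") // placeholder
  .set("spark.cleaner.referenceTracking.blocking.shuffle", "true")
val sc = new SparkContext(conf)
sc.setCheckpointDir("hdfs:///tmp/als-checkpoint") // placeholder
```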
Thank you very much for your opinion :)
In our case, maybe it's dangerous to treat un-observed items as negative
interactions (although we could give them a small confidence, I think they are
still unreliable…)
I will do more experiments and give you feedback :)
Thank you ;)
> On Feb 26, 2015, at 23:16,
I believe that's right, and is what I was getting at. Yes, the implicit
formulation ends up implicitly including every possible interaction in its
loss function, even unobserved ones. That could be the difference.
This is mostly an academic question though. In practice, you have
click-like data and
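To make "every possible interaction" concrete: Spark's trainImplicit follows a variant of the Hu, Koren, and Volinsky formulation, whose objective sums over all user-item cells, weighting each squared error by a confidence derived from the observed interaction strength. In that paper's standard notation (my addition, not from the mail):

$$\min_{X,Y}\ \sum_{u,i} c_{ui}\left(p_{ui} - x_u^\top y_i\right)^2 + \lambda\left(\sum_u \lVert x_u\rVert^2 + \sum_i \lVert y_i\rVert^2\right),\qquad c_{ui} = 1 + \alpha\, r_{ui},\qquad p_{ui} = \mathbf{1}[r_{ui} > 0]$$

Here $r_{ui}$ is the raw interaction strength, so unobserved cells still enter the sum with confidence $c_{ui} = 1$ and preference $p_{ui} = 0$, which is exactly why the implicit model penalizes unobserved ratings.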
Oh my god, I think I understand now...
In my case, there are three kinds of user-item pairs:
Display-and-click pairs (positive pairs)
Display-but-no-click pairs (negative pairs)
No-display pairs (unobserved pairs)
Explicit ALS considers only the first and second kinds, but implicit ALS
considers all three.
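To make the distinction concrete, here is a small sketch of how the two trainers in Spark MLlib 1.x would be fed; the RDD contents, rank, and hyperparameters are illustrative assumptions, and `sc` is an existing SparkContext:

```scala
import org.apache.spark.mllib.recommendation.{ALS, Rating}

// Illustrative data: explicit ALS is trained only on observed pairs
// (clicked = 1.0, displayed-but-not-clicked = 0.0); no-display pairs are
// simply absent from the RDD.
val observed = sc.parallelize(Seq(
  Rating(1, 10, 1.0), // display + click   -> positive pair
  Rating(1, 11, 0.0)  // display, no click -> negative pair
  // (user 1, item 12) was never displayed -> not in the data at all
))

// Explicit ALS: fits only the cells present in `observed`.
val explicitModel = ALS.train(observed, 10 /* rank */, 10 /* iterations */, 0.01 /* lambda */)

// Implicit ALS: treats the value as interaction strength and additionally
// penalizes every unobserved (user, item) cell toward preference 0.
val implicitModel = ALS.trainImplicit(observed, 10, 10, 0.01, 1.0 /* alpha */)
```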
Lisen, did you use all m-by-n pairs during training? Implicit model
penalizes unobserved ratings, while explicit model doesn't. -Xiangrui
On Feb 26, 2015 6:26 AM, "Sean Owen" wrote:
+user
On Thu, Feb 26, 2015 at 2:26 PM, Sean Owen wrote:
> I think I may have it backwards, and that you are correct to keep the 0
> elements in train() in order to try to reproduce the same result.
>
> The second formulation is called 'weighted regularization' and is used for
> both implicit and
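For context, "weighted regularization" here most likely refers to the weighted-λ scheme of Zhou et al. (ALS-WR), in which each factor's penalty is scaled by the number of ratings that user or item has. In standard notation (my addition, not from the mail):

$$\lambda\left(\sum_u n_u \lVert x_u\rVert^2 + \sum_i n_i \lVert y_i\rVert^2\right)$$

where $n_u$ is the number of ratings from user $u$ and $n_i$ the number of ratings on item $i$. Because the penalty grows with the amount of data each factor sees, the same λ behaves comparably across users and items with very different activity levels.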
I could not understand why; could you help me?
Thank you very much!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/different-result-from-implicit-ALS-with-explicit-ALS-tp21823.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
Okay, I have brought this to the user@ list.
I don't think the negative pairs should be omitted…
If the scores of all of the pairs are 1.0, the result will be worse… I have tried…
Best Regards,
Sendong Li
> On Feb 26, 2015, at 10:07 PM, Sean Owen wrote:
>
> Yes, I mean, do not generate a Rating for these