Hi, I've been doing a POC for collaborative filtering (CF) in MLlib.
In my environment, the ratings are all implicit, so I am trying to use the
trainImplicit method (in Python).

The trainImplicit method takes alpha as one of its arguments to specify the
confidence in the ratings, as described in <
http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html>,
but the alpha value is global across all the ratings, so I am not sure why
we need it.
(If it were per rating, it would make sense to me, though.)

What difference does setting different alpha values make on exactly the
same data set?
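For reference, my understanding (which may be wrong) is that MLlib's implicit
ALS follows the Hu/Koren/Volinsky model cited in the linked docs, where alpha
maps each raw rating to a per-cell confidence weight c_ui = 1 + alpha * r_ui.
A minimal sketch of that mapping, assuming this formula is what the
implementation uses:

```python
def confidence(rating, alpha):
    """Confidence weight for one implicit rating: c = 1 + alpha * r."""
    return 1.0 + alpha * rating

# The same ratings under two alpha values: a larger alpha widens the gap
# between observed interactions and unobserved ones (r = 0), which stay at
# confidence 1. So even a global alpha changes the relative weighting.
ratings = [0.0, 1.0, 5.0]
for alpha in (1.0, 40.0):
    print(alpha, [confidence(r, alpha) for r in ratings])
```

So my question is essentially whether this global scaling of the observed-vs-
unobserved weighting is the only effect alpha has.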

I would appreciate it if someone could give me a reasonable explanation for
this.

Best regards,
Hiro
