(following up a rather old thread:)
Hi Christopher,
I understand how you might use nearest neighbors for item-item
recommendations, but how do you use them to get the top N items per user?
Thanks!
Apu
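
Not part of the original exchange, but a sketch of the usual answer: with a factor model every user and item has a factor vector, and the top N items for a user are just the N item vectors nearest, by dot product or cosine, to that user's vector. In other words the nearest-neighbour search is run around the user vector rather than around another item. A brute-force version, with made-up names:

// Minimal illustrative sketch, not thread code. userVec is one user's factor
// vector, itemVecs pairs each item id with its factor vector.
def topNForUser(userVec: Array[Double],
                itemVecs: Array[(Int, Array[Double])],
                n: Int): Array[(Int, Double)] = {
  def dot(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => x * y }.sum

  itemVecs
    .map { case (itemId, v) => (itemId, dot(userVec, v)) }   // score every item
    .sortBy { case (_, score) => -score }                     // best first
    .take(n)
}

When the item catalogue is large, an approximate nearest-neighbour index (e.g. the annoy library mentioned later in this thread) stands in for the brute-force scan.
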
[...] that don't work. I will look into annoy. Thanks.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale-ranked-recommendation-tp10098p10212.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
[...] computations; I think a few GPU nodes will suffice to serve faster
recommendations after learning the model with SPARK. It would be great to have
built-in GPU support in SPARK, to leverage the GPU capability of nodes for
performing these flops faster.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale-ranked-recommendation-tp10098p10183.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
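
For a sense of the arithmetic behind the "flops" remark (the rank of the factor model is never stated in the thread, so k = 50 is only an assumed figure): scoring every pair is one dense (numUsers x k) by (k x numItems) multiply, and with the 1 million users and 10,000 items discussed below that is 10^6 * 10^4 * 50 = 5 * 10^11 multiply-adds, roughly a teraflop of work per full scoring pass. That is exactly the kind of dense BLAS workload a GPU, or at least a native BLAS on the CPU, handles well; the harder part is usually materialising and shipping the 10 billion resulting scores.
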
[...] can do all this using Breeze (e.g. concatenating vectors to make
matrices, iterating and whatnot).

Hope that helps

Nick

On Fri, Jul 18, 2014 at 1:17 AM, m3.sharma wrote:
> Yes, that's what prediction should be doing: taking dot products or the
> sigmoid function for each user-item pair. For 1 million users and 10K items
> there are 10 billion pairs.
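
A minimal sketch of that block-wise Breeze approach (not from the thread; topNForBlock, userBlock and itemFactors are made-up names, and ALS-style factor arrays are assumed): stack a chunk of user vectors into a matrix, multiply against the transposed item-factor matrix so all the dot products become one BLAS call, then keep the N best scores per row.

import breeze.linalg.{DenseMatrix, DenseVector}

// Score one block of users against every item with a single dense multiply,
// then keep the top-N items per user. itemFactors is numItems x rank and is
// assumed small enough to hold in memory on each worker.
def topNForBlock(userBlock: Array[(Int, Array[Double])],
                 itemFactors: DenseMatrix[Double],
                 n: Int): Array[(Int, Array[(Int, Double)])] = {
  // concatenate the user factor vectors into a (blockSize x rank) matrix
  val userMat = DenseMatrix.vertcat(
    userBlock.map { case (_, f) => new DenseVector(f).toDenseMatrix }: _*)
  // (blockSize x rank) * (rank x numItems) = all scores for the block
  val scores = userMat * itemFactors.t
  userBlock.indices.toArray.map { i =>
    val top = scores(i, ::).t.toArray.zipWithIndex      // (score, itemIndex)
      .sortBy { case (s, _) => -s }
      .take(n)
      .map { case (s, item) => (item, s) }
    (userBlock(i)._1, top)
  }
}

For the sigmoid variant, breeze.numerics.sigmoid can be applied to the whole score matrix instead of using the raw dot products.
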
Yes, that's what prediction should be doing: taking dot products or the
sigmoid function for each user-item pair. For 1 million users and 10K items
there are 10 billion pairs.

We are using the RegressionModels that come with the *mllib* package in SPARK.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale-ranked-recommendation-tp10098p10103.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
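
For concreteness, one hedged guess at what that mllib regression scoring could look like; the feature layout (concatenating a per-user and a per-item feature array) and every name here are assumptions, not the poster's actual pipeline:

import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LinearRegressionModel

// Score every (user, item) pair by building one feature vector per pair and
// calling the model's local predict(Vector). The item side is broadcast so
// pairs are generated on the executors instead of being shuffled around.
def scoreAllPairs(sc: SparkContext,
                  model: LinearRegressionModel,
                  userFeatures: Seq[(Int, Array[Double])],
                  itemFeatures: Seq[(Int, Array[Double])]) = {
  val users   = sc.parallelize(userFeatures)
  val itemsBc = sc.broadcast(itemFeatures)
  users.flatMap { case (userId, uf) =>
    itemsBc.value.map { case (itemId, itf) =>
      (userId, itemId, model.predict(Vectors.dense(uf ++ itf)))
    }
  }
}

Even written this way it is still 10 billion predict calls, which is why the blocked matrix-multiply formulation sketched above is usually much cheaper for factor models.
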
[...] across 800 partitions before doing the above steps, but it was of no help.
I am using about 100 executors with 2 cores and 2 GB of RAM each.
Are there any suggestions to make these predictions fast?
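
Not advice given in the thread, but the pattern that usually fixes this workload (all names illustrative): broadcast the small item-factor matrix once and score users in blocks inside mapPartitions, so each task does a few large dense multiplies instead of billions of tiny per-pair calls, and the 10-billion-row pair RDD is never materialised.

import breeze.linalg.{DenseMatrix, DenseVector}
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Broadcast the (numItems x rank) item-factor matrix and score users in
// blocks of 1000 per partition, keeping only the top-N items per user.
def recommendAll(sc: SparkContext,
                 userFactors: RDD[(Int, Array[Double])],
                 itemFactors: Array[Array[Double]],
                 n: Int): RDD[(Int, Array[(Int, Double)])] = {
  val itemMat = DenseMatrix.vertcat(
    itemFactors.map(f => new DenseVector(f).toDenseMatrix): _*)
  val itemMatBc = sc.broadcast(itemMat)

  userFactors.mapPartitions { iter =>
    val items = itemMatBc.value
    iter.grouped(1000).flatMap { block =>
      val userMat = DenseMatrix.vertcat(
        block.map { case (_, f) => new DenseVector(f).toDenseMatrix }: _*)
      val scores = userMat * items.t                    // blockSize x numItems
      block.indices.map { i =>
        val top = scores(i, ::).t.toArray.zipWithIndex
          .sortBy { case (s, _) => -s }
          .take(n)
          .map { case (s, item) => (item, s) }
        (block(i)._1, top)
      }
    }
  }
}

With 2 GB per executor the block size is the knob to watch: a 1000 x 10,000 score matrix is about 80 MB of doubles, which fits comfortably, whereas one giant pair RDD does not.
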