Hi Xiangrui,

Could you please point me to a reference for calculating prec@k and ndcg@k?

prec is precision, I suppose, but I have no idea about ndcg...

Thanks.
Deb
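For reference, the standard IR definitions can be sketched in a few lines. This is a minimal illustration, not MLlib code: prec@k is the fraction of the top-k recommendations that are relevant, and ndcg@k is the discounted cumulative gain of the top-k list (binary relevance, log2 discount) normalized by the ideal DCG. Function names here are hypothetical.

```python
import math

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    return len(set(recommended[:k]) & relevant) / k

def ndcg_at_k(recommended, relevant, k):
    """NDCG@k with binary relevance: DCG of the top-k list over the ideal DCG."""
    # Position i (0-based) contributes 1/log2(i+2) if the item is relevant.
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    # Ideal DCG: all relevant items ranked first.
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0
```

For example, with relevant items {a, c} and ranking [a, b, c], prec@3 is 2/3, and ndcg@3 is below 1.0 because c sits at rank 3 instead of rank 2.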


On Mon, Aug 25, 2014 at 12:28 PM, Xiangrui Meng <men...@gmail.com> wrote:

> The evaluation metrics are definitely useful. How do they differ from
> traditional IR metrics like prec@k and ndcg@k? -Xiangrui
>
>
> On Mon, Aug 25, 2014 at 2:14 AM, Lizhengbing (bing, BIPA) <
> zhengbing...@huawei.com> wrote:
>
> >  Hi:
> >
> > In the paper "Item-Based Top-N Recommendation Algorithms" (
> > https://stuyresearch.googlecode.com/hg/blake/resources/10.1.1.102.4451.pdf
> > ), there are two metrics measuring the quality of recommendation: HR and
> > ARHR.
> >
> > If I use ALS (implicit) for a top-N recommendation system, I want to
> > check its quality. ARHR and HR are two good quality measures.
> >
> > I want to contribute them to Spark MLlib, so I want to know whether this
> > would be meaningful.
> >
> >
> >
> >
> >
> > (1) If *n* is the total number of customers/users, the hit-rate of the
> > recommendation algorithm is computed as
> >
> > *hit-rate (HR)* = Number of hits / *n*
> >
> >
> >
> > (2) If *h* is the number of hits that occurred at positions *p*_1, *p*_2,
> > ..., *p*_h within the *top-N* lists (i.e., 1 ≤ *p*_i ≤ *N*), then the
> > average reciprocal hit-rank is equal to:
> >
> > *ARHR* = (1/*n*) × Σ_{i=1}^{h} 1/*p*_i
>
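The two quoted definitions are easy to sketch in code. The following is a minimal illustration of the paper's leave-one-out setup (not an MLlib API, and the function name is hypothetical): each user has one held-out item, HR counts the fraction of users whose held-out item appears in their top-N list, and ARHR additionally weights each hit by the reciprocal of its rank.

```python
def hit_rate_and_arhr(topn_lists, held_out):
    """Compute HR and ARHR over n users.

    topn_lists: dict mapping user -> ranked list of recommended items
    held_out:   dict mapping user -> the single item hidden from training
    """
    n = len(topn_lists)
    hits = 0
    reciprocal_rank_sum = 0.0
    for user, recs in topn_lists.items():
        item = held_out[user]
        if item in recs:
            hits += 1
            # Ranks are 1-based: a hit at the top of the list contributes 1/1.
            reciprocal_rank_sum += 1.0 / (recs.index(item) + 1)
    return hits / n, reciprocal_rank_sum / n
```

Note that ARHR equals HR when every hit occurs at rank 1, and is strictly smaller otherwise, which is why the paper treats it as a rank-sensitive refinement of HR.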
