Stuti,
- The two numbers are used in different contexts, but they finally end up on the two sides of an && operator.
- A parallel K-Means consists of multiple iterations, each of which moves the centroids around. A centroid is deemed stabilized when the root of the squared distance (i.e., the Euclidean distance) between its successive positions falls below epsilon.
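To make that loop structure concrete, here is a minimal, self-contained sketch in plain Scala (not Spark's actual implementation) of a single k-means run in which the iteration cap and the epsilon-based movement test sit on the two sides of one && check; the names KMeansRunSketch, runOnce and Point are illustrative only.

// A minimal sketch (not Spark code) of one k-means run combining the two stopping criteria.
object KMeansRunSketch {
  type Point = Array[Double]

  def squaredDistance(a: Point, b: Point): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  def closest(centers: Array[Point], p: Point): Int =
    centers.indices.minBy(i => squaredDistance(centers(i), p))

  def runOnce(points: Array[Point],
              initCenters: Array[Point],
              maxIterations: Int,
              epsilon: Double): Array[Point] = {
    var centers = initCenters.map(_.clone)
    var iteration = 0
    var moved = true
    // Keep iterating only while we are under the iteration cap AND at least
    // one centroid moved by epsilon or more in the previous pass.
    while (iteration < maxIterations && moved) {
      val grouped = points.groupBy(p => closest(centers, p))
      val newCenters = centers.indices.map { i =>
        grouped.get(i) match {
          case Some(ps) => ps.transpose.map(_.sum / ps.length) // mean of the cluster
          case None     => centers(i)                          // empty cluster: keep the old center
        }
      }.toArray
      // A centroid is considered stabilized once its Euclidean movement is below epsilon.
      moved = centers.zip(newCenters).exists { case (oldC, newC) =>
        math.sqrt(squaredDistance(oldC, newC)) >= epsilon
      }
      centers = newCenters
      iteration += 1
    }
    centers
  }
}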
It is running k-means many times, independently, from different random starting points, in order to pick the best clustering. Convergence ends one run, not all of them.
Yes, epsilon should be the same as the "convergence threshold" elsewhere.
You can set epsilon if you instantiate KMeans directly.
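A minimal sketch of what "instantiate KMeans directly" can look like with org.apache.spark.mllib.clustering.KMeans, assuming a Spark version in which setEpsilon is publicly accessible; the application name and input path are hypothetical.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object KMeansEpsilonExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("KMeansEpsilonExample"))

    // Hypothetical input: one whitespace-separated feature vector per line.
    val data = sc.textFile("data/kmeans_input.txt")
      .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
      .cache()

    // Instantiating KMeans directly exposes setters, including the convergence
    // threshold, that the static KMeans.train(...) helpers may not expose.
    val model = new KMeans()
      .setK(3)
      .setMaxIterations(50)
      .setEpsilon(1e-6) // stop a run once no center moves by 1e-6 or more
      .run(data)

    model.clusterCenters.foreach(println)
    sc.stop()
  }
}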
Hi All,
Any ideas on this ??
Thanks
Stuti Awasthi
From: Stuti Awasthi
Sent: Wednesday, May 14, 2014 6:20 PM
To: user@spark.apache.org
Subject: Understanding epsilon in KMeans
Hi All,
I wanted to understand the functionality of epsilon in KMeans in Spark MLlib.
As per the documentation: the distance threshold within which we consider centers to have converged. If all centers move less than this Euclidean distance, we stop iterating one run.
Now I have assumed that if centers move less than this epsilon value, the run is considered converged and iteration stops.
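To illustrate the documented rule with numbers, here is a tiny standalone Scala check (not Spark code) that decides whether a run has converged; the epsilon value and center coordinates are made up.

// Made-up example values: epsilon plus each center's position before and after an iteration.
val epsilon = 1e-4
val oldCenters = Array(Array(1.0, 2.0), Array(5.0, 5.0))
val newCenters = Array(Array(1.00003, 2.00001), Array(5.0, 5.00008))

// The run stops only if every center moved less than epsilon (Euclidean distance).
val allConverged = oldCenters.zip(newCenters).forall { case (o, n) =>
  val movement = math.sqrt(o.zip(n).map { case (a, b) => (a - b) * (a - b) }.sum)
  movement < epsilon
}
println(allConverged) // true: both centers moved less than 1e-4, so this run would stop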