Hi Andrew,
Thank you for the info. I will have a look at these links.
Thanks,
Carter
Date: Tue, 27 May 2014 09:06:02 -0700
From: ml-node+s1001560n6436...@n3.nabble.com
To: gyz...@hotmail.com
Subject: Re: K-nearest neighbors search in Spark
Hi Krishna,
Thank you very much for your code. I will use it as a good starting point.
Thanks,
Carter
Date: Tue, 27 May 2014 16:42:39 -0700
From: ml-node+s1001560n6455...@n3.nabble.com
To: gyz...@hotmail.com
Subject: Re: K-nearest neighbors search in Spark
Carter,
Just as a quick & simple starting point for Spark. (caveats - lots of
improvements reqd for scaling, graceful and efficient handling of RDD et
al):
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import scala.collection.immutable.ListMap
import scala.colle
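(The rest of the snippet was cut off by the archive. As a stand-in, here is a minimal brute-force sketch in the same spirit, using only the core RDD API; the dataset, query point, and all names below are illustrative assumptions, not Krishna's original code.)

import org.apache.spark.{SparkConf, SparkContext}

object KnnSketch {
  // Squared Euclidean distance between two feature vectors.
  def sqDist(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("knn-sketch").setMaster("local[*]"))

    // Toy dataset of (id, features); in practice this would be loaded
    // from HDFS or another distributed store.
    val points = sc.parallelize(Seq(
      (1L, Array(0.0, 0.0)),
      (2L, Array(1.0, 1.0)),
      (3L, Array(5.0, 5.0)),
      (4L, Array(6.0, 5.0))
    ))

    val query = Array(0.5, 0.5)
    val k = 2

    // Brute force: compute the distance from the query to every point,
    // then take the k smallest. takeOrdered does a distributed top-k,
    // and the default tuple ordering sorts by distance first.
    val neighbors = points
      .map { case (id, feats) => (sqDist(query, feats), id) }
      .takeOrdered(k)

    neighbors.foreach { case (d, id) => println(s"id=$id dist2=$d") }
    sc.stop()
  }
}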
Hi Carter,
In Spark 1.0 there will be an implementation of k-means available as part
of MLlib. You can see the documentation for it at the link below (until 1.0
is fully released).
https://people.apache.org/~pwendell/spark-1.0.0-rc9-docs/mllib-clustering.html
Maybe diving into the source there will help get you started.
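For reference, usage of that API looks roughly like this (a minimal sketch against the Spark 1.0 MLlib API; the input path and the parameter values k = 2 and 20 iterations are made-up examples):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object KMeansSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("kmeans-sketch").setMaster("local[*]"))

    // Parse a whitespace-separated text file of numeric features into
    // MLlib vectors. "data/points.txt" is a made-up path.
    val data = sc.textFile("data/points.txt")
      .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
      .cache()

    // Cluster into k = 2 groups, running at most 20 iterations.
    val model = KMeans.train(data, 2, 20)

    println(s"Within-set sum of squared errors: ${model.computeCost(data)}")
    model.clusterCenters.foreach(println)
    sc.stop()
  }
}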
Any suggestion is very much appreciated.