I have given this a try in a spark-shell and I still get many "Allocation
Failure" messages.
On Thursday, July 3, 2014 9:51 AM, Xiangrui Meng wrote:
SparkKMeans is just example code showing a bare-bones implementation
of k-means. To run k-means on big datasets, please use the KMeans
implemented in MLlib directly:
http://spark.apache.org/docs/latest/mllib-clustering.html
-Xiangrui
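For reference, a minimal sketch of what calling MLlib's KMeans looks like, following the quick-start pattern in those docs (assumes a spark-shell session where sc is already defined; the input path and parameters are placeholders):

    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    // Parse whitespace-separated numeric vectors from a text file.
    val data = sc.textFile("kmeans_data.txt")
    val parsed = data.map(line => Vectors.dense(line.split(' ').map(_.toDouble))).cache()

    // Train with k = 2 clusters and at most 20 iterations.
    val model = KMeans.train(parsed, 2, 20)
    println("Within-set sum of squared errors: " + model.computeCost(parsed))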
On Wed, Jul 2, 2014 at 9:50 AM, Wanda Hawk wrote:
I can run it now with the suggested method. However, I have encountered a new
problem that I have not faced before (I sent another email about it, but here
it is again ...)
I ran SparkKMeans with a big file (~7 GB of data) for one iteration with
spark-0.8.0, with this line in my .bashrc: " expo
The scripts that Xiangrui mentions set up the classpath... Can you run
./run-example for the provided example successfully?
What you can try is setting SPARK_PRINT_LAUNCH_COMMAND=1 and then calling
run-example -- that will show you the exact java command used to run
the example at the start of execution.
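For instance (the arguments here are illustrative; the example prints its own usage message if they are wrong):

    SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/run-example SparkKMeans kmeans_data.txt 2 0.001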
Got it! Ran the jar with spark-submit. Thanks!
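A typical invocation looks something like the following (the class name and master URL are assumptions for illustration; substitute your own):

    ./bin/spark-submit --class SparkKMeans --master local[4] myjar.jar kmeans_data.txt 2 0.001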
On Wednesday, July 2, 2014 9:16 AM, Wanda Hawk wrote:
I want to make some minor modifications in SparkKMeans.scala, so running the
basic example won't do.
I have also packaged my code into a jar with sbt. It completes
successfully, but when I try to run it with "java -jar myjar.jar" I get the
same error:
"Exception in thread "main" java.lang.N
You can use either bin/run-example or bin/spark-submit to run example
code. "scalac -d classes/ SparkKMeans.scala" doesn't pick up the Spark
classpath. There are examples in the official doc:
http://spark.apache.org/docs/latest/quick-start.html#where-to-go-from-here
-Xiangrui
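If you do want to compile by hand instead, the assembly jar has to be on the compiler's classpath, e.g. (jar path taken from elsewhere in this thread; adjust for your install):

    scalac -classpath /home/wanda/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.0.4.jar -d classes/ SparkKMeans.scala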
On Tue, Jul 1, 2014 at
Hello,
I have installed spark-1.0.0 with Scala 2.10.3. I have built Spark with "sbt/sbt
assembly" and added
"/home/wanda/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.0.4.jar"
to my CLASSPATH variable.
Then I went here
"../spark-1.0.0/examples/src/main/scala/org/apache/sp