Dear Community,
Please ignore my last post about Spark SQL.
When I run:
val file = sc.textFile("./README.md")
val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
count.collect()

it happens too.
Is there any possible reason for this?
Hi Community,
I am using Spark 1.0.2, with Spark SQL to run Hive SQL queries.
When I run the following code in Spark Shell:
val file = sc.textFile("./README.md")
val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
count.collect()

it completes correctly with no errors.
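For anyone reproducing this in the shell, a quick way to inspect the result is to pull a small sample to the driver. This is a minimal sketch using the same `count` RDD from the snippet above; the sort and `take(5)` are illustrative additions, not part of the original post:

```scala
// Sketch: print the five most frequent words from the word-count RDD above.
// Assumes `count` is the RDD[(String, Int)] built in the preceding snippet.
count.map { case (word, n) => (n, word) } // swap key/value so we can sort by count
     .sortByKey(ascending = false)       // highest counts first
     .take(5)                            // fetch a small sample to the driver
     .foreach { case (n, word) => println(s"$word: $n") }
```

Using `take` instead of `collect` keeps the amount of data returned to the driver small, which matters once the input is larger than a README.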
Dear Developers,
I am currently limited to Spark 1.0.2.
I use Spark SQL on a Hive table to load the AMPLab benchmark, which is
approximately 25.6 GiB.
I run:
CREATE EXTERNAL TABLE uservisits (sourceIP STRING, destURL STRING, visitDate STRING, adRevenue DOUBLE, userAgent STRING, countryCode STRING,
akka?
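Once the table from the CREATE EXTERNAL TABLE statement above exists in the Hive metastore, it can be queried from the Spark 1.0.2 shell through HiveContext. This is a minimal sketch, assuming a Spark build with Hive support; the COUNT query is an illustrative example, not the benchmark query itself:

```scala
// Sketch: querying the uservisits table through Spark SQL's Hive support.
// Assumes `sc` is the spark-shell's SparkContext and the table is registered
// in the Hive metastore.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext._

// hql() was the Spark 1.0.x entry point for HiveQL statements.
hql("SELECT COUNT(*) FROM uservisits").collect().foreach(println)
```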
3. When running ./bin/run-example SparkPi, I noticed that the jar file was
sent from the server to the client. This worries me because the jar is
large. Is this expected behavior?
Trident