Hello Team,
I am facing an issue where output files generated by Spark 1.6.1 cannot be
read by Hive 1.0.0. The cause is that Hive 1.0.0 bundles an older Parquet
version than the Parquet 1.7.0 used by Spark 1.6.1.
Is it possible to use an older Parquet version in Spark, or a newer Parquet
version in Hive?
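If the incompatibility comes from the newer Parquet file layout that Spark 1.6 writes (for example decimal encoding), rather than from the parquet-mr jar itself, a possible workaround is asking Spark to emit the legacy format. This is a hedged sketch for the spark-shell; the paths and app name are placeholders, and the assumption is that the spark.sql.parquet.writeLegacyFormat option available in Spark 1.6 covers your case:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("LegacyParquetWriter"))
val sqlContext = new SQLContext(sc)

// Ask Spark to write Parquet in the older (pre-Parquet-1.7) layout that
// readers bundling an older Parquet version may still understand.
sqlContext.setConf("spark.sql.parquet.writeLegacyFormat", "true")

// Placeholder paths: read some input and re-write it as legacy Parquet.
sqlContext.read.json("/path/in.json").write.parquet("/path/out.parquet")
```

Whether Hive 1.0.0 can then read the files depends on what exactly breaks in your data (this setting mainly affects how some types, such as decimals, are encoded), so it is worth testing on a small sample first.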
Hello Team,
Can anyone tell me why the code below throws a NullPointerException in
yarn-client mode but runs fine in local mode?

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val filters = args.takeRight(0)
val sparkConf = new SparkConf().setAppName("TwitterAnalyzer")
val ssc = new StreamingContext(sparkConf, Seconds(2))
Is ANOVA available in Spark MLlib? If not, when will this feature be
available in Spark?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/ANOVA-test-in-Spark-tp26949.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
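Until such support lands, a one-way ANOVA F-statistic is simple enough to compute by hand. Below is a minimal plain-Scala sketch (not an MLlib API; the object and method names are my own) that computes F from the between-group and within-group sums of squares:

```scala
object Anova {
  // groups: each inner Seq holds the observations for one treatment group.
  // Returns the one-way ANOVA F-statistic: (SSB / (k-1)) / (SSW / (N-k)).
  def fStatistic(groups: Seq[Seq[Double]]): Double = {
    val k = groups.size                      // number of groups
    val n = groups.map(_.size).sum           // total number of observations
    val grandMean = groups.flatten.sum / n

    // Between-group sum of squares: group size times squared deviation
    // of the group mean from the grand mean.
    val ssb = groups.map { g =>
      val m = g.sum / g.size
      g.size * (m - grandMean) * (m - grandMean)
    }.sum

    // Within-group sum of squares: squared deviations from each group mean.
    val ssw = groups.map { g =>
      val m = g.sum / g.size
      g.map(x => (x - m) * (x - m)).sum
    }.sum

    (ssb / (k - 1)) / (ssw / (n - k))
  }
}
```

For significance testing you would still compare the resulting F against an F-distribution with (k-1, N-k) degrees of freedom, e.g. via Apache Commons Math, which Spark already depends on.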
Is there a way to create multiple output files when connected from beeline
to the Thrift server?
Right now I am using beeline -e 'query' > output.txt, which is not
efficient because it relies on the Linux redirection operator to combine
everything into a single output file.