...(HDFS file system), but I fear that if my HDFS block size is larger than the partition file size, I will get more partitions when reading than the original SchemaRDD had.
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/Spark-SQL-Long-running-job-tp10717p10727.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
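A minimal Scala sketch of the concern above, assuming the Spark 1.x SchemaRDD/Parquet API; the path, partition count, and object name are hypothetical, for illustration only. The point it shows: after reading saved Parquet files back, the partition count follows the HDFS input splits rather than the original SchemaRDD's partitioning, so you can coalesce or repartition to restore the layout you had before saving.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object ReadBackPartitions {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("read-back-partitions"))
    val sqlContext = new SQLContext(sc)

    // Hypothetical path and original partition count, for illustration only.
    val path = "hdfs:///tmp/my_schema_rdd.parquet"
    val originalPartitions = 200

    // Reading the saved files back: the partition count now follows the
    // HDFS input splits, not the partitioning the SchemaRDD had when saved.
    val restored = sqlContext.parquetFile(path)
    println(s"partitions after read: ${restored.partitions.length}")

    // Restore the original layout: coalesce avoids a shuffle when shrinking
    // the partition count, repartition shuffles when it needs to grow.
    val rebalanced =
      if (restored.partitions.length > originalPartitions)
        restored.coalesce(originalPartitions)
      else
        restored.repartition(originalPartitions)
    println(s"partitions after rebalance: ${rebalanced.partitions.length}")

    sc.stop()
  }
}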
...quickly and going out of space, as it's a long-running Spark job (running Spark in yarn-client mode, btw).
Thanks
-Nitin
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/Spark-SQL-Long-running-job-tp10717.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
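For the disk-space concern in a long-running job, a minimal sketch assuming a Spark 1.x-era deployment; whether this is the right remedy for Nitin's job is an assumption, and the input path is hypothetical. The spark.cleaner.ttl setting was a real Spark 1.x knob for exactly this situation (it was removed in later Spark versions); unpersisting cached RDDs you no longer need also keeps blocks from piling up under the local dirs.

import org.apache.spark.{SparkConf, SparkContext}

object LongRunningCleanup {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("long-running-sql-job")
      // Spark 1.x setting: forget metadata (and the shuffle files tied to it)
      // older than this many seconds, so a job running for days does not fill
      // the local dirs. Removed in later Spark versions.
      .set("spark.cleaner.ttl", "3600")
    val sc = new SparkContext(conf)

    // Hypothetical input path, for illustration only.
    val cached = sc.textFile("hdfs:///tmp/events").cache()
    println(cached.count())

    // Unpersisting frees the cached blocks immediately; shuffle files are
    // cleaned by the ContextCleaner once the RDDs that produced them are
    // garbage-collected on the driver.
    cached.unpersist(blocking = true)
    sc.stop()
  }
}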