What's your Spark version?
Have you added the Hadoop native library to your path? For example, in
spark-defaults.conf:
"spark.executor.extraJavaOptions -Djava.library.path=/hadoop-native/"
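A fuller sketch of what that configuration might look like — the /hadoop-native/ path is a placeholder for wherever libhadoop.so and libsnappy.so actually live on your nodes, and the driver usually needs the same setting as the executors:

```
# spark-defaults.conf (sketch; adjust the path to your native-lib directory)
spark.executor.extraJavaOptions  -Djava.library.path=/hadoop-native/
spark.driver.extraJavaOptions    -Djava.library.path=/hadoop-native/
```

You can check whether Hadoop itself can see the Snappy codec with `hadoop checknative -a`, which prints true/false per native library.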
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
I want to be able to read Snappy-compressed files in Spark. I can do a

val df = spark.read.textFile("hdfs:// path")

and it passes that test in spark-shell, but beyond that, when I do a
df.show(10, false) or similar, it shows me binary data mixed with real
text. How do I read the decompressed data?
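For context on the question above: Spark delegates decompression to Hadoop, which picks the codec from the file extension, so a .snappy suffix plus a loadable native Snappy library should make textFile return plain text transparently. Binary mixed with readable text usually means the codec was never applied. A minimal sketch, assuming spark-shell and a hypothetical HDFS path:

```scala
// Sketch: run in spark-shell; the path below is hypothetical.
// Hadoop selects SnappyCodec from the ".snappy" extension, so the
// extension must be present and the native libs on java.library.path.
val df = spark.read.textFile("hdfs:///data/logs/events.txt.snappy")
df.show(10, false) // should print plain text rows once the codec loads
```

Note also that hadoop-snappy is a different container format from the standard Snappy framing format; files written by other tools (e.g. python-snappy) can still come out garbled even with the native library in place.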