Hi Guys,
As part of debugging this "native library" error in our environment, it
would be great if somebody could help me with this question: what kind of
temp, scratch, and staging directories does Spark need and use on the slave
nodes in YARN cluster mode?
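For reference, here is my current understanding of the knobs involved
(property names taken from the Spark and YARN docs, paths are placeholders;
please correct me if I have this wrong):

    # Scratch space for shuffle spills and block storage (local/standalone):
    spark.local.dir    /tmp/spark-scratch

    # In yarn-cluster mode this is overridden: each executor runs inside a
    # YARN container and uses the node manager's local dirs instead, i.e.
    # whatever yarn.nodemanager.local-dirs points at in yarn-site.xml.

    # Application jars and files are staged in HDFS under the submitting
    # user's home dir, e.g. hdfs:///user/<user>/.sparkStaging/<app-id>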
Thanks,
Aravind
On Mon, Nov 3, 2014, … wrote:
Team,
We are running a build of Spark 1.1.1 for Hadoop 2.2. We can't get the code
to read LZO or Snappy files on YARN; it fails to find the native libs. I
have tried many different ways of defining the lib path - LD_LIBRARY_PATH,
--driver-class-path, and spark.executor.extraLibraryPath in
spark-defaults.conf.
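In case it helps, this is roughly what we have been trying (the native lib
directory below is a placeholder for wherever the Hadoop native libs
actually live on our nodes):

    # spark-defaults.conf
    spark.executor.extraLibraryPath   /path/to/hadoop/lib/native

    # and variations on the submit side (none of which helped so far):
    export LD_LIBRARY_PATH=/path/to/hadoop/lib/native:$LD_LIBRARY_PATH
    spark-submit --driver-class-path /path/to/hadoop/lib/native ...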