Hi,

I'd like to write a Parquet file from the driver. I could use the HDFS API
directly, but I am worried that it won't work on a secure cluster. I assume
that the method the executors use to write to HDFS takes care of managing
Hadoop security. However, I can't find the place where the HDFS write
happens in the Spark source.

Please help me:
1. How can I write a Parquet file from the driver using the Spark API?
2. If that isn't possible, where in the Spark source can I find the method
the executors use to write to HDFS?
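
Concretely, this is roughly what I'm hoping to do on the driver (a sketch only; the path and data are made up for illustration, and I'm assuming a SparkSession is available):

```scala
import org.apache.spark.sql.SparkSession

object WriteParquetFromDriver {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("driver-parquet-write")
      .getOrCreate()
    import spark.implicits._

    // A small dataset materialized on the driver.
    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

    // Does this write path pick up the Hadoop security (Kerberos)
    // configuration the same way executor-side writes do?
    df.write.parquet("hdfs:///tmp/example-output.parquet")

    spark.stop()
  }
}
```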

Thanks,
Zoltan
