Hi all!
Has anyone been through this already?
I have a Spark Docker image that is used in two different environments, and
each one requires a different credentials provider for s3a. That parameter
is the only difference between them.
When I pass it via --conf, it works as expected.
When --conf is
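For illustration, the two submissions would differ only in something like the
following (the provider classes and app name here are placeholders, not the
exact ones in use):

# environment 1: e.g. static access key/secret
spark-submit \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider \
  app.py

# environment 2: e.g. instance-profile credentials
spark-submit \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
  app.py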
Hi,
How are you submitting your Spark job from your client?
Your files can be either on HDFS or on an HCFS such as gs, s3, etc.
With reference to '--py-files hdfs://yarn-master-url hdfs://foo.py', I
assume you want your submission to look something like:
spark-submit --verbose \
--deploy-mode cluster \
--co
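A complete version of that invocation might look like the sketch below.
Setting fs.s3a.aws.credentials.provider (with the spark.hadoop. prefix so
Spark forwards it into the Hadoop configuration) is the standard way to pick
the S3A credentials provider, but the provider class, master, and HDFS paths
here are placeholders for illustration, not taken from this thread:

spark-submit --verbose \
  --deploy-mode cluster \
  --master yarn \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
  --py-files hdfs://namenode:8020/libs/helpers.py \
  hdfs://namenode:8020/apps/foo.py

The same image then works in both environments, with only the --conf value
changing per deployment.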