Hi Jonhy,
Which master are you using with spark-submit?
I've had this problem before: Spark (unlike the AWS CLI and boto3) was running in YARN distributed mode (--master yarn), so the keys were not copied to all the executor nodes. I had to pass them along when submitting my Spark job, as follows:
To access my S3 bucket, I exported my credentials:
export AWS_SECRET_ACCESS_KEY=
export AWS_ACCESS_KEY_ID=
I can verify that everything works by running:
aws s3 ls mybucket
I can also verify with boto3 that it works in Python (the region below is a placeholder, use your bucket's actual region):

import boto3

resource = boto3.resource("s3", region_name="us-east-1")  # placeholder region
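
Once the credentials check out locally, I pass them through to the executors at submit time. This is only a sketch assuming the s3a connector (hadoop-aws) is on the classpath; my_job.py is a placeholder for your script:

spark-submit \
  --master yarn \
  --conf spark.hadoop.fs.s3a.access.key=$AWS_ACCESS_KEY_ID \
  --conf spark.hadoop.fs.s3a.secret.key=$AWS_SECRET_ACCESS_KEY \
  my_job.py

Any spark.hadoop.* setting is copied into the Hadoop configuration on every executor, so the s3a filesystem can authenticate cluster-wide instead of only on the driver node.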