Hello, Robert.
I've been manually changing the bucket names in the logs and other
potentially sensitive data before posting. The actual bucket names are fine,
since changing the format from 'parquet' to 'raw' allows the data to be
retrieved. Sorry for the confusion.
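For reference, the change is just the 'format' option (plus the single
STRING column that 'raw' requires) on the filesystem table; the sketch below
uses placeholder names and paths, not our real table:

static void rawS3Works(TableEnvironment tEnv) {
    // Same s3a:// path as the parquet table; only the schema (one STRING
    // column, as 'raw' requires) and the 'format' option differ.
    tEnv.executeSql(
        "CREATE TABLE raw_source ("
        + "  line STRING"
        + ") WITH ("
        + "  'connector' = 'filesystem',"
        + "  'path' = 's3a://test-bucket/data/',"
        + "  'format' = 'raw'"
        + ")");
}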
Does your env allow access to all AWS resources?
Thanks for the logs.
The OK job seems to read from "s3a://test-bucket/", while the KO job reads
from "s3a://bucket-test/". Could it be that you are just trying to access
the wrong bucket?
What I also found interesting in the KO job's TaskManager logs is this
message:
Caused by: java.net.NoRouteToHostException
Thank you Svend and Till for your help.
Sorry for the late response.
I'll try to give more information about the issue:
> I've not worked exactly in the situation you described, although I've had
> to configure S3 access from a Flink application recently and here are a
> couple of things I learnt along the way:
Hi Angelo,
what Svend has written is very good advice. Additionally, you could give us
a bit more context by posting the exact stack trace and the exact
configuration you use to deploy the Flink cluster. To me this looks like a
configuration/setup problem in combination with AWS.
Cheers,
Till
Hi Angelo,
I've not worked exactly in the situation you described, although I've had to
configure S3 access from a Flink application recently and here are a couple of
things I learnt along the way:
* You should normally not need to include flink-s3-fs-hadoop nor
hadoop-mapreduce-client-core in your application's dependencies (see the
sketch below).
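As a rough illustration (the class name, bucket and path are just
placeholders): when flink-s3-fs-hadoop is provided through the cluster's
plugins/ directory instead of being bundled in the job jar, the application
code only refers to the s3a:// URI and never touches S3 classes directly:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3PluginSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // No S3 classes are referenced here: Flink resolves the "s3a://"
        // scheme at runtime through the flink-s3-fs-hadoop plugin placed
        // in the cluster's plugins/ directory.
        env.readTextFile("s3a://test-bucket/some-path/").print();

        env.execute("s3-plugin-sketch");
    }
}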
Hello,
Trying to read a parquet file located in S3 leads to an AWS credentials
exception. Switching to another format ('raw', for example) works fine as
far as file access is concerned.
This is a snippet of code to reproduce the issue:
static void parquetS3Error() {
    EnvironmentSettings settings =
            EnvironmentSettings.newInstance().build();
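    TableEnvironment tEnv = TableEnvironment.create(settings);

    // Sketch of the rest of the reproduction; the schema, bucket name and
    // path are placeholders rather than the real ones. Reading this table
    // fails with the AWS credentials exception, while a similar table with
    // a single STRING column and 'format' = 'raw' can read the same path.
    tEnv.executeSql(
        "CREATE TABLE parquet_source ("
        + "  id BIGINT,"
        + "  name STRING"
        + ") WITH ("
        + "  'connector' = 'filesystem',"
        + "  'path' = 's3a://test-bucket/data/',"
        + "  'format' = 'parquet'"
        + ")");

    tEnv.executeSql("SELECT * FROM parquet_source").print();
}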