Hi Generic Flink Developer,
Normally when you get an internal error from AWS, you also get a 500 status
code - the 200 seems odd to me.
One thing I do know is that if you’re hitting S3 hard, you have to expect and
recover from errors.
E.g. distcp jobs in Hadoop-land will auto-retry a failed request.
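If it helps, here's a rough sketch (untested) of giving the sink's filesystem
a bigger retry budget. The property names are standard Hadoop s3a options;
the values, the bucket path, and the RetryFriendlySink class name are all
placeholders, not recommendations:

    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
    import org.apache.hadoop.conf.Configuration;

    public class RetryFriendlySink {
        public static BucketingSink<String> create() {
            // Give the s3a client a bigger retry budget so transient S3
            // errors are retried inside the filesystem instead of failing
            // the sink. Values below are illustrative only.
            Configuration hadoopConf = new Configuration();
            hadoopConf.setInt("fs.s3a.attempts.maximum", 30);    // AWS SDK retry count
            hadoopConf.setInt("fs.s3a.connection.maximum", 100); // pool size for a busy job

            // "s3a://my-bucket/output" is a placeholder destination.
            BucketingSink<String> sink = new BucketingSink<>("s3a://my-bucket/output");
            sink.setFSConfig(hadoopConf);
            return sink;
        }
    }

SDK-level retries won't cover every failure mode, but they do absorb the
common transient errors you'll see when hitting S3 this hard.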
Hi, does anyone have an idea of what causes this and how it can be resolved? Thanks.
‐‐‐ Original Message ‐‐‐
On Wednesday, December 5, 2018 12:44 AM, Flink Developer wrote:
I have a Flink app with high parallelism (400) running in AWS EMR. It uses
Flink v1.5.2. It sources Kafka and sinks to S3 using BucketingSink (using the
RocksDB state backend for checkpointing). The destination is defined using the
"s3a://" prefix. The Flink job is a streaming app which runs continuously. At