Re: S3A AWSS3IOException from Flink's BucketingSink to S3

2018-12-09 Thread Ken Krugler
Hi Generic Flink Developer, Normally when you get an internal error from AWS, you also get a 500 status code - the 200 seems odd to me. One thing I do know is that if you’re hitting S3 hard, you have to expect and recover from errors. E.g. distcp jobs in Hadoop-land will auto-retry a failed r
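Ken's advice to "expect and recover from errors" when hitting S3 hard amounts to wrapping calls in a retry loop with backoff. A minimal generic sketch of that pattern (a hypothetical helper, not a Flink or Hadoop API) might look like:

```java
import java.util.concurrent.Callable;

public class RetryExample {
    // Retry a call up to maxAttempts times, doubling the backoff after
    // each failure - the same idea distcp-style tools use for transient
    // S3 errors (throttling, InternalError, etc.).
    static <T> T withRetries(Callable<T> call, int maxAttempts, long initialBackoffMs)
            throws Exception {
        long backoff = initialBackoffMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoff);
                    backoff *= 2; // exponential backoff between attempts
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky S3 operation: fails twice, then succeeds.
        int[] calls = {0};
        String result = withRetries(() -> {
            if (calls[0]++ < 2) {
                throw new RuntimeException("simulated InternalError");
            }
            return "OK";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Production code would typically also cap the total backoff and only retry exceptions known to be transient, but the loop above captures the core recovery behavior being recommended.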

Re: S3A AWSS3IOException from Flink's BucketingSink to S3

2018-12-09 Thread Flink Developer
Hi, does anyone have an idea of what causes this and how it can be resolved? Thanks. --- Original Message --- On Wednesday, December 5, 2018 12:44 AM, Flink Developer wrote: > I have a Flink app with high parallelism (400) running in AWS EMR. It uses > Flink v1.5.2. It sources Kafka and sinks

S3A AWSS3IOException from Flink's BucketingSink to S3

2018-12-05 Thread Flink Developer
I have a Flink app with high parallelism (400) running in AWS EMR. It uses Flink v1.5.2. It sources from Kafka and sinks to S3 using BucketingSink (with the RocksDB backend for checkpointing). The destination is defined using the "s3a://" prefix. The Flink job is a streaming app which runs continuously. At
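With 400 parallel subtasks all writing through the s3a:// filesystem, request throttling and transient S3 errors become a plausible failure mode, and the Hadoop S3A client exposes retry and connection-pool settings worth reviewing. A sketch of the relevant core-site.xml properties follows; the values shown are illustrative assumptions for discussion, not recommendations:

```xml
<!-- core-site.xml: illustrative S3A tuning for a high-parallelism writer -->
<property>
  <name>fs.s3a.attempts.maximum</name>
  <value>20</value> <!-- client-side retries for transient S3 failures -->
</property>
<property>
  <name>fs.s3a.connection.maximum</name>
  <value>200</value> <!-- HTTP connection pool; the default may be small for 400 subtasks -->
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>64</value> <!-- max threads for S3A's upload thread pool -->
</property>
```

Whether these help depends on the actual error; the Hadoop version bundled with the EMR release in use determines which fs.s3a.* settings are available.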