Hi All,
We have implemented an S3 sink in the following way:

// Parquet (Avro GenericRecord) bulk writer, bucketed by our CustomBucketAssigner
StreamingFileSink<GenericRecord> sink = StreamingFileSink
        .forBulkFormat(
                new Path("s3a://mybucket/myfolder/output/"),
                ParquetAvroWriters.forGenericRecord(schema))
        .withBucketCheckInterval(50L)
        .withBucketAssigner(new CustomBucketAssigner())
        .build();

The problem we are facing is that StreamingFileSink initializes the
S3AFileSystem class to write to S3, but it is not able to find the S3
credentials needed to write the data. However, other Flink applications on
the same cluster that use "s3://" paths are able to write to the same S3
bucket and folders; we are only seeing this issue with StreamingFileSink.
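
For reference, here is a minimal sketch of the same sink addressed via the
"s3://" scheme that the working jobs use, in case the scheme is the relevant
difference. The bucket path, schema, and CustomBucketAssigner are the same
names as in our job above; only the URI scheme changes:

// Sketch only: identical sink definition, but using the "s3://" scheme
// that the other (working) Flink jobs on this cluster use.
// Path, schema, and CustomBucketAssigner are the same as in our job above.
StreamingFileSink<GenericRecord> sketchSink = StreamingFileSink
        .forBulkFormat(
                new Path("s3://mybucket/myfolder/output/"),
                ParquetAvroWriters.forGenericRecord(schema))
        .withBucketCheckInterval(50L)
        .withBucketAssigner(new CustomBucketAssigner())
        .build();

If the "s3a://" scheme has to stay, our assumption is that the credentials
would then need to be visible to S3AFileSystem itself (for example via
fs.s3a.access.key / fs.s3a.secret.key in the Hadoop configuration), but we
have not verified that yet.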

Regards,
Taher Koitawala
GS Lab Pune
+91 8407979163
