FYI - I discovered that if I specify the Hadoop compression codec it works
fine. E.g.:
CompressWriters.forExtractor(new DefaultExtractor()).withHadoopCompression("GzipCodec")
Haven't dug into exactly why yet.
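In case it helps anyone else hitting this, here's a minimal sketch of how
that factory plugs into a StreamingFileSink. The bucket path, the example
stream, and the String type parameter are placeholders for illustration,
not from an actual job:

    import org.apache.flink.core.fs.Path;
    import org.apache.flink.formats.compress.CompressWriters;
    import org.apache.flink.formats.compress.extractor.DefaultExtractor;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    public class GzipToS3Example {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Bulk-format part files are only finalized on checkpoints, so
            // checkpointing must be enabled; the interval is an arbitrary example.
            env.enableCheckpointing(60_000);

            DataStream<String> events = env.fromElements("one", "two", "three");

            StreamingFileSink<String> sink = StreamingFileSink
                    .forBulkFormat(
                            new Path("s3://my-bucket/output"), // placeholder bucket/path
                            CompressWriters.forExtractor(new DefaultExtractor<String>())
                                    .withHadoopCompression("GzipCodec")) // Hadoop codec class name
                    .build();

            events.addSink(sink);
            env.execute("gzip-to-s3-example");
        }
    }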
On Wed, Oct 7, 2020 at 12:14 PM David Anderson wrote:
Looping in @Kostas Kloudas who should be able to
clarify things.
David
On Wed, Oct 7, 2020 at 7:12 PM Dan Diephouse wrote:
Thanks! Completely missed that in the docs. It's working now; however, it's
not working with the compression writers. Someone else noted this issue here:
https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming
Looking at the code, I'
Dan,
The first point you've raised is a known issue: When a job is stopped, the
unfinished part files are not transitioned to the finished state. This is
mentioned in the docs as Important Note 2 [1], and fixing this is waiting
on FLIP-46 [2]. That section of the docs also includes some S3-specific notes.
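
To make the checkpoint dependency concrete: bulk writers (which the compress
writers are) roll part files only on checkpoints, so checkpointing has to be
enabled for parts to ever reach the finished state. A minimal sketch, with an
arbitrary example interval:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // Bulk formats only roll part files on checkpoints, so without this
    // no part file is ever finalized.
    env.enableCheckpointing(60_000); // example: one checkpoint per minute
    // Even with checkpointing on, the in-progress part at the moment the
    // job is stopped is not transitioned to finished; that is the gap
    // FLIP-46 is meant to close.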