Hi all,
I am trying to reduce the memory usage of a Flink app.
There are about 25+ GB of state when persisted to a checkpoint/savepoint, and a
fair number of short-lived objects, as incoming traffic is fairly high.
So far, I have 8 TMs with 20 GB each, using Flink 1.12. I would like to reduce
the amount of
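(For context, in Flink 1.12 the TaskManager memory breakdown is driven from flink-conf.yaml; a minimal sketch, with purely illustrative values — the right split depends on the state backend and workload:)

```yaml
# Total memory for the TaskManager process (JVM heap + managed memory + overhead).
taskmanager.memory.process.size: 20g
# Fraction of Flink memory reserved as managed memory (used by RocksDB if enabled);
# lowering it leaves more heap for short-lived objects, raising it favors state.
taskmanager.memory.managed.fraction: 0.4
```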
> I am unsure of the issue with the Hadoop plugin, but if using 1.14 is a
> hard requirement, rewriting your input data into another format could also
> be a viable stop-gap solution.
>
> Seth
>
> On Mon, Dec 20, 2021 at 8:57 PM Alexandre Montecucco <
> alexandre.montecu...
doc. I tried importing various Hadoop libraries, but each one causes
yet another issue.
I think this might be the root cause of my problem.
Best,
Alex
[1] https://lists.apache.org/thread/796m8tww4gqykqm1szb3y5m7t6scgho2
On Mon, Dec 20, 2021 at 4:23 PM Alexandre Montecucco <
alexandre.m
apache.org/flink/flink-docs-release-1.12/deployment/filesystems/s3.html
>
>
> On Fri, Dec 17, 2021 at 10:10 Alexandre Montecucco <
> alexandre.montecu...@grabtaxi.com> wrote:
>
>> Hello everyone,
>> I am struggling to read parquet files from S3 with Flink Streaming.
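(In case it helps: per the S3 docs page linked above, flink-s3-fs-hadoop is meant to be loaded as a plugin rather than put on the classpath, which sidesteps most Hadoop dependency clashes. A minimal sketch, assuming a standard Flink 1.12 distribution layout — adjust the version suffix to match your distribution:)

```shell
# Run from the Flink distribution root. Each plugin gets its own plugins/
# subdirectory and is loaded in an isolated classloader, so the bundled
# Hadoop classes cannot clash with other Hadoop versions on the classpath.
mkdir -p ./plugins/s3-fs-hadoop
cp ./opt/flink-s3-fs-hadoop-1.12.2.jar ./plugins/s3-fs-hadoop/
```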