23, 2024 at 10:39 AM Ganesh Walse wrote:
>
>> Hi All,
>>
>> I am using Apache Flink for bounded data, where I need to execute around
>> 100 jobs daily.
>>
>> On every job submission, the JobManager's JVM metaspace increases by
>> about 10 MB and never gets released. Because of this, my application
>> gets an OOM error after a certain number of jobs.
>>
>> Please help me get out of this situation.
>>
>> Thanks and regards,
>> Ganesh Walse
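Metaspace growth on every job submission is typically caused by per-job user-code classloaders that are never unloaded. Assuming that is the cause here, one way to at least bound the problem is to size the metaspace explicitly in flink-conf.yaml (the 256m values below are only an illustrative starting point, not a recommendation; a cap bounds the growth but does not fix the underlying leak, which usually means something in the job jar, e.g. a lingering thread or a registered JDBC driver, is keeping the classloader alive):

```yaml
# Bound the JobManager JVM metaspace so growth surfaces predictably
jobmanager.memory.jvm-metaspace.size: 256m
# TaskManagers have the analogous option
taskmanager.memory.jvm-metaspace.size: 256m
```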
Hi All,
Whenever I scale down my JobManager pods, my application jar gets deleted.
After I scale the pods back up, the jar is not uploaded again.
Any help would be appreciated.
Thanks and regards,
Ganesh Walse
the internet, but I did not find any solution there, and many other people
seem to be hitting the same error.
Can you please help with this? It is getting more critical for me by the day.
Thanks,
Ganesh Walse.
Hi All,
My TaskManager memory keeps increasing even during idle stages. Is there
any reason why?
As a result, my job is failing.
Thanks in advance.
Thanks & regards,
Ganesh Walse
best way to cache those tables.
Thank you in advance.
Thanks,
Ganesh Walse
Hi Team,
After I ran my Flink application on the cluster and it completed
successfully, the JVM heap still shows about 50% usage.
What could be the reason?
> tly IO-bound, you can further boost the throughput via Async-IO [1].
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/asyncio/
>
> Best,
> Zhanghao Chen
> ----------
> *From:* Ganesh Walse
> *Sent:* Frid
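Flink's Async I/O operator (linked above) boosts throughput by keeping many external requests in flight at once instead of waiting for each one serially. The snippet below is not Flink code, just a minimal plain-asyncio sketch of that idea; the `lookup` coroutine is a hypothetical stand-in for the external I/O call:

```python
import asyncio

async def lookup(record):
    # Hypothetical I/O-bound enrichment call, simulated with a short sleep
    await asyncio.sleep(0.1)
    return record * 2

async def enrich_all(records):
    # Issue all lookups concurrently so their waits overlap; this is the
    # core idea behind Flink's Async I/O operator
    return await asyncio.gather(*(lookup(r) for r in records))

results = asyncio.run(enrich_all(range(10)))
# The 10 simulated lookups overlap, finishing in roughly one sleep
# interval rather than ten back-to-back intervals
print(results)
```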
Hi Team,
If 1 record gets processed in 1 second in Flink, then what will be the best
time to process 1000 records using maximum parallelism?
Thanks & regards,
Ganesh Walse.
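Ignoring scheduling, serialization, and network overhead, the back-of-the-envelope lower bound is ceil(records / parallelism) sequential waves, each taking one per-record processing time. A small sketch of that arithmetic (the function name is mine for illustration, not a Flink API):

```python
import math

def best_case_seconds(num_records, seconds_per_record, parallelism):
    # With p parallel slots and no skew or overhead, records complete in
    # ceil(n / p) sequential waves of one record each per slot
    return math.ceil(num_records / parallelism) * seconds_per_record

# 1000 records at 1 s each across 100 slots -> 10 waves -> 10.0 s
print(best_case_seconds(1000, 1.0, 100))
```

With parallelism 1000 the bound drops to a single 1-second wave; in practice overhead and data skew keep the real runtime above this lower bound.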