Thanks, Gabor! We switched back to the hashmap state backend for now. We'll
troubleshoot if we go back to rocksdb.
Jad
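
[Editor's note: for readers following along, the backend switch described
above is normally a one-line change in the Flink configuration. A sketch,
assuming the `state.backend` key names available in Flink 1.13+:]

```yaml
# flink-conf.yaml

# HashMap (heap) state backend -- what the pipeline was switched back to
state.backend: hashmap

# RocksDB state backend -- the configuration that triggered the warning
# state.backend: rocksdb
# state.backend.incremental: true   # optional; RocksDB only
```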


On Wed, Nov 6, 2024 at 6:51 AM Gabor Somogyi <gabor.g.somo...@gmail.com>
wrote:

> Hi Jad,
>
> There is no low-hanging fruit here if you really want to track this down.
> In this case the memory manager tries to allocate and then deallocate the
> total amount of memory it was configured for. When not all of that memory
> is available, the check fails and you see the mentioned exception.
>
> I would suggest the following:
> * Presumably this is already the case, but double-check that Java 8u72+
> is in use (as the exception message suggests)
> * Create a custom Flink image with additional log entries
> in UnsafeMemoryBudget
> * Remote debug with a breakpoint in UnsafeMemoryBudget
>
> All in all, one must find out why the available memory is not equal to the
> total memory in the memory manager.
>
> G
>
>
> On Tue, Oct 29, 2024 at 6:03 PM Jad Naous <j...@grepr.ai> wrote:
>
>> Hi Flink Community,
>> I'd really appreciate your help. We're trying to switch from using the
>> heap state backend to rocksdb, and have been encountering a warning "Not
>> all slot managed memory is freed at TaskSlot..." when the pipeline
>> restarts. Any pointers to troubleshoot this issue?
>> Many thanks!
>> Jad.
>>
>

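[Editor's note: Gabor's remote-debugging suggestion can be wired up by
passing standard JDWP flags to the TaskManager JVM. A sketch, assuming
Flink's `env.java.opts.taskmanager` option; the port (5005) is an
arbitrary choice, and the fully qualified class name of UnsafeMemoryBudget
should be verified against the Flink version in use:]

```yaml
# flink-conf.yaml: open a remote-debug port on each TaskManager so a
# breakpoint can be set in UnsafeMemoryBudget
# (org.apache.flink.runtime.memory package in recent Flink versions)
env.java.opts.taskmanager: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
```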