>>>>>> root 6566 0.0 0.0 164672 4116 ? R 00:30 0:00 ps auxwww --sort -rss
>>>>>> root 6532 0.0 0.0 183124 3592 ? S 00:30 0:00 /usr/sbin/CROND -n
>>>>>>
>>>> Hi Ori,
>>>>
>>>> The error message suggests that there's not enough physical memory on
>>>> the machine to satisfy the allocation. This does not necessarily mean a
>>>> problem in Flink itself; it may simply be that the machine does not have
>>>> enough memory reserved for the system processes, etc.
>>>>
>>>> I would suggest first looking into the machine's memory usage, to see
>>>> whether the Flink process indeed uses more memory than expected. This
>>>> could be checked via:
>>>> - The `/proc/meminfo` file
>>>> - Any container memory usage metrics that are available to your Yarn
>>>>   cluster
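For example, both can be pulled together quickly on the TaskManager host (a minimal sketch assuming a Linux machine with procps; adjust to your environment):

```shell
# Overall machine memory as the kernel sees it. MemAvailable is a better
# signal than MemFree, since it also counts reclaimable caches.
grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo

# Largest resident processes. The TaskManager JVM should be near the top,
# and its RSS should roughly match the configured Flink process memory.
ps auxwww --sort -rss | head -n 10
```

If the TaskManager's RSS stays within its configured budget while MemAvailable still trends toward zero, the pressure is coming from other processes on the machine.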
>>>>
>>>> Thank you~
>>>>
>>>> Xintong Song
>>>>
>>>>
>>>>
>>>> On Tu
>>>>> After the job is running for 10 days in production, TaskManagers start
>>>>> failing with:
>>>>>
>>>>> Connection unexpectedly closed by remote task manager
>>>>>
>>>>> Looking in the machine logs, I can see the following error:
>>>>>
>>>>> = Java processes for user hadoop =
>>>>> OpenJDK 64-Bit Server VM warning: INFO:
>>>>> os::commit_memory(0x7fb4f401, 1006567424, 0) failed; error='Cannot
>>>>> allocate memory' (errno=12)
>>>>> #
>>>>> # There is insufficient memory for the Java Runtime Environment to
>>>>> continue.
>>>>> # Native memory allocation (mmap) failed to map 1006567424 bytes for
>>>>> committing reserved memory.
>>>>> # An error report file with more information is saved as:
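(For reference: 'Cannot allocate memory' is Linux errno 12, ENOMEM. Whether committing ~1 GB of already-reserved address space succeeds depends on the kernel's overcommit accounting, which can be inspected like this; a minimal sketch for a Linux host.)

```shell
# CommitLimit vs. Committed_AS: under strict overcommit
# (vm.overcommit_memory=2), an mmap commit fails once Committed_AS
# would exceed CommitLimit.
grep -E 'CommitLimit|Committed_AS' /proc/meminfo

# Overcommit policy: 0 = heuristic, 1 = always allow, 2 = strict limit.
cat /proc/sys/vm/overcommit_memory
```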