... uses to start the TM process.
- If possible, I would exclude the memory reservation from this FLIP and
add this as part of a dedicated FLIP.
... is allocated can happen in a second step. This would keep the scope of
this ...

On Thu, Aug 22, 2019 at 2:51 PM Xintong Song <tonysong...@gmail.com> wrote:

Hi everyone,
... ment on wiki [1], with the following changes.

   - Removed open question regarding MemorySegment ... "Alternatives" for
   the moment.
   - Added imp...

Thank you~

Xintong Song

[1]
... wrote:

@Xintong: Concerning "wait for memory users before task dispose and memory
... buffer": There seems to be pretty elaborate logic to free buffers when
allocating new ones. See ...

... with option 2 and not setting "-XX:MaxDirectMemorySize" at all), then I
think it should be okay to set "-XX:MaxDirectMemorySize" to ...
On Mon, Aug 19, 2019 at 4:44 PM Xintong Song <tonysong...@gmail.com> wrote:

Thanks for the inputs, Jingsong.

- Memory consumers should always avoid returning memory segments to the
memory manager while there are still un-cleaned structures / threads that ...
... encounter direct memory oom if the GC cleans memory slower than the
direct memory allocation.

Am I understanding this correctly?

On Mon, Aug 19, 2019 at 4:21 PM JingsongLee <lzljs3620...@aliyun.com.invalid>
wrote:
Hi Stephan:

About option 2:
If additional threads are not cleanly shut down before we can exit the task:
in the current case of memory reuse, the task has freed up the memory it
uses. If this memory is used by other tasks while asynchronous threads of
the exited task may still be writing to it, there will be concurrency
problems, and even errors in user computing results.

So I think this is a serious and intolerable bug. No matter what the option
is, it should be avoided.

About direct memory cleaned by GC:
I don't think it is a good idea. I've encountered so many situations where
GC was too late, causing DirectMemory OOM. Release and allocation of
DirectMemory depend on the type of user job, which is often beyond our
control.

Best,
Jingsong Lee
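The scenario described above is essentially a use-after-free hazard. A
minimal, hypothetical sketch of the kind of guard this implies (the class
and method names are illustrative, not Flink's actual MemoryManager API): a
segment is only returned once every user, including asynchronous spill/read
threads, has released it.

    import java.util.concurrent.atomic.AtomicInteger;

    final class RefCountedSegment {

        private final long address;                       // off-heap address of the segment
        private final AtomicInteger refCount = new AtomicInteger(1);

        RefCountedSegment(long address) {
            this.address = address;
        }

        /** Called before handing the segment to an additional (e.g. spill) thread. */
        void retain() {
            refCount.incrementAndGet();
        }

        /** Called by each user when done; only the last release returns the memory. */
        void release() {
            if (refCount.decrementAndGet() == 0) {
                returnToMemoryManager(address);
            }
        }

        private static void returnToMemoryManager(long address) {
            // Placeholder: hand the segment back to the memory manager
            // (or free it directly, for the manual-release option).
        }
    }

With such a guard, the exiting task cannot free memory that a still-running
asynchronous thread holds a reference to, whichever option is chosen.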
----------
From:Stephan Ewen
Send Time:2019年8月19日(星期一) 15:56
To:dev
Subject:Re: [DISCUSS] FLIP-49: Unified Memory Configuration for TaskExecutors
My main concern with option 2 (manually release memory) is that segfaults
in the JVM send off all sorts of alarms on user ends. So we need to
guarantee that this never happens.
The trickiness is in tasks that use data structures / algorithms with
additional threads, like hash table spill/read and
Thanks for the comments, Stephan. Summarizing it this way really makes
things easier to understand.
I'm in favor of option 2, at least for the moment. I think it is not that
difficult to keep it segfault safe for the memory manager, as long as we
always de-allocate the memory segment when it is released
About the "-XX:MaxDirectMemorySize" discussion, maybe let me summarize it a
bit differently:
We have the following two options:
(1) We let MemorySegments be de-allocated by the GC. That makes it segfault
safe. But then we need a way to trigger GC in case de-allocation and
re-allocation of a bunch
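A minimal sketch of the two options in plain JDK terms (not Flink code;
assumes a JDK where sun.misc.Unsafe is still accessible, e.g. Java 8):

    import java.lang.reflect.Field;
    import java.nio.ByteBuffer;
    import sun.misc.Unsafe;

    public class TwoReleaseStrategies {

        public static void main(String[] args) throws Exception {
            // Option 1: GC-managed direct buffers. Segfault safe, because the
            // native memory lives as long as the buffer object is reachable.
            // But the memory is only returned when the object is collected, so
            // re-allocating a large batch of segments may require triggering a
            // GC first.
            ByteBuffer gcManaged = ByteBuffer.allocateDirect(32 * 1024 * 1024);
            gcManaged = null;   // native memory is NOT freed here ...
            System.gc();        // ... only after the buffer object is collected.

            // Option 2: manually released memory. Release is immediate and
            // deterministic, but freeing memory that another thread still
            // accesses can crash the whole JVM (segfault) instead of throwing
            // an exception.
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);
            long address = unsafe.allocateMemory(32 * 1024 * 1024);
            // ... use the memory, making sure no other thread still does ...
            unsafe.freeMemory(address);
        }
    }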
Thanks for sharing your opinion Till.
I'm also in favor of alternative 2. I was wondering whether we can avoid
using Unsafe.allocate() for off-heap managed memory and network memory with
alternative 3. But after giving it a second thought, I think even for
alternative 3 using direct memory for off
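For context on why the allocation method matters here (a simplified
illustration, not Flink code): buffers from ByteBuffer.allocateDirect() are
tracked by the JVM's direct-memory accounting and count against
-XX:MaxDirectMemorySize, while raw native memory from Unsafe.allocateMemory()
is not tracked at all.

    import java.lang.reflect.Field;
    import java.nio.ByteBuffer;
    import sun.misc.Unsafe;

    public class DirectVsNativeAccounting {

        public static void main(String[] args) throws Exception {
            // Counted against -XX:MaxDirectMemorySize; exceeding the limit
            // fails with "java.lang.OutOfMemoryError: Direct buffer memory".
            ByteBuffer counted = ByteBuffer.allocateDirect(64 * 1024 * 1024);

            // Plain native memory, invisible to the direct-memory limit; only
            // the total process memory budget catches over-allocation here.
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);
            long uncounted = unsafe.allocateMemory(64 * 1024 * 1024);
            unsafe.freeMemory(uncounted);
        }
    }

So if, as discussed for alternative 3, managed and network memory were also
allocated as direct buffers, the configured limit would have to cover them
as well.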
Thanks for the clarification Xintong. I understand the two alternatives now.
I would be in favour of option 2 because it makes things explicit. If we
don't limit the direct memory, I fear that we might end up in a similar
situation as we are currently in: The user might see that her process gets
killed
Let me explain this with a concrete example Till.
Let's say we have the following scenario.
Total Process Memory: 1GB
JVM Direct Memory (Task Off-Heap Memory + JVM Overhead): 200MB
Other Memory (JVM Heap Memory, JVM Metaspace, Off-Heap Managed Memory and
Network Memory): 800MB
For alternative 2
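As a hedged reading of the example (the split of the 800MB between heap,
metaspace, managed and network memory is not specified here, so the
300MB + 100MB figures below are assumed placeholders), the direct memory
limit would differ roughly as follows:

    Alternative 2:
        -XX:MaxDirectMemorySize = Task Off-Heap Memory + JVM Overhead
                                = 200 MB

    Alternative 3 (managed and network memory also allocated as direct memory):
        -XX:MaxDirectMemorySize = 200 MB + Off-Heap Managed + Network Memory
                                = 200 MB + 300 MB + 100 MB (assumed split)
                                = 600 MB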
I guess you have to help me understand the difference between alternative 2
and 3 wrt memory under-utilization, Xintong.
- Alternative 2: set -XX:MaxDirectMemorySize to Task Off-Heap Memory and JVM
Overhead. Then there is the risk that this size is too low resulting in a
lot of garbage collection
Hi Xintong, Till,
> Native and Direct Memory
My point is to set a very large max direct memory size when we do not
differentiate direct and native memory. If the direct memory, including user
direct memory and framework direct memory, could be calculated correctly,
then I am in favor of setting dire
Thanks for replying, Till.
About MemorySegment, I think you are right that we should not include this
issue in the scope of this FLIP. This FLIP should concentrate on how to
configure memory pools for TaskExecutors, with minimum involvement on how
memory consumers use it.
About direct memory, I t
Thanks for proposing this FLIP Xintong.
All in all I think it already looks quite good. Concerning the first open
question about allocating memory segments, I was wondering whether this is
strictly necessary to do in the context of this FLIP or whether this could
be done as a follow up? Without kn
Thanks for the feedback, Yang.
Regarding your comments:
*Native and Direct Memory*
I think setting a very large max direct memory size definitely has some
good sides. E.g., we do not worry about direct OOM, and we don't even need
to allocate managed / network memory with Unsafe.allocate().
Howev
Hi Xintong,
Thanks for your detailed proposal. After all the memory configurations are
introduced, they will give us more powerful control over Flink's memory
usage. I just have a few questions about it.
- Native and Direct Memory
We do not differentiate user direct memory and native memory. They ar
Hi everyone,
We would like to start a discussion thread on "FLIP-49: Unified Memory
Configuration for TaskExecutors"[1], where we describe how to improve
TaskExecutor memory configurations. The FLIP document is mostly based on an
early design "Memory Management and Configuration Reloaded"[2] by Stephan