Dmitriy,

> Denis, it sounds like with this approach, in case of the over-allocation,
> the system will just get slower and slower and users will end up blaming
> Ignite for it. Am I understanding your suggestion correctly?


This will not happen (at least on Unix) unless all the nodes actually use all of the 
allocated memory, either by putting data there or by touching the entire memory range 
in some other way.
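
To illustrate what I mean, here is a minimal, Linux-only sketch of my own (the class 
name, the use of sun.misc.Unsafe and the /proc/self/status check are purely for 
demonstration and have nothing to do with how the page memory is implemented): a 
process can allocate a huge off-heap region, and its resident set barely moves until 
the pages are actually touched.

import sun.misc.Unsafe;
import java.lang.reflect.Field;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LazyCommitDemo {
    public static void main(String[] args) throws Exception {
        // Grab sun.misc.Unsafe reflectively (demo only).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long size = 2L * 1024 * 1024 * 1024; // request 2 GB of virtual memory

        System.out.println("Before allocation:           " + rss());
        long addr = unsafe.allocateMemory(size);
        System.out.println("After allocation, untouched: " + rss()); // RSS barely changes

        // Touch every 4 KB page; only now does the OS map physical pages.
        for (long off = 0; off < size; off += 4096)
            unsafe.putByte(addr + off, (byte) 1);
        System.out.println("After touching every page:   " + rss()); // RSS grows by ~2 GB

        unsafe.freeMemory(addr);
    }

    // Resident set size as reported by the kernel (Linux only).
    private static String rss() throws Exception {
        return Files.readAllLines(Paths.get("/proc/self/status")).stream()
            .filter(l -> l.startsWith("VmRSS"))
            .findFirst().orElse("VmRSS: n/a");
    }
}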

> How was this handled in Ignite 1.9?


If you are talking about the legacy off-heap implementation, then we requested small 
chunks of memory from the operating system rather than one contiguous memory region 
as the page memory does. But I would think of the page memory the same way as the 
Java heap: the heap can also request an 8 GB contiguous memory region on an 8 GB 
machine, following the application's heap settings, yet the operating system will not 
hand over the whole range immediately unless the Java application actually fills the 
whole heap or special parameters are used.
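
To make the heap analogy concrete: starting a HotSpot JVM with, say, -Xms8g -Xmx8g 
reserves the whole 8 GB address range right away, but the resident set stays small 
until the heap actually fills up. The "special parameters" I have in mind are things 
like -XX:+AlwaysPreTouch, which makes the JVM touch every heap page at startup so the 
operating system backs the whole range with physical memory immediately.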

All in all, I think it's safe to use the approach I suggested unless I'm missing 
something.

—
Denis

> On Apr 17, 2017, at 6:05 PM, Dmitriy Setrakyan <dsetrak...@apache.org> wrote:
> 
> On Mon, Apr 17, 2017 at 6:00 PM, Denis Magda <dma...@apache.org> wrote:
> 
>> Dmitriy,
>> 
>> All the nodes will request their own contiguous memory regions that take
>> 70-80% of all RAM from the underlying operating system. However, the
>> operating system will not outfit the nodes with physical pages mapped to
>> RAM immediately, allowing every node's process to start successfully. The
>> nodes will access RAM through virtual memory, which in turn will
>> grant access to physical pages whenever needed, applying low-level
>> eviction and swapping techniques.
>> 
> 
> Denis, it sounds like with this approach, in case of the over-allocation,
> the system will just get slower and slower and users will end up blaming
> Ignite for it. Am I understanding your suggestion correctly?
> 
> How was this handled in Ignite 1.9?
> 
> D.
