Sergey,

That’s expected because, as this discussion has revealed, the allocation 
works differently depending on whether persistence is used or not:

1) In-memory mode (persistence disabled) - the space is allocated 
incrementally until the max threshold is reached. Good!

2) Persistence mode - the whole space (up to the max threshold) is 
allocated right away. No surprise that your laptop starts choking.

So, in my previous response I tried to explain that I can’t find any reason to 
adjust 1). Are there any reasons besides the massive preloading?

As for 2), it was a big surprise to discover this after the 2.1 release. We 
definitely have to fix it somehow.
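
In the meantime, anyone who hits this on a laptop can work around it by capping 
maxSize on the memory policy explicitly. Roughly something like the untested 
sketch below, assuming I’m reading the 2.1 API right; the "capped" policy name, 
the 1 GB cap and the 256 MB initial size are just values I picked for 
illustration:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.MemoryConfiguration;
    import org.apache.ignite.configuration.MemoryPolicyConfiguration;
    import org.apache.ignite.configuration.PersistentStoreConfiguration;

    public class CappedPersistenceStart {
        public static void main(String[] args) {
            // Cap the policy at 1 GB instead of the 80%-of-RAM default, so the
            // persistence mode doesn't grab most of the laptop's memory up front.
            MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration()
                .setName("capped")
                .setInitialSize(256L * 1024 * 1024)
                .setMaxSize(1024L * 1024 * 1024);

            MemoryConfiguration memCfg = new MemoryConfiguration()
                .setMemoryPolicies(plc)
                .setDefaultMemoryPolicyName("capped");

            IgniteConfiguration cfg = new IgniteConfiguration()
                .setMemoryConfiguration(memCfg)
                .setPersistentStoreConfiguration(new PersistentStoreConfiguration());

            try (Ignite ignite = Ignition.start(cfg)) {
                // A persistence-enabled cluster has to be activated explicitly.
                ignite.active(true);
            }
        }
    }

Of course that’s only a workaround for a single developer box, not a fix for 
the default.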

—
Denis

> On Aug 2, 2017, at 6:59 AM, Sergey Chugunov <sergey.chugu...@gmail.com> wrote:
> 
> Denis,
> 
> Just a simple example from our own codebase: I tried to execute
> PersistentStoreExample with default settings and two server nodes, and the
> client node froze even during the initial load of data into the grid,
> although with one server node the example finishes pretty quickly.
> 
> And my laptop isn't the weakest one - it has 16 gigs of memory - but it
> cannot deal with it.
> 
> 
> On Wed, Aug 2, 2017 at 4:58 PM, Denis Magda <dma...@apache.org> wrote:
> 
>>> As far as allocating 80% of available RAM - I was against this even for
>>> In-memory mode and still think that this is a wrong default. Looking at
>>> free RAM is even worse because it gives you undefined behavior.
>> 
>> Guys, I cannot understand how this dynamic memory allocation's high-level
>> behavior (with the persistence DISABLED) is different from the legacy
>> off-heap memory we had in 1.x. Both off-heap memories allocate space on
>> demand; the current one just does it more aggressively, requesting big chunks.
>> 
>> Next, the legacy one was unlimited by default, and the user could start as
>> many nodes as he wanted on a laptop and preload as much data as he needed.
>> Sure, he could bring the laptop down if too many entries were injected into
>> the local cluster. But that’s a matter of too massive a preloading, not of
>> the legacy off-heap memory’s ability to grow infinitely. The same preloading
>> would cause a hang if the Java heap memory mode were used.
>> 
>> The upshot is that massive preloading of data on a local laptop should not
>> be fixed by repealing the dynamic memory allocation.
>> Is there any other reason why we have to use static memory allocation for
>> the case when persistence is disabled? I think the case with persistence
>> should be reviewed separately.
>> 
>> —
>> Denis
>> 
>>> On Aug 2, 2017, at 12:45 AM, Alexey Goncharuk <
>> alexey.goncha...@gmail.com> wrote:
>>> 
>>> Dmitriy,
>>> 
>>> The reason behind this is the need to be able to evict and load pages to
>>> disk, thus we need to preserve a PageId->Pointer mapping in memory. In
>>> order to do this in the most efficient way, we need to know in advance all
>>> the address ranges we work with. We can add dynamic memory extension for a
>>> persistence-enabled config, but this will add yet another step of
>>> indirection when resolving every page address, which adds a noticeable
>>> performance penalty.
>>> 
>>> 
>>> 
>>> 2017-08-02 10:37 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
>>> 
>>>> On Wed, Aug 2, 2017 at 9:33 AM, Vladimir Ozerov <voze...@gridgain.com>
>>>> wrote:
>>>> 
>>>>> Dima,
>>>>> 
>>>>> Probably folks who worked closely with storage know why.
>>>>> 
>>>> 
>>>> Without knowing why, how can we make a decision?
>>>> 
>>>> Alexey Goncharuk, was it you who made the decision about not using
>>>> increments? Do you remember what the reason was?
>>>> 
>>>> 
>>>>> 
>>>>> The very problem is that before being started once in a production
>>>>> environment, Ignite will typically be started a hundred times in a
>>>>> developer's environment. I think that the default should be ~10% of
>>>>> total RAM.
>>>>> 
>>>> 
>>>> Why not 80% of *free* RAM?
>>>> 
>>>> 
>>>>> 
>>>>> On Wed, Aug 2, 2017 at 10:21 AM, Dmitriy Setrakyan <
>>>> dsetrak...@apache.org>
>>>>> wrote:
>>>>> 
>>>>>> On Wed, Aug 2, 2017 at 7:27 AM, Vladimir Ozerov <voze...@gridgain.com
>>> 
>>>>>> wrote:
>>>>>> 
>>>>>>> Please see Sergey's original message - when persistence is enabled,
>>>>>>> memory is not allocated incrementally; maxSize is used.
>>>>>>> 
>>>>>> 
>>>>>> Why?
>>>>>> 
>>>>>> 
>>>>>>> Default settings must allow for normal work in a developer's
>>>>>>> environment.
>>>>>>> 
>>>>>> 
>>>>>> Agree, but why not in increments?
>>>>>> 
>>>>>> 
>>>>>>> 
>>>>>>> ср, 2 авг. 2017 г. в 1:10, Denis Magda <dma...@apache.org>:
>>>>>>> 
>>>>>>>>> Why not allocate in increments automatically?
>>>>>>>> 
>>>>>>>> This is exactly how the allocation works right now. The memory will
>>>>>>>> grow incrementally until the max size is reached (80% of RAM by
>>>>>>>> default).
>>>>>>>> 
>>>>>>>> —
>>>>>>>> Denis
>>>>>>>> 
>>>>>>>>> On Aug 1, 2017, at 3:03 PM, dsetrak...@apache.org wrote:
>>>>>>>>> 
>>>>>>>>> Vova, 1GB seems a bit too small for me, and frankly I do not want to
>>>>>>>>> guess. Why not allocate in increments automatically?
>>>>>>>>> 
>>>>>>>>> D.
>>>>>>>>> 
>>>>>>>>> On Aug 1, 2017, at 11:03 PM, Vladimir Ozerov <voze...@gridgain.com>
>>>>>>>>> wrote:
>>>>>>>>>> Denis,
>>>>>>>>>> No doubt you haven't heard about it - AI 2.1 with persistence, where
>>>>>>>>>> 80% of RAM is allocated right away, was released several days ago.
>>>>>>>>>> How many users do you think have tried it already?
>>>>>>>>>> 
>>>>>>>>>> Guys,
>>>>>>>>>> Do you really think allocating 80% of available RAM is a normal
>>>>>>>>>> thing? Take your laptop and check how much available RAM you have
>>>>>>>>>> right now. Do you fit into the remaining 20%? If not, then running
>>>>>>>>>> AI with persistence with all defaults will bring your machine down.
>>>>>>>>>> This is insane. We should allocate no more than 1Gb, so that a user
>>>>>>>>>> can play with it without any problems.
>>>>>>>>>> 
>>>>>>>>>> On Tue, Aug 1, 2017 at 10:26 PM, Denis Magda <dma...@apache.org
>>>>> 
>>>>>>> wrote:
>>>>>>>>>> 
>>>>>>>>>>> My vote goes for option #1 too. I don’t think that 80% is too
>>>>>>>>>>> aggressive to bring it down.
>>>>>>>>>>> 
>>>>>>>>>>> IGNITE-5717 was created to fix the issue of the 80% RAM allocation
>>>>>>>>>>> on 64 bit systems when Ignite works on top of a 32 bit JVM. I’ve
>>>>>>>>>>> not heard of any other complaints regarding the default allocation
>>>>>>>>>>> size.
>>>>>>>>>>> 
>>>>>>>>>>> —
>>>>>>>>>>> Denis
>>>>>>>>>>> 
>>>>>>>>>>>> On Aug 1, 2017, at 10:58 AM, dsetrak...@apache.org wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> I prefer option #1.
>>>>>>>>>>>> 
>>>>>>>>>>>> D.
>>>>>>>>>>>> 
>>>>>>>>>>>> On Aug 1, 2017, at 11:20 AM, Sergey Chugunov <
>>>>>>>>>>>> sergey.chugu...@gmail.com> wrote:
>>>>>>>>>>>>> Folks,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I would like to get back to the question about MemoryPolicy
>>>>>>>>>>>>> maxMemory defaults.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Although MemoryPolicy may be configured with initial and
>>>>>>>>>>>>> maxMemory settings, when persistence is used MemoryPolicy always
>>>>>>>>>>>>> allocates maxMemory size for performance reasons.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> As the default size of maxMemory is 80% of physical memory, it
>>>>>>>>>>>>> causes OOME exceptions on 32 bit platforms (either at the OS or
>>>>>>>>>>>>> JVM level) and hurts performance in setups where multiple Ignite
>>>>>>>>>>>>> nodes are started on the same physical server.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I suggest rethinking these defaults and switching to other
>>>>>>>>>>>>> options:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> - Check whether the platform is 32 or 64 bits and adapt the
>>>>>>>>>>>>> defaults. In this case we still need to address the issue with
>>>>>>>>>>>>> multiple nodes on one machine even on 64 bit systems.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> - Lower the default for maxMemory and allocate, for instance,
>>>>>>>>>>>>> max(0.3 * availableMemory, 1Gb).
>>>>>>>>>>>>> This option allows us to solve all issues with starting on 32 bit
>>>>>>>>>>>>> platforms and reduce instability with multiple nodes on the same
>>>>>>>>>>>>> machine.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thoughts and/or other options?
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Sergey.
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>> 
>> 
