So are you guys suggesting to accept page-memory as the right one by
default, which:
1) Doesn't work with half of the current cache features
2) Halved our performance
3) Goes against the whole Java ecosystem with its "offheap-first" approach?

Good try, but no :-)

Let me clarify on p.3. Offheap-first is not the correct approach. It is a
questionable and dangerous path, to say the least. GC is a central component
of the whole Java ecosystem. No wonder the most comfortable approach for
users is when everything is stored in heap.

Offheap solutions were created to mitigate scalability issues Java has faced
in recent years due to the rapid decrease in RAM costs. However, that doesn't
mean things will be bad forever. At the moment there are at least 3 modern
GCs targeting scalability: G1GC from Oracle, Shenandoah from RedHat, and C4
from Azul. No doubt they will solve (or at least relieve significantly)
the problem in the mid-term, with gradual improvements year over year and
month over month.

Moreover, the GC problem is being attacked from different angles. Another
major improvement is stack-based structures, which are going to appear in
Java as part of the Valhalla project [1]. When implemented, frameworks will
be able to reduce heap allocations significantly. Instead of having several
dozen heap objects rooted from our infamous GridCacheMapEntry, we will have
only one heap object - GridCacheMapEntry itself.
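
To make this concrete, here is a rough sketch (the class and field names
are made up for illustration, this is not the real GridCacheMapEntry
layout): today every nested reference is a separate heap allocation that
GC has to trace, while Valhalla-style value types would let the same data
be flattened into the parent object.

// Today: each nested field is a separate heap object traced by GC.
class CacheEntryToday {
    private byte[] keyBytes;    // allocation #1
    private byte[] valBytes;    // allocation #2
    private EntryVersion ver;   // allocation #3, with its own object header
    private long expireTime;    // primitive, no extra allocation
}

class EntryVersion {
    long topologyOrder;
    long order;
}

// With Valhalla value types (hypothetical future syntax):
//     value class EntryVersion { long topologyOrder; long order; }
// EntryVersion would be flattened into the entry itself, leaving a single
// heap object per cache entry and nothing extra for GC to follow.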

Okay, okay, this is a matter of years and we need a solution now, so what
is wrong with offheap? Only one thing - it *splits server memory into two
unrelated pieces* - Java heap and offheap. This is a terrible thing from
the user perspective. I already went through this during Hadoop Accelerator
development:
- Output data is stored offheap. Cool, no GC!
- Intermediate data, such as our NIO messages, is stored in Java heap. Now
we run an intensive load and ... OutOfMemoryError! Ok, we give more Java
heap, but now ... out of native memory! Finally, in order to make it work
we have to give much more memory than needed to one of these parts.
Result: *poor memory utilization*. Things would be much easier if we stored
either everything in heap, or everything offheap. But as user code is
executed in heap by default, offheap is not an option for the average user.
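
To show what this split looks like in practice, here is a minimal sketch
against the current 1.x API (the cache name and sizes are arbitrary, only
the shape of the configuration matters): the user ends up hand-sizing two
independent budgets - the Java heap via JVM flags and the offheap pool via
cache configuration.

import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class SplitMemoryExample {
    public static void main(String[] args) {
        // Budget #1: Java heap, sized via JVM flags such as -Xmx8g.
        // It holds user code, NIO buffers and other intermediate objects.

        // Budget #2: offheap, sized separately for the cache data.
        CacheConfiguration<Integer, byte[]> ccfg =
            new CacheConfiguration<>("data");

        ccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);  // data lives offheap
        ccfg.setOffHeapMaxMemory(16L * 1024 * 1024 * 1024);  // 16 GB offheap cap
    }
}

If the workload shifts between the two budgets, one of them overflows
(OutOfMemoryError vs. out of native memory) while the other sits
half-empty - exactly the poor utilization described above.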

All in all, the offheap approach is valuable for high-end deployments with
hundreds of gigabytes of memory. But on commodity hardware with a moderate
amount of memory, applications are likely to have problems due to the
heap/offheap separation, without getting any advantages.

So my main concern is: *what about the current heap mode*? It must stay
alive. The page-memory approach should be abstracted out and implemented in
addition to the current heap approach, not instead of it. Have a high-end
machine and suffer from GC? Pick offheap mode. Have a commodity machine?
Good old heap mode is your choice.
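
Roughly what I mean by "abstracted out" - a minimal sketch, where all names
below are made up and do not refer to existing Ignite interfaces:

/** Hypothetical storage abstraction that both modes could implement. */
public interface CacheEntryStorage {
    /** Stores a serialized key/value pair wherever the implementation keeps it. */
    void put(byte[] key, byte[] val);

    /** Reads a value, or returns null if the key is absent. */
    byte[] get(byte[] key);

    /** Removes an entry if it exists. */
    void remove(byte[] key);
}

// Two pluggable implementations behind the same interface:
//   HeapEntryStorage       - current on-heap entries, GC-managed
//   PageMemoryEntryStorage - the new offheap page memory
// The user then picks the mode per cache in configuration.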

[1] http://openjdk.java.net/projects/valhalla/



On Fri, Dec 30, 2016 at 9:50 PM, Dmitriy Setrakyan <dsetrak...@apache.org>
wrote:

> On Thu, Dec 29, 2016 at 1:37 AM, Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
> > Folks,
> >
> > I pushed an initial implementation of IGNITE-3477 to ignite-3477 branch
> for
> > community review and further discussion.
> >
> > Note that the implementation lacks the following features:
> >  - On-heap deserialized values cache
> >  - Full LOCAL cache support
> >  - Eviction policies
> >  - Multiple memory pools
> >  - Distributed joins support
> >  - Off-heap circular remove buffer
> >  - Maybe something else I missed
> >
>
> Do we have *blocker* tickets for all the remaining issues? Ignite 2.0 will
> have to support everything in Ignite 1.0. Otherwise we will not be able to
> release.
>
>
> > The subject of this discussion is to determine whether the PageMemory
> > approach is a way to go, because this implementation is almost 2x
> > slower than current 2.0 branch. There is some room for improvement, but I
> > am not completely sure we can gain the same performance numbers as in
> 2.0.
> >
>
> I would rephrase this. We should all assume that the PageMemory approach is
> the right approach. Here are the main benefits:
>
> - completely off-heap (minimal GC overhead)
> - predictable memory size
> - ability to extend to external store, like disk, without serialization
> - etc...
>
> Let's collectively work on ensuring that it can perform as fast as Ignite
> 1.8.x. If after a thorough investigation we decide that PageMemory cannot
> perform, then we can start thinking about other approaches.
>
>
> > I encourage the community to review the code and architecture and share
> > their thoughts here.
> >
>
> Completely agree. If anyone has extra cycles, please review the code and
> suggest any improvements.
>
