Thank you Denis for the hint!
Kind regards
Peter
2016-07-05 14:35 GMT+02:00 Denis Magda :
Peter,
Measure it based on your load. Sometimes a Java heap of 8 GB is enough to work
under significant load with off-heap data that is hundreds of GBs in size.
—
Denis
> On Jul 5, 2016, at 3:20 PM, Peter Schmitt wrote:
Hi Denis,
we are trying to store a huge amount of data off-heap (more than 50 GB).
Therefore, we need to know how much heap Ignite needs to handle such a huge
off-heap cache (we want to keep the heap size as low as possible because of
the GC overhead).
Kind regards
Peter
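For reference, a minimal sketch of what such an off-heap cache setup could look like with the Ignite 1.x API discussed in this thread (the cache name, key/value types, and size limit are illustrative, not taken from Peter's actual config):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class OffheapCacheSketch {
    public static void main(String[] args) {
        CacheConfiguration<Long, byte[]> cacheCfg =
            new CacheConfiguration<>("bigCache"); // illustrative name

        // Keep all entries off-heap; the Java heap is only used transiently.
        cacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);

        // 0 = unlimited off-heap memory; set an explicit byte limit in production.
        cacheCfg.setOffHeapMaxMemory(0);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(cacheCfg);
        }
    }
}
```

This is only a sketch; it needs ignite-core on the classpath and starts a node with default discovery settings.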
Hi Peter,
Basically it depends on your use case. Sometimes 2 GB is enough, sometimes 5 GB
or 10 GB. It depends on the workload.
However, you shouldn't allocate Java heaps bigger than 20 GB, because that can
lead to long stop-the-world pauses at some point.
—
Denis
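As an illustration, the heap cap Denis mentions is set with standard JVM flags when starting the node (the flag values and the startup class name here are only examples, not from this thread):

```shell
# Fixed 8 GB heap (Xms == Xmx avoids resize pauses); off-heap cache data
# lives outside this limit. Staying well under ~20 GB of heap limits
# stop-the-world GC pause times, per the advice above.
java -Xms8g -Xmx8g -XX:+UseG1GC \
     -cp ignite-core.jar:app.jar com.example.ServerNodeStartup
```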
Hi Val,
after several tests I can confirm that it works with more heap-memory.
However, I'm not sure how much heap-memory is needed for 50+ GB
off-heap-data and I can't find hints for it in the docs.
Kind regards
Peter
2016-06-30 22:23 GMT+02:00 vkulichenko :
Hi Peter,
It sounds like you just don't give enough heap memory to a node. Heap memory
is still required, even if you store all the data off-heap. Can you try giving
your JVM at least 2 GB and check if this helps?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/
Hi Val,
I did further tests and it looks like the OutOfMemoryError shows up when the
VM can't keep up with its GC. If you e.g. call cache.get in a loop for every
key, it's possible to reproduce it. Calling System.gc() every thousand calls
takes longer, but then the VM has enough time for GC and it works.
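A sketch of the kind of read loop described above (assuming a running Ignite node and a cache named "bigCache"; the key range is illustrative, and the periodic System.gc() is the workaround Peter describes, not a recommended practice):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class GcPressureRepro {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, byte[]> cache = ignite.getOrCreateCache("bigCache");

            for (long key = 0; key < 10_000_000L; key++) {
                // Each get deserializes the off-heap value onto the Java heap,
                // producing short-lived garbage.
                byte[] value = cache.get(key);

                // Workaround from the thread: give the collector time to catch up.
                if (key % 1_000 == 0) {
                    System.gc();
                }
            }
        }
    }
}
```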
Hi Val,
I've pushed a demo to https://github.com/ps4os/ignite_offheap_test
The issue is that I can't reproduce it consistently. But as soon as I get the
OutOfMemoryError once, I can reproduce it again, even with a restart in
between.
The demo contains a Readme as well as some TODOs and comments.
Peter,
This doesn't make much sense to me. With OFFHEAP_TIERED, an eviction policy
should not change anything at all, so it sounds like a misconfiguration. Can
you provide the whole test so that I can run it and investigate?
-Val
Hi Val,
without the eviction policy, the setup breaks with
java.lang.OutOfMemoryError: GC overhead limit exceeded
I'm not sure why the heap grows (and it looks like the GC can't free it) with
the mentioned (off-heap) config.
In the end almost everything should be stored off-heap.
However, ma
Peter,
The only configuration that defines whether nodes join the topology or not is
the discovery SPI (the one you provided in the first message).
Everything looks fine, except that the eviction policy will be ignored in your
case. It's used for entries that are stored in heap memory, while your cache
is OFFHEAP_TIERED.
Hi Val,
thank you for checking it!
I've switched to JDK 8 and the issue disappeared.
I still have to test it with the JDK version we need to use in production.
It would be great to hear whether there is a different approach to achieve the
same, or whether it could be a side-effect of the used config:
//Ig
Hi Peter,
Your code works fine for me. Can you please attach the log file?
-Val