Hello!
Unfortunately it's hard to tell why the node would stop without looking at
client & server logs. Can you share these somewhere?
Maybe you should also set the memory policy for these nodes to the values that
your YARN configuration expects them to have:
https://apacheignite.readme.io/docs/memor
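For example, a minimal sketch against the Ignite 2.x API of that time (the policy name and sizes here are placeholders you would align with your YARN container memory limits):

```scala
import org.apache.ignite.configuration.{IgniteConfiguration, MemoryConfiguration, MemoryPolicyConfiguration}

// Sketch only: cap off-heap usage so it stays below the YARN container
// limit. The 512 MB / 2 GB figures are assumptions, not recommendations.
val memPlc = new MemoryPolicyConfiguration()
  .setName("default")
  .setInitialSize(512L * 1024 * 1024)  // 512 MB
  .setMaxSize(2L * 1024 * 1024 * 1024) // 2 GB

val memCfg = new MemoryConfiguration()
  .setDefaultMemoryPolicyName("default")
  .setMemoryPolicies(memPlc)

val igniteCfg = new IgniteConfiguration().setMemoryConfiguration(memCfg)
```

If Ignite's off-heap maximum exceeds what YARN grants the container, YARN may kill the container, which would look like a node silently stopping.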
Thanks for your time!
1) Changed the logic of generating keys: now a query fetches the maximum key
from the cache, and each new record gets max key + 1.
This resolved the count mismatch. Thank you.
2) After splitting the data into more batches and writing them to the grid in
many iterations,
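The key-generation scheme in 1) might look roughly like this (a sketch only; the cache and value-type names are taken from elsewhere in the thread, and note that max + 1 is racy if several clients insert concurrently):

```scala
import org.apache.ignite.Ignite
import org.apache.ignite.cache.query.SqlFieldsQuery

// Sketch of the "max key + 1" scheme. Not safe under concurrent writers:
// two clients can read the same max and produce colliding keys.
def nextKey(ignite: Ignite): Long = {
  val cache = ignite.cache[Long, Data]("DataCache")
  val rows  = cache.query(new SqlFieldsQuery("select max(_key) from Data")).getAll
  val max   = Option(rows.get(0).get(0)).map(_.asInstanceOf[Long]).getOrElse(0L)
  max + 1
}

// A collision-free alternative is a cluster-wide atomic sequence:
// val seq = ignite.atomicSequence("dataKeys", 0L, true)
// val key = seq.incrementAndGet()
```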
Hello!
I don't think anything will get evicted with the configuration you have
provided.
I think you should check whether the keys are really unique (yes, I remember
that you include currentTimeMillis in them, but it still makes sense to
double-check) and also that all values are of type Data. I
Hi,
I would like to add the points below:
1) Ignite on YARN is started once [Server] and is not stopped between
iterations. This means the Ignite nodes are negotiated between YARN and
Ignite only once; once finalized, they should remain the same.
Please find below the server logs.
[12:30:46] T
I can see two options here:
- Between iteration 1 and iteration 2 some nodes were stopped, and perhaps
some new nodes were started. Data on the stopped nodes became unavailable.
- Cache key collisions between iterations 1 and 2, so that 80% of the keys
are identical and only 20% are distinct the second time.
I ex
Hi,
1) Load data to cache
val cacheConf = new CacheConfiguration[Long, Data]("DataCache")
cacheConf.setCacheMode(CacheMode.PARTITIONED)
cacheConf.setIndexedTypes(classOf[Long], classOf[Data])
val cache = ignite.getOrCreateCache(cacheConf)
var dataMap = ge
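As an aside, for bulk loads like this a data streamer is usually faster than per-entry puts, and its default overwrite behavior is relevant to the count mismatch discussed in this thread. A hypothetical sketch (cache and type names assumed from the snippet above):

```scala
import org.apache.ignite.Ignite

// Hypothetical bulk-load sketch. IgniteDataStreamer batches entries and
// is typically much faster than cache.put in a loop.
// Note: allowOverwrite defaults to false, so entries whose keys already
// exist are skipped silently -- one possible cause of a count mismatch.
def load(ignite: Ignite, data: Map[Long, Data]): Unit = {
  val streamer = ignite.dataStreamer[Long, Data]("DataCache")
  try data.foreach { case (k, v) => streamer.addData(k, v) }
  finally streamer.close() // close() flushes remaining buffered entries
}
```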
Hi,
1. How do you load data to cache? Is it possible keys have duplicates?
2. How did you check there are 120k records in the cache? Is it the
whole-cache metric or a node-local metric?
3. Are there any errors in the logs?
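To illustrate the distinction in point 2, a sketch (here `cache` is assumed to be an `IgniteCache[Long, Data]` obtained as shown earlier in the thread):

```scala
import org.apache.ignite.cache.CachePeekMode

// Cluster-wide count of primary entries (the logical record count)
// vs. the count held on this node only. Comparing a node-local number
// against an expected total is a common source of confusion.
val total = cache.size(CachePeekMode.PRIMARY)
val local = cache.localSize(CachePeekMode.PRIMARY)
```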
On Tue, Jan 30, 2018 at 3:17 PM, Raghav wrote:
> Hello,
>
> Am trying to enable Ignite
Hello,
I am trying to enable Ignite Native Persistence in an Ignite YARN deployment.
The purpose is to have no eviction of data at all from the Ignite grid:
whenever the memory is full, the data should be stored on disk.
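For reference, a minimal sketch of enabling persistence with the Ignite 2.x API of that era (pre-2.4 class names; newer versions use a different configuration class):

```scala
import org.apache.ignite.Ignition
import org.apache.ignite.configuration.{IgniteConfiguration, PersistentStoreConfiguration}

// Sketch: enable native persistence so cold data spills to disk
// instead of being evicted.
val cfg = new IgniteConfiguration()
  .setPersistentStoreConfiguration(new PersistentStoreConfiguration())

val ignite = Ignition.start(cfg)
// With persistence enabled the cluster starts inactive and must be
// activated before caches can be used:
ignite.active(true)
```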
But when I try to add a large number of records to the Ignite grid, the data
is gettin