Hi Igniters,
We have three server nodes, and there is a firewall between the clients and the servers.
We opened the 47500 and 47100 port ranges, but the errors below are still being printed
frequently on the client side. The client application is able to get data from the
cluster. Could you please help us avoid these errors on the client side?
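For reference, a minimal sketch of the discovery and communication port settings involved, assuming the default TcpDiscoverySpi and TcpCommunicationSpi are in use (the port values and range are illustrative; the firewall must allow localPort through localPort + localPortRange in both directions):

```xml
<!-- Sketch only: discovery (47500+) and communication (47100+) port ranges. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <!-- Base discovery port; Ignite may also try base+1 .. base+localPortRange. -->
      <property name="localPort" value="47500"/>
      <property name="localPortRange" value="10"/>
    </bean>
  </property>
  <property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
      <!-- Base communication port; the same range rule applies. -->
      <property name="localPort" value="47100"/>
      <property name="localPortRange" value="10"/>
    </bean>
  </property>
</bean>
```

If only the base ports 47500 and 47100 are open but the range ports are blocked, nodes can still connect while logging periodic connection errors, which may match the symptom described above.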
Ilya,
Does 2.9 have specific optimizations for checkpoints?
Thanks,
Raymond.
On Fri, Oct 9, 2020 at 1:23 AM Ilya Kasnacheev
wrote:
> Hello!
>
> I think the real saver is decreasing the amount of time between checkpoints.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, 6 Oct 2020 at 02:10, Raym
When we tried 16K as the page size, with eviction set to LRU2, we saw the two
messages below after some time:
1. Too many failed attempts to evict page: 30
2. Possible starvation in striped pool.
Can you please suggest the recommended page size and best practices here?
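For context, a sketch of where those two settings live, assuming the standard DataStorageConfiguration; the values mirror the experiment above and are illustrative, not a recommendation:

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
  <!-- Page size in bytes: 16 KB as tried above (the default is 4 KB). -->
  <property name="pageSize" value="#{16 * 1024}"/>
  <property name="defaultDataRegionConfiguration">
    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
      <property name="name" value="Default_Region"/>
      <!-- LRU2 page eviction; only applies to non-persistent regions. -->
      <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
      <!-- Eviction starts when the region is 90% full (the default). -->
      <property name="evictionThreshold" value="0.9"/>
    </bean>
  </property>
</bean>
```

Note that "Too many failed attempts to evict page" typically means every candidate page was pinned or otherwise unevictable at that moment, which can happen when large objects span many data pages, so page size and object size interact here.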
Thanks
Hi,
I have a few questions on best practices for page size and eviction policy,
based on the object size.
Our average object size is 85K
No persistence enabled for data region
Eviction enabled
Third party persistence for read-through and write-through
Questions:
1. What is the preferred page-size? 1
Hi,
We have an app that writes N records to the cluster (REPLICATED) - e.g.
10,000 records, in one transaction.
We also have an app that issues a continuous query against the cluster,
listening for updates to this cache.
We'd like the app to receive all 10,000 records in one call into the
local listener.
Thanks! I will have a read through
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi ,
Our setup :
Servers - 3 node cluster
Reader clients: around 20 of them, each waiting for an update to an entry of a
cache.
Writer client: 1.
If one of the reader clients restarts while the writer is writing to the cache
entry, the server attempts to send the update to the failed
client.
Hi Alan.
Could you provide the class (the IgniteRunnable, and any other classes that
were loaded through p2p with it) which shows this unusual behavior?
And please attach a full stack trace of the exception.
On Thu, Oct 8, 2020 at 5:19 PM Alan Ward wrote:
> I'm using peer class loading on a 5 node ig
I'm using peer class loading on a 5-node Ignite cluster, persistence
enabled, Ignite version 2.8.1. I have a custom class that implements
IgniteRunnable, and I launch that class on the cluster. This works fine when
deploying to an Ignite node running in a single-node cluster locally, but
fails with
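For reference, a sketch of the peer-class-loading settings in play, assuming XML-based configuration (the deployment mode shown is illustrative; the key constraint is that these settings must match on every node):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- Must be the same value on all nodes, servers and clients alike;
       a mismatch prevents nodes from joining or classes from deploying. -->
  <property name="peerClassLoadingEnabled" value="true"/>
  <!-- SHARED is the default mode; CONTINUOUS keeps deployed classes
       available after the deploying node leaves. -->
  <property name="deploymentMode" value="CONTINUOUS"/>
</bean>
```

One common multi-node pitfall worth ruling out: classes from caches or services are not peer-loaded in all code paths, so a `ClassNotFoundException` that only appears on a multi-node cluster can also mean the class genuinely needs to be on the server classpath.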
Hello!
I think the real saver is decreasing the amount of time between checkpoints.
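[Editor's note: for reference, the checkpoint interval mentioned here is configured on DataStorageConfiguration; a sketch, with an illustrative value:]

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
  <!-- Checkpoint every 60 s instead of the default 180 s: more frequent,
       hence smaller, checkpoints. Value is in milliseconds; tune per workload. -->
  <property name="checkpointFrequency" value="60000"/>
</bean>
```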
Regards,
--
Ilya Kasnacheev
Tue, 6 Oct 2020 at 02:10, Raymond Wilson :
> Thanks for the thoughts Ilya and Vladimir.
>
> We'll do a comparison with 2.9 when it releases to see if that makes any
> difference.
>
Hello!
You can do either one, I'll take it from there.
Regards,
--
Ilya Kasnacheev
Tue, 6 Oct 2020 at 17:26, :
> Could you please guide me through the process? Should I create just a
> simple project anywhere and share it here, or should I create a test case in
> the Ignite project?
>
> From
Hi Anton,
Thank you for the reply .
>>I can confirm that there is no SEGMENTED event thrown on 2.8.1.
I guess you meant that there is no SEGMENTED event thrown in the 2.8.1 client
when the servers are brought down and started again, because a SEGMENTED event
could be thrown in 2.8.1 in other scenarios.