Slava,
I agree. A mismatched persistenceEnabled flag can cause unpleasant issues.
I've left a comment in IGNITE-8951.
Yakov,
Seems like I misunderstood the point of the discussion from the very
beginning. I thought that Andrew raised the topic to discuss adding new
checks that will fail node join (like we do for different page size and
rebalance pool size). If we are talking about printing warnings about
all differences, we indeed can start with logic that passes through the
configuration classes with reflection. As a next step, we can filter
out the properties that are expected to be different (consistentId,
etc.). I believe the full list of such properties can't be collected
without manual research.
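Just to illustrate what I mean, here is a minimal sketch of such a
warning pass. The class name and the exclusion list are only
assumptions for the example, not a proposed final design:

    import java.lang.reflect.Method;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Objects;
    import java.util.Set;

    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ConfigDiffPrinter {
        /** Properties that are legitimately node-specific (illustrative, not a full list). */
        private static final Set<String> EXCLUDED = new HashSet<>(Arrays.asList(
            "getClass", "getConsistentId", "getLocalHost", "getWorkDirectory"));

        /** Prints a warning for every top-level property that differs between two configs. */
        public static void printDifferences(IgniteConfiguration loc, IgniteConfiguration rmt) {
            for (Method m : IgniteConfiguration.class.getMethods()) {
                if (!m.getName().startsWith("get") || m.getParameterCount() != 0
                    || EXCLUDED.contains(m.getName()))
                    continue;

                try {
                    Object locVal = m.invoke(loc);
                    Object rmtVal = m.invoke(rmt);

                    // Note: array-valued properties would need Arrays.deepEquals here.
                    if (!Objects.equals(locVal, rmtVal))
                        System.out.printf("Configuration mismatch for %s: local=%s, remote=%s%n",
                            m.getName(), locVal, rmtVal);
                }
                catch (ReflectiveOperationException e) {
                    // Skip getters that can't be read reflectively.
                }
            }
        }
    }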
Best Regards,
Ivan Rakov
On 10.07.2018 14:06, Вячеслав Коптилин wrote:
Hello Ivan,
I think it would be nice to add validation of the
DataRegionConfiguration#persistenceEnabled property. That property must
be the same across the cluster for a given data region.
Perhaps different values of `initSize`, `maxSize`, etc. make sense in
the case of a heterogeneous cluster, but `persistenceEnabled` does not.
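Roughly something like the following on join (just a sketch; the class
name and where such a check would live are assumptions):

    import org.apache.ignite.configuration.DataRegionConfiguration;

    public class PersistenceFlagValidator {
        /** @return An error message, or null if the regions are compatible. */
        public static String validateRegion(DataRegionConfiguration loc, DataRegionConfiguration rmt) {
            if (loc.getName().equals(rmt.getName())
                && loc.isPersistenceEnabled() != rmt.isPersistenceEnabled()) {
                return "persistenceEnabled mismatch for data region '" + loc.getName() +
                    "' [local=" + loc.isPersistenceEnabled() +
                    ", remote=" + rmt.isPersistenceEnabled() + ']';
            }

            return null;
        }
    }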
Thanks,
S.
Tue, Jul 10, 2018 at 13:42, Ivan Rakov <ivan.glu...@gmail.com>:
Guys,
For your information: rebalanceThreadPoolSize validation is already
implemented and merged to master:
https://issues.apache.org/jira/browse/IGNITE-8904
You can review the commit to see the approach. By the way, we already
validate some other parameters that can't differ among cluster nodes
(page size, tx configuration) in GridCacheProcessor#checkConsistency.
We also match the relevant part of CacheConfiguration between nodes in
the GridCacheUtils#checkAttributeMismatch method.
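The pattern there is roughly the following (a simplified sketch, not
the actual internal signature):

    import java.util.Objects;

    public class AttributeChecker {
        /** Compares a local and a remote attribute value; warns or fails the join on mismatch. */
        public static void checkAttributeMismatch(String attrName, Object locVal, Object rmtVal,
            boolean failOnMismatch) {
            if (!Objects.equals(locVal, rmtVal)) {
                String msg = attrName + " mismatch [local=" + locVal + ", remote=" + rmtVal + ']';

                if (failOnMismatch)
                    throw new IllegalStateException(msg); // Reject the joining node.

                System.err.println("Warning: " + msg); // Log and allow the join.
            }
        }
    }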
Does anyone know of other properties whose mismatch can backfire on us?
Best Regards,
Ivan Rakov
On 10.07.2018 10:47, Andrew Medvedev wrote:
I made a comment there; copy-pasting it here as well.
Is it going to be a preconfigured set of settings, or a whole range
of settings?
If the latter:
1) Property names in CacheConfiguration do not always correspond to
getters (some follow different naming conventions, some are completely
different, as in memPlcName and getDataRegionName()), so an inclusion
pattern ("get all properties") does not work quite well with them (see
the sketch after this list).
2) If we handle getter methods manually, we see that a lot of metrics
are returned by methods in CacheConfiguration and below (in
TcpCommunicationSpi especially) instead of properties, and the getter
methods are not properly annotated (for an example, see
https://issues.apache.org/jira/browse/IGNITE-8829). So an exclusion
pattern ("get all except metrics, etc.") forces us to exclude those
manually, which does not work quite well either and looks like a hack.
Plus, some properties, although configurable, have their defaults set
dynamically on startup, e.g. DFLT_MEMORY_POLICY_MAX_SIZE.
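To make point 1 concrete: bean-style introspection derives property
names from the getter names, so the field-to-getter renaming is
invisible to it. A purely illustrative example:

    import java.beans.BeanInfo;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;

    import org.apache.ignite.configuration.CacheConfiguration;

    public class PropertyScan {
        public static void main(String[] args) throws Exception {
            BeanInfo info = Introspector.getBeanInfo(CacheConfiguration.class);

            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                // Prints "dataRegionName" (derived from getDataRegionName());
                // the underlying field memPlcName never shows up, so scans
                // based on field names and on getter names disagree.
                System.out.println(pd.getName());
            }
        }
    }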
Just to make sure: do we compare with the coordinator, log locally, and
exclude client nodes?
On Fri, Jul 6, 2018 at 4:15 PM, Yakov Zhdanov <yzhda...@gridgain.com>
wrote:
Guys, I created a ticket for config params validation -
https://issues.apache.org/jira/browse/IGNITE-8951. Feel free to
comment.
Yakov Zhdanov
www.gridgain.com
2018-07-04 10:51 GMT+03:00 Andrew Medvedev <andrew.y.medve...@gmail.com>:
Hi Nikolay
No, we have been beaten by
https://issues.apache.org/jira/browse/IGNITE-8904?jql=text%20~%20%22rebalanceThreadPoolSize%22
It is not checked on start.
By "utility" I mean a check of
org.apache.ignite.configuration.IgniteConfiguration and its children.
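For example, a start-time check for that particular property could look
roughly like this (a hypothetical helper, just to show the shape):

    import org.apache.ignite.configuration.IgniteConfiguration;

    public class RebalancePoolCheck {
        /** @return null if compatible, otherwise an error message for the joining node. */
        public static String validate(IgniteConfiguration loc, IgniteConfiguration rmt) {
            int locSize = loc.getRebalanceThreadPoolSize();
            int rmtSize = rmt.getRebalanceThreadPoolSize();

            return locSize == rmtSize ? null
                : "rebalanceThreadPoolSize mismatch [local=" + locSize + ", remote=" + rmtSize + ']';
        }
    }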
On Wed, Jul 4, 2018 at 10:36 AM, Nikolay Izhikov <nizhi...@apache.org>
wrote:
Hello, Andrew.
Can you clarify your question?
What checks do you mean, exactly?
Do you mean internal Ignite checks or user-provided checks?
Ignite checks configuration consistency on node start [1].
Ignite does have a consistency check for a joining node. Take a look at
[2] and all of its children.
[1] https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L825
[2] https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/GridComponent.java#L153
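For reference, the hook in [2] lets a component veto a join. A
simplified sketch (the signature is recalled from memory and the
attribute name is made up, so treat both as approximate):

    import org.apache.ignite.cluster.ClusterNode;
    import org.apache.ignite.spi.IgniteNodeValidationResult;

    public class ExampleValidatingComponent {
        /** Returning a non-null result rejects the joining node; null means it passed. */
        public IgniteNodeValidationResult validateNode(ClusterNode node) {
            // Hypothetical node attribute published by the joining node.
            Object rmtPageSize = node.attribute("example.memory.pageSize");

            if (rmtPageSize != null && !rmtPageSize.equals(4096))
                return new IgniteNodeValidationResult(node.id(),
                    "Page size mismatch for remote node: " + rmtPageSize,
                    "Page size mismatch for local node");

            return null;
        }
    }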
On Wed, 04/07/2018 at 08:58 +0300, Andrew Medvedev wrote:
Hello everybody
Our company has lots of nodes in a cluster, and we have seen some
problems with inconsistent settings across nodes cluster-wide. To help
us with this, we made a utility to check the consistency of settings on
a running cluster, but it is a hack; a better way seems to be settings
validation by each node itself on start/joining topology/etc.
1) Is this needed?
2) Have the implementation details been discussed somewhere?
Cheers