Hello Felipe,

At the moment Ignite 2.x isn't optimized for deployments spanning several
Availability Zones, so even with the filter you mentioned and the right
configuration, some operations may still suffer from increased latency
between AZs. For instance, a request may be served from a remote AZ even
when it could have been served without crossing the zone boundary.
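
That said, the backup filter itself does handle the placement you describe.
Roughly how it is wired up (the "AVAILABILITY_ZONE" attribute name, the zone
values and the cache settings below are just placeholders I picked for
illustration, not anything from your setup):

    import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
    import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
    import org.apache.ignite.configuration.CacheConfiguration;

    // Each node advertises its zone through a user attribute on startup, e.g.
    // igniteCfg.setUserAttributes(Collections.singletonMap("AVAILABILITY_ZONE", "az-0"));
    // on Kubernetes the value would typically come from the node's topology label.
    RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
    aff.setAffinityBackupFilter(
        new ClusterNodeAttributeAffinityBackupFilter("AVAILABILITY_ZONE"));

    CacheConfiguration<Integer, byte[]> cacheCfg = new CacheConfiguration<>("myCache");
    cacheCfg.setBackups(2);   // primary in one AZ, one backup in each of the other two
    cacheCfg.setAffinity(aff);

One thing to keep in mind: if I read the javadoc correctly, the filter rejects
candidate nodes whose attribute value matches an already selected copy, so if
there are not enough distinct zones available a backup may simply not be
assigned rather than being placed in the same AZ.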

With Ignite 2.x there is also the question of cluster stability from the
DiscoverySpi point of view. In my opinion the best option here is
ZookeeperDiscoverySpi (you'll need to run a separate ZooKeeper service for
that), because TcpDiscoverySpi may create unnecessary cross-AZ links
between nodes in the ring topology. This can reduce cluster stability,
especially when scaling the cluster and adding more nodes on top of the
planned 9.
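
For illustration, the discovery part would look roughly like this (the
connection string, root path and timeout are placeholders, you would point it
at your own ZooKeeper ensemble, and the ignite-zookeeper module has to be on
the classpath):

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi;

    ZookeeperDiscoverySpi zkSpi = new ZookeeperDiscoverySpi();
    zkSpi.setZkConnectionString("zk-0:2181,zk-1:2181,zk-2:2181"); // your ZK ensemble
    zkSpi.setZkRootPath("/ignite");
    zkSpi.setSessionTimeout(30_000);

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setDiscoverySpi(zkSpi);

With ZooKeeper coordinating membership, node failures and segmentation are
resolved through a single external arbiter instead of the discovery ring,
which is what helps with cross-AZ stability here.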

There is, however, an enhancement proposal
<https://cwiki.apache.org/confluence/display/IGNITE/IEP-140+Multi+Data+Center+deployments+support>
for optimizing the Ignite 2.x components to support multi-AZ deployments
(for the sake of clarity the IEP calls them Multiple Data Center
deployments), which should address most of the issues I mentioned. It is
still a draft, though, and I don't expect all components to be fully
optimized until next year.

Thank you!
Sergey

On Mon, Aug 25, 2025 at 5:54 PM Felipe Kersting <kerstingfel...@gmail.com>
wrote:

> Hello devs,
>
> We are working towards adopting Apache Ignite in our cloud-native
> solution, but I have some basic questions whose answers I have not found in
> the docs. I was hoping you could help me.
>
> We want to deploy Apache Ignite to a multi-AZ Kubernetes cluster. We are
> evaluating whether we go with Ignite 2.x or Ignite 3.x (our preference).
>
> We plan to deploy Ignite in embedded mode to multiple nodes that span
> multiple AZs, ensuring resilience to AZ failures. However, for that to work,
> it is crucial that all data have backups that are stored in distinct AZs. I
> have been searching for ways to do that in Ignite 3, but haven't found any yet.
>
> A practical example: Imagine we have 3 AZs, and each AZ has 3 nodes (total
> of 9 nodes). We wanted to configure the system ensuring that all data have
> 2 backups, and that these 2 backups are always stored in the 2 other AZs
> (for example, if some data was stored in a partition that belongs to AZ 0,
> we want to ensure that the two backups are stored in partitions that belong
> to AZ 1 and AZ 2).
>
> Without explicitly configuring that, the two backups could also be mapped
> to AZ 0, hence, if AZ 0 goes down, the data would be gone.
>
> One way of solving the problem would be replicating the dataset to all
> nodes, but we want to ideally avoid that, as we have a lot of data.
>
> To achieve our goals, I believe we would need some kind of configurable AZ
> affinity, to hint Ignite which nodes are in each AZ, and how backups should
> be distributed. Is this possible at all in Ignite 3? Does somebody have any
> suggestions on how to achieve this? And, in case this is not possible in
> Ignite 3, is there a feature in the roadmap that could provide this in the
> future?
>
> For Ignite 2, we found ClusterNodeAttributeAffinityBackupFilter
> <https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html>,
> which looks like it could do the job (but I haven't tested it yet, as we are
> initially doing PoCs with Ignite 3). If possible, could you confirm that
> this could be used to achieve our goals in case we go with Ignite 2?
>
> Thank you! I appreciate any help!
> Felipe
>
