Hi, StatefulSet provides predictable naming, so it should be easy to
configure a client with the addresses ignite-0, ignite-1, ..., ignite-N.
There is no need for custom discovery, IP lists, etc. I think this
corresponds to k8s patterns, as some pods are different from others because
they store specific partitions (read: have state). There will be some
maintenance by the user, since the list of server names still has to be
provided, even though the naming scheme itself is very simple (ignite-XX).
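To illustrate, here is a minimal sketch of generating that predictable
address list for a client. The headless service name ("ignite"), the
namespace ("default"), and the StatefulSet name are assumptions for
illustration; 10800 is the default Ignite thin client port. The resulting
array could be passed to ClientConfiguration.setAddresses(...) when starting
a Java thin client.

```java
import java.util.stream.IntStream;

public class IgniteAddresses {
    // Build the predictable thin client address list for a StatefulSet
    // named "ignite" with `replicas` pods, exposed via a headless service
    // "ignite" in namespace "default" (names are assumptions, adjust to
    // your deployment). Pod ordinals start at 0: ignite-0, ignite-1, ...
    static String[] addresses(int replicas) {
        return IntStream.range(0, replicas)
                .mapToObj(i -> "ignite-" + i
                        + ".ignite.default.svc.cluster.local:10800")
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        for (String addr : addresses(3)) {
            System.out.println(addr);
        }
    }
}
```

Scaling the StatefulSet only requires regenerating this list with the new
replica count, which is the maintenance cost mentioned above.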

Since a StatefulSet is already required to enable persistence, I don't
think it's a big problem to configure the client the same way. And it
should work out of the box.

I will investigate how costly it would be to implement custom discovery for
thin clients, and compare that with the StatefulSet solution.

On Thu, Aug 13, 2020 at 12:52 PM Pavel Tupitsyn <ptupit...@apache.org>
wrote:

> Vladimir,
>
> I agree with you, StatefulSet is not related here.
>
> > it's not a strict rule to communicate only directly with a pod
> > running a node with a primary partition
> Yes, if a node with a primary partition is not known or can't be contacted,
> we fail over to a default (random) node
> (afaik this is how Java, C++ and .NET thin clients are implemented)
>
> On Thu, Aug 13, 2020 at 12:44 PM Vladimir Pligin <vova199...@yandex.ru>
> wrote:
>
> > Hi guys,
> >
> > Maybe I'm missing something, but I don't understand how StatefulSet
> > relates to the described functionality.
> > StatefulSet is more about persistence. Correct me if I'm wrong, but my
> > current understanding is that we don't need any explicit state for a
> > thin client connection. I'd like this thing to be simple: if I'm working
> > with a pod and it fails, then I just go to another one and try my
> > request again. The corner case is best-effort affinity. As far as I can
> > tell, it's not a strict rule to communicate only directly with the pod
> > running a node with a primary partition. It's ok to fail over in this
> > case and communicate with any pod. Am I right?
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> >
>
