In *most* k8s environments each Kubernetes worker receives its own
dedicated CIDR range, carved from the cluster’s CIDR space, for allocating
pod IP addresses. The issue described can occur when a k8s worker goes down
and then comes back up: when the pods are rescheduled, a pod may start up
with another pod’s previously used IP.

They don’t necessarily have to swap 1:1 (i.e. one pod could take the other’s
previous address while that pod receives a new address). It’s also not a
race condition over which container starts first; the k8s scheduler and
kubelet daemon assign IPs to pods.
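The reuse pattern described above can be sketched with Python's stdlib
`ipaddress` module. To be clear, this is a toy model, not the real
kubelet/CNI allocator, and the cluster CIDR and /16-into-/24 split are
illustrative assumptions:

```python
# Toy sketch (NOT the real kubelet/CNI logic): each worker gets a dedicated
# podCIDR carved from the cluster CIDR, and pod IPs are handed out in order
# from that range. If a worker restarts and the allocator state resets, a
# rescheduled pod can receive an IP that a different pod used before.
import ipaddress

cluster_cidr = ipaddress.ip_network("10.244.0.0/16")  # assumed cluster CIDR
# carve a /24 podCIDR per worker (a common, but not universal, choice)
node_cidrs = list(cluster_cidr.subnets(new_prefix=24))

node_a_alloc = node_cidrs[0].hosts()   # allocator for worker A

pod_x_ip = next(node_a_alloc)          # first pod scheduled on A
pod_y_ip = next(node_a_alloc)          # second pod scheduled on A

# worker A goes down and comes back: allocator starts over
node_a_alloc = node_cidrs[0].hosts()
pod_y_new_ip = next(node_a_alloc)      # pod Y happens to be rescheduled first...

# ...and it now holds pod X's previous address, with no 1:1 swap needed
assert pod_y_new_ip == pod_x_ip
print(pod_y_new_ip)  # 10.244.0.1
```

This is exactly the "one pod uses the other's previous address" case: no
race between containers is required, only the allocator handing out the
same range again.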

On Mon, Aug 3, 2020 at 11:14 PM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:

> I have started reading about how to deploy Cassandra with K8S. But as I
>> read more I feel there are a lot of challenges in running Cassandra on K8s.
>> Some of the challenges which I feel are
>>
>> 1. Pod IP identification - If the pods go down and their IPs change when
>> they come back up, how is that handled, as we depend on the IPs of
>> Cassandra nodes for internode as well as client-server communication?
>>
>>
>
> Strictly safe to change an IP to an IP that is unused.
> Strictly unsafe to use an IP that's already in the cluster (so if two pods
> go down, having the first pod that comes up grab the IP of the second pod
> is strictly dangerous and will violate consistency and maybe lose data).
>
> *For point 2 (Strictly unsafe to use an IP) , if the first pod grabs the
> IP of the second node then it should not be able to join the cluster.*
> *So can IPs still be swapped?  *
> *When and how this IP swap can occur?*
>
> Regards
> Manish
>
> On Mon, Jul 6, 2020 at 10:40 PM Jeff Jirsa <jji...@gmail.com> wrote:
>
>>
>>
>> On Mon, Jul 6, 2020 at 10:01 AM manish khandelwal <
>> manishkhandelwa...@gmail.com> wrote:
>>
>>> I have started reading about how to deploy Cassandra with K8S. But as I
>>> read more I feel there are a lot of challenges in running Cassandra on K8s.
>>> Some of the challenges which I feel are
>>>
>>> 1. Pod IP identification - If the pods go down and their IPs change when
>>> they come back up, how is that handled, as we depend on the IPs of
>>> Cassandra nodes for internode as well as client-server communication?
>>>
>>>
>>
>> Strictly safe to change an IP to an IP that is unused.
>> Strictly unsafe to use an IP that's already in the cluster (so if two
>> pods go down, having the first pod that comes up grab the IP of the second
>> pod is strictly dangerous and will violate consistency and maybe lose
>> data).
>>
>>
>>>
>>> 2. A K8S node can host a single pod. This is being done so that even if
>>> the host goes down we have only one pod down. With multiple pods on a
>>> single host there is a risk of traffic failures, as consistency might not
>>> be achieved. But if we keep two pods of the same rack on a single host,
>>> are we safe, or is there any unknown risk?
>>>
>>
>> This sounds like you're asking if rack aware snitches protect you from
>> concurrent pods going down. Make sure you're using a rack aware snitch.
>>
>>
>>>
>>> 3. Seed discovery? Again as an extension of point 1, since IPs can
>>> change, how we can manage seeds.
>>>
>>
>> Either use DNS instead of static IPs, or use a seed provider that handles
>> IPs changing.
>>
>>
>>>
>>> 4. Also I read a lot about the use of Cassandra operators for maintaining
>>> a Cassandra cluster on Kubernetes. I think that a Cassandra operator is
>>> like a robot (an automated admin) that works and acts the way a normal
>>> admin would. I want to understand how important the Cassandra operator is,
>>> and what happens if we go to production without one?
>>>
>>> Regards
>>> Manish
>>>
>> --

Christopher Bradford
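
Jeff's suggestion in the thread above (DNS instead of static IPs for seeds)
is commonly done by pointing the default `SimpleSeedProvider` at the stable
DNS name a Kubernetes headless Service gives each pod. A sketch of the
relevant cassandra.yaml fragment; the service, pod, and namespace names
below are made-up placeholders, not anything from this thread:

```yaml
# cassandra.yaml fragment (sketch; "cassandra-0", "cassandra", and
# "default" are hypothetical pod/service/namespace names)
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # A headless Service gives each pod a stable DNS name that survives
      # pod IP changes; Cassandra resolves it rather than pinning an IP.
      - seeds: "cassandra-0.cassandra.default.svc.cluster.local"
```

The alternative Jeff mentions, a custom seed provider that handles IPs
changing, is what most Cassandra operators ship.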
