Hi Manu,

The 2/2 indicates that the pod has two containers and both are in the ready
state. As Vishal suggested, run kubectl describe pod <pod id> to get more
details. You can also use kubectl get pod <pod id> -o yaml; the former will
include events in the output. You can run nodetool commands like this:

$ kubectl -n cass-operator exec -it <pod id> -c cassandra -- nodetool status

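If you want to confirm which two containers make up each Cassandra pod, the
container names appear in the describe output, or you can pull just the names
with a jsonpath query, for example (assuming the same cass-operator namespace
shown in your output):

$ kubectl -n cass-operator describe pod <pod id>
$ kubectl -n cass-operator get pod <pod id> -o jsonpath='{.spec.containers[*].name}'

One of them is the cassandra container, which is why the nodetool command
above targets it with -c cassandra.
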
Lastly, there is the #cassandra-kubernetes channel on ASF Slack. Feel free
to drop in there with questions.

Thanks

John

On Fri, Jul 3, 2020 at 7:49 AM vishal kharjul <kharjul.vis...@gmail.com>
wrote:

> Hello Manu,
>
> It's actually a Kubernetes question, not a Cassandra one. AFAIK, READY = 2/2
> represents the status of the individual containers in each pod: 2/2 means the
> pod consists of two containers and both are ready. Try "kubectl describe" on
> each pod and you can see the container spec. I would also recommend the
> getting-started Kubernetes tutorial on kubernetes.io to refresh Kubernetes
> concepts.
>
>
> On Fri, Jul 3, 2020, 5:16 AM Manu Chadha <manu.cha...@hotmail.com> wrote:
>
>> Hi
>>
>>
>>
>> I have a 3 node Kubernetes cluster and I have set up Cassandra on it
>> using Cass-Operator.
>>
>>
>>
>> What does the 2/2 mean in the output of the following command?
>>
>>
>>
>> kubectl get all -n cass-operator
>>
>> NAME                                READY   STATUS    RESTARTS   AGE
>>
>> pod/cass-operator-78c6469c6-6qhsb   1/1     Running   0          139m
>>
>> pod/cluster1-dc1-default-sts-0      2/2     Running   0          138m
>>
>> pod/cluster1-dc1-default-sts-1      2/2     Running   0          138m
>>
>> pod/cluster1-dc1-default-sts-2      2/2     Running   0          138m
>>
>>
>>
>> Does it mean that there are 3 data centres, each running 2 Cassandra
>> nodes? That would make sense, because my K8s cluster has only 3 nodes.
>>
>>
>>
>> manuchadha25@cloudshell:~ (copper-frame-262317)$ gcloud compute instances list
>>
>> NAME                                              ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
>> gke-cassandra-cluster-default-pool-92d544da-6fq8  europe-west4-a  n1-standard-1               10.164.0.26  34.91.214.233  RUNNING
>> gke-cassandra-cluster-default-pool-92d544da-g0b5  europe-west4-a  n1-standard-1               10.164.0.25  34.91.101.218  RUNNING
>> gke-cassandra-cluster-default-pool-92d544da-l87v  europe-west4-a  n1-standard-1               10.164.0.27  34.91.86.10    RUNNING
>>
>>
>> Or is cass-operator running two containers per K8s node?
>>
>>
>>
>> thanks
>>
>> Manu
>>
>>
>>
>

-- 

- John
