Thanks John. I wasn't aware of that.

@Manu,

As John said, it's listed under the operator's limitations at the link below:

https://docs.datastax.com/en/cass-operator/doc/cass-operator/cassOperatorReleaseNotes.html
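
Since the operator won't reconcile a reduced size, the only way I can see would be a manual one. A rough, untested sketch, using the pod/namespace/container names from Manu's output further down this thread (the operator may still recreate the pods if it ignores the size change, in which case recreating the datacenter is probably the only option):

```shell
# StatefulSets remove the highest-numbered pods first, so decommission
# those Cassandra nodes before shrinking. Container name "cassandra" is
# an assumption based on the 2/2 pods in Manu's output.
kubectl exec -n cass-operator cluster1-dc1-default-sts-2 -c cassandra -- nodetool decommission
kubectl exec -n cass-operator cluster1-dc1-default-sts-1 -c cassandra -- nodetool decommission

# Then lower size in the CassandraDatacenter spec and re-apply:
kubectl apply -n cass-operator -f ./cass-dc-2-nodes.yaml
```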



On Tue, Jul 7, 2020, 10:48 AM John Sanda <john.sa...@gmail.com> wrote:

> Cass Operator currently does not support scaling down.
>
> Thanks
>
> John
>
> On Thu, Jul 2, 2020 at 1:02 PM Manu Chadha <manu.cha...@hotmail.com>
> wrote:
>
>> Hi
>>
>>
>>
>> I changed the file and applied it, but the new configuration hasn't been
>> applied.
>>
>>
>>
>>
>>
>> metadata:
>>   name: dc1
>> spec:
>>   clusterName: cluster1
>>   serverType: cassandra
>>   serverVersion: "3.11.6"
>>   managementApiAuth:
>>     insecure: {}
>>   size: 1          <-- made change here
>>   storageConfig:
>> ...
>>
>>
>>
>> kubectl apply -n cass-operator -f ./cass-dc-2-nodes.yaml
>>
>>
>>
>> manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get all -n cass-operator
>> NAME                                 READY   STATUS    RESTARTS   AGE
>> pod/cass-operator-5f8cdf99fc-9c5g4   1/1     Running   0          2d20h
>> pod/cluster1-dc1-default-sts-0       2/2     Running   0          2d20h
>> pod/cluster1-dc1-default-sts-1       2/2     Running   0          9h
>> pod/cluster1-dc1-default-sts-2       2/2     Running   0          9h
>>
>> NAME                                          TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)             AGE
>> service/cass-operator-metrics                 ClusterIP      10.51.243.147   <none>          8383/TCP,8686/TCP   2d20h
>> service/cassandra-loadbalancer                LoadBalancer   10.51.240.24    34.91.214.233   9042:30870/TCP      2d
>> service/cassandradatacenter-webhook-service   ClusterIP      10.51.243.86    <none>          443/TCP             2d20h
>> service/cluster1-dc1-all-pods-service         ClusterIP      None            <none>          <none>              2d20h
>> service/cluster1-dc1-service                  ClusterIP      None            <none>          9042/TCP,8080/TCP   2d20h
>> service/cluster1-seed-service                 ClusterIP      None            <none>          <none>              2d20h
>>
>> NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
>> deployment.apps/cass-operator   1/1     1            1           2d20h
>>
>> NAME                                       DESIRED   CURRENT   READY   AGE
>> replicaset.apps/cass-operator-5f8cdf99fc   1         1         1       2d20h
>>
>> NAME                                        READY   AGE
>> statefulset.apps/cluster1-dc1-default-sts   3/3     2d20h   <-- still 3/3
>>
>> manuchadha25@cloudshell:~ (copper-frame-262317)$
>>
>>
>>
>> thanks
>>
>> Manu
>>
>> Sent from Mail <https://go.microsoft.com/fwlink/?LinkId=550986> for
>> Windows 10
>>
>>
>>
>> *From: *vishal kharjul <kharjul.vis...@gmail.com>
>> *Sent: *02 July 2020 12:46
>> *To: *user@cassandra.apache.org
>> *Subject: *Re: What is the way to scale down Cassandra/Kubernetes
>> cluster from 3 to 1 nodes using cass-operator
>>
>>
>>
>> Hello Manu,
>>
>>
>>
>> I tried scaling up and it just needs a change to the size parameter, so
>> try the same for scaling down. Just change the size parameter of the
>> CassandraDatacenter CRD and apply it again. Basically the same step you
>> took to spin up 3 nodes, with just the size parameter changed. The
>> operator will bring down Cassandra nodes accordingly. No need to shut
>> down or restart.
>>
>>
>>
>> Thanks and Regards,
>>
>> Vishal
>>
>> On Thu, Jul 2, 2020, 3:41 AM Oleksandr Shulgin <
>> oleksandr.shul...@zalando.de> wrote:
>>
>> On Thu, Jul 2, 2020 at 9:29 AM Manu Chadha <manu.cha...@hotmail.com>
>> wrote:
>>
>> Thanks Alex. Will give this a try. So do I just change the yaml file and
>> hot-patch it, or would I need to stop the cluster, delete it and make a
>> new one?
>>
>>
>>
>> I've no experience with this specific operator, but I expect that editing
>> the file and applying it using kubectl is the way to go, especially if you
>> don't want to lose your data.
>>
>>
>>
>> --
>>
>> Alex
>>
>>
>>
>>
>>
>
>
> --
>
> - John
>