Thank you everyone.
This thread has been really useful!

On Wed, May 23, 2018 at 8:59 PM, Ben Bromhead <b...@instaclustr.com> wrote:

> Here are the expectations around compatibility levels:
> https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities
> Though references to gold, silver, bronze etc. seem to have largely gone
> away... not sure what's going on there?
>
> For a full reference, just browse through the repo.
> https://github.com/kubernetes-client/java/blob/master/kubernetes/README.md
> is a good place to start, as is
> https://github.com/kubernetes-client/java/tree/master/examples
>
> The Java driver doesn't have as many of the nice things in
> https://github.com/kubernetes/client-go/tree/master/tools, but it does
> have some good helper classes in the util package, so I guess we spent a
> little more time wiring things together?
>
> Code generation is done via the jsonschema2pojo Maven plugin, and we also
> keep the raw CRD definition in a resource directory.
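>
> For reference, the plugin wiring looks roughly like this (a minimal
> sketch; the version, schema path, and target package here are assumptions
> for illustration, not the actual values from our pom.xml):
>
> ```xml
> <!-- pom.xml fragment: generate CRD model POJOs from a JSON schema -->
> <plugin>
>   <groupId>org.jsonschema2pojo</groupId>
>   <artifactId>jsonschema2pojo-maven-plugin</artifactId>
>   <version>1.0.2</version>
>   <configuration>
>     <!-- hypothetical location of the raw CRD schema under resources -->
>     <sourceDirectory>${basedir}/src/main/resources/crd</sourceDirectory>
>     <targetPackage>com.example.operator.model</targetPackage>
>   </configuration>
>   <executions>
>     <execution>
>       <goals><goal>generate</goal></goals>
>     </execution>
>   </executions>
> </plugin>
> ```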
>
> On Wed, May 23, 2018 at 11:23 AM vincent gromakowski <
> vincent.gromakow...@gmail.com> wrote:
>
>> Thanks! Do you have some pointers to the available features? I am more
>> worried about the lack of custom controller integration, for instance the
>> code generator...
>>
>> 2018-05-23 17:17 GMT+02:00 Ben Bromhead <b...@instaclustr.com>:
>>
>>> The official Kubernetes Java driver is actually pretty feature-complete,
>>> if not exactly idiomatic Java... it's only missing full examples to get
>>> it to the GOLD compatibility level, IIRC.
>>>
>>> A few reasons we went down the Java path:
>>>
>>>    - Cassandra community engagement was the primary concern. If you are
>>>    a developer in the Cassandra community, you have a base level of Java
>>>    knowledge, so if you want to work on the Kubernetes operator you only
>>>    have to learn one thing: Kubernetes. If the operator were in Go, you
>>>    would have two things to learn: Go and Kubernetes :)
>>>    - We actually wrote an initial PoC in Go (based on the etcd operator;
>>>    you can find it at https://github.com/benbromhead/cassandra-operator-old),
>>>    but because it was in Go we ended up making architectural decisions
>>>    simply because Go doesn't do JMX, so it felt like we were fighting
>>>    different ecosystems just to be part of the cool group.
>>>
>>> Some other less important points weighed the decision in Java's favour:
>>>
>>>    - The folks at Instaclustr all know Java and are productive in it
>>>    from day one. Go is fun and relatively simple, but not our forte.
>>>    - <troll> Mature package management, generics (vs. Go's inability to
>>>    write DRY code), a million "if err" statements </troll> :)
>>>    - Some other awesome operators/controllers are written in JVM-based
>>>    languages. The Spark Kubernetes resource manager (which is a k8s
>>>    controller) is written in Scala.
>>>
>>>
>>> On Wed, May 23, 2018 at 10:04 AM vincent gromakowski <
>>> vincent.gromakow...@gmail.com> wrote:
>>>
>>>> Why did you choose Java for the operator implementation when everybody
>>>> seems to use the Go client (probably for its greater functionality)?
>>>>
>>>> 2018-05-23 15:39 GMT+02:00 Ben Bromhead <b...@instaclustr.com>:
>>>>
>>>>> You can get a good way with StatefulSets, but as Tom mentioned there
>>>>> are still some issues with this, particularly around scaling up and down.
>>>>>
>>>>> We are working on an Operator for Apache Cassandra, you can find it
>>>>> here https://github.com/instaclustr/cassandra-operator. This is a
>>>>> joint project between Instaclustr, Pivotal and a few other folk.
>>>>>
>>>>> Currently it's a work in progress, but we would love any or all early
>>>>> feedback/PRs/issues etc. Our first GA release will target the following
>>>>> capabilities:
>>>>>
>>>>>    - Safe scaling up and down (including decommissioning)
>>>>>    - Backup/restore workflow (snapshots only initially)
>>>>>    - Built in prometheus integration and discovery
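>>>>>
>>>>> To give a feel for the intended UX, a cluster would be declared as a
>>>>> custom resource along these lines (purely illustrative; the actual
>>>>> kind, API group, and field names come from the operator's CRD and
>>>>> will differ):
>>>>>
>>>>> ```yaml
>>>>> # hypothetical Cassandra cluster custom resource
>>>>> apiVersion: cassandraoperator.example.com/v1alpha1
>>>>> kind: CassandraCluster
>>>>> metadata:
>>>>>   name: demo-cluster
>>>>> spec:
>>>>>   replicas: 3              # the operator scales/decommissions safely
>>>>>   image: cassandra:3.11
>>>>>   dataVolumeClaimSpec:
>>>>>     resources:
>>>>>       requests:
>>>>>         storage: 100Gi
>>>>>   prometheusEnabled: true  # built-in metrics discovery
>>>>> ```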
>>>>>
>>>>> Other features, like repair, better PV support, and maybe even a nice
>>>>> dashboard, will be on the way.
>>>>>
>>>>>
>>>>> On Wed, May 23, 2018 at 7:35 AM Tom Petracca <tpetra...@palantir.com>
>>>>> wrote:
>>>>>
>>>>>> Using a StatefulSet should get you pretty far, though it will likely
>>>>>> be less effective than a CoreOS-style "operator". Some random points:
>>>>>>
>>>>>>    - For scale-up: a node shouldn't report "ready" until it's in the
>>>>>>    NORMAL state; this will prevent multiple nodes from bootstrapping
>>>>>>    at once.
>>>>>>    - For scale-down: as of now there isn't a mechanism to know if a
>>>>>>    pod is getting decommissioned because you've permanently lowered
>>>>>>    the replica count, or because it's just getting bounced/re-scheduled,
>>>>>>    thus knowing whether or not to decommission is basically impossible.
>>>>>>    Relevant issue: kubernetes/kubernetes#1462
>>>>>>    <https://github.com/kubernetes/kubernetes/issues/1462>
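>>>>>>
>>>>>> For the scale-up point, the usual trick is an exec readiness probe
>>>>>> that only succeeds once nodetool reports the node in NORMAL mode (a
>>>>>> sketch; adjust the command path and timings for your image):
>>>>>>
>>>>>> ```yaml
>>>>>> # readinessProbe fragment for the cassandra container in a StatefulSet
>>>>>> readinessProbe:
>>>>>>   exec:
>>>>>>     command:
>>>>>>       - /bin/sh
>>>>>>       - -c
>>>>>>       - nodetool netstats | grep -q 'Mode: NORMAL'
>>>>>>   initialDelaySeconds: 60
>>>>>>   periodSeconds: 10
>>>>>> ```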
>>>>>>
>>>>>>
>>>>>>
>>>>>> *From: *Pradeep Chhetri <prad...@stashaway.com>
>>>>>> *Reply-To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>>>>> *Date: *Friday, May 18, 2018 at 10:20 AM
>>>>>> *To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>>>>> *Subject: *Re: Using K8s to Manage Cassandra in Production
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hello Hassaan,
>>>>>>
>>>>>>
>>>>>>
>>>>>> We use the Cassandra Helm chart[0] for deploying Cassandra on
>>>>>> Kubernetes in production. We have around 200GB of Cassandra data, and
>>>>>> it works really well. You can scale up nodes easily (I haven't tested
>>>>>> scaling down).
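>>>>>>
>>>>>> For anyone trying the same route, scaling with the chart is just a
>>>>>> matter of overriding its values (the value names below are a sketch
>>>>>> from memory; confirm them against the chart's values.yaml before
>>>>>> relying on them):
>>>>>>
>>>>>> ```yaml
>>>>>> # values override for the incubator/cassandra chart (names assumed)
>>>>>> config:
>>>>>>   cluster_size: 3    # number of Cassandra pods in the StatefulSet
>>>>>> persistence:
>>>>>>   enabled: true
>>>>>>   size: 200Gi        # roughly matches the data volume we carry
>>>>>> ```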
>>>>>>
>>>>>>
>>>>>>
>>>>>> I would say that if you are worried about running Cassandra on k8s in
>>>>>> production, maybe you should first try setting it up for your
>>>>>> staging/preproduction environment and gain confidence over time.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I have tested situations where I killed the host running the
>>>>>> Cassandra container, and I have seen the container move to a different
>>>>>> node and rejoin the cluster properly. So from my experience it's
>>>>>> pretty good. No issues yet.
>>>>>>
>>>>>>
>>>>>>
>>>>>> [0]: https://github.com/kubernetes/charts/tree/master/incubator/cassandra
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Pradeep
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, May 18, 2018 at 1:01 PM, Павел Сапежко <amelius0...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> Hi, Hassaan! For example, we are using C* on k8s in production for
>>>>>> our video surveillance system. Moreover, we are using Ceph RBD as our
>>>>>> storage for Cassandra. Today we have 8 C* nodes, each managing 2TB of
>>>>>> data.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, May 18, 2018 at 9:27 AM Hassaan Pasha <hpa...@an10.io> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>
>>>>>>
>>>>>> I am trying to craft a deployment strategy for deploying and
>>>>>> maintaining a C* cluster. I was wondering if there are actual production
>>>>>> deployments of C* using K8s as the orchestration layer.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I have been given the impression that K8s managing a C* cluster can
>>>>>> be a recipe for disaster, especially if you aren't well versed in the
>>>>>> intricacies of a scale-up/down event. I know of use cases where people
>>>>>> use Mesos or a custom tool built with Terraform/Chef etc. to run their
>>>>>> production clusters, but I have yet to find a real K8s use case.
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Questions?*
>>>>>>
>>>>>> Is K8s a reasonable choice for managing a production C* cluster?
>>>>>>
>>>>>> Are there documented use cases for this?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Any help would be greatly appreciated.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Hassaan Pasha*
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Pavel Sapezhko
>>>>>>
>>>>>>
>>>>>>
>>>>> --
>>>>> Ben Bromhead
>>>>> CTO | Instaclustr <https://www.instaclustr.com/>
>>>>> +1 650 284 9692
>>>>> Reliability at Scale
>>>>> Cassandra, Spark, Elasticsearch on AWS, Azure, GCP and Softlayer
>>>>>
>>>>
>>



-- 
Regards,

*Hassaan Pasha*
Mobile: 03347767442
