Other concerns:

There is no replication/streaming between 2.1 and 3.11 nodes during the
upgrade; writes destined for a node on a different version are stored as
hints and replayed once the remote node is on the same version. If a node
stays behind for longer than the hint window, you have to run a repair. If
you read at QUORUM (2 of 3 replicas), you may get errors in the meantime.
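For reference, the hint window mentioned above is controlled in
cassandra.yaml; a sketch (the 3-hour value is the shipped default):

```yaml
# cassandra.yaml: hints for a down or unreachable replica are kept only
# this long. If a node stays behind longer than this window during the
# upgrade, hint replay is not enough and a repair is required afterwards.
max_hint_window_in_ms: 10800000   # 3 hours (default)
```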

If you have to roll back to 2.1, the old version cannot read data files
written in the new 3.11 format, yet during an online rolling upgrade some
data will already have been written in the new format.

If the hardlink snapshot is not copied to another device, a disk failure
may cause data loss, since during the upgrade some data may exist as only a
single copy (because there is no cross-version replication).
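A small illustration of the point above (invented file names; this is not
Cassandra code): a hardlinked snapshot shares the same inode, i.e. the same
physical blocks on the same disk, as the live SSTable, so only a real copy
to another device protects against disk failure.

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
data = os.path.join(root, "mc-1-big-Data.db")
snap = os.path.join(root, "snapshot-mc-1-big-Data.db")
backup = os.path.join(root, "backup-mc-1-big-Data.db")

with open(data, "w") as f:
    f.write("sstable-bytes")

# "Snapshot" via hardlink: no new data blocks are written.
os.link(data, snap)

# Both names point at the same inode -- one set of blocks, one disk.
same_blocks = os.stat(data).st_ino == os.stat(snap).st_ino
print(same_blocks)  # True

# A real copy (in practice: to another device) is what survives disk loss.
shutil.copyfile(snap, backup)
```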


On Thu, Jul 5, 2018 at 8:13 PM, kooljava2 <koolja...@yahoo.com.invalid>
wrote:

> Hello Anuj,
>
> The 2nd workaround should work, as the app will auto-discover all the
> other nodes. It is the first contact with a node that determines the
> protocol version the app uses. So if you remove the newer-version nodes
> from the app configuration, it will still auto-discover the newer nodes
> after startup.
>
> Thank you,
> TS.
>
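The first-contact behaviour described above can be sketched with a toy
model (invented names; this is not the actual DataStax driver code):

```python
# Toy model of why the first node contacted pins the native protocol
# version for the whole session in a mixed-version cluster.
def negotiate(contact_points, node_versions):
    """Return (chosen_version, up_nodes, down_nodes)."""
    first = contact_points[0]
    # The driver adopts the protocol version of the first node it reaches.
    chosen = node_versions[first]
    # Nodes on an older protocol than the chosen one are treated as down;
    # newer nodes can still speak the older protocol, so they stay up.
    up = sorted(n for n, v in node_versions.items() if v >= chosen)
    down = sorted(n for n, v in node_versions.items() if v < chosen)
    return chosen, up, down

# Mid-upgrade: node-a already speaks v4, node-b and node-c still v3.
versions = {"node-a": 4, "node-b": 3, "node-c": 3}

# Contacting an upgraded node first pins v4 and marks the v3 nodes down.
print(negotiate(["node-a", "node-b", "node-c"], versions))
# -> (4, ['node-a'], ['node-b', 'node-c'])

# Contacting an old node first pins v3, which every node can still speak.
print(negotiate(["node-b", "node-a", "node-c"], versions))
# -> (3, ['node-a', 'node-b', 'node-c'], [])
```

With the DataStax Python driver, the first workaround below corresponds to
passing an explicit `protocol_version` to `Cluster(...)` until the whole
cluster is upgraded.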
> On Thursday, 5 July 2018, 12:45:39 GMT-7, Anuj Wadehra <
> anujw_2...@yahoo.co.in.INVALID> wrote:
>
>
> Hi,
>
> I would like to know how people are doing rolling upgrades of Cassandra
> clusters when there is a change in the native protocol version, say from
> 2.1 to 3.11. During a rolling upgrade, if the client application is
> restarted on nodes, the client driver may first contact an upgraded
> Cassandra node with v4 and permanently mark all old Cassandra nodes on v3
> as down. This may lead to request failures. Datastax recommends two ways
> to deal with this:
>
> 1. Before the upgrade, set the protocol version to the lower version, and
> move to the higher version once the entire cluster is upgraded.
> 2. Make sure the driver only contacts upgraded Cassandra nodes during the
> rolling upgrade.
>
> The second workaround can lead to failures, as you may not be able to
> meet the required consistency for some time.
>
> Let's consider the first workaround. Now imagine an application where the
> protocol version is not configurable and the code uses the default
> protocol version. You cannot apply the first workaround, because you would
> first have to upgrade your application on all nodes just to make the
> protocol version configurable. How would you upgrade such a cluster
> without downtime? Thoughts?
>
> Thanks
> Anuj
>
>
>
