Hi,
Backup / Restore can be done in distinct ways.
The process is about taking a snapshot of all the nodes at roughly the
same time for the backup, and setting up a new environment from those
snapshots for the restore.
Usually a Cassandra backup is made through 'nodetool snapshot' on each
node, then moving all the snapshot files off the nodes to external storage.
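As a rough sketch, assuming a keyspace called my_ks, the default data
directory, and placeholder tag and destination paths (adjust to your setup):

    # Take a snapshot of the keyspace on this node, tagged with a name
    nodetool snapshot -t backup_20171124 my_ks

    # Snapshot files land under each table's data directory, e.g.
    # /var/lib/cassandra/data/my_ks/<table>/snapshots/backup_20171124/
    # Copy them off the node, for example:
    tar czf /backups/node1_backup_20171124.tar.gz \
        /var/lib/cassandra/data/my_ks/*/snapshots/backup_20171124

    # Once the copy is safe, free the disk space
    # (plain 'nodetool clearsnapshot' removes all snapshots; -t needs a recent version)
    nodetool clearsnapshot -t backup_20171124

For the restore, one common approach is to place the snapshot SSTables back
into the matching table directories on the new cluster and run 'nodetool
refresh', or to stream them in with sstableloader.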
I ran into a similar issue before with version 2.1.13 of C*, and when I
restarted the node a second time it actually created the default roles. I
haven't dug deeper into the root cause; it happened to me on only one
cluster out of 10+ clusters.
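If you want to confirm which nodes are logging it, a quick check is
something like this (the log path is just the common default; yours may
differ):

    # Look for the role-setup warning in the Cassandra log on each node
    grep -i "CassandraRoleManager" /var/log/cassandra/system.log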
On Wed, Nov 22, 2017 at 5:13 PM, @Nandan@ wrote:
> Hi Jai,
> So it means you are also getting the same WARN in your output.log file.
> I am also getting this WARN on my NODE1.
On Fri, Nov 24, 2017 at 7:03 AM, Jai Bheemsen Rao Dhanwada <jaibheem...@gmail.com> wrote:
> I ran into a similar issue before with version 2.1.13 of C*, and when I
> restarted the node a second time it actually created the default roles.
Yes,
I had it in one of my clusters. Try restarting the node once again and see
if it creates the default role.
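To see whether the default role actually exists after the restart,
something like this should work (assuming the default 'cassandra' superuser
credentials; adjust to your auth setup):

    # C* 2.2+ with CassandraRoleManager
    cqlsh -u cassandra -p cassandra -e "LIST ROLES;"

    # On 2.1, users live in system_auth.users instead
    cqlsh -u cassandra -p cassandra -e "SELECT * FROM system_auth.users;"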
On Thu, Nov 23, 2017 at 5:05 PM, @Nandan@ wrote:
> So it means you are also getting the same WARN in your output.log file.
> I am also getting this WARN on my NODE1.
Hi Jai,
As you suggested, I stopped my first node and restarted it again. Now the
updates are like this:
On Node 1:
WARN message is not coming.
On Node 2:
WARN: CassandraRoleManager Skipped
WARN: 10.0.0.3 node seems to be down
On Node 3:
No such warning.
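To double-check whether 10.0.0.3 is really down, I am going to run this
from each node and compare the output:

    # Gossip view of the ring from this node (UN = up/normal, DN = down)
    nodetool status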