I ran into a similar issue before with C* version 2.1.13, and when I
restarted the node a second time it actually created the default roles. I
haven't dug deeper into the root cause; it happened to me on only one
cluster out of 10+ clusters.
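
In case it helps, a quick way to confirm whether the default role actually
got created after the restart (this assumes authentication is enabled and
the default cassandra/cassandra superuser credentials still work) is
something like:

# <node_ip> is a placeholder; default cassandra/cassandra credentials assumed
$ cqlsh <node_ip> -u cassandra -p cassandra -e "LIST ROLES;"

If the 'cassandra' role shows up with super = True, the setup retry
succeeded. You can also watch system.log after the restart for the
CassandraRoleManager message about creating the default superuser role.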

On Wed, Nov 22, 2017 at 5:13 PM, @Nandan@ <nandanpriyadarshi...@gmail.com>
wrote:

> Hi Jai,
> I checked nodetool describecluster and got the same schema version on all
> 4 nodes.
>
>> [nandan@node-1 ~]$ nodetool describecluster
>
> Cluster Information:
> Name: Nandan
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> 2e2ab56b-6639-394e-a1fe-4b35ba87473b: [10.0.0.2, 10.0.0.3, 10.0.0.4,
> 10.0.0.1]
>
>
> Thanks and best regards,
> Nandan
>
> On Thu, Nov 23, 2017 at 5:37 AM, Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
>> Can you do a nodetool describecluster and check if the schema version
>> matches on all the nodes?
>>
>>
>> On Tue, Nov 21, 2017 at 11:52 PM, @Nandan@ <nandanpriyadarshi...@gmail.com>
>> wrote:
>>
>>> Hi Team,
>>>
>>> Today I set up a test cluster with 4 nodes using Apache Cassandra
>>> version 3.1.1.
>>> After setup, when I checked the output.log file, I saw the WARN messages
>>> below:
>>> WARN  08:51:38,122  CassandraRoleManager.java:355 - CassandraRoleManager
>>> skipped default role setup: some nodes were not ready
>>> WARN  08:51:46,269  DseDaemon.java:733 - The following nodes seems to be
>>> down: [/10.0.0.2, /10.0.0.3, /10.0.0.4]. Some Cassandra operations may
>>> fail with UnavailableException.
>>>
>>> But when I checked nodetool status, everything looks fine and all
>>> nodes are in UN status.
>>>
>>> Please tell me what I need to check for this.
>>> Thanks in Advance,
>>> Nandan Priyadarshi
>>>
>>
>>
>
