It's best practice to disable the default user (the "cassandra" user) after
enabling password authentication on your cluster. The default user reads
with CL.QUORUM when authenticating, while other users use CL.LOCAL_ONE.
This means it's more likely you could experience authentication issues with
the default user, even if only a minority of the system_auth replicas are
unavailable, since QUORUM needs a majority of them to respond while
LOCAL_ONE needs only one.
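A minimal sketch of how that swap is often done from cqlsh (the role name "admin" and the password are placeholders, and the syntax assumes Cassandra 2.2+ role-based authentication):

```sql
-- Log in once as the default superuser, then create a replacement superuser:
CREATE ROLE admin WITH SUPERUSER = true AND LOGIN = true
  AND PASSWORD = 'use-a-strong-password-here';
-- Reconnect as 'admin', then lock down the default account:
ALTER ROLE cassandra WITH SUPERUSER = false AND LOGIN = false;
```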
On Thu, Jul 6, 2017 at 6:58 PM, Charulata Sharma (charshar) <chars...@cisco.com> wrote:
> Hi,
>
> I am facing similar issues with the SYSTEM_AUTH keyspace and wanted to know
> the implications of disabling the "*cassandra*" superuser.
>
Unless you have scheduled any tasks that require the user with that name,
disabling it should have no adverse implications.
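If it helps to audit this first, the configured roles (and which of them are superusers or can log in) can be listed from cqlsh; a small sketch, assuming Cassandra 2.2+ role-based auth:

```sql
-- Shows all roles with their super/login flags:
LIST ROLES;
```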
4, 2017 at 2:16 AM
To: Oleksandr Shulgin <oleksandr.shul...@zalando.de>
Cc: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Cannot achieve consistency level LOCAL_ONE
Thanks for the detailed explanation. You did solve my problem.
Cheers,
-Simon
From: Oleksandr Shulgin
Date: 2017-06-14 17:09
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cannot achieve consistency level LOCAL_ONE
On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com wrote:
> Thanks for the reply.
> My system_auth settings are as below; what should I do with them? And I'm
> interested in why the newly added node is responsible for user
> authentication?
>
> CREATE KEYSPACE system_auth WITH replication = {'class': 'SimpleStrategy',
> 'replication_factor': '1'} AND durable_writes = true;
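For reference, the commonly recommended remedy for this situation is to raise the replication of system_auth and then repair it; a hedged sketch (the datacenter name 'dc1' and the RF of 3 are placeholders to adapt to your own topology):

```sql
-- Replicate the auth data to several nodes per datacenter:
ALTER KEYSPACE system_auth WITH replication =
  {'class': 'NetworkTopologyStrategy', 'dc1': 3};
```

After the ALTER, running `nodetool repair system_auth` on each node streams the existing auth data to the new replicas.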
-Simon
From: Oleksandr Shulgin
Date: 2017-06-14 16:36
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cannot achieve consistency level LOCAL_ONE
On Wed, Jun 14, 2017 at 9:11 AM, wxn...@zjqunshuo.com wrote:
Hi,
Cluster set up:
1 DC with 5 nodes (each node having 700GB of data)
1 keyspace with RF of 2
write CL is LOCAL_ONE
read CL is LOCAL_QUORUM
One node was down for about 1 hour because of an OOM issue. During the down
period, all 4 other nodes reported "Cannot achieve consistency level
LOCAL_ONE" constantly until I brought up the dead node. My data seems lost
during that down time. To me this should not have happened, because the
write CL is LOCAL_ONE and only one node was dead.
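One way to confirm whether system_auth is the culprit is to inspect its replication settings directly; a sketch, assuming Cassandra 3.x where the system_schema tables exist:

```sql
-- Shows the replication strategy and RF configured for system_auth:
SELECT replication FROM system_schema.keyspaces
WHERE keyspace_name = 'system_auth';
```

With an RF of 1 there, any authentication lookup whose single replica sits on the dead node will fail at LOCAL_ONE, which matches the error the other four nodes reported.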