For sure, but with auth for any other user (which is read at LOCAL_ONE), you
still want the auth info replicated to all nodes.

The system default of RF = 1 for system_auth means a single node going down
can cause a complete loss of access (even with caching), and for the average
user that is a worse outcome than replicating to all nodes.
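
For context, the manual workaround most of us apply today is to raise
system_auth's replication and then repair it; roughly something like the
following (DC names and RF values are placeholders, adjust to your topology):

    ALTER KEYSPACE system_auth
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};

followed by running nodetool repair system_auth on each node. An everywhere
strategy would make that bookkeeping unnecessary.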

Also, given that you should not be using the cassandra user (it's only there
to bootstrap auth setup), and given that I would claim it is best to have auth
details local to each node, then short of a complete rewrite of how
system_auth is distributed (e.g. via gossip, which I suspect it should be), I
would propose that an everywhere strategy is helpful in this regard.
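
To be explicit about "not using the cassandra user": the first thing most
setups do after enabling auth is to create a separate superuser and lock the
default one down, along these lines (role name and password are obviously
made up):

    CREATE ROLE dba WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'changeme';
    ALTER ROLE cassandra WITH SUPERUSER = false AND LOGIN = false;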

On the other end of the spectrum, operators with very large clusters are
already customizing auth to suit their needs (and not using the cassandra
user). I've seen far more people shoot themselves in the foot with
system_auth as it currently stands than large operators getting this wrong.
I would also claim that an everywhere strategy is about as dangerous as
secondary indexes...

Sorry to keep flogging a dead horse, but keeping auth replicated properly has
been super helpful for us. Of course, replication strategies are pluggable, so
it's easy for us to maintain ours separately; I'm just trying to figure out
where I'm missing the point, and whether we need to re-evaluate the way we do
things or whether the fear of misuse is the primary concern :)
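
To be concrete about "pluggable": a keyspace can already point at any strategy
class on the node's classpath by its fully qualified name, so an everywhere
strategy can live outside the tree, e.g. (the class name here is just an
illustration, not something that ships with Cassandra):

    ALTER KEYSPACE system_auth
      WITH replication = {'class': 'com.example.EverywhereStrategy'};

The only catch is that the jar has to be on every node before you can
reference it.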

On Wed, 30 Nov 2016 at 10:32 Jeff Jirsa <jji...@apache.org> wrote:

>
>
> On 2016-11-30 10:02 (-0800), Ben Bromhead <b...@instaclustr.com> wrote:
> > Also apparently the Everywhere Strategy is a bad idea (tm) according to
> > comments in https://issues.apache.org/jira/browse/CASSANDRA-12629 but no
> > reason has been given why...
> >
>
> It's touched on in that thread, but it's REALLY EASY to misuse, and most
> people who want it are probably going to shoot themselves in the foot with
> it.
>
> The one example Jeremiah gave is admin logins with system_auth - requires
> QUORUM, quorum on a large cluster is almost impossible to satisfy like that
> (imagine a single digest mismatch triggering a blocking read repair on a
> hundred nodes, and what that does to the various thread pools).
>
>
>
> --
Ben Bromhead
CTO | Instaclustr <https://www.instaclustr.com/>
+1 650 284 9692
Managed Cassandra / Spark on AWS, Azure and Softlayer
