On Fri, Jun 28, 2019 at 8:37 AM Ayub M wrote:
> Hello, I have a cluster with 3 nodes - say cluster1 - on AWS EC2 instances.
> The cluster is up and running, and I took a snapshot of the keyspace volumes.
>
> Now I want to restore a few tables/keyspaces from the snapshot volumes, so I
> created another cluste
Hi everyone,
I'm completely new to Cassandra, so please don't roast me for asking
obvious stuff.
I managed to set up one Cassandra node and successfully entered some data
into it. Next, I installed a second node, which connects to that first one
via port 7000 and syncs all that data from it.
I would start by checking this page:
http://cassandra.apache.org/doc/latest/operating/security.html
Then move to this:
https://thelastpickle.com/blog/2015/09/30/hardening-cassandra-step-by-step-part-1-server-to-server.html
Cheers,
Hannu
> Marc Richter kirjoitti 28.6.2019 kello 16.55:
>
> Hi ever
Hi all …
The DataStax & Apache docs are clear: run `nodetool repair` after you ALTER a
keyspace to change its replication factor (RF) or replication strategy.
However, the details are all over the place as to what type of repair to run
and on which nodes. None of the above doc authorities are clear, and what you
find on the int
This sounds like a bad query or a large partition. If a large partition is
requested from multiple nodes (because of the consistency level), it will put
pressure on all those replica nodes. Then, as the cluster tries to absorb the
rest of the load, the other nodes can get overwhelmed, too.
Look at cfstats to
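Checking cfstats across the cluster can be scripted; this is only a sketch with hypothetical hostnames and a hypothetical keyspace/table (it prints the commands rather than running them, since nodetool needs a live node). In the cfstats output, "Compacted partition maximum bytes" is the field that flags oversized partitions.

```shell
# Sketch: print the cfstats command to run against each node of a
# hypothetical 3-node cluster for a hypothetical table. Inspect
# "Compacted partition maximum bytes" in the output for large partitions.
for node in cassandra-1 cassandra-2 cassandra-3; do
    echo "ssh $node nodetool cfstats my_keyspace.my_table"
done
```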
For just changing RF:
You only need to repair the full token range - how you do that is up to
you. Running `repair -pr -full` on each node will do that. Running `repair
-full` will do it multiple times, so it's more work, but technically
correct. The caveat that few people actually appreciate about
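The `-pr -full` approach above can be sketched as a loop over the cluster; the hostnames are hypothetical, and the commands are printed rather than executed:

```shell
# -pr  : repair only the primary token ranges each node owns, so running it
#        on every node covers the full ring exactly once
# -full: force a full (non-incremental) repair
for node in cassandra-1 cassandra-2 cassandra-3; do
    echo "ssh $node nodetool repair -pr -full"
done
```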
Yep - not to mention the increased complexity and overhead of going from
ONE to QUORUM, or the increased cost of QUORUM in RF=5 vs RF=3.
If you're in a cloud provider, I've found you're almost always better off
adding a new DC with a higher RF, assuming you're on NTS like Jeff
mentioned.
On Fri, Jun 28, 2019 at 3:57 PM Marc Richter wrote:
>
> How is this dealt with in Cassandra? Is setting up firewalls the only
> way to allow only some nodes to connect to the ports 7000/7001?
>
Hi,
You can set

server_encryption_options:
    internode_encryption: all
    ...

and distribute the
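For reference, the surrounding cassandra.yaml block looks roughly like this; the keystore/truststore paths and passwords are placeholders, so check your own deployment's values:

```yaml
# cassandra.yaml (sketch; paths and passwords are placeholders)
server_encryption_options:
    internode_encryption: all        # encrypt all node-to-node traffic
    keystore: /etc/cassandra/conf/.keystore
    keystore_password: changeme
    truststore: /etc/cassandra/conf/.truststore
    truststore_password: changeme
    require_client_auth: true        # reject peers without a trusted cert
```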
On Fri, Jun 28, 2019 at 11:29 PM Jeff Jirsa wrote:
> you often have to run repair after each increment - going from 3 -> 5
> means 3 -> 4, repair, 4 -> 5 - just going 3 -> 5 will violate consistency
> guarantees, and is technically unsafe.
>
Jeff,
How is going from 3 -> 4 *not violating* consi
If you’re at RF=3 and read/write at QUORUM, you’ll have full visibility of all
data if you switch to RF=4 and continue reading at QUORUM, because quorum of 4
is 3, so you’re guaranteed to overlap with at least one of the two nodes that
got all earlier writes
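The overlap argument above can be checked mechanically. A quick sketch (replica indices are illustrative): quorum writes at RF=3 land on at least 2 of the 3 old replicas, and a quorum read at RF=4 touches 3 of 4 replicas, so the two sets must intersect; a quorum read at RF=5 touches only 3 of 5, which can miss both write replicas.

```python
from itertools import combinations

def quorum(rf):
    # Cassandra's quorum: floor(rf / 2) + 1
    return rf // 2 + 1

# Writes at RF=3/QUORUM were acknowledged by at least 2 of the 3 old replicas.
old_write_sets = [set(c) for c in combinations(range(3), quorum(3))]

# After ALTER to RF=4, a QUORUM read touches 3 of the 4 replicas
# (the new 4th replica may still be empty before repair).
reads_rf4 = [set(c) for c in combinations(range(4), quorum(4))]
assert all(w & r for w in old_write_sets for r in reads_rf4)  # always overlap

# Jumping straight from RF=3 to RF=5: a QUORUM read of 3 of the 5 replicas
# can miss both nodes that hold a given write, e.g. read {2,3,4} vs write {0,1}.
reads_rf5 = [set(c) for c in combinations(range(5), quorum(5))]
assert any(not (w & r) for w in old_write_sets for r in reads_rf5)
```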
Going from 3 to 4 to 5 requires a re
To Sir Oleksandr :
Thank you!
Sincerely
Nimbuslin(Lin JiaXin)
Mobile: 0086 180 5986 1565
Mail: jiaxin...@live.com
From: Oleksandr Shulgin
Sent: Monday, June 17, 2019 7:19 AM
To: User
Subject: Re: How can I check cassandra cluster has a real working f
To Sir Oleksandr :
Thank you very much for your careful teaching. At the beginning, I copied the
system_auth keyspace and tables' SQL grammar
and misunderstood the HA function of Cassandra; now I know Cassandra's HA is
like Hadoop or Greenplum.
And I will check the 3rd answer in JConsole latte