Hi,
given a cluster with RF=3 and CL=LOCAL_ONE, where the application is deleting data:
what happens if the nodes are set up with JBOD and one disk fails? Do I get
consistent results while the broken drive is replaced and a nodetool repair is
running on the node with the replaced drive?
Kind regards,
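(For context on the scenario above, a minimal sketch of the drive-replacement sequence being described; the service manager, mount path, and the keyspace name "my_keyspace" are placeholder assumptions, and the right procedure depends on the Cassandra version, as the replies further down point out.)

    # On the node whose JBOD drive failed (paths and names are examples only)
    sudo systemctl stop cassandra          # or however the service is managed
    # ... physically replace the drive and remount it at the same
    #     data_file_directories path as before ...
    sudo systemctl start cassandra         # node comes back with one empty data directory
    nodetool repair -full my_keyspace      # stream missing data (and tombstones) back from
                                           # the other replicas; -full on 2.2+, older
                                           # versions repair fully by default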
I am inserting into Cassandra with a simple INSERT query plus a counter UPDATE
query for every input record, and the input rate is very high. I've configured the
update query with idempotent = true (no such config on the insert query; the default
is false, I believe). I've seen multiple records having rows in the counter table (ide…
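(To make the driver side concrete, a minimal sketch with the DataStax Python driver; the keyspace, table, and column names are invented for illustration and are not the poster's schema. One caveat worth noting: counter increments are not safe to retry, so they are normally left non-idempotent.)

    # Minimal sketch, cassandra-driver (Python); names below are placeholders.
    from cassandra.cluster import Cluster
    from cassandra import ConsistencyLevel
    from cassandra.query import SimpleStatement

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # Plain INSERTs are safe to retry, so marking them idempotent lets the
    # driver retry them after ambiguous failures without creating duplicates.
    insert = SimpleStatement(
        "INSERT INTO events (id, payload) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.LOCAL_ONE,
        is_idempotent=True)

    # Counter UPDATEs are increments: a retried increment can be applied twice,
    # so they are usually left non-idempotent (the driver default).
    bump = SimpleStatement(
        "UPDATE event_counts SET total = total + 1 WHERE id = %s",
        consistency_level=ConsistencyLevel.LOCAL_ONE,
        is_idempotent=False)

    session.execute(insert, ('abc', 'data'))
    session.execute(bump, ('abc',))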
Scenario: Cassandra 2.2.7, 3 nodes, RF=3 keyspace.
1. Truncate a table.
2. More than 24 hours later… FileCacheService is still reporting cold
readers for sstables of the truncated data on nodes 2 and 3, but not on node 1.
3. The output of nodetool compactionstats shows stuck compacti…
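(A short sketch of how one might inspect, and if necessary abort, the stuck compactions on the affected nodes; these are standard nodetool commands, nothing here is specific to the poster's cluster.)

    nodetool compactionstats     # list in-flight compactions and how far along they are
    nodetool tpstats             # check CompactionExecutor pending/blocked counts
    nodetool stop COMPACTION     # ask the node to abort its currently running compactions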
Currently our Cassandra prod is an 18-node, 3-DC cluster and the application does
55 million reads per day; we want to add load and take it to 90 million reads
per day. They need a guesstimate of the resources we would need to bump, without
testing. Off the top of my head, we could increase the heap and the native
transport value.
Not a great idea to make config changes without testing. For a lot of
changes, however, you can make the change on one node and measure whether there
is an improvement.
You'd probably be best to add nodes (double should be sufficient), do
tuning and testing afterwards, and then decommission a few nodes i…
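(A sketch of the two knobs mentioned in the original post, tried on a single canary node as suggested above; the values and file locations are examples from a typical package install, not recommendations.)

    # conf/cassandra-env.sh: pin the heap explicitly instead of the auto-calculated size
    MAX_HEAP_SIZE="16G"

    # conf/cassandra.yaml: raise the cap on concurrent native-protocol request threads
    native_transport_max_threads: 256

    # Restart the canary, compare read latencies, and only then roll the change out
    nodetool drain && sudo systemctl restart cassandra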
How can we improve data load performance?
You have to explain what you mean by "JBOD". All in one large vdisk?
Separate drives?
At the end of the day, if a device fails in such a way that the data housed on
that device (or array) is no longer available, that HDFS storage is marked
down. HDFS then needs to create a third replica. Various timers…
Bro, please explain your question as much as possible.
This is not a single-line Q&A session where we will be able to understand your
in-depth queries in a single line.
For a better and more suitable reply, please ask a question and elaborate on what
steps you took and what issue you are getti…
Depends on version
For versions without the fix from CASSANDRA-6696, the only safe option on
single disk failure is to stop and replace the whole instance - this is
important because in older versions of Cassandra, you could have data in one
sstable, a tombstone shadowing it on another disk, an…
If that disk had important data in the system tables, however, you might have
some trouble and need to replace the entire instance anyway.
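(For completeness, a sketch of the whole-instance replacement being described; the dead node's IP address is a placeholder, and the exact flag name varies slightly across versions.)

    # On a fresh host with the same cluster_name, seeds, etc. in cassandra.yaml,
    # start Cassandra with the replace flag so it takes over the dead node's
    # token ranges (added to JVM_OPTS in cassandra-env.sh; 10.0.0.12 is a placeholder):
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.12"
    # Once bootstrap/streaming finishes, confirm the old node has been replaced:
    nodetool status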
On 15 August 2018 at 12:20, Jeff Jirsa wrote:
> Depends on version
>
> For versions without the fix from CASSANDRA-6696, the only safe option on
> single disk…