Hi all,
I am perf-testing Cassandra over a long run on a cluster of 8 nodes, and I
noticed that the service rate drops.
Most of the nodes sit between 40-65% CPU, however one of the nodes has
higher CPU and has also started performing a lot of read IOPS, as seen in the
attached image (green is read IOPS).
Hi,
We have a pretty outdated Cassandra cluster running version 2.0.x. Instead
of doing step-by-step upgrades (2.0 -> 2.1, 2.1 -> 2.2, 2.2 -> 3.0, 3.0 ->
3.11.x), would it be possible to add new nodes running a recent version (say
3.11.x) and start decommissioning the old ones until we have a cluster
running only the new version?
Hi Joel,
No it's not supported. C*2.0 can't stream data to C*3.11.
Do the upgrade 2.0 -> 2.1.20; then you'll be able to upgrade to 3.11.3, i.e.
2.1.20 -> 3.11.3. You can use 3.0.17 as an intermediate step (I would),
but don't upgrade to 2.2. Also make sure to read carefully
https://g
Also, you didn't mention which C*2.0 version you're using, but prior to
upgrading to 2.1.20, make sure you're on the latest 2.0 - or at least >= 2.0.7
On Friday, August 3, 2018 at 13:03:39 UTC+2, Romain Hardouin
wrote:
Thank you for your replies! We're at 2.0.17.
On Fri, Aug 3, 2018 at 14:34, Romain Hardouin
wrote:
Probably compaction.
Cassandra data files are immutable.
The write path first appends to a commitlog, then puts the data into the
memtable. When the memtable hits a threshold, it's flushed to a data file on
disk (let's call the first one "1", the second "2", and so on).
Over time we build up multiple data files, and compaction merges them back
into fewer, larger files.
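The write/flush/compact cycle above can be sketched as a toy LSM store. This is a simplified illustration under my own naming (the `MiniLSM` class and its thresholds are made up), not Cassandra's actual implementation:

```python
# Toy sketch of an LSM-style write path: commitlog append, memtable,
# flush to immutable "data files", and a merge-style compaction.
# Simplified illustration only - NOT Cassandra's real implementation.

class MiniLSM:
    def __init__(self, memtable_limit=3):
        self.commitlog = []               # append-only durability log
        self.memtable = {}                # in-memory structure (a dict here)
        self.data_files = []              # immutable flushed files: "1", "2", ...
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.commitlog.append((key, value))   # 1. append to commitlog
        self.memtable[key] = value            # 2. update memtable
        if len(self.memtable) >= self.memtable_limit:
            self.flush()                      # 3. flush when threshold hit

    def flush(self):
        # The memtable becomes an immutable on-disk data file.
        self.data_files.append(dict(self.memtable))
        self.memtable = {}

    def get(self, key):
        # Check the memtable first, then data files from newest to oldest.
        if key in self.memtable:
            return self.memtable[key]
        for df in reversed(self.data_files):
            if key in df:
                return df[key]
        return None

    def compact(self):
        # Merge all data files into one; newer files win on key conflicts.
        merged = {}
        for df in self.data_files:
            merged.update(df)
        self.data_files = [merged]

db = MiniLSM()
for i in range(7):
    db.put(f"k{i}", i)
print(len(db.data_files))   # 2 - six writes flushed, one still in the memtable
db.compact()
print(len(db.data_files))   # 1 - after compaction
```

Compaction is read- and write-heavy by nature (it reads several files and writes a merged one), which is why it is a usual suspect when one node shows elevated read IOPS.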
I looked at the compaction history on the affected node during the affected
period, and it doesn't look like compaction is the cause.
The number of compactions and the amount of work are fairly similar between
the affected and unaffected periods.
*Not affected time*
[root@cassandra7 ~]# nodetool compactionhistory | grep 02T22
fda43ca0-9696-11e8-8efb-25b020ed0402 demo
I wonder if you are building up tombstones with the deletes. Can you share your
data model? Are the deleted rows using the same partition key as new rows? Any
warnings in your system.log for reading through too many tombstones?
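To illustrate the concern above: in an LSM store a delete writes a tombstone marker, and reads over that partition must scan past tombstones until compaction purges them (in Cassandra, only after gc_grace_seconds). A toy sketch of why deletes sharing a partition with live rows make reads expensive (my own simplified model, not Cassandra code; the helper names are made up):

```python
# Toy illustration of tombstone overhead on reads. A delete leaves a
# tombstone marker; a read of the partition must step over every marker
# to find the live rows. Simplified model - NOT real Cassandra code.

TOMBSTONE = object()   # sentinel standing in for a delete marker

def read_live_rows(partition):
    """Scan a partition's cells, skipping tombstones; return the live rows
    and how many tombstones the read had to step over."""
    live, scanned_tombstones = [], 0
    for clustering_key, value in partition:
        if value is TOMBSTONE:
            scanned_tombstones += 1
        else:
            live.append((clustering_key, value))
    return live, scanned_tombstones

# Same partition key reused: many old rows deleted, a few new rows inserted.
partition = [(i, TOMBSTONE) for i in range(1000)]            # 1000 deleted rows
partition += [(i, f"row-{i}") for i in range(1000, 1005)]    # 5 live rows

live, tombstones = read_live_rows(partition)
print(len(live), tombstones)   # 5 1000
```

This is the pattern behind the "read N tombstone cells" warnings in system.log: returning 5 rows required scanning 1000 dead ones.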
Sean Durity
From: Mihai Stanescu
Sent: Friday, August 03, 2018 12