Hi Luciano,
It is not a must: after a node is upgraded, all new SSTables will be written as
oa sstables. The old nb sstables will get compacted away eventually, but that
"eventually" could be a long time in the future.
So I would recommend running the sstable upgrade, even if you wait to
schedule it during a time of lower load on the system.
Regards
Paul
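For reference, a minimal sketch of what that sstable upgrade could look like on
one node (the keyspace name and job count are illustrative, not from this thread):
  # Flush memtables so everything is on disk first.
  nodetool flush
  # Rewrite any remaining old-format sstables; -j limits how many run in parallel.
  nodetool upgradesstables -j 2 my_keyspace
  # Watch the rewrite compactions as they run.
  nodetool compactionstats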
One thing I am not really sure about is whether upgrading sstables is really a
must after the Cassandra 5 upgrade.
Thanks
Luciano Greiner
On Sun, Mar 16, 2025 at 2:33 PM Paul Chandler wrote:
>
> Yes, that should sort it out.
>
> Regards
>
> Paul
> Sent from my iPhone
>
>
Great, thanks Scott.
On Sun, Mar 16, 2025 at 21:40, C. Scott Andreas <sc...@paradoxica.net> wrote:
> Hi Edi,
>
> Details are in NEWS.txt:
> https://github.com/apache/cassandra/blob/cassandra-5.0/NEWS.txt
>
> – Scott
>
> On Mar 16, 2025, at 11:37 AM, edi mari wrote:
>
>
>
> Sorry for jumping into the conversation, but I wanted to ask: is there a
> guide for upgrading Cassandra from v4 to v5?
> Edi
Sorry for jumping into the conversation, but I wanted to ask: is there a
guide for upgrading Cassandra from v4 to v5?
Edi
On Sun, Mar 16, 2025 at 19:33, Paul Chandler wrote:
> Yes, that should sort it out.
>
> Regards
>
> Paul
> Sent from my iPhone
>
> > On 16 Mar 2025, at 18:34, Luciano Greiner wrote:
Yes, that should sort it out.
Regards
Paul
Sent from my iPhone
> On 16 Mar 2025, at 18:34, Luciano Greiner wrote:
>
> Thank you Paul!
>
> So should I restart the nodes with UPGRADING mode and run the
> upgradesstables again?
>
> Thank you!
>
> Luciano Greiner
> (54) 996309845
>
>> On Sun, Mar 16, 2025 at 2:58 AM Paul Chandler wrote:
Thank you Paul!
So should I restart the nodes with UPGRADING mode and run the
upgradesstables again?
Thank you!
Luciano Greiner
(54) 996309845
On Sun, Mar 16, 2025 at 2:58 AM Paul Chandler wrote:
>
> Hi Luciano,
>
> It sounds like you could have the storage_compatibility_mode set to the
> default CASSANDRA_4 value, check this and change it to UPGRADING or NONE.
Hi Luciano,
It sounds like you could have storage_compatibility_mode set to the default
CASSANDRA_4 value; check this and change it to UPGRADING or NONE.
Full details can be found in the cassandra.yaml reference:
https://cassandra.apache.org/doc/latest/cassandra/managing/configuration/cass_yaml_file.html
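A quick way to check and change this, sketched with an illustrative config path
(adjust for how Cassandra is installed on your nodes):
  # See what the node is currently configured with.
  grep -n 'storage_compatibility_mode' /etc/cassandra/cassandra.yaml
  # Set it to the next stage, e.g.:
  #   storage_compatibility_mode: UPGRADING
  # then restart the node and re-run the sstable upgrade.
  sudo systemctl restart cassandra
  nodetool upgradesstables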
Hi,
We recently upgraded our clusters from 4.1.5 to 5.0.3 and now I’m
trying to migrate SSTable files to the new oa-* format, but it’s not
working as I expected.
What I Tried:
nodetool flush + upgradesstables → Completed quickly with success
messages, but no SSTables were rewritten.
nodetool upg
Yes, I would add the 6 new nodes, then decommission the 6 original nodes, then
upgrade to 5.0.
Remember to change the seed node values before you decommission the old nodes.
Thanks
Paul
> On 30 Jan 2025, at 14:42, Luciano Greiner wrote:
>
> Ok, so you mean I set up these new 6 nodes as 4.1.3, then just upgrade
> the software, correct?
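A rough sketch of that sequence once the six new nodes are up and the seed
lists have been updated (commands are illustrative):
  # On each old node, one at a time; decommission streams this node's data
  # to the remaining replicas before it leaves the ring.
  nodetool decommission
  # Confirm the node is gone before moving on to the next one.
  nodetool status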
Ok, so you mean I set up these new 6 nodes as 4.1.3, then just upgrade
the software, correct?
Thank you!
Luciano
On Thu, Jan 30, 2025 at 6:57 AM Paul Chandler wrote:
>
> Hi Luciano,
>
> The problem occurs due to Cassandra 5 making changes to the system tables, so
> the cluster will be in schema mismatch during the upgrade process, until all
> the nodes are on 5.0.
Hi Luciano,
The problem occurs due to Cassandra 5 making changes to the system tables, so
the cluster will be in schema mismatch during the upgrade process, until all
the nodes are on 5.0
Normally this would not be a problem, as the system tables are not replicated
anyway, but, as you are
Hello.
I am upgrading a small Cassandra 4.1.3 cluster (2 sites, 3 nodes each)
to Cassandra 5.
Given we're using an old CentOS release on our nodes, I decided to get new
nodes provisioned in the cluster (with version 5 on compatibility
mode), and then decommission the old nodes when all is completed.
On Wed, Dec 18, 2024 at 11:50 AM Paul Chandler wrote:
>
> This is all old history and has been fixed, so is not really what the
> question was about, however these old problems have a bad legacy in the
> memory of the people that matter. Hence the push back we have now.
>
I totally understand th
OK, it seems like I didn’t explain it too well, but yes, it is the rolling
restart 3 times as part of the upgrade that is causing the push back. My
message was a bit vague on the use cases because there are confidentiality
agreements in place, so I can’t share too much.
We have had problems in
On Wed, Dec 18, 2024 at 12:12 PM Jon Haddad wrote:
> I think we're talking about different things.
>
> > Yes, and Paul clarified that it wasn't (just) an issue of having to do
> rolling restarts, but the work involved in doing an upgrade. Were it only
> the case th
that repair isn’t required for
upgrade - or restart, but I could see how teams consider it required,
especially if they're doing low consistency writes with lots of DCs and relying
on repair-after-bounce for visibility (because hints may not keep up before
they time out, for example),
I think we're talking about different things.
> Yes, and Paul clarified that it wasn't (just) an issue of having to do
rolling restarts, but the work involved in doing an upgrade. Were it only
the case that the hardest part of doing an upgrade was the rolling
restart...
It's clear from discussion on this list that the current "storage_compatibility_mode" implementation and upgrade path for 5.0 is a source of real and legitimate user pain, and is
likely to result in many organizations slowing their adoption of the release. Would love to discuss
On Wed, Dec 18, 2024 at 11:43 AM Jon Haddad wrote:
> > We (Wikimedia) have had more (major) upgrades go wrong in some way, than
> right. Any significant upgrade is going to be weeks —if not months— in the
> making, with careful testing, a phased rollout, and a workable plan for
> rollback.
> We (Wikimedia) have had more (major) upgrades go wrong in some way, than
right. Any significant upgrade is going to be weeks —if not months— in the
making, with careful testing, a phased rollout, and a workable plan for
rollback. We'd never entertain doing more than one at a time, i
it like this?
> On 17 Dec 2024, at 22:12, Jon Haddad wrote:
>
> > Secondly there are some very large clusters involved, 1300+ nodes across
> > multiple physical datacenters, in this case any upgrades are only done out
> > of hours and only one datacenter per day. S
Hi Jeff, Repair is not a prerequisite for upgrading from 3.x to 4.x (but it's always recommended to run as a continuous process). Repair is not supported between nodes
running different major versions, so it should be disabled during the upgrade. There are quite a few fixes for hung r
We have similar issues with 3.x repairs, and run manually as well as with
Reaper. Can someone tell me, if I cannot get a table repaired because it is
locking up a node, is it still possible to upgrade to 4.0?
Jeff
From: Jon Haddad
Reply-To:
Date: Tuesday, December 17, 2024 at 2:20 PM
I strongly suggest moving to 4.0 and to set up Reaper. Managing repairs
yourself is a waste of time, and you're almost certainly not doing it
optimally.
Jon
On Tue, Dec 17, 2024 at 12:40 PM Miguel Santos-Lopez
wrote:
> We haven’t had the chance to upgrade to 4, let alone 5. Has ther
across
multiple physical datacenters, in this case any upgrades are only done out of
hours and only one datacenter per day. So a normal upgrade cycle will take
multiple weeks, and this one will take 3 times as long.
This is a very large organisation with some very fixed rules and processes, so
the
should be able to do them without there being any concern.
Jon
On 2024/12/17 16:01:06 Paul Chandler wrote:
> All,
>
> We are getting a lot of push back on the 3 stage process of going through the
> three compatibility modes to upgrade to Cassandra 5. This basically means 3
> rolling restarts of a cluster, which will be difficult for some of our large
> multi DC clusters.
All,
We are getting a lot of push back on the 3 stage process of going through the
three compatibility modes to upgrade to Cassandra 5. This basically means 3
rolling restarts of a cluster, which will be difficult for some of our large
multi DC clusters.
Having researched this, it looks like
Hi Team
For the Cassandra 5 upgrade, storage_compatibility_mode is mandated to be
used as follows:
- Do a rolling upgrade to 5.0 where 2038 will still be the limit. At this
point, the node won't write
anything incompatible with Cassandra 4.x, and you would still be able to
rollback to 4.x.
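Pieced together from this thread, the three stages look roughly like this (a
sketch of the idea, not an official procedure; each stage is its own rolling
restart of the whole cluster):
  # Stage 1: upgrade binaries to 5.0, keeping the default compatibility mode.
  #   storage_compatibility_mode: CASSANDRA_4
  # Stage 2: once every node runs 5.0, switch to the transitional mode.
  #   storage_compatibility_mode: UPGRADING
  # Stage 3: once every node is on UPGRADING, drop compatibility entirely.
  #   storage_compatibility_mode: NONE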
Found issue - num tokens was set incorrectly in my container. Upgrade
successful!
-Joe
On 11/5/2024 2:27 PM, Joe Obernberger wrote:
Hi all - getting an error trying to upgrade our 4.x cluster to 5. The
following message repeats over and over and then the pod crashes:
Heap dump creation on
Hi all - getting an error trying to upgrade our 4.x cluster to 5. The
following message repeats over and over and then the pod crashes:
Heap dump creation on uncaught exceptions is disabled.
DEBUG [MemtableFlushWriter:2] 2024-11-05 19:25:12,763
ColumnFamilyStore.java:1379
Upgrading from 3.11.x to 4.1.x is supported, yes. As the documentation you
reference mentions, it is not possible to downgrade from 4.x to 3.x.
Note that running repair during upgrades is not supported; please ensure it is
disabled before beginning the upgrade and re-enable after.
– Scott
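One way to sanity-check that nothing repair-related is still running before a
node is touched (illustrative commands, not an official checklist):
  # Validation compactions from an in-flight repair would show up here.
  nodetool compactionstats
  # Repair streaming sessions, if any, show up in netstats.
  nodetool netstats | grep -i repair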
Hi
Is a Cassandra upgrade from 3.11.x to 4.1.3 supported?
NEWS.txt has a general guideline that
Snapshotting is fast (especially if you have JNA installed) and takes
effectively zero disk space until you start compacting the live data
files again. Thus, best practice is to always snapshot before any upgrade.
To: user@cassandra.apache.org
Subject: Re: Upgrade from C* 3 to C* 4 per datacenter
Just a heads-up, but there have been issues (at least one) reported when
upgrading a multi-DC cluster from 3.x to 4.x when the cluster uses node-to-node
SSL/TLS encryption. This is largely attributed to the
that you clone, you need exactly
> the same number of nodes in your test cluster that you have in the
> respective data center of your production cluster.
>
> Once the cluster is cloned, you can test whatever you like (e.g. upgrade
> to C* 4, test operations in a mixed-version cluster, etc.).
cluster is cloned, you can test whatever you like (e.g. upgrade to C*
4, test operations in a mixed-version cluster, etc.).
Our experience with the upgrade from C* 3.11 to C* 4.1 on the test cluster was
quite smooth. The only problem that we saw was that when later adding a second
data center to the
Hello Jeff et al,
Thanks a lot for your valuable info. Your comment covers all my queries.
BR
MK
From: Jeff Jirsa
Sent: October 26, 2023 15:48
To: user@cassandra.apache.org
Cc: Michalis Kotsiouros (EXT)
Subject: Re: Upgrade from C* 3 to C* 4 per datacenter
On Oct 26, 2023, at 12:32 AM
> On Oct 26, 2023, at 12:32 AM, Michalis Kotsiouros (EXT) via user
> wrote:
>
>
> Hello Cassandra community,
> We are trying to upgrade our systems from Cassandra 3 to Cassandra 4. We plan
> to do this per data center.
> During the upgrade, a cluster with mixed SW levels is expected.
Hello Scott,
Thanks a lot for the immediate answer.
We use a semi automated procedure to do the upgrade of the SW in our systems
which is done per datacenter.
Our limitation is that if we want to rollback we need to rollback the Cassandra
nodes from the whole datacenter.
May I return to the
The recommended approach to upgrading is to perform a replica-safe rolling restart of instances in
each datacenter, one datacenter at a time. > In case of an upgrade failure, would it be possible
to remove the data center from the cluster, restore the datacenter to C*3 SW and add it back
Hello Cassandra community,
We are trying to upgrade our systems from Cassandra 3 to Cassandra 4. We plan
to do this per data center.
During the upgrade, a cluster with mixed SW levels is expected. At this point
is it possible to perform topology changes?
In case of an upgrade failure, would it
that updating the
> Java version is not a simple thing but wanted to throw that little
> Interesting bit of info that I had experienced.
>
> Sent from my iPhone
>
> On Aug 16, 2023, at 1:28 PM, vaibhav khedkar wrote:
>
>
> Thanks Patrick,
>
>
> We do have plans
Thanks Patrick,
We do have plans to upgrade to *java 11* eventually but we will go through
internal testing and would also need some time given the size of our
infrastructure.
Is it safe to assume that the issue exists in the combination of upgrades
from 3.11.x to 4.0.x *and* running on JAVA 8
I've actually noticed this as well on a few clusters I deal with but after
upgrading Cassandra from 3.11 to 4 we also changed to use Java 11 shortly
after the cluster upgrade. After I moved to Java 11 I have not experienced
a problem.
On Wed, Aug 16, 2023 at 12:12 PM vaibhav khedkar
for this for discussion / investigation is probably a good next step.
Thanks,
– Scott
On Aug 16, 2023, at 9:28 AM, vaibhav khedkar wrote:
> Hi everyone,
> We recently upgraded our fleet of ~2500 Cassandra instances from 3.11.9 to
> 4.0.5.
> After the upgrade, we are seeing a unique issue where the compacted
Hi everyone,
We recently upgraded our fleet of ~2500 Cassandra instances from 3.11.9 to
4.0.5.
After the upgrade, we are seeing a unique issue where the compacted
SSTables's file descriptors are still present and are never cleared. This
is causing false disk alerts. We have to restart nodes
You can check in your lower environment.
On Fri, 11 Aug, 2023, 06:25 Surbhi Gupta, wrote:
> Thanks,
>
> I am looking to upgrade to 4.1.x.
> Please advise.
>
> Thanks
> Surbhi
>
> On Thu, Aug 10, 2023 at 5:39 PM MyWorld wrote:
>
>> Though it's re
Thanks,
I am looking to upgrade to 4.1.x.
Please advise.
Thanks
Surbhi
On Thu, Aug 10, 2023 at 5:39 PM MyWorld wrote:
> Though it's recommended to upgrade to the latest 3.11.x version and then to
> 4.x, upgrading directly won't be a problem either. Just check the
Though it's recommended to upgrade to the latest 3.11.x version and then to
4.x, upgrading directly won't be a problem either. Just check the
release notes.
However, for production I would recommend going to the latest stable 4.0.x
version.
Regards
Ashish
On Sat, 8 Jul, 2023, 05:44 Surbhi Gupta wrote:
allow for the time to upgrade DC1, wait, upgrade DC2 and then complete a
repair, or you may end up with resurrected data.
You also must ensure you do not enable any new features on new version
nodes in a mixed version cluster. You may enable new features after all
nodes in the cluster are upgraded.
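A quick, illustrative way to confirm the whole cluster is on the new version
before enabling anything new:
  # Every endpoint should report the same RELEASE_VERSION.
  nodetool gossipinfo | grep -E 'RELEASE_VERSION|^/'
  # Schema versions should converge to a single value once all nodes are upgraded.
  nodetool describecluster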
Assuming "do it in one go" means a rolling upgrade from 3.11.5 to 4.1.2
skipping all version numbers between these two, the answer is yes, you
can "do it in one go".
On 08/07/2023 01:14, Surbhi Gupta wrote:
Hi,
We have to upgrade from 3.11.5 to 4.1.x.
Can we do it in one go?
Yes, repairs are prohibited in a mixed-version cluster. If you want to monitor,
please disable repairs until the complete upgrade is finished.
On Sat, Jul 8, 2023, 01:21 Runtian Liu wrote:
> Hi,
>
> We are upgrading our Cassandra clusters from 3.0.27 to 4.0.6 and we
> observed some erro
Hi,
We have to upgrade from 3.11.5 to 4.1.x.
Can we do it in one go?
Or do we have to go to an intermediate version first?
Thanks
Surbhi
Hi,
We are upgrading our Cassandra clusters from 3.0.27 to 4.0.6 and we
observed an error related to repair: j.l.IllegalArgumentException:
Unknown verb id 32
We have two datacenters for each Cassandra cluster and when we are doing an
upgrade, we want to upgrade 1 datacenter first and monitor
You should take a snapshot before starting the upgrade process. You
cannot achieve a snapshot of "the most current situation" in a live
cluster anyway, as data are constantly written to the cluster even after
a node is stopped for upgrading. So you've got to accept the outdated snapshot.
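For what it's worth, a per-node order that matches this advice might look like
the following (the service name is illustrative):
  # Take the snapshot while the node is still live.
  nodetool snapshot -t pre-4.0.6-upgrade
  # Flush memtables and stop accepting traffic, then shut the node down.
  nodetool drain
  sudo systemctl stop cassandra
  # Upgrade the binaries/package, then bring the node back.
  sudo systemctl start cassandra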
Hi all,
On a test setup I am looking to do an upgrade from 4.0.3 to 4.0.6.
Would one typically snapshot before DRAIN or after?
If DRAIN after snapshot, I would have to restart the service to snapshot and
would this not then be accepting new operations/data?
If DRAIN before snapshot, would
* Set the protocol version explicitly in your application.
* Ensure that the list of initial contact points contains only hosts
with the oldest Cassandra version or protocol version.
Server side:
* Do not enable new features.
* Do not run nodetool repair.
* During the upgrade, do not
Hi all,
What (if any) problems could we expect from an upgrade?
I.e., if we have 12 nodes and I upgrade them one at a time, some will be on the
new version and others on the old.
Assuming that daily operations continue during this process, could problems
occur with streaming replica from one
Groovy. Thanks.
From: Erick Ramirez
Sent: Wednesday, October 12, 2022 4:08 PM
To: user@cassandra.apache.org
Subject: Re: Upgrade
That's correct. Cheers!
That's correct. Cheers!
On every node?
From: Erick Ramirez
Sent: Wednesday, October 12, 2022 3:20 PM
To: user@cassandra.apache.org
Subject: Re: Upgrade
It's just a minor patch upgrade so all you're really upgrading is the binaries.
In any case, switching off replication is not the recommended approach.
It's just a minor patch upgrade so all you're really upgrading is the
binaries. In any case, switching off replication is not the recommended
approach. The recommended pre-upgrade procedure is to take backups of the
data on your nodes with nodetool snapshot. Cheers!
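A minimal sketch of that pre-upgrade backup, with an illustrative tag name:
  # Snapshot all keyspaces on the node; a tag makes cleanup easier later.
  nodetool snapshot -t pre-4.0.6
  # Confirm the snapshot exists.
  nodetool listsnapshots
  # Once the upgrade has been verified, reclaim the space.
  nodetool clearsnapshot -t pre-4.0.6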
Hi all,
Looking at upgrading our install from 4.0.3 to 4.0.6.
We have replication from one datacentre to a backup site. Other than modifying
the replication config from dc1 to dc2, is there a simple method or command to
stop replication for a period?
The idea being that, should something go a
n only when all the nodes in the cluster upgraded
>> to 4.0.x?
>>
>> On Tue, Aug 16, 2022 at 2:12 AM Erick Ramirez
>> wrote:
>>
>>> As convenient as it is, there are a few caveats and it isn't a silver
>>> bullet. The automatic feature will only
> compactions scheduled. Also, it is going to be single-threaded by default
> so it will take a while to get through all the sstables on dense nodes.
>
> In contrast, you'll have a bit more control if you manually upgrade the
> sstables. For example, you can schedule the upgrade during low traffic
> periods so reads are not competing with compactions for IO. Cheers!
Hello,
I am evaluating the upgrade from 3.11.x to 4.0.x and as per CASSANDRA-14197
<https://issues.apache.org/jira/browse/CASSANDRA-14197> we don't need to
run upgradesstables any more. We have tested this in a test environment and
see that setting "-Dcassandra.automatic_sstable_upgrade"
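For context, a sketch of the knobs behind CASSANDRA-14197 as I remember them
from the 4.0 cassandra.yaml (worth double-checking against your version; the
-D form quoted above would be the same setting passed as a system property):
  # cassandra.yaml (4.0+), defaults shown:
  #   automatic_sstable_upgrade: false
  #   max_concurrent_automatic_sstable_upgrades: 1
  # When enabled, old-format sstables are rewritten in the background
  # whenever the compaction executor has spare capacity.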
Yeah, we have a fork of Cassandra with custom patches, and a fork of dtest
with some additional custom tests, so we will have to upgrade dtest as
well.
Is there any specific tag of dtest we should use, or is the latest trunk
fine to test against 3.0.27?
Jaydeep
On Mon, Jun 13, 2022 at 10:51 PM C
If you have a fork of Cassandra with custom patches and build/execute the dtest
suite as part of qualification, you’d want to upgrade that as well.
Note that in more recent 3.0.x releases, the project also introduced in-JVM
dtests. This is a new suite that serves a similar purpose to the Python
Thanks Jeff and Scott for valuable feedback!
One more question, do we have to upgrade the dTest repo if we go to 3.0.27,
or the one we have currently already working with 3.0.14 should continue to
work fine?
Jaydeep
On Mon, Jun 13, 2022 at 10:25 PM C. Scott Andreas
wrote:
> Thank you
Thank you for reaching out, and for planning the upgrade!
Upgrading from 3.0.14 to 3.0.27 would be best, followed by upgrading to 4.0.4.
3.0.14 contains a number of serious bugs that are resolved in more recent 3.0.x
releases (3.0.19+ are generally good/safe). Upgrading to 3.0.27 will put you
The versions with caveats should all be enumerated in
https://github.com/apache/cassandra/blob/cassandra-3.0/NEWS.txt
The biggest caveat was 3.0.14 (which had the fix for CASSANDRA-13004),
which you're already on.
Personally, I'd qualify exactly one upgrade, and rather than doing 3
Hi,
I am running Cassandra version 3.0.14 at scale on thousands of nodes. I am
planning to do a minor version upgrade from 3.0.14 to 3.0.26 in a safe
manner. My eventual goal is to upgrade from 3.0.26 to a major release 4.0.
As you know, there are multiple minor releases between 3.0.14 and
>
> Thank you for that clarification, Erick. So do I understand correctly,
> that because of the upgrade the host id changed and therefore differs from
> the ones in the sstables where the old host id is still sitting until a
> sstable upgrade?
>
Not quite. :) The host ID wil
are moved/copied to other nodes
> (CASSANDRA-16619). That's why the message is logged at WARN level instead
> of ERROR. Cheers!
>
Thank you for that clarification, Erick. So do I understand correctly, that
because of the upgrade the host id changed and therefore differs from the
ones
It's expected and is nothing to worry about. From C* 3.0.25/3.11.11/4.0,
the SSTables now contain the host ID on which they were created to prevent
loss of commitlog data when SSTables are moved/copied to other nodes
(CASSANDRA-16619). That's why the message is logged at WARN level instead
of ERROR
Hi,
I just upgraded my one-node Cassandra from 3.11.6 to 4.0.3. Now every time
it starts, it produces warning messages in the log like:
WARN [main] 2022-01-18 10:55:00,696 CommitLogReplayer.java:305 - Origin of
2 sstables is unknown or doesn't match the local node; commitLogIntervals
f
The general advice is to always upgrade to 3.11.latest before upgrading to
4.0.latest. It is possible to upgrade from an older 3.11 version but you'll
probably run into known issues already fixed in the latest version.
Also, we recommend you run upgradesstables BEFORE upgrading to 4.0.latest.
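If it helps, one illustrative way to spot leftover pre-3.0 sstables, which as
far as I know 4.0 can no longer read (the data path is the default and may
differ on your install):
  # 2.x-era sstable versions are ka / la / lb; anything matching here still
  # needs upgradesstables on the 3.11 node before moving to 4.0.
  find /var/lib/cassandra/data \( -name '*-ka-*-Data.db' -o -name 'l[ab]-*-big-Data.db' \) | head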
You can see upgrading instructions here
https://github.com/apache/cassandra/blob/cassandra-4.0.2/NEWS.txt.
On Fri, Feb 11, 2022 at 2:52 AM Abdul Patel wrote:
> Hi
> apart from the standard upgrade process, is there anything specific that needs
> to be handled separately for this upgrade process?
>
Make sure you go through all the instructions in
https://github.com/apache/cassandra/blob/trunk/NEWS.txt. It's also highly
recommended that you upgrade to the latest 3.0.x or 3.11.x version before
upgrading to 4.0.
Generally there are no changes required on the client side apart from
settin
Hi
Apart from the standard upgrade process, is there anything specific that needs
to be handled separately for this upgrade process?
Any changes needed on the client side w.r.t. drivers?
> we had an awful performance/throughput experience with 3.x coming from 2.1.
> 3.11 is simply a memory hog, if you are using batch statements on the client
> side. If so, you are likely affected by
> https://issues.apache.org/jira/browse/CASSANDRA-16201
>
Confirming what Thomas writes, hea
From: Leon Zaruvinsky
Sent: Wednesday, October 28, 2020 5:21 AM
To: user@cassandra.apache.org
Subject: Re: GC pauses way up after single node Cassandra 2.2 -> 3.11 binary
upgrade
Our JVM options are unchanged between 2.2 and 3.11
For the sake of clarity, do you mean:
(a) you
cordAlways
-XX:+CMSClassUnloadingEnabled
> The distinction is important because at the moment, you need to go through
> a process of elimination to identify the cause.
>
>
>> Read throughput (rate, bytes read/range scanned, etc.) seems fairly
>> consistent before and after the
is important because at the moment, you need to go through
a process of elimination to identify the cause.
> Read throughput (rate, bytes read/range scanned, etc.) seems fairly
> consistent before and after the upgrade across all nodes.
>
What I was trying to get at is whether the u
Thanks Erick.
Our JVM options are unchanged between 2.2 and 3.11, and we have disk access
mode set to standard. Generally we’ve maintained all configuration between
the two versions.
Read throughput (rate, bytes read/range scanned, etc.) seems fairly
consistent before and after the upgrade
I haven't seen this specific behaviour in the past but things that I would
look at are:
- JVM options which differ between 3.11 defaults and what you have
configured in 2.2
- review your monitoring and check read throughput on the upgraded node
as compared to 2.2 nodes
- possibly no
On Wed, 28 Oct 2020 at 14:41, Rich Hawley wrote:
> unsubscribe
>
You need to email user-unsubscr...@cassandra.apache.org to unsubscribe from
the list. Cheers!