ed in 4.1.1.
On Sep 11, 2023, at 2:09 PM, Bowen Song via user wrote:
*Description*
When adding a new node to an existing cluster, the new node bootstrapping fails with the "io.netty.channel.unix.Errors$NativeIoException: writeAddress(..) failed: Connection timed out" error from the streaming source node. Resuming the bootstrap with "nodetool bootstrap resume"…
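A minimal sketch of the resume step referenced above, assuming the bootstrap was interrupted and the joining node is still running (Cassandra 2.2+):

# run on the joining node after the streaming failure
nodetool bootstrap resume    # restarts only the failed/remaining streaming sessions
nodetool netstats            # watch the remaining streams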
…so you can upgrade simply by rev'ing the version and performing a rolling restart of the instances.
– Scott

On Mar 1, 2023, at 2:43 PM, Surbhi Gupta wrote:
Hi Cassandra Community,
We have to expand our cluster, and when I tried to add the first node and it was bootstrapping, I noticed an error like the one below in the system.log, but the bootstrap process was successful.
We are on 3.11.5.
ERROR [MutationStage-7] 2023-03-01 07…
…started Cassandra
1) rsync Cassandra data from OLD node to NEW (1.2TB)
2) Shutdown OLD node
3) rsync any remaining Cassandra data from OLD to NEW
4) Startup NEW node for the first time and have it take the place of the OLD node in the cluster
The goal here is to eliminate bootstrapping (streaming), because it's a 1 for 1 swap and we can easily rsync all of…
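A rough shell sketch of the swap described above. Host names, data paths and the init system are assumptions, and it assumes NEW reuses OLD's IP and gets OLD's tokens/host ID by copying the entire data directory (including the system keyspaces), so no streaming is needed:

# on OLD: first pass while Cassandra is still running (long copy, data keeps changing)
rsync -aHvP /var/lib/cassandra/ new-host:/var/lib/cassandra/
# stop Cassandra on OLD so files stop changing
sudo systemctl stop cassandra
# short catch-up pass to pick up anything written since the first pass
rsync -aHvP --delete /var/lib/cassandra/ new-host:/var/lib/cassandra/
# start Cassandra on NEW; it comes up with OLD's data, tokens and host ID
ssh new-host 'sudo systemctl start cassandra'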
The error you're seeing is specific to DSE so your best course of action is
to log a ticket with DataStax Support (https://support.datastax.com).
Cheers!
org.apache.cassandra.exceptions.IsBootstrappingException: Cannot read from a bootstrapping node
        at org.apache.cassandra.service.StorageProxy.checkNotBootstrappingOrSystemQuery(StorageProxy.java:1295)
        at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1306)
        at …
The process is almost the same as bootstrapping.
The leaving node's state transitions from NORMAL to LEAVING and finally to LEFT.
It waits for the ring delay as part of each state transition in order to propagate the change to the entire cluster.
Pending ranges are updated.
In the case of leaving, there will be…
…request to node A during this time? Is it possible for that to happen?
Thanks
Han
On Wed, Jan 27, 2021 at 5:48 PM Yifan Cai wrote:
Your thoughts regarding Gossip are correct. There could be a time that
nodes in the cluster hold different views of the ring locally.
In the case of bootstrapping,
1. The joining node updates its status to BOOT before streaming data and
waits for a certain delay in order to populate the update in
Check out `StorageService#joinTokenRing()` for the details.

> I'm particularly trying to understand the fault-tolerant part of updating Token Ring state on every node

The new node only joins the ring (updates the ring state) when the data streaming (bootstrapping) is successful. Otherwise, the existing ring state is unchanged.
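If you want to see the ring state each node currently holds (the view being discussed above), these read-only commands are handy; a sketch, run against any node:

nodetool status        # per-node state (UN/UJ) and ownership summary
nodetool ring          # token-by-token view of the ring as this node sees it
nodetool gossipinfo    # raw gossip application states (STATUS, TOKENS, ...) per endpoint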
Hi,
I wanted to understand how bootstrapping (adding a new node) works. My understanding is that the first step is token allocation, and the new node will get a number of tokens.
My question is:
How/when do the existing nodes update their token ring state? And is that different between the…
Date: Thursday, 2 May 2019 at 08:26
To: "user@cassandra.apache.org"
Subject: RE: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees
Appreciate your response.
As for extending the cluster and keeping the default range movement = true, C* won't allow me to bootstrap multiple nodes anyway.
But the question I'm still posing, and have not gotten an answer for, is whether the fix in CASSANDRA-2434 disallows bootstrapping multiple n…
The article you mentioned here clearly says “For new users to Cassandra, the
safest way to add multiple nodes into a cluster is to add them one at a time.
Stay tuned as I will be following up with another post on bootstrapping.”
When extending a cluster it is indeed recommended to go slow…
…had described a situation where bootstrapping a node to extend the cluster can lose data if this new node bootstraps from a stale SECONDARY replica (a node that was offline longer than the hinted hand-off window). This was fixed in CASSANDRA-2434.
http://thelastpickle.com/blog/2017/05/23/auto-bootstrapping-par
…des a case scenario described by Jeff Jirsa) suggests a possible data-loss case when bootstrapping a new node to extend the cluster. The new node may bootstrap from a stale SECONDARY replica. A fix was made in CASSANDRA-2434.
However, the article, the Jira, and Jeff's example all describe th…
Thank you.
Range movement is one reason this is enforced when adding a new node. But what about forcing a consistent bootstrap, i.e. bootstrapping from the primary owner of the range and not a secondary replica?
How is consistent bootstrap enforced when replacing a dead node?
Thank you
From: Fd Habash
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, 1 May 2019 at 06:18
To: "user@cassandra.apache.org"
Subject: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees
Reviewing the documentation & based on my testing, using C* 2.2.8, I was not
able to extend the cluster by adding multiple nodes simultaneously. I got an
error message …
Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while
cassandra.consistent.rangemovement is tru
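For reference, that error comes from the consistent range movement guard added for CASSANDRA-2434. It can be turned off at startup, although, as discussed in this thread, doing so gives up the consistency guarantee; a sketch:

# e.g. appended to jvm options / JVM_EXTRA_OPTS before starting the joining nodes
-Dcassandra.consistent.rangemovement=false   # allows several nodes to bootstrap at once (not recommended)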
On Dec 28, 2018, at 2:17 AM, Jinhua Luo wrote:
Hi All,
While the pending node is receiving streams of token ranges from other nodes, all coordinators would send new writes to it so that it would not miss any new data, correct?
I have two (maybe silly) questions here:
Given the CL is ONE,
a) what if the coordinator hasn't met the pending node via go…
Agree with Jeff here, using auto_bootstrap:false is probably not what you
want.
Have you increased your streaming throughput?
Upgrading to 3.11 might reduce the time by quite a bit:
https://issues.apache.org/jira/browse/CASSANDRA-9766
You'd be doing committers a huge favor if you grabbed some hi
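The streaming throttle mentioned above can be checked and changed at runtime, without a restart; a small sketch (values in Mb/s, 0 removes the throttle):

nodetool getstreamthroughput          # current outbound streaming limit on this node
nodetool setstreamthroughput 200      # raise the limit; run on the nodes doing the streaming
nodetool setstreamthroughput 0        # or disable the throttle entirely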
This violates any consistency guarantees you have and isn’t the right approach
unless you know what you’re giving up (correctness, typically)
--
Jeff Jirsa
On Nov 28, 2018, at 2:40 AM, Vitali Dyachuk wrote:
You can use auto_bootstrap set to false to add a new node to the ring, it
will calculate the token range for the new node, but will not start
streaming the data.
In this case you can add several nodes into the ring quickly. After that
you can start nodetool rebuild -dc <> to start streaming data.
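A sketch of the two-step procedure described above (keyspace and DC names are placeholders); note Jeff's warning below that this skips the consistency guarantees of a normal bootstrap:

# cassandra.yaml on each new node before its first start
auto_bootstrap: false

# once all new nodes are up and own their token ranges, stream the data on each of them:
nodetool rebuild -- DC1        # DC1 = name of the datacenter to stream from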
Hello,
I have a 2-DC Cassandra 3.0.14 setup. I need to add 2 new nodes to each DC.
I started one node in dc1 and it's already joining. 3TB of 50TB finished in 2 weeks. One-year-TTL time series data with TWCS.
I know it's not best practice…
I want to start one node in dc2 and Cassandra refused to…
From: Jon Haddad on behalf of Jon Haddad
Sent: Saturday, February 24, 2018 5:44:24 PM
To: user@cassandra.apache.org
Subject: Re: Is it possible / makes it sense to limit concurrent streaming
during bootstrapping new nodes?
You can’t migrate down that way. The last several nodes you have up
No miracles: reliability is mostly determined by the RF number.
The task that must be solved for large clusters: "Reliability of a cluster with NNN nodes and RF=3 shouldn't be 'tangibly' less than the reliability of a 3-node cluster with RF=3."
Kind Regards,
Kyrill
From: Jürgen Albersdorfer
Sent: Tuesday, February 20, 2018 10:34:21 PM
To: user@cassandra.apache.org
Subject: Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?
We are archiving data in order to make assumptions on it in the future. So yes, we expect to grow continuously. In the meantime I learned to go for predictable growth per partition rather than unpredictably large partitions. So today we are growing 250,000,000 records per day going into a single tabl…
At a past job, we set the limit at around 60 hosts per cluster - anything
bigger than that got single token. Anything smaller, and we'd just tolerate
the inconveniences of vnodes. But that was before the new vnode token
allocation went into 3.0, and really assumed things that may not be true
for yo
Thanks Jeff,
your answer is really not what I expected to learn - which is again more manual work as soon as we start really using C*. But I'm happy to be able to learn it now and still have time to learn the necessary skills and ask the right questions on how to correctly drive big data with…
The scenario you describe is the typical point where people move away from
vnodes and towards single-token-per-node (or a much smaller number of
vnodes).
The default setting puts you in a situation where virtually all hosts are
adjacent/neighbors to all others (at least until you're way into the
h
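The "new vnode token allocation" mentioned above refers to the keyspace-aware allocator added in 3.0; a hedged cassandra.yaml sketch for a joining node (the keyspace name is a placeholder):

# cassandra.yaml on the joining node (3.0+)
num_tokens: 16                              # far fewer vnodes than the old default of 256
allocate_tokens_for_keyspace: my_keyspace   # balance token ownership against this keyspace's replication settings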
it does limit the "concurrent throughput" IMHO, is it not what you are looking for?
The only limits I can think of are:
- the number of connections between every node and the one bootstrapping
- the number of pending compactions (especially if you have lots of keyspaces/tables), which could lead to some JVM problem maybe?
Anyway, because while bootstrapping a node is not accepting reads, configuration like compactionthroughput, concurrentcompactors and streamingthroughput can be…
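The knobs named above can all be changed at runtime while the node is still joining (UJ) and not serving reads; a sketch, to be reverted once the node has joined:

# on the joining node
nodetool setcompactionthroughput 0      # 0 = unthrottled compaction
nodetool setstreamthroughput 0          # 0 = unthrottled streaming
nodetool setconcurrentcompactors 8      # newer versions only; size to the node's CPU count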
Hi, I'm wondering if it is possible, or would make sense, to limit concurrent streaming when joining a new node to the cluster.
I'm currently operating a 15-node C* cluster (v3.11.1) and joining another node every day.
'nodetool netstats' shows it always streams data from all other nodes.
Ho…
> Which memtable_allocation_type are you using?

# grep memtable_allocation_type /etc/cassandra/conf/cassandra.yaml
memtable_allocation_type: heap_buffers

thanks so…

Hi Jurgen,
It does feel like some OOM during bootstrap from a previous C* v2, but that should be fixed in your version.
Do you know how many sstables this new node is supposed to receive?
My guess: it may have something to do with compaction not keeping up because every other node is streaming data to this new one (resulting in long-lived objects in the heap). Does disabling compaction_throughput_mb_per_sec or increasing concurrent_compactors have any effect?
Which memtable_allocation_type are you using?

On 7 February 2018 at 12:38, Jürgen Albersdorfer wrote:
Hi, I always face an issue when bootstrapping a node having less than 184GB RAM (156GB JVM heap) on our 10-node C* 3.11.1 cluster.
During bootstrap, when I watch the cassandra.log I observe growth in the JVM heap Old Gen which no longer gets significantly freed.
I know that the JVM collects on Old…
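If you want to watch that old-gen growth while the node is bootstrapping, two read-only options (the pid lookup via the CassandraDaemon main class is an assumption about the install):

nodetool gcstats                                   # GC pause totals since the previous invocation
jstat -gcutil "$(pgrep -f CassandraDaemon)" 10s    # O column = old-gen occupancy, sampled every 10s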
…to blow up that instance

--
Jeff Jirsa

On Oct 15, 2017, at 1:42 PM, Stefano Ortolani wrote:
Hi Jeff,
this is my third attempt at bootstrapping the node, so I tried several tricks that might partially explain the output I am posting.
* To make the bootstrap incremental, I have been throttling the streams on all nodes to 1 Mbit/s. I have been selectively unthrottling one node at a time, hoping that…
Hi all,
I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so far.
Based on the source code it seems that this option doesn't affect compactions while bootstrapping.
I am getting quite confused as it seems I am not able to bootstrap a node if I don't have at least 6/7 times the disk space used by other nodes.
This is weird. The host I am bootstrapping is using a SSD. Also compa…
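For reference, that option is a JVM system property and is normally set like this (a sketch; as noted above it may not influence compactions that happen during bootstrap):

# jvm.options (or JVM_OPTS in cassandra-env.sh on older releases), then restart the node
-Dcassandra.disable_stcs_in_l0=true   # stop LCS from falling back to STCS in level 0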
Other little update: at the same time I see the number of pending tasks
stuck (in this case at 1847); restarting the node doesn't help, so I can't
really force the node to "digest" all those compactions. In the meanwhile
the disk occupied is already twice the average load I have on other nodes.
Fe
I have been trying to add another node to the cluster (after upgrading to
3.0.15) and I just noticed through "nodetool netstats" that all nodes have
been streaming to the joining node approx 1/3 of their SSTables, basically
their whole primary range (using RF=3)?
Is this expected/normal?
I was und
Sent: Friday, September 22, 2017 3:56 AM
To: user
Subject: Pending-range-calculator during bootstrapping
Dear All,
when we are bootstrapping a new node, we are experiencing high CPU load which affects the RT, and we noticed that the cost is mainly in the Pending-range-calculator; this did not happen before.
We are using C* 2.1.13 in one DC and 2.1.18 in another DC.
Could anyone please advise on…
Dear All,
when we are bootstrapping a new node, we are experiencing high CPU load and this affects the RT, and we noticed that the cost is mainly in the Pending-range-calculator; this did not happen before.
We are using C* 2.1.13.
Could anyone please advise on this?
Thanks,
Peng Xiao
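The PendingRangeCalculator shows up as its own thread pool, so its backlog can be watched directly while a node is joining; a small sketch:

nodetool tpstats | grep -iE 'pool|pendingrangecalculator'   # active/pending/completed counts for the calculator pool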
> …our situation since the I/O pipe was already fully saturated.

You should unthrottle during bootstrap as the node won't receive read queries until it finishes streaming and joins the cluster. It seems unlikely that you'd be bottlenecked on I/O during the bootstrapping process. If you were, you'd certainly have bigger problems. The aim is to clear out the majority of compactions *before* th…
What version are you using? There are improvements to streaming with LCS in 2.2.
Also, are you unthrottling compaction throughput while the node is bootstrapping?
…to resume acceptable operations.
We're currently upgrading all of our clients to use the 3.11.0 version of the DataStax Python driver, which will allow us to add our next newly bootstrapped node to a blacklist, hoping that if it doesn't accept writes the rest of the cluster can serve them adequately (as is the case whenever we turn down the bootstrapping node), and allow it to finish its compactions.
We were also interested in hearing if anyone has had much luck using the sstableofflinerelevel tool, and if this is a reasonable approach for our issue.
One of my colleagues found a post where a use…
> But if it also streams, it means I'd still be under pressure if I am not mistaken. I am under the assumption that the compactions are the by-product of streaming too many SSTables at the same time, and not because of my current write load.

Ah yeah, I wasn't thinking about the capacity pr…
long as I am using LCS, am I right?
>
> Code seems to imply that it should always keep SSTable levels. Sounds
> like something is wrong if it is not :/.
>
I don't have the logs anymore (machine had other issues, replacing disks)
so I can't be sure of this. Definitely I r
write_survey is slightl…

Hi Kurt,
1) You mean restarting the node in the middle of the bootstrap with join_ring=false? Would this option require me to issue a nodetool bootstrap resume, correct? I didn't know you could instruct the join via JMX. Would it be the same as the nodetool bootstrap command?
2) Yes, they are stream…
Well, that sucks. Be interested if you could find out if any of the
streamed SSTables are retaining their levels.
To answer your questions:
1) No. However, you could set your nodes to join in write_survey mode,
which will stop them from joining the ring and you can initiate the join
over JMX when
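A sketch of the write-survey route Kurt describes: the node streams its data but never finishes joining the ring until told to, and `nodetool join` triggers the same JMX operation he mentions:

# e.g. in cassandra-env.sh before starting the new node
JVM_OPTS="$JVM_OPTS -Dcassandra.write_survey=true"

# later, once you are happy with the node, let it join the ring:
nodetool join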
Hi Kurt,
sorry, I forgot to specify. I am on 3.0.14.
Cheers,
Stefano
On Wed, Aug 23, 2017 at 12:11 AM, kurt greaves wrote:
What version are you running? 2.2 has an improvement that will retain levels when streaming and this shouldn't really happen. If you're on 2.1 your best bet is to upgrade.
Hi all,
I am trying to bootstrap a node without success due to running out of space.
Average node size is 260GB with lots of LCS tables (overall data ~3.5 TB)
Each node is also configured with a 1TB disk, including the bootstrapping
node.
After 12 hours the bootstrapping node fails with more
That makes sense. Thank you so much for pointing that out, Alex.
So, long story short: once I am up to the RF I actually want (RF 3 per DC) and am just adding nodes for capacity, joining the ring will work correctly and no inconsistencies will exist.
If I just change the RF, the nodes don't have the dat…
On Thu, Aug 3, 2017 at 9:33 AM, Daniel Hölbling-Inzko <daniel.hoelbling-in...@bitmovin.com> wrote:
What I think is happening here is not related to the…
No, I set auto_bootstrap to true and the node was UN in nodetool status, but when doing a select on the node with ONE I got incomplete data.

Jeff Jirsa wrote on Thu, 3 Aug 2017 at 09:02:
"nodetool status" shows node as UN (up normal) instead of UJ (up joining)
What you're describing really sounds odd. Something isn't adding up to me but
I'm not sure why. You shouldn't be able to query it directly until its
bootstrapped as far as I know
Are you sure you're not joining as a seed
Thanks Jeff. How do I determine that bootstrap is finished? Haven't seen
that anywhere so far.
Reads via storage would be ok as every query would be checked by another
node too. I was only seeing inconsistencies since clients went directly to
the node with Consistency ONE
Greetings
Jeff Jirsa sc
By the time bootstrap is complete it should be as consistent as the source node
- you can change start_native_transport to false to avoid serving clients
directly (tcp/9042), but it'll still serve reads via the storage service
(tcp/7000), but the guarantee is that data should be consistent by th
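A sketch of the "don't serve clients until bootstrap is verified" approach Jeff mentions, using the stock yaml key and nodetool commands:

# cassandra.yaml on the joining node
start_native_transport: false   # don't open tcp/9042 to clients at startup

# after the node has joined and you have verified/repaired the data:
nodetool enablebinary           # start serving CQL clients
nodetool statusbinary           # confirm the native transport is running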
only in this one case might that work (RF==N)
On Wed, Aug 2, 2017 at 10:53 AM, Daniel Hölbling-Inzko <
daniel.hoelbling-in...@bitmovin.com> wrote:
>
> Any advice on how to avoid this in the future? Is there a way to start up
> a node that does not serve client requests but does replicate data?
>
Would it not work if you first increase the RF
You can't just add a new DC and then tell your clients to connect to the new one (after migrating all the data to it, obviously)? If you can't achieve that, you should probably use GossipingPropertyFileSnitch. Your best plan is to have the desired RF/redundancy from the start. Changing RF in production…
Thanks for the pointers Kurt!
I did increase the RF to N so that would not have been the issue.
DC migration is also a problem since I am using the Google Cloud Snitch. So
I'd have to take down the whole DC and restart anew (which would mess with
my clients as they only connect to their local DC).
If you want to change RF on a live system your best bet is through DC
migration (add another DC with the desired # of nodes and RF), and migrate
your clients to use that DC. There is a way to boot a node and not join the
ring, however I don't think it will work for new nodes (have not
confirmed), a
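For completeness, the DC-migration route for an RF change usually boils down to an ALTER KEYSPACE plus a rebuild of the new DC's nodes; a hedged sketch (keyspace and DC names are placeholders):

cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};"

# then on every node in the new DC, stream its data from the existing DC:
nodetool rebuild -- dc1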
Hi,
It's probably a strange question but I have a heavily read-optimized
payload where data integrity is not a big deal. So to keep latencies low I
am reading with Consistency ONE from my Multi-DC Cluster.
Now the issue I saw is that I needed to add another Cassandra node (for
redundancy reasons).
g node" we couldn't create a
cluster of size greater than one) and with full data (9GB) we were
able to create a cluster with only two nodes. The time on the nodes
may be off by up to a second - would that be big enough to cause any
trouble when bootstrapping?
Anyone seen something like this bef
…the load-avg on the existing nodes, as that might affect your application read request latencies.

On Sun, Sep 11, 2016, 17:10 Jens Rantil wrote:

Hi Bhuvan,
I have done such expansion multiple times and can really recommend…
>
> 1. Bootstrapping nodes
> 2. Decomissioning nodes
> 3. Repair
>
>
> On Mon, Oct 10, 2016 at 5:00 PM Jeff Jirsa
> wrote:
>
>>
>>
>> No need to cc dev@, user@ is the right list for this question.
>>
>>
>>
>> As Jon mentioned, yo
During the upgrade you'll want to avoid the following operations that result in data streaming:
1. Bootstrapping nodes
2. Decommissioning nodes
3. Repair

On Mon, Oct 10, 2016 at 5:00 PM Jeff Jirsa wrote:
No need to cc dev@, user@ is the right list for this question.
As Jon mentioned, yo…
advisable.
From: Abhishek Verma
Reply-To: "user@cassandra.apache.org"
Date: Monday, October 10, 2016 at 4:34 PM
To: "user@cassandra.apache.org" ,
"d...@cassandra.apache.org"
Subject: Bootstrapping data from Cassandra 2.2.5 datacenter to 3.0.8 datacent
As Jonathan said, you need to upgrade Cassandra directly and use "nodetool upgradesstables".
DataStax has an excellent resource on upgrading Cassandra:
https://docs.datastax.com/en/latest-upgrade/upgrade/cassandra/upgdCassandra.html,
specifically
https://docs.datastax.com/en/latest-upgrade/upgrade/c
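The rewrite step mentioned above is run per node, after the binary upgrade and restart, and its progress can be watched while it works; a sketch:

# on each upgraded node, rewrite SSTables to the current on-disk format
nodetool upgradesstables
nodetool compactionstats      # shows the rewrite/compaction progress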