Re: [HELP] Cassandra 4.1.1 Repeated Bootstrapping Failure

2023-09-11 Thread Bowen Song via user
ed in 4.1.1. On Sep 11, 2023, at 2:09 PM, Bowen Song via user wrote: *Description* When adding a new node to an existing cluster, the new node bootstrapping fails with the "io.netty.channel.unix.Errors$NativeIoException: writeAddress(..) failed: Connection timed out" error

Re: [HELP] Cassandra 4.1.1 Repeated Bootstrapping Failure

2023-09-11 Thread C. Scott Andreas
PM, Bowen Song via user wrote:DescriptionWhen adding a new node to an existing cluster, the new node bootstrapping fails with the "io.netty.channel.unix.Errors$NativeIoException: writeAddress(..) failed: Connection timed out" error from the streaming source node. Re

[HELP] Cassandra 4.1.1 Repeated Bootstrapping Failure

2023-09-11 Thread Bowen Song via user
*Description* When adding a new node to an existing cluster, the new node bootstrapping fails with the "io.netty.channel.unix.Errors$NativeIoException: writeAddress(..) failed: Connection timed out" error from the streaming source node. Resuming the bootstrap with "nod
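The snippet above is cut off at "nod"; for context, a failed bootstrap on Cassandra 2.2+ can usually be resumed rather than restarted from scratch. A hedged sketch of the relevant commands (the keep-alive setting name varies by version and the value is illustrative, not a recommendation):

```shell
# On the joining node, after the streaming failure, try resuming the
# bootstrap instead of wiping the data directory and starting over:
nodetool bootstrap resume

# For "Connection timed out" errors on long streams, a longer streaming
# keep-alive can help (cassandra.yaml; option name differs across versions):
#   streaming_keep_alive_period_in_secs: 300
```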

Re: Bootstrapping new node throwing error - Mutation too large

2023-03-01 Thread Surbhi Gupta
Surbhi Gupta wrote: > > > Hi Cassandra Community, > > We have to expand our cluster and I tried to add the first node to the > cluster, and when the new node was bootstrapping, I noticed an error like > the one below in the system.log, but the bootstrap process was su

Re: Bootstrapping new node throwing error - Mutation too large

2023-03-01 Thread C. Scott Andreas
n, so you can upgrade simply by rev'ing the version and performing a rolling restart of the instances.– ScottOn Mar 1, 2023, at 2:43 PM, Surbhi Gupta wrote:Hi Cassandra Community,We have to expand our cluster and I tried to add the first node to the cluster and when the new node was bootstr

Bootstrapping new node throwing error - Mutation too large

2023-03-01 Thread Surbhi Gupta
Hi Cassandra Community, We have to expand our cluster and I tried to add the first node to the cluster, and when the new node was bootstrapping, I noticed an error like the one below in the system.log, but the bootstrap process was successful. We are on 3.11.5. ERROR [MutationStage-7] 2023-03-01 07
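For context, the "Mutation too large" message in 3.x is governed by the commit log settings sketched below; values are the 3.x defaults as I understand them, shown only to make the error's origin concrete:

```yaml
# cassandra.yaml (Cassandra 3.x) -- illustrative, not a recommendation.
# A single mutation must fit within max_mutation_size_in_kb, which
# defaults to half of the commit log segment size:
# commitlog_segment_size_in_mb: 32    # default segment size
# max_mutation_size_in_kb: 16384      # derived default (half a segment)
```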

Re: Replacing node w/o bootstrapping (streaming)?

2023-02-09 Thread Max Campos
3) rsync any remaining Cassandra data from OLD to NEW >> 4) Startup NEW node for the first time and have it take the place of the OLD >> node in the cluster >> >> The goal here is to eliminate bootstrapping (streaming), because it’s a 1 >> for 1 swap and we can easil

Re: Replacing node w/o bootstrapping (streaming)?

2023-02-09 Thread Jeff Jirsa
r started Cassandra > 1) rsync Cassandra data from OLD node to NEW (1.2TB) > 2) Shutdown OLD node > 3) rsync any remaining Cassandra data from OLD to NEW > 4) Startup NEW node for the first time and have it take the place of the OLD > node in the cluster > > The goal here is to elimin

Replacing node w/o bootstrapping (streaming)?

2023-02-09 Thread Max Campos
(1.2TB) 2) Shutdown OLD node 3) rsync any remaining Cassandra data from OLD to NEW 4) Startup NEW node for the first time and have it take the place of the OLD node in the cluster The goal here is to eliminate bootstrapping (streaming), because it’s a 1 for 1 swap and we can easily rsync all of
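The four steps quoted above can be sketched as a shell procedure. Hostnames and paths here are hypothetical, and this assumes the NEW node keeps the OLD node's IP and tokens so it can take its place without streaming (otherwise a proper replacement via `replace_address` would be needed):

```shell
# 1) First rsync while OLD is still serving traffic (the bulk of the 1.2TB)
rsync -av old-node:/var/lib/cassandra/data/ /var/lib/cassandra/data/

# 2) Stop Cassandra on OLD so no new writes land there
ssh old-node 'sudo systemctl stop cassandra'

# 3) Catch-up rsync for anything written since the first pass
rsync -av old-node:/var/lib/cassandra/data/ /var/lib/cassandra/data/

# 4) Start NEW with auto_bootstrap: false (data is already in place) and
#    the same tokens/IP as OLD, so it joins without bootstrapping
sudo systemctl start cassandra
```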

Re: Error in bootstrapping node

2021-12-16 Thread Erick Ramirez
The error you're seeing is specific to DSE so your best course of action is to log a ticket with DataStax Support (https://support.datastax.com). Cheers!

Error in bootstrapping node

2021-12-16 Thread Jean Carlo
org.apache.cassandra.exceptions.IsBootstrappingException: Cannot read from a bootstrapping node at org.apache.cassandra.service.StorageProxy.checkNotBootstrappingOrSystemQuery(StorageProxy.java:1295) at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1306) at

Re: understand bootstrapping

2021-02-04 Thread Yifan Cai
The process is almost the same as bootstrapping. The leaving node's state transitions from NORMAL to LEAVING and finally to LEFT. It waits for the ring delay as part of each state transition in order to propagate across the entire cluster. Pending ranges are updated. In the case of leaving, there will be

Re: understand bootstrapping

2021-02-04 Thread Han
request to node A during this time? Is that possible? Thanks Han On Wed, Jan 27, 2021 at 5:48 PM Yifan Cai wrote: > Your thoughts regarding Gossip are correct. There could be a time that > nodes in the cluster hold different views of the ring locally. > > In the case of b

Re: understand bootstrapping

2021-01-27 Thread Yifan Cai
Your thoughts regarding Gossip are correct. There could be a time that nodes in the cluster hold different views of the ring locally. In the case of bootstrapping, 1. The joining node updates its status to BOOT before streaming data and waits for a certain delay in order to populate the update in

Re: understand bootstrapping

2021-01-26 Thread Han
> > >> I'm particularly trying to understand the fault-tolerant part of updating >> Token Ring state on every node > > The new node only joins the ring (updates the rings state) when the data > streaming (bootstrapping) is successful. Otherwise, the existing ring &

Re: understand bootstrapping

2021-01-25 Thread Yifan Cai
Check out `StorageService#joinTokenRing()` for the details. I'm particularly trying to understand the fault-tolerant part of updating > Token Ring state on every node The new node only joins the ring (updates the ring's state) when the data streaming (bootstrapping) is successful. Otherw

understand bootstrapping

2021-01-24 Thread Han
Hi, I wanted to understand how bootstrapping (adding a new node) works. My understanding is that the first step is Token Allocation, and the new node will get a number of tokens. My question is: how/when do the existing nodes update their Token Ring state? And is that different between the

Re: Bootstrapping to Replace a Dead Node vs. Adding a NewNode:Consistency Guarantees

2019-05-01 Thread Alok Dwivedi
y, 2 May 2019 at 08:26 To: "user@cassandra.apache.org" Subject: RE: Bootstrapping to Replace a Dead Node vs. Adding a NewNode:Consistency Guarantees Appreciate your response. As for extending the cluster & keeping the default range movement = true, C* won’t allow me to bootstra

RE: Bootstrapping to Replace a Dead Node vs. Adding a NewNode:Consistency Guarantees

2019-05-01 Thread Fd Habash
Appreciate your response. As for extending the cluster & keeping the default range movement = true, C* won’t allow me to bootstrap multiple nodes anyway. But the question I’m still posing and have not gotten an answer for is whether the fix for CASSANDRA-2434 disallows bootstrapping multiple n

RE: Bootstrapping to Replace a Dead Node vs. Adding a New Node:Consistency Guarantees

2019-05-01 Thread ZAIDI, ASAD A
The article you mentioned here clearly says “For new users to Cassandra, the safest way to add multiple nodes into a cluster is to add them one at a time. Stay tuned as I will be following up with another post on bootstrapping.” When extending a cluster it is indeed recommended to go slow

RE: Bootstrapping to Replace a Dead Node vs. Adding a New Node:Consistency Guarantees

2019-05-01 Thread Fd Habash
had described a situation where bootstrapping a node to extend the cluster can lose data if this new node bootstraps from a stale SECONDARY replica (a node that was offline longer than the hinted hand-off window). This was fixed in CASSANDRA-2434. http://thelastpickle.com/blog/2017/05/23/auto-bootstrapping-par

Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-05-01 Thread Fred Habash
des a case scenario described by Jeff Jirsa) suggests possible data loss case when bootstrapping a new node to extend the cluster. The new node may bootstrap from a stale SECONDARY replica. A fix was made in Cassandra-2434. However, the article, the Jira, and Jeff's example all describe th

Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-05-01 Thread Fred Habash
Thank you. Range movement is one reason this is enforced when adding a new node. But what about forcing a consistent bootstrap, i.e. bootstrapping from the primary owner of the range and not a secondary replica? How is consistent bootstrap enforced when replacing a dead node? - Thank you

Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-04-30 Thread Alok Dwivedi
/platform/ From: Fd Habash Reply-To: "user@cassandra.apache.org" Date: Wednesday, 1 May 2019 at 06:18 To: "user@cassandra.apache.org" Subject: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees Reviewing the documentation & based on my

Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-04-30 Thread Fd Habash
Reviewing the documentation & based on my testing, using C* 2.2.8, I was not able to extend the cluster by adding multiple nodes simultaneously. I got an error message … Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while cassandra.consistent.rangemovement is tru
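For reference, the guard being hit above can be bypassed at startup, at the cost of the consistency guarantees it exists to protect (per CASSANDRA-2434). A sketch of how the flag is set, e.g. in cassandra-env.sh:

```shell
# NOT generally recommended: disables the consistent range movement check
# so multiple nodes can bootstrap simultaneously. Only do this if you
# accept the consistency implications discussed in this thread.
JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false"
```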

Re: Is there any chance the bootstrapping lost data?

2018-12-28 Thread Jeff Jirsa
> On Dec 28, 2018, at 2:17 AM, Jinhua Luo wrote: > > Hi All, > > While the pending node receives streams of token ranges from other nodes, > all coordinators would send new writes to it so that it would not miss > any new data, correct? > > I have two (maybe silly) questions here: > Given the C

Is there any chance the bootstrapping lost data?

2018-12-27 Thread Jinhua Luo
Hi All, While the pending node receives streams of token ranges from other nodes, all coordinators would send new writes to it so that it would not miss any new data, correct? I have two (maybe silly) questions here: Given the CL is ONE, a) what if the coordinator hasn't met the pending node via go

Re: multiple node bootstrapping

2018-11-28 Thread Jonathan Haddad
Agree with Jeff here, using auto_bootstrap:false is probably not what you want. Have you increased your streaming throughput? Upgrading to 3.11 might reduce the time by quite a bit: https://issues.apache.org/jira/browse/CASSANDRA-9766 You'd be doing committers a huge favor if you grabbed some hi

Re: multiple node bootstrapping

2018-11-28 Thread Jeff Jirsa
This violates any consistency guarantees you have and isn’t the right approach unless you know what you’re giving up (correctness, typically) -- Jeff Jirsa > On Nov 28, 2018, at 2:40 AM, Vitali Dyachuk wrote: > > You can use auto_bootstrap set to false to add a new node to the ring, it > wi

Re: multiple node bootstrapping

2018-11-28 Thread Vitali Dyachuk
You can use auto_bootstrap set to false to add a new node to the ring, it will calculate the token range for the new node, but will not start streaming the data. In this case you can add several nodes into the ring quickly. After that you can start nodetool rebuild -dc <> to start streaming data.
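A sketch of the approach described above, shown only to make the suggestion concrete. Note that the replies in this thread warn it violates consistency guarantees until the rebuild completes; the source DC name is illustrative:

```shell
# In cassandra.yaml on each new node -- join the ring without streaming:
#   auto_bootstrap: false

# After all new nodes have joined, stream data to each of them explicitly,
# sourcing from an existing datacenter (name is hypothetical):
nodetool rebuild -- dc1
```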

multiple node bootstrapping

2018-11-28 Thread Osman YOZGATLIOĞLU
Hello, I have a 2-DC Cassandra 3.0.14 setup. I need to add 2 new nodes to each DC. I started one node in dc1 and it's already joining; 3TB of 50TB finished in 2 weeks. One year TTL time-series data with TWCS. I know it's not best practice.. I want to start one node in dc2 and cassandra refused to

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-24 Thread Kyrylo Lebediev
From: Jon Haddad on behalf of Jon Haddad Sent: Saturday, February 24, 2018 5:44:24 PM To: user@cassandra.apache.org Subject: Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes? You can’t migrate down that way. The last several nodes you have up

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-24 Thread Jon Haddad
>> No miracles: reliability is mostly determined by the RF number. >> >> Which task must be solved for large clusters: "Reliability of a cluster with >> NNN nodes and RF=3 shouldn't be 'tangibly' less than the reliability of a 3-node >> cluster with RF=3" >

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-24 Thread Jon Haddad
an reliability of 3-nodes > cluster with RF=3" > > Kind Regards, > Kyrill > From: Jürgen Albersdorfer > Sent: Tuesday, February 20, 2018 10:34:21 PM > To: user@cassandra.apache.org > Subject: Re: Is it possible / makes it sense to limit concurrent streaming > during boo

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-24 Thread Kyrylo Lebediev
__ From: Jürgen Albersdorfer Sent: Tuesday, February 20, 2018 10:34:21 PM To: user@cassandra.apache.org Subject: Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes? Thanks Jeff, your answer is really not what I expected to learn - whi

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Nate McCall
> We do archiving of data in order to make assumptions on it in the future. So, yes > we expect to grow continuously. In the meantime I learned to go for > predictable growth per partition rather than unpredictably large > partitions. So today we are growing 250.000.000 records per day going > into a sing

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Jürgen Albersdorfer
We do archiving of data in order to make assumptions on it in the future. So, yes, we expect to grow continuously. In the meantime I learned to go for predictable growth per partition rather than unpredictably large partitions. So today we are growing 250.000.000 records per day going into a single tabl

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Jeff Jirsa
At a past job, we set the limit at around 60 hosts per cluster - anything bigger than that got single token. Anything smaller, and we'd just tolerate the inconveniences of vnodes. But that was before the new vnode token allocation went into 3.0, and really assumed things that may not be true for yo

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Jürgen Albersdorfer
Thanks Jeff, your answer is really not what I expected to learn - which is again more manual work as soon as we start really using C*. But I‘m happy to be able to learn it now and to still have time to learn the necessary skills and ask the right questions on how to correctly drive big data with

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Jeff Jirsa
The scenario you describe is the typical point where people move away from vnodes and towards single-token-per-node (or a much smaller number of vnodes). The default setting puts you in a situation where virtually all hosts are adjacent/neighbors to all others (at least until you're way into the h

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Nicolas Guyomar
it does limit the "concurrent throughput" IMHO, >> is it not what you are looking for? >> >> The only limits I can think of are : >> - number of connections between every node and the one bootstrapping >> - number of pending compactions (especially if you have

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Jürgen Albersdorfer
ection between every node and the one bootstrapping > - number of pending compactions (especially if you have lots of > keyspaces/tables) that could lead to some JVM problem maybe? > > Anyway, because while bootstrapping, a node is not accepting reads, > configuration like compactionthr

Re: Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Nicolas Guyomar
oostrapping - number of pending compactions (especially if you have lots of keyspaces/tables) that could lead to some JVM problem maybe? Anyway, because while bootstrapping, a node is not accepting reads, configuration like compactionthroughput, concurrentcompactor and streamingthroughput can be

Is it possible / makes it sense to limit concurrent streaming during bootstrapping new nodes?

2018-02-20 Thread Jürgen Albersdorfer
Hi, I'm wondering whether it is possible, and whether it would make sense, to limit concurrent streaming when joining a new node to the cluster. I'm currently operating a 15-node C* cluster (v3.11.1) and joining another node every day. 'nodetool netstats' shows it always streams data from all other nodes. Ho
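As later replies note, streaming from all neighbors is expected with vnodes; what can be capped is the total streaming rate rather than the number of concurrent sources. A sketch, with an illustrative value:

```shell
# Cap streaming throughput on each existing node (Mbit/s; 0 = unthrottled).
# This limits the total rate, not the number of concurrent stream sources:
nodetool setstreamthroughput 100

# Verify the current setting:
nodetool getstreamthroughput
```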

Re: Bootstrapping fails with < 128GB RAM ...

2018-02-16 Thread Jürgen Albersdorfer
>> 16 >>> >>> >>> Which memtable_allocation_type are you using ? >>> >>> >>> # grep memtable_allocation_type /etc/cassandra/conf/cassandra.yaml >>> memtable_allocation_type: heap_buffers >>> >>> >>> thanks so

Re: Bootstrapping fails with < 128GB RAM ...

2018-02-09 Thread Jürgen Albersdorfer
>> >>> Hi Jurgen, >>> >>> It does feel like some OOM during bootstrap from previous C* v2, but that >>> should be fixed in your version. >>> >>> Do you know how many sstables this new node is supposed to receive? >>> >>> J

Re: Bootstrapping fails with < 128GB RAM ...

2018-02-07 Thread Jon Haddad
guess : it may have something to do with compaction not keeping > up because every other nodes are streaming data to this new one (resulting in > long lived object in the heap), does disabling > compaction_throughput_mb_per_sec or increasing concurrent_compactors has any > effect ?

Re: Bootstrapping fails with < 128GB RAM ...

2018-02-07 Thread Nicolas Guyomar
does disabling >> compaction_throughput_mb_per_sec or increasing concurrent_compactors has >> any effect ? >> >> Which memtable_allocation_type are you using ? >> >> >> On 7 February 2018 at 12:38, Jürgen Albersdorfer > > wrote: >> >>> H

Re: Bootstrapping fails with < 128GB RAM ...

2018-02-07 Thread Jürgen Albersdorfer
ncurrent_compactors has > any effect ? > > Which memtable_allocation_type are you using ? > > > On 7 February 2018 at 12:38, Jürgen Albersdorfer > wrote: > >> Hi, I always face an issue when bootstrapping a Node having less than >> 184GB RAM (156GB JVM HEAP) on our 10 Nod

Re: Bootstrapping fails with < 128GB RAM ...

2018-02-07 Thread Nicolas Guyomar
ays face an issue when bootstrapping a Node having less than > 184GB RAM (156GB JVM HEAP) on our 10 Node C* 3.11.1 Cluster. > During bootstrap, when I watch the cassandra.log I observe a growth in JVM > Heap Old Gen which gets not significantly freed any more. > I know that JVM collects

Bootstrapping fails with < 128GB RAM ...

2018-02-07 Thread Jürgen Albersdorfer
Hi, I always face an issue when bootstrapping a node having less than 184GB RAM (156GB JVM heap) on our 10-node C* 3.11.1 cluster. During bootstrap, when I watch the cassandra.log I observe growth in JVM heap Old Gen which no longer gets significantly freed. I know that the JVM collects on Old
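The thread goes on to discuss memtable_allocation_type; for reference, the options in 3.11 are sketched below. Moving memtables off-heap is one commonly discussed way to relieve Old Gen pressure, though the thread does not conclude that it fixes this particular case:

```yaml
# cassandra.yaml (3.11) -- memtable allocation choices, illustrative:
# memtable_allocation_type: heap_buffers      # default; everything on-heap
# memtable_allocation_type: offheap_buffers   # cell data moved off-heap
# memtable_allocation_type: offheap_objects   # most off-heap, least GC load
```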

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Stefano Ortolani
to blow up that instance > > > > -- > Jeff Jirsa > > > On Oct 15, 2017, at 1:42 PM, Stefano Ortolani wrote: > > Hi Jeff, > > this my third attempt bootstrapping the node so I tried several tricks > that might partially explain the output I am posting. > > * T

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Jeff Jirsa
wrote: >> >> Hi Jeff, >> >> this my third attempt bootstrapping the node so I tried several tricks that >> might partially explain the output I am posting. >> >> * To make the bootstrap incremental, I have been throttling the streams on >> all nodes to 1M

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Jeff Jirsa
ote: > > Hi Jeff, > > this my third attempt bootstrapping the node so I tried several tricks that > might partially explain the output I am posting. > > * To make the bootstrap incremental, I have been throttling the streams on > all nodes to 1Mbits. I have selectively

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Stefano Ortolani
Hi Jeff, this is my third attempt at bootstrapping the node, so I tried several tricks that might partially explain the output I am posting. * To make the bootstrap incremental, I have been throttling the streams on all nodes to 1Mbit. I have been selectively unthrottling one node at a time hoping that

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Jeff Jirsa
wrote: >>> >>> Hi all, >>> >>> I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so >>> far. >>> Based on the source code it seems that this option doesn't affect >>> compactions while bootstrapping.

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Stefano Ortolani
ni wrote: > > Hi all, > > I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so > far. > Based on the source code it seems that this option doesn't affect > compactions while bootstrapping. > > I am getting quite confused as it seems I am

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Jeff Jirsa
n doesn't affect compactions > while bootstrapping. > > I am getting quite confused as it seems I am not able to bootstrap a node if > I don't have at least 6/7 times the disk space used by other nodes. > This is weird. The host I am bootstrapping is using a SSD. Also compa

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-15 Thread Stefano Ortolani
Hi all, I have been trying "-Dcassandra.disable_stcs_in_l0=true", but no luck so far. Based on the source code it seems that this option doesn't affect compactions while bootstrapping. I am getting quite confused as it seems I am not able to bootstrap a node if I don't hav
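For context, the option being tested above is a JVM system property; a sketch of how it is passed, e.g. via cassandra-env.sh or jvm.options:

```shell
# Disables the size-tiered (STCS) fallback compaction in L0 for LCS tables,
# intended to avoid sub-optimal compaction of L0 SSTable pileups.
# As the message above notes, it may not affect compactions that run
# while a node is bootstrapping.
JVM_OPTS="$JVM_OPTS -Dcassandra.disable_stcs_in_l0=true"
```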

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-13 Thread Stefano Ortolani
Other little update: at the same time I see the number of pending tasks stuck (in this case at 1847); restarting the node doesn't help, so I can't really force the node to "digest" all those compactions. Meanwhile, the disk space occupied is already twice the average load I have on the other nodes. Fe

Re: Bootstrapping a node fails because of compactions not keeping up

2017-10-13 Thread Stefano Ortolani
I have been trying to add another node to the cluster (after upgrading to 3.0.15) and I just noticed through "nodetool netstats" that all nodes have been streaming to the joining node approx 1/3 of their SSTables, basically their whole primary range (using RF=3)? Is this expected/normal? I was und

Re: Pending-range-calculator during bootstrapping

2017-09-22 Thread Jeff Jirsa
user > Subject: Pending-range-calculator during bootstrapping > > Dear All, > > when we are bootstrapping a new node,we are experiencing high cpu load which > affect the rt ,and we noticed that it's mainly costing on > Pending-range-calculator ,this did not ha

RE: Pending-range-calculator during bootstrapping

2017-09-22 Thread Durity, Sean R
:2535...@qq.com] Sent: Friday, September 22, 2017 3:56 AM To: user Subject: Pending-range-calculator during bootstrapping Dear All, when we are bootstrapping a new node,we are experiencing high cpu load which affect the rt ,and we noticed that it's mainly costing on Pending-range-calculator

Pending-range-calculator during bootstrapping

2017-09-22 Thread Peng Xiao
Dear All, when we are bootstrapping a new node, we are experiencing high CPU load which affects the RT (response time), and we noticed that the cost is mainly in the Pending-range-calculator; this did not happen before. We are using C* 2.1.13 in one DC, 2.1.18 in another DC. Could anyone please advise on

Pending-range-calculator during bootstrapping

2017-09-21 Thread Peng Xiao
Dear All, when we are bootstrapping a new node, we are experiencing high CPU load which affects the RT (response time), and we noticed that the cost is mainly in the Pending-range-calculator; this did not happen before. We are using C* 2.1.13. Could anyone please advise on this? Thanks, Peng Xiao

Re: Bootstrapping node on Cassandra 3.7 causes cluster-wide performance issues

2017-09-11 Thread Paul Pollack
't receive read > queries until it finishes streaming and joins the cluster. It seems > unlikely that you'd be bottlenecked on I/O during the bootstrapping > process. If you were, you'd certainly have bigger problems. The aim is to > clear out the majority of compactions *before* th

Re: Bootstrapping node on Cassandra 3.7 causes cluster-wide performance issues

2017-09-11 Thread kurt greaves
our situation since the I/O pipe > was already fully saturated. > You should unthrottle during bootstrap as the node won't receive read queries until it finishes streaming and joins the cluster. It seems unlikely that you'd be bottlenecked on I/O during the bootstrapping proce

Re: Bootstrapping node on Cassandra 3.7 causes cluster-wide performance issues

2017-09-11 Thread Lerh Chuan Low
er, which will allow us to add our next newly >> bootstrapped node to a blacklist, hoping that if it doesn't accept writes >> the rest of the cluster can serve them adequately (as is the case whenever >> we turn down the bootstrapping node), and allow it to finish its >

Re: Bootstrapping node on Cassandra 3.7 causes cluster-wide performance issues

2017-09-11 Thread Aaron Wykoff
o add our next newly > bootstrapped node to a blacklist, hoping that if it doesn't accept writes > the rest of the cluster can serve them adequately (as is the case whenever > we turn down the bootstrapping node), and allow it to finish its > compactions. > > We were also inte

Re: Bootstrapping node on Cassandra 3.7 causes cluster-wide performance issues

2017-09-11 Thread Paul Pollack
ng compaction throughput while the node is > bootstrapping? > ​ >

Re: Bootstrapping node on Cassandra 3.7 causes cluster-wide performance issues

2017-09-11 Thread kurt greaves
What version are you using? There are improvements to streaming with LCS in 2.2. Also, are you unthrottling compaction throughput while the node is bootstrapping? ​
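The unthrottling suggested above can be done at runtime, without a restart; a sketch (0 disables the limit, and the restore value is illustrative):

```shell
# On the bootstrapping node: remove the compaction throughput cap so
# compaction can keep up with incoming streams (the node serves no
# client reads until it finishes joining):
nodetool setcompactionthroughput 0

# Restore a cap (MB/s) once the node has joined:
nodetool setcompactionthroughput 16
```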

Re: Bootstrapping node on Cassandra 3.7 causes cluster-wide performance issues

2017-09-11 Thread Lerh Chuan Low
de to > resume acceptable operations. > > We're currently upgrading all of our clients to use the 3.11.0 version of > the DataStax Python driver, which will allow us to add our next newly > bootstrapped node to a blacklist, hoping that if it doesn't accept writes > the rest of

Bootstrapping node on Cassandra 3.7 causes cluster-wide performance issues

2017-09-11 Thread Paul Pollack
the case whenever we turn down the bootstrapping node), and allow it to finish its compactions. We were also interested in hearing if anyone has had much luck using the sstableofflinerelevel tool, and if this is a reasonable approach for our issue. One of my colleagues found a post where a use

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
> > But if it also streams, it means I'd still be under-pressure if I am not > mistaken. I am under the assumption that the compactions are the by-product > of streaming too many SStables at the same time, and not because of my > current write load. > Ah yeah I wasn't thinking about the capacity pr

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
long as I am using LCS, am I right? > > Code seems to imply that it should always keep SSTable levels. Sounds > like something is wrong if it is not :/. > I don't have the logs anymore (machine had other issues, replacing disks) so I can't be sure of this. Definitely I r

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
> 1) You mean restarting the node in the middle of the bootstrap with > join_ring=false? Would this option require me to issue a nodetool bootstrap > resume, correct? I didn't know you could instruct the join via JMX. Would > it be the same as the nodetool bootstrap command? write_survey is slightl

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
Hi Kurt, 1) You mean restarting the node in the middle of the bootstrap with join_ring=false? Would this option require me to issue a nodetool bootstrap resume, correct? I didn't know you could instruct the join via JMX. Would it be the same as the nodetool bootstrap command? 2) Yes, they are stream

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
Well, that sucks. Be interested if you could find out if any of the streamed SSTables are retaining their levels. To answer your questions: 1) No. However, you could set your nodes to join in write_survey mode, which will stop them from joining the ring and you can initiate the join over JMX when
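A sketch of the write_survey approach described above. The `joinRing` operation is on the StorageService MBean; the jmxterm commands shown are illustrative, and any JMX client works:

```shell
# Start the node in write-survey mode: it receives writes but does not
# join the ring or serve reads until explicitly told to:
JVM_OPTS="$JVM_OPTS -Dcassandra.write_survey=true"

# Later, initiate the join over JMX by invoking joinRing on the
# StorageService MBean (example session using jmxterm):
#   bean org.apache.cassandra.db:type=StorageService
#   run joinRing
```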

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
Hi Kurt, sorry, I forgot to specify. I am on 3.0.14. Cheers, Stefano On Wed, Aug 23, 2017 at 12:11 AM, kurt greaves wrote: > What version are you running? 2.2 has an improvement that will retain > levels when streaming and this shouldn't really happen. If you're on 2.1 > best bet is to upgrade

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-22 Thread kurt greaves
What version are you running? 2.2 has an improvement that will retain levels when streaming, so this shouldn't really happen. If you're on 2.1, your best bet is to upgrade

Bootstrapping a node fails because of compactions not keeping up

2017-08-22 Thread Stefano Ortolani
Hi all, I am trying to bootstrap a node without success due to running out of space. Average node size is 260GB with lots of LCS tables (overall data ~3.5 TB) Each node is also configured with a 1TB disk, including the bootstrapping node. After 12 hours the bootstrapping node fails with more

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Daniel Hölbling-Inzko
That makes sense. Thank you so much for pointing that out, Alex. So, long story short: once I am up to the RF I actually want (RF 3 per DC) and am just adding nodes for capacity, joining the ring will work correctly and no inconsistencies will exist. If I just change the RF, the nodes don't have the dat

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Oleksandr Shulgin
On Thu, Aug 3, 2017 at 9:33 AM, Daniel Hölbling-Inzko < daniel.hoelbling-in...@bitmovin.com> wrote: > No I set Auto bootstrap to true and the node was UN in nodetool status but > when doing a select on the node with ONE I got incomplete data. > What I think is happening here is not related to the

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Daniel Hölbling-Inzko
No, I set auto_bootstrap to true and the node was UN in nodetool status, but when doing a select on the node with ONE I got incomplete data. Jeff Jirsa wrote on Thu, 3 Aug 2017 at 09:02: > "nodetool status" shows node as UN (up normal) instead of UJ (up joining) > > What you're describing really

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Jeff Jirsa
"nodetool status" shows node as UN (up normal) instead of UJ (up joining) What you're describing really sounds odd. Something isn't adding up to me, but I'm not sure why. You shouldn't be able to query it directly until it's bootstrapped, as far as I know. Are you sure you're not joining as a seed

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Daniel Hölbling-Inzko
Thanks Jeff. How do I determine that bootstrap is finished? Haven't seen that anywhere so far. Reads via storage would be ok as every query would be checked by another node too. I was only seeing inconsistencies since clients went directly to the node with Consistency ONE Greetings Jeff Jirsa sc

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Jeff Jirsa
By the time bootstrap is complete it should be as consistent as the source node - you can change start_native_transport to false to avoid serving clients directly (tcp/9042), but it'll still serve reads via the storage service (tcp/7000), but the guarantee is that data should be consistent by th
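The `start_native_transport` switch Jeff mentions lives in `cassandra.yaml`. A minimal sketch of the relevant fragment (the field name is real; the comments are mine):

```yaml
# cassandra.yaml -- keep the node from serving CQL clients directly
# (tcp/9042) while it settles; internode streaming and reads via the
# storage service (tcp/7000) are unaffected.
start_native_transport: false

# Once satisfied the node is consistent, re-enable at runtime with:
#   nodetool enablebinary
```

This avoids the situation in the thread where clients at ONE read directly from a node that may not yet hold all its data.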

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
only in this one case might that work (RF==N)

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Oleksandr Shulgin
On Wed, Aug 2, 2017 at 10:53 AM, Daniel Hölbling-Inzko < daniel.hoelbling-in...@bitmovin.com> wrote: > > Any advice on how to avoid this in the future? Is there a way to start up > a node that does not serve client requests but does replicate data? > Would it not work if you first increase the RF

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
You can't just add a new DC and then tell their clients to connect to the new one (after migrating all the data to it obv.)? If you can't achieve that you should probably use GossipingPropertyFileSnitch.​ Your best plan is to have the desired RF/redundancy from the start. Changing RF in production
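The DC-migration route kurt describes boils down to extending the keyspace's replication map and then streaming existing data into the new datacenter. A hedged sketch (keyspace and datacenter names here are examples, not from the thread):

```sql
-- Add the new datacenter to the replication map.
ALTER KEYSPACE my_keyspace
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc_old': '3',
    'dc_new': '3'
  };

-- Then, on each node in dc_new, stream existing data from the old DC:
--   nodetool rebuild -- dc_old
```

Until `nodetool rebuild` completes on every new-DC node, reads at ONE routed to `dc_new` can return incomplete data, which is exactly the symptom discussed in this thread.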

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Daniel Hölbling-Inzko
Thanks for the pointers Kurt! I did increase the RF to N so that would not have been the issue. DC migration is also a problem since I am using the Google Cloud Snitch. So I'd have to take down the whole DC and restart anew (which would mess with my clients as they only connect to their local DC).

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
If you want to change RF on a live system your best bet is through DC migration (add another DC with the desired # of nodes and RF), and migrate your clients to use that DC. There is a way to boot a node and not join the ring, however I don't think it will work for new nodes (have not confirmed), a

Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Daniel Hölbling-Inzko
Hi, It's probably a strange question but I have a heavily read-optimized payload where data integrity is not a big deal. So to keep latencies low I am reading with Consistency ONE from my Multi-DC Cluster. Now the issue I saw is that I needed to add another Cassandra node (for redundancy reasons).
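The consistency arithmetic underlying this thread: a read is only guaranteed to see the latest write when the read and write replica counts overlap, i.e. R + W > RF. This is standard Cassandra quorum math, not code from the thread, sketched here to show why reads at ONE can return stale data from a node that hasn't finished catching up:

```python
def quorum(rf: int) -> int:
    """Replicas needed for a QUORUM read or write: floor(rf/2) + 1."""
    return rf // 2 + 1

def read_is_guaranteed_consistent(read_cl: int, write_cl: int, rf: int) -> bool:
    """Strong consistency requires R + W > RF (read and write sets overlap)."""
    return read_cl + write_cl > rf

# Reads at ONE against QUORUM writes with RF=3: 1 + 2 = 3, which is not > 3,
# so a freshly bootstrapped or under-replicated replica can serve stale data.
print(quorum(3))                                               # 2
print(read_is_guaranteed_consistent(1, quorum(3), 3))          # False
print(read_is_guaranteed_consistent(quorum(3), quorum(3), 3))  # True
```

This is why the advice in the thread is to keep the joining node from serving clients until bootstrap (or a rebuild/repair) has finished, rather than relying on ONE to paper over it.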

Weird Bootstrapping Issue

2017-05-01 Thread Gareth Collins
g node" we couldn't create a cluster of size greater than one) and with full data (9GB) we were able to create a cluster with only two nodes. The time on the nodes may be off by up to a second - would that be big enough to cause any trouble when bootstrapping? Anyone seen something like this bef

Re: Bootstrapping multiple cassandra nodes simultaneously in existing dc

2016-12-03 Thread Bhuvan Rawal
the load-avg on the existing nodes, as >> that might affect your application read request latencies. >> >> On Sun, Sep 11, 2016, 17:10 Jens Rantil wrote: >> >>> Hi Bhuvan, >>> >>> I have done such expansion multiple times and can really recommend

Re: Bootstrapping data from Cassandra 2.2.5 datacenter to 3.0.8 datacenter fails because of streaming errors

2016-10-10 Thread Abhishek Verma
ming: > > 1. Bootstrapping nodes > 2. Decommissioning nodes > 3. Repair > > > On Mon, Oct 10, 2016 at 5:00 PM Jeff Jirsa > wrote: > >> >> >> No need to cc dev@, user@ is the right list for this question. >> >> >> >> As Jon mentioned, yo

Re: Bootstrapping data from Cassandra 2.2.5 datacenter to 3.0.8 datacenter fails because of streaming errors

2016-10-10 Thread Jonathan Haddad
During the upgrade you'll want to avoid the following operations that result in data streaming: 1. Bootstrapping nodes 2. Decommissioning nodes 3. Repair On Mon, Oct 10, 2016 at 5:00 PM Jeff Jirsa wrote: > > > No need to cc dev@, user@ is the right list for this question.


Re: Bootstrapping data from Cassandra 2.2.5 datacenter to 3.0.8 datacenter fails because of streaming errors

2016-10-10 Thread Jeff Jirsa
advisable. From: Abhishek Verma Reply-To: "user@cassandra.apache.org" Date: Monday, October 10, 2016 at 4:34 PM To: "user@cassandra.apache.org" , "d...@cassandra.apache.org" Subject: Bootstrapping data from Cassandra 2.2.5 datacenter to 3.0.8 datacent

Re: Bootstrapping data from Cassandra 2.2.5 datacenter to 3.0.8 datacenter fails because of streaming errors

2016-10-10 Thread Utkarsh Sengar
As Jonathan said, you need to upgrade Cassandra directly and use "nodetool upgradesstables". Datastax has an excellent resource on upgrading Cassandra https://docs.datastax.com/en/latest-upgrade/upgrade/cassandra/upgdCassandra.html, specifically https://docs.datastax.com/en/latest-upgrade/upgrade/c
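The in-place rolling upgrade Jonathan and Utkarsh describe follows a fixed per-node order. Sketched below as a dry run that only prints the steps (since `nodetool` needs a live node); the command names are real, the service-management steps are placeholders for whatever your platform uses:

```shell
# Rolling-upgrade steps for one node, echoed as a dry run. Apply to one
# node at a time, and avoid bootstrap, decommission, and repair until
# every node is upgraded and its sstables are rewritten.
steps="
nodetool drain                 # flush memtables and stop accepting writes
<stop the cassandra service>
<install the new Cassandra binaries>
<start the cassandra service>
nodetool upgradesstables -a    # rewrite all sstables in the new format
"
printf '%s\n' "$steps"
```

Printing rather than executing keeps the sketch safe to run anywhere; on a real node you would run each listed command in order.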
