Thanks to both Erick/Sean for your responses.
Both your explanations are plausible in my scenario. This is what I have done
subsequently, which seems to have improved the situation:
1. The cluster was very busy trying to run repairs/sync the new replicas
(about 350GB) in the new DC (Gossi…
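A quick way to keep an eye on that streaming/repair activity while it runs
(standard nodetool commands; the grep filter is just a convenience to hide
completed files):

    nodetool netstats | grep -v "100%"   # active streams and per-file progress
    nodetool compactionstats             # pending and running compactions
    nodetool tpstats                     # thread pools: pending/blocked/dropped tasks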
Yes to the steps. The only thing I would add is to run a nodetool drain
before shutting C* down so all mutations are flushed to SSTables and there
won't be any commit logs to replay on startup.
Also, the usual "backup your cluster and configuration files" boilerplate
applies. 😁
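In shell terms, that pre-shutdown sequence looks roughly like this (service
name and config path are assumptions; adjust for your install):

    nodetool drain                  # flush memtables to SSTables, stop accepting writes
    sudo systemctl stop cassandra   # assuming a systemd-managed service
    sudo cp -a /etc/cassandra /etc/cassandra.bak.$(date +%F)   # the "backup your config" step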
So I should follow the steps above, right?
Thanks Erick!
On Wed, Feb 12, 2020, 6:58 PM Erick Ramirez wrote:
>> In case you have a hybrid situation with 3.11.3, 3.11.4 and 3.11.5 that
>> is working in production, what do you recommend?
>
> You shouldn't end up in this mixed-version situation at all. I would highly
> recommend you upgrade all the nodes to 3.11.5 or whatever the latest
> version is.
Thanks Erick ...
This is helpful...
On Wed, 12 Feb 2020 at 17:46, Erick Ramirez wrote:
> There shouldn't be any negative impact from dropping MVs and there's
> certainly no risk to the base table if that is your concern. All it will do
> is remove all the data in the respective views plus drop any pending view
> mutations from the batch log…
>
> In case you have a hybrid situation with 3.11.3, 3.11.4 and 3.11.5 that
> is working in production, what do you recommend?
You shouldn't end up in this mixed-version situation at all. I would highly
recommend you upgrade all the nodes to 3.11.5 or whatever the latest
version is.
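For anyone needing to confirm what is actually running, gossip reports every
node's version from a single host (a rough check; output formatting may vary
slightly by version):

    nodetool version                                  # ReleaseVersion of the local node
    nodetool gossipinfo | egrep '^/|RELEASE_VERSION'  # version of every node in the cluster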
Thanks everyone!
In case you have a hybrid situation with 3.11.3, 3.11.4 and 3.11.5 that is
working in production, what do you recommend?
On Wed, Feb 12, 2020, 5:55 PM Erick Ramirez wrote:
>> So unless the sstable format has changed, I can avoid doing that.
>
> Just to reinforce what Jon and Sean already said, the above assumption is
> dangerous…
I've just seen your questions on ASF Slack and didn't immediately make the
connection that this post in the mailing list is one and the same. I
understand what you're doing now -- you have an existing DC with no
encryption and you want to add a new DC with encryption enabled but don't
want the downtime…
>
> So unless the sstable format has changed, I can avoid doing that.
Just to reinforce what Jon and Sean already said, the above assumption is
dangerous. It is always best to follow the recommended upgrade procedure,
and running mixed versions is never a good idea unless you've received
instructions…
There shouldn't be any negative impact from dropping MVs and there's
certainly no risk to the base table if that is your concern. All it will do
is remove all the data in the respective views plus drop any pending view
mutations from the batch log. If anything, you should see some performance
gain.
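For reference, the drop itself is one DDL statement per view; the keyspace
and view names below are made up:

    cqlsh -e "DROP MATERIALIZED VIEW IF EXISTS my_ks.unused_view_1;"
    cqlsh -e "DROP MATERIALIZED VIEW IF EXISTS my_ks.unused_view_2;"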
>
> ... where dc-1 has encryption enabled and dc-2 doesn't have encryption?
> ... is there a way to specify encryption within a DC?
The quick answer to your question is no. But you've got me really curious
now because you have a very strange setup which makes no sense to me and
I'm hoping you could elaborate.
I generally see these exceptions when the cluster is overloaded. I think
what's happening is that when the app/driver sends a read request, the
coordinator takes a long time to respond because the nodes are busy serving
other requests. The driver gives up (client-side timeout reached) and the
socket…
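When overload is the suspect, these are the usual first checks on the nodes
(standard nodetool output; what counts as "too high" is a judgment call for
your workload, and the keyspace/table names are hypothetical):

    nodetool tpstats                          # pending/blocked read stages, dropped READ messages
    nodetool proxyhistograms                  # coordinator-level read/write latency percentiles
    nodetool tablehistograms my_ks my_table   # per-table latency percentiles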
Ah - I should have looked it up! Thank you for fixing my mistake.
Sean Durity
-----Original Message-----
From: Michael Shuler
Sent: Wednesday, February 12, 2020 3:17 PM
To: user@cassandra.apache.org
Subject: Re: [EXTERNAL] Cassandra 3.11.X upgrades
On 2/12/20 12:58 PM, Durity, Sean R wrote:
> Check the readme.txt for any upgrade notes…
Hi Sergio,
We have a production cluster with vnodes=4 that is a bit larger than that, so
yes it is possible to do so. That said, we aren’t wedded to vnodes=4 and are
paying attention to discussions happening around the 4.0 work and mulling the
possibility of shifting to 16.
Note though, we di…
On 2/12/20 12:58 PM, Durity, Sean R wrote:
> Check the readme.txt for any upgrade notes
Just a quick correction:
NEWS.txt (upgrade (and other important) notes)
CHANGES.txt (changelog with JIRAs)
This is why we list links to these two files in the release announcements.
--
Kind regards,
Michael
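Both files also live at the root of the source tree, so you can review them
for a target version before upgrading (the tag below is just an example):

    curl -s https://raw.githubusercontent.com/apache/cassandra/cassandra-3.11.5/NEWS.txt | less
    curl -s https://raw.githubusercontent.com/apache/cassandra/cassandra-3.11.5/CHANGES.txt | less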
Thanks, everyone! @Jon
https://lists.apache.org/thread.html/rd18814bfba487824ca95a58191f4dcdb86f15c9bb66cf2bcc29ddf0b%40%3Cuser.cassandra.apache.org%3E
I have a side question about something that seems controversial in light of
Anthony's response.
So is it safe to go to production in a 1TB cluster…
>> A while ago, on my first cluster…
Understatement used so effectively. Jon is a master.
On Wed, Feb 12, 2020 at 11:02 AM Sergio wrote:
Thanks for your reply!
So unless the sstable format has changed, I can avoid doing that.
Correct?
Best,
Sergio
A while ago, on my first cluster, I decided to do an upgrade by adding
nodes running 1.2 to an existing cluster running version 1.1. This was a
bad decision, and at that point I decided to always play it safe and always
stick to a single version, and never bootstrap in a node running a different
version.
Thanks for your reply!
So unless the sstable format has changed, I can avoid doing that.
Correct?
Best,
Sergio
On Wed, Feb 12, 2020, 10:58 AM Durity, Sean R wrote:
> Check the readme.txt for any upgrade notes, but the basic procedure is to:
>
> - Verify that nodetool upgradesstables has completed successfully on all
> nodes from any previous upgrade…
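On the sstable-format point: you can check which format versions are actually
on disk before deciding anything. The data path is an assumption, and the
version codes (e.g. 'mc' vs 'md') vary by release:

    # Count SSTables by format version (the first token of each -Data.db filename):
    find /var/lib/cassandra/data -name '*-Data.db' \
      | sed -E 's|.*/([a-z]+)-[0-9]+-.*|\1|' | sort | uniq -c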
Check the readme.txt for any upgrade notes, but the basic procedure is to:
* Verify that nodetool upgradesstables has completed successfully on all
nodes from any previous upgrade
* Turn off repairs and any other streaming operations (add/remove nodes)
* Stop an un-upgraded node (seed nodes first)…
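A rough per-node sketch of that rolling flow (package manager, service name,
and version pin are assumptions; adapt to your packaging):

    nodetool upgradesstables               # rewrite any SSTables still on an older format
    nodetool drain                         # flush and stop accepting writes
    sudo systemctl stop cassandra
    sudo yum install -y cassandra-3.11.5   # hypothetical package/version syntax
    sudo systemctl start cassandra
    nodetool status                        # wait for UN before moving to the next node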
> This means that from the client driver perspective, when I define the
contact points I can specify any node in the cluster as a contact point and
not necessarily a seed node?
Correct.
On Wed, Feb 12, 2020 at 11:48 AM Sergio wrote:
> So if
> 1) I stop a Cassandra node that doesn't have itself in the seeds IP list…
Seed nodes are special in the sense that other nodes need them for
bootstrap (first startup only) and they have a special place in the Gossip
system. Odds of gossiping to a seed node are higher than other nodes, which
makes them "hubs" of gossip messaging.
Also, they do not bootstrap, so they won't stream data…
So if
1) I stop a Cassandra node that doesn't have itself in the seeds IP list
2) I change the cassandra.yaml of this node and add it to the seed list
3) I restart the node
It will work completely fine and this is not even necessary.
This means that from the client driver perspective, when I define the contact
points I can specify any node in the cluster as a contact point and not
necessarily a seed node?
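Concretely, step 2 is just the seeds line in cassandra.yaml; the IPs below
are made up:

    # cassandra.yaml excerpt -- add this node's own IP to the list:
    #   seed_provider:
    #     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    #       parameters:
    #         - seeds: "10.0.0.1,10.0.0.2,10.0.0.5"   # 10.0.0.5 = this node
    sudo systemctl restart cassandra   # assuming a systemd service
    nodetool status                    # confirm the node comes back UN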
Hi,
So the application team created 11 materialized views on a base table in
production, and we need to drop 7 of them as they are not in use.
We wanted to understand the impact of dropping the materialized views.
We are on Cassandra 3.11.1, multi-datacenter with a replication factor of 3
in each.
I believe seed nodes are not special nodes; you just choose a few nodes from
the cluster that help bootstrap newly joining nodes. You can change
cassandra.yaml to make any other node a seed node. There's nothing like
promotion.
-Arvinder
On Wed, Feb 12, 2020, 8:37 AM Sergio wrote:
> Hi guys!…
Hi guys!
Is there a way to promote a non-seed node to a seed node?
If yes, how do you do it?
Thanks!
Hi guys!
How do you usually upgrade your cluster for minor version upgrades?
I tried to add a node running 3.11.5 to a test cluster with 3.11.4 nodes.
Is there any restriction?
Best,
Sergio
Hello,
Is there a way we can have a multi-DC Cassandra cluster, where dc-1 has
encryption enabled and dc-2 doesn't have encryption?
I am trying to add a new DC to the existing cluster, where the existing DC
doesn't have encryption between the nodes but the new DC has encryption
enabled?
I see the below…
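For context, internode encryption in 3.11 is a per-node setting under
server_encryption_options in cassandra.yaml. Note that the 'dc' value means
"encrypt traffic between DCs", not "this DC requires encryption", so a
per-DC split like the above isn't expressible (paths and passwords below are
placeholders):

    # cassandra.yaml excerpt (3.11):
    #   server_encryption_options:
    #     internode_encryption: dc     # one of: all, none, dc, rack
    #     keystore: /etc/cassandra/conf/keystore.jks
    #     keystore_password: changeit
    #     truststore: /etc/cassandra/conf/truststore.jks
    #     truststore_password: changeit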
thank you
On Tue, Feb 11, 2020 at 6:38 PM Erick Ramirez wrote:
>> I am using the astyanax client
>
> Right. It was announced as being retired back in 2016 [1], which ended in
> 2018 [2]:
>
>> *Deprecation*: Astyanax has been retired and is no longer under active
>> development but may receive dependency updates…
This looks like an error between your client and the cluster. Is the other IP
address your client app? I have typically seen this when there are network
issues between the client and the cluster. Cassandra driver connections are
typically very long-lived. If something like a switch or firewall times out
idle connections…
Hi Cassandra folks,
We are getting a lot of these errors and transactions are timing out, and I
was wondering if this can be caused by Cassandra itself or if this is a
genuine Linux network issue. The client job reports the Cassandra node down
after this occurs, but I suspect this is due to the…