On Mon, 4 Dec 2023 at 11:28, Dipan Shah wrote:
Hello Sebastien,
There are no inbuilt tools that will automatically
remove folders of deleted tables.
Thanks,
Dipan Shah
---
>>> … small tables at a high rate. You could quickly end up with more than 65K
>>> subdirectories (the ext4 limit) in the keyspace directory, while 99.9…% of them
>>> are residual directories of deleted tables.
>>>
>>> That looks quite dirty of Cassandra not to clean up its own "garbage" by
>>> itself, and quite dangerous for the end user to have to do it alone, don't
>>> you think so?
>>>
>>> Thanks,
>>>
>>> Sébastien.
-
The last time you mentioned this:
On Tue, Dec 5, 2023 at 11:57 AM Sébastien Rebecchi
wrote:
> Hi Bowen,
>
> Thanks for your answer.
>
> I was thinking of extreme use cases, but as far as I am concerned I can
> deal with creation and deletion of 2 tables every 6 hours for a keyspace.
> So it lets
… -empty -exec rmdir {} \;
rmdir only removes empty directories, so you'll need to run it twice (once for
the backups subdirectory, once for the now-empty table directory). It will remove
all empty directories under that folder, so if you've got unused tables you'd be
better off using the find command, getting the list…
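For reference, the beginning of that command was cut off above. A plausible full
form, assuming the default data directory /var/lib/cassandra/data and a keyspace
named my_ks (both placeholders; double-check which directories really belong to
dropped tables before deleting anything):

  # -depth walks bottom-up, so an empty backups/ subdirectory is removed before
  # its parent table directory is tested; otherwise run the command twice.
  find /var/lib/cassandra/data/my_ks -depth -type d -empty -exec rmdir {} \;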
From: Sébastien Rebecchi
Sent: 04 December 2023 13:54
To: user@cassandra.apache.org
Subject: Remove folders of deleted tables
Hello.
Is there a way to automatically remove folders of deleted tables?
Sébastien.
…cies) <= 4.12.0-1 is needed by cassandra-4.0.7-1.noarch
Then I solved this by using "sudo rpm --nodeps -ivh cassandra-4.0.7-1.noarch.rpm"
and the version was installed successfully.
*Is skipping dependencies with --nodeps a right approach??*
###Next, I tried to *uninstall the version* using
"yum remove cassandra"
It gives the error: *Invalid version flag: or*
Refer to the complete trace below:
# yum remove cassandra
Loaded plugins: fastestmirror
Resolving Dependencies…
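If yum keeps tripping over the version string, one option (a sketch, using the
package name shown above) is to remove the package with rpm directly, mirroring
how it was installed:

  # Remove by exact package name, bypassing yum's version parsing.
  # --nodeps mirrors the --nodeps install; drop it if dependencies resolve cleanly.
  sudo rpm -e --nodeps cassandra-4.0.7-1.noarch
  # Confirm it is gone:
  rpm -q cassandra

That said, skipping dependency checks is generally a workaround rather than a
recommended approach; resolving the missing dependency is the safer route.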
Thanks Jeff and Vytenis.
Jeff, could you explain what you mean by:
If you just pipe all of your sstables to user defined compaction jmx
> endpoint one at a time you’ll purge many of the tombstones as long as you
> don’t have a horrific data model.
Regards
Manish
On Wed, Jul 7, 2021 at 4:21
In 2.1 the only option is to enable auto compaction or to queue up manual
user-defined compactions.
If you just pipe all of your sstables to user defined compaction jmx endpoint
one at a time you’ll purge many of the tombstones as long as you don’t have a
horrific data model.
> On Jul 6, 2021, at 3
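For anyone reading along, the JMX endpoint Jeff refers to is the
forceUserDefinedCompaction operation on the CompactionManager MBean. A rough
sketch of driving it from the shell with jmxterm (the jar path and SSTable
filename are placeholders, and the exact argument form varies a little between
versions, so check the MBean's info output first):

  echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction /var/lib/cassandra/data/my_ks/my_table/my_ks-my_table-ka-1234-Data.db" | java -jar jmxterm-1.0.2-uber.jar -l localhost:7199 -n

Feeding SSTables through it one at a time, as Jeff describes, rewrites each file
individually and drops tombstones that are past gc_grace_seconds and not
shadowing data elsewhere.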
You might want to take a look at the `unchecked_tombstone_compaction` table
setting. The best way to see if this is affecting you is to look at the
sstablemetadata output for the sstables and see if your tombstone ratio is higher
than the configured tombstone_threshold ratio (0.2 by default) for the
table.
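A quick way to do that check from the shell, plus the follow-up table change if
the ratio is high but single-SSTable tombstone compactions still are not firing
(keyspace/table names and paths below are placeholders):

  # Estimated droppable tombstones per SSTable:
  for f in /var/lib/cassandra/data/my_ks/my_table-*/*-Data.db; do
    echo "$f"
    sstablemetadata "$f" | grep -i "droppable tombstones"
  done

  # Relax the single-SSTable tombstone compaction checks (the existing strategy
  # class has to be restated when altering compaction options):
  cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'unchecked_tombstone_compaction': 'true',
    'tombstone_threshold': '0.2'};"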
Thanks Kane for the suggestion.
Regards
Manish
On Tue, Jul 6, 2021 at 6:19 AM Kane Wilson wrote:
In one of our LCS tables auto compaction was disabled. Now, after years of
running, range queries using spark-cassandra-connector are failing. The Cassandra
version is 2.1.16.
I suspect that due to the disabling of autocompaction lots of tombstones got
created, and now, while reading, those are creating issues and que…
Hi all,
Is it possible to remove a dead node directly from the cluster without streaming?
My Cassandra cluster is quite large and takes too long to stream. (nodetool
removenode)
It's okay if my data is temporarily inconsistent.
Thanks in advance.
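If temporary inconsistency really is acceptable, the gossip-level removal skips
streaming entirely. A sketch (the IP is a placeholder; nodetool assassinate
exists from 2.2 on, older versions expose the same thing only through JMX):

  # Confirm the node is down and note its address, then force-remove it from gossip:
  nodetool status
  nodetool assassinate 10.0.0.12
  # Nothing was re-streamed, so run repairs on the remaining replicas afterwards:
  nodetool repair -pr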
Remove me from Cassandra listserv.
maharkn...@comcast.net
Could you explain more?
When we add a new node, it should migrate data from other nodes, right?
What happens if other nodes are absent? For example, the cluster
consists of 3 nodes, but 2 nodes are down; now we add a fourth new node.
What happens then?
2018-05-01 12:01 GMT+08:00 Jeff Jirsa :
nodetool decommission streams data from the losing replica, so only that
instance has to be online (and decom should be preferred to removenode)
If that instance is offline, you can use removenode, but you risk violating
consistency guarantees
Adding nodes is similar - bootstrap streams from th
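In command form, that boils down to something like the following (the Host ID is
a placeholder taken from nodetool status):

  # On the node that is leaving, while it is still up; streams its data away:
  nodetool decommission

  # If the node is already dead, from any live node instead:
  nodetool removenode <host-id>
  nodetool removenode status   # check progress of the re-streaming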
Hi All,
When a new node is added, due to the even distribution of the new tokens,
the current nodes of the ring should migrate data to this new node.
So, does it require all nodes to be present? If not, and some nodes
are down, it will miss the data migration for those parts; how and
when to f…
> … repair to make the replaced node consistent again, since it missed ongoing
> writes during bootstrapping. But for a large cluster, repair is a painful
> process.
>
> Thanks,
> Peng Xiao
>
> ------ Original message ------
> *From:* "Anthony Grasso"
> *Sent:* Thursday, 22 March 2018, 7:13 PM
> *To:* "user"
> *Subject:* Re: replace dead node vs remove node
>
> Hi Peng,
>
> Depending on the hardware failure you can do one of two things:
>
> 1. If the disks are intact and uncorrupted, yo…
described in the blogpost you linked to. The operation is similar to
bootstrapping a new node. There is no need to perform any other remove or
join operation on the failed or new nodes. As per the blog post, you
definitely want to run repair on the new node as soon as it joins the
cluster. In this
…-03-12/replace-a-dead-node-in-cassandra.html, we
can replace this dead node. Is it the same as bootstrapping a new node? Does that
mean we don't need to remove the node and rejoin?
Could anyone please advise?
Thanks,
Peng Xiao
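For completeness, the replacement procedure from that blog post boils down to
starting the new node with the replace flag (a sketch; the IP is a placeholder
for the dead node's address, and the flag goes in cassandra-env.sh or on the
command line before the first start):

  # On the replacement node, before it ever starts:
  JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.12"
  # Start Cassandra; it bootstraps the dead node's ranges under the dead node's
  # tokens. Remove the flag once it has joined, then, as Anthony says, run:
  nodetool repair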
2018 2:18 PM
*To:* user@cassandra.apache.org
*Subject:* Re: system.size_estimates - safe to remove sstables?
Finally, got a chance to work on it over the weekend.
It worked as advertised. :)
Thanks a lot, Chris.
Kunal
On 8 March 2018 at 10:47, Kunal Gangakhedkar
wrote:
Thanks a
Kunal,
Is this the GCE cluster you are speaking of in the “Adding new DC?” thread?
Kenneth Brotman
From: Kunal Gangakhedkar [mailto:kgangakhed...@gmail.com]
Sent: Sunday, March 11, 2018 2:18 PM
To: user@cassandra.apache.org
Subject: Re: system.size_estimates - safe to remove sstables
Hi Chris,
I checked for snapshots and backups - none found.
Also, we're not using opscenter, hadoop or spark or any such tool.
So, do you think we can just remove the cf and restart the service?
Thanks,
Kunal
On 5 March 2018 at 21:52, Chris Lohfink wrote:
> Any chance space used by s
…almost all of it shows up as occupied by the size_estimates CF.
Out of 296 GiB, 288 GiB shows up as consumed by size_estimates in 'du -sh'
output.
This is while the other node is chugging along, showing only 25 MiB consumed by
size_estimates (du -sh output).
Any idea why this discrepancy?
Is it safe to remove the size_estimates sstables from the affected node and
restart the service?
Thanks,
Kunal
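system.size_estimates is node-local and gets rebuilt periodically, so the
commonly suggested cleanup is roughly the following (paths assume a default
package install; adjust to your layout):

  nodetool drain                 # flush and stop accepting writes
  sudo service cassandra stop
  sudo rm -rf /var/lib/cassandra/data/system/size_estimates-*/*
  sudo service cassandra start   # the table is repopulated over time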
…invoked on all the related nodes when upserting a record? We use Cassandra 2.1.x.
Thanks
Boying
From: Romain Hardouin [mailto:romainh...@yahoo.fr]
Sent: 2016年6月8日 20:12
To: user@cassandra.apache.org
Subject: Re: How to remove 'compact storage' attribute?
Hi,
You can't
…the 'compact storage' attribute, which prevents us
from using some new features provided by CQL, such as secondary indexes. Does
anyone know how to remove this attribute? "ALTER TABLE" doesn't seem to work
according to the CQL documentation.
Thanks,
Boying
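For what it is worth, on 2.1 there was no supported way to do this; the dedicated
statement only appeared in later releases (3.11 and newer), so it is only an
option after an upgrade (table name below is a placeholder):

  cqlsh -e "ALTER TABLE my_ks.my_table DROP COMPACT STORAGE;"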
On Sun, Sep 27, 2015 at 11:59 PM, Erick Ramirez
wrote:
> You should never run `nodetool compact` since this will result in a
> massive SSTable that will almost never get compacted out or take a very
> long time to get compacted out.
>
Respectfully disagree. There are various cases where nodetool
On Mon, Sep 28, 2015 at 2:59 AM, Erick Ramirez wrote:
> have many tables like this, and I'd like to reclaim those spaces sooner.
> What would be the best way to do it? Should I run "nodetool compact" when I
> see two large files that are 2 weeks old? Is there configuration parameters
> I can tune
Hello,
You should never run `nodetool compact` since this will result in a massive
SSTable that will almost never get compacted out or take a very long time
to get compacted out.
You are correct that there needs to be 4 similar-sized SSTables for them to
get compacted. If you want the expired dat
Apparently this was reported back in May:
https://issues.apache.org/jira/browse/CASSANDRA-9510
- Jeff
From: Dikang Gu
Reply-To: "user@cassandra.apache.org"
Date: Friday, September 25, 2015 at 11:31 AM
To: cassandra
Subject: Re: Unable to remove dead node from cluster.
The NPE t
Hi, I have a table where I set the TTL to only 7 days for all records, and we keep
pumping records in every day. In general, I would expect all data files for
that table to have timestamps less than, say, 8 or 9 days old, giving the system
some time to work its magic. However, I see some files more tha…
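Since every row in that table carries the same 7-day TTL, a time-based compaction
strategy is usually the cleaner fix: whole SSTables whose newest data has expired
are simply dropped instead of waiting for four similar-sized files. A sketch for
newer versions (TWCS, 3.0.8/3.8 and later; on the 2.x line DateTieredCompactionStrategy
played the same role; names are placeholders):

  cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'}
    AND default_time_to_live = 604800;"   # 7 days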
The NPE is thrown when the node tries to handleStateLeft, because it cannot find
the tokens associated with the node. Can we just ignore the NPE and
continue to remove the endpoint from the ring?
On Fri, Sep 25, 2015 at 10:52 AM, Dikang Gu wrote:
> @Jeff, yeah, I ran the nodetool grep, and in my c…
of the
> nodes). If this is the case, please reply so that you and I can submit a
> Jira and compare our stack traces and we can find the underlying root cause
> of this together.
>
> - Jeff
>
> From: Dikang Gu
> Reply-To: "user@cassandra.apache.org"
> Date:
To: cassandra
Subject: Re: Unable to remove dead node from cluster.
@Jeff, I just used JMX to connect to one node, ran unsafeAssassinateEndpoint, and
passed in the "10.210.165.55" IP address.
Yes, we have hundreds of other nodes in the nodetool status output as well.
On Tue, Sep 22, 2015 at 11
ping.
On Mon, Sep 21, 2015 at 11:51 AM, Dikang Gu wrote:
I have tried all of them; none of them worked.
1. decommission: the host had a hardware issue, and I cannot connect to it.
2. removenode: there is no Host ID, so removenode did not work.
3. unsafeAssassinateEndpoint: it throws the NPE I pasted before; can we
fix it?
Thanks
Dikang.
On Mon
Order is decommission, remove, assassinate.
Which have you tried?
On Sep 21, 2015 10:47 AM, "Dikang Gu" wrote:
> Hi there,
>
> I have a dead node in our cluster, which is a wired state right now, and
> can not be removed from cluster.
>
> The nodesta
(ThreadPoolExecutor.java:615)
~[na:1.7.0_45]
2015-09-18_23:21:40.80674 at java.lang.Thread.run(Thread.java:744)
~[na:1.7.0_45]
2015-09-18_23:21:40.85812 WARN 23:21:40 Not marking nodes down due to
local pause of 10852378435 > 50
Any suggestions about how to remove it?
Thanks.
--
Dikang
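For the archives, the operation Dikang is calling lives on the Gossiper MBean and
can be driven from any live node; a sketch with jmxterm (the jar path is an
assumption, the IP is the dead node's address from this thread):

  echo "run -b org.apache.cassandra.net:type=Gossiper unsafeAssassinateEndpoint 10.210.165.55" | java -jar jmxterm-1.0.2-uber.jar -l localhost:7199 -n

As noted above, it can hit the NPE tracked in CASSANDRA-9510 when gossip no
longer has tokens for the endpoint; the ticket linked earlier is the place to
check for the fix.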
Hi, I am facing the same issue on 2.0.16.
Did you solve this? How?
I plan to try a rolling restart and see if the gossip state recovers from this.
C*heers,
Alain
2015-06-19 11:40 GMT+02:00 曹志富 :
> I have a C* 2.1.5 cluster with 24 nodes. A few days ago I removed a node from
> this cluster
Thanks guys for the answers!
Saludos / Regards.
Analía Lorenzatto.
"Happiness is not something ready made. It comes from your own actions" by
Dalai Lama
On 21 Aug 2015 2:31 pm, "Sebastian Estevez"
wrote:
> To clarify, you do not need a ttl for deletes to be compacted away in
> Cassandra. Whe
To clarify, you do not need a ttl for deletes to be compacted away in
Cassandra. When you delete, we create a tombstone which will remain in the
system __at least__ gc_grace_seconds. We wait this long to give the
tombstone a chance to make it to all replica nodes; the best practice is to
run repair…
The TTL shouldn't matter if you deleted the data, since to my understanding
the delete should shadow the data signaling to C* that the data is a
candidate for removal on compaction.
Others might know better, but it could very well be the fact that
gc_grace_seconds is 0 that is causing your problem
Hello,
Daniel, I am using Size Tiered compaction.
My concern is that I do not have a TTL defined on the column family, and
I do not have the possibility to create one. Perhaps the "deleted data"
is never actually going to be removed?
Thanks a lot!
On Thu, Aug 20, 2015 at 4:24 AM, Daniel C…
Is this a LCS family, or Size Tiered? Manually running compaction on LCS
doesn't do anything until C* 2.2 (
https://issues.apache.org/jira/browse/CASSANDRA-7272)
Thanks,
Daniel
On Wed, Aug 19, 2015 at 6:56 PM, Analia Lorenzatto <
analialorenza...@gmail.com> wrote:
> Hello Michael,
>
> Thanks for
Hello Michael,
Thanks for responding!
I do not have snapshots on any node of the cluster.
Saludos / Regards.
Analía Lorenzatto.
"Happiness is not something ready made. It comes from your own actions" by
Dalai Lama
On 19 Aug 2015 6:19 pm, "Laing, Michael" wrote:
> Possibly you have snapshot
Possibly you have snapshots? If so, use nodetool to clear them.
On Wed, Aug 19, 2015 at 4:54 PM, Analia Lorenzatto <
analialorenza...@gmail.com> wrote:
> Hello guys,
>
> I have a cassandra cluster 2.1 comprised of 4 nodes.
>
> I removed a lot of data in a Column Family, then I ran manually a
> co
Hello guys,
I have a Cassandra 2.1 cluster comprised of 4 nodes.
I removed a lot of data in a column family, then I manually ran a
compaction on this column family on every node. After doing that, if I
query that data, Cassandra correctly says the data is not there. But the
space on disk is e…
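Two things worth checking from the shell before anything else, sketched below
(the keyspace/table names are placeholders; only lower gc_grace_seconds if you
repair more often than the new value):

  # Snapshots are hard links, so deleted-then-compacted data can still be pinned on disk:
  du -sh /var/lib/cassandra/data/*/*/snapshots 2>/dev/null
  nodetool clearsnapshot

  # Tombstones are only purged once they are older than gc_grace_seconds:
  cqlsh -e "ALTER TABLE my_ks.my_cf WITH gc_grace_seconds = 86400;"   # 1 day, example value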
Found my mistake: I was typing the command on the node I was trying to remove
from the cluster. After trying the command on another node in the cluster, it
worked (`nodetool status` shows the node as removed), however OpsCenter still
does not recognize the node as removed.
Any ways to fix
…specify a node with the switch -h.
So for you that would be:
nodetool -h <host> removenode <host-id>
where <host> is the address of a server which has the cassandra daemon
running.
Cheers
Jean
On 08 Jul 2015, at 01:39, Sid Tantia wrote:
I tried both `nodetool remov…
I tried both `nodetool removenode` and `nodetool decommission` and
they both give the error:
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection
refused'.
Here is what I have tried to fix this:
1) Uncommented JVM_
If the node is down, use:
nodetool removenode <host-id>
We have to run the above command when the node is down. If the cluster
does not use vnodes, adjust the tokens before running the nodetool removenode
command.
If the node is up, then the command would be "nodetool decommission" to
remove the…
Thanks for the response. I'm trying to remove a node that's already down for
some reason, so it's not allowing me to decommission it. Is there some other way
to do this?
On Tue, Jul 7, 2015 at 12:45 PM, Kiran mk wrote:
> Yes, if your intension is to decommission a node. You can do that
Yes, if your intention is to decommission a node, you can do that by
clicking on the node and choosing decommission.
Best Regards,
Kiran.M.K.
On Jul 8, 2015 1:00 AM, "Sid Tantia" wrote:
> I know you can use `nodetool removenode` from the command line but is
> there a way to remo
I know you can use `nodetool removenode` from the command line but is there a
way to remove a node from a cluster using OpsCenter?
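For the archives, the command sequence that eventually works in this thread looks
roughly like this (addresses and the Host ID are placeholders; the point is to
run it from, or point -h at, a node that is still part of the cluster, not the
one being removed):

  nodetool -h 10.0.0.11 -p 7199 status                  # find the dead node's Host ID
  nodetool -h 10.0.0.11 -p 7199 removenode <host-id>
  nodetool -h 10.0.0.11 -p 7199 removenode status       # if it seems stuck on streaming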
I have a C* 2.1.5 cluster with 24 nodes. A few days ago I removed a node from
this cluster using nodetool decommission.
But today I find some log entries like this:
INFO [GossipStage:1] 2015-06-19 17:38:05,616 Gossiper.java:968 -
InetAddress /172.19.105.41 is now DOWN
INFO [GossipStage:1] 2015-06-19 17:38…
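A couple of commands that help when a decommissioned node keeps reappearing in
the logs (the IP is the one from this message; assassinate is only a nodetool
subcommand from 2.2 on, older versions have it via the Gossiper JMX operation
mentioned elsewhere in this digest):

  # See what gossip still believes about the old node; a LEFT entry normally
  # ages out on its own after roughly three days:
  nodetool gossipinfo | grep -A 8 "172.19.105.41"
  # If it keeps flapping long after that:
  nodetool assassinate 172.19.105.41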
Robert Coli writes:
>
> 1) stop node
> 2) move sstables from no-longer-data-directories into still-data-directories
Okay, just into any other random data dir?
A few files here and there to spread the amount of data between the still-data-dirs?
> 3) modify conf file
> 4) start node
>
> I
On Wed, Mar 4, 2015 at 3:28 PM, Steffen Winther
wrote:
> Howto remove already assigned
> data file directories from running nodes?
>
1) stop node
2) move sstables from no-longer-data-directories into still-data-directories
3) modify conf file
4) start node
I wonder how pending compac
Hi,
I've got a Cassandra 2.0.12 cluster with three nodes whose storage capacity I
would like to reduce, as I would like to reuse some disks for a PoC
Cassandra 1.2.15 cluster on the same nodes.
How do I remove already-assigned data file directories from running nodes?
For example, I've got:
data_file_directories…
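A sketch of Robert's four steps above, assuming /data2 is the directory being
retired and /data1 is one that stays (both currently listed under
data_file_directories in cassandra.yaml; paths are placeholders):

  nodetool drain
  sudo service cassandra stop
  # Keep the keyspace/table directory layout intact while merging the files:
  sudo rsync -a /data2/ /data1/
  # Remove the /data2 entry from data_file_directories in cassandra.yaml, then:
  sudo service cassandra start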
2015-02-09 18:22 GMT+01:00 Nick Bailey :
> To clarify what Chris said, restarting opscenter will remove the
> notification, but we also have a bug filed to make that behavior a little
> better and allow dismis
To clarify what Chris said, restarting opscenter will remove the
notification, but we also have a bug filed to make that behavior a little
better and allow dismissing that notification without a restart. Thanks for
reporting the issue!
-Nick
On Mon, Feb 9, 2015 at 9:00 AM, Chris Lohfink wrote
> …the restart failed. No big deal, but since then OpsCenter is showing an
> error message at the top of its screen:
> "Error restarting cluster: Timed out waiting for Cassandra to start.".
>
> Does anybody know how to remove that message permanently?
>
> Thank you very much in advance!
>
> Kind regards
> Björn Hachmann
tem keyspace, I
cannot alter its gc_grace_seconds setting. gc_grace_seconds is now 7 days,
which is certainly longer than the age of the tombstones.
Is there any way that I can remove the tombstones in the system keyspace
immediately?
At 2015-01-13 19:49:47, "Rahul Neelakantan" wrote:
rnings with the
> exact number of tombstones.
> Why is this happening? What should I do to remove the tombstones in the
> system keyspace?
Thanks Jens and Robert !!!
On Wed, Dec 3, 2014 at 2:20 AM, Robert Coli wrote:
On Mon, Dec 1, 2014 at 7:10 PM, Neha Trivedi wrote:
> No, the old node is not defective. We just want to separate out that server
> for testing.
> And add a new node. (Present cluster has two Nodes and RF=2)
>
If you currently have two nodes and RF=2, you must add the new node before
removing the
to Add new Node and remove existing node.
>>
>
> What is the purpose of this action? Is the old node defective, and being
> replaced 1:1 with the new node?
>
> =Rob
>
>
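Roughly, the order of operations Rob is describing for a two-node, RF=2 cluster
(node names are placeholders):

  # 1. Bootstrap the new node first (auto_bootstrap defaults to true) and wait
  #    for it to show UN in `nodetool status`.
  # 2. Make sure every range has a consistent replica before shrinking:
  nodetool repair
  # 3. Only then retire the old node, from the old node itself:
  nodetool decommission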