On Thu, Feb 14, 2019 at 4:39 PM Jeff Jirsa wrote:
>
> Wait, doesn't cleanup just rewrite every SSTable one by one? Why would
compaction strategy matter? Do you mean that after cleanup STCS may pick
some resulting tables to re-compact them due to the min/max size
difference, which would not be th
> On Feb 14, 2019, at 12:19 AM, Oleksandr Shulgin
> wrote:
>
>> On Wed, Feb 13, 2019 at 6:47 PM Jeff Jirsa wrote:
>> Depending on how bad data resurrection is, you should run it for any host
>> that loses a range. In vnodes, that's usually all hosts.
>>
>> Cleanup with LCS is very cheap.
Cleanup is a great way to free up disk space.
Just note you might run into
https://issues.apache.org/jira/browse/CASSANDRA-9036 if you use a version
older than 2.0.15.
On Thu, Feb 14, 2019 at 10:20 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
On Wed, Feb 13, 2019 at 6:47 PM Jeff Jirsa wrote:
> Depending on how bad data resurrection is, you should run it for any host
> that loses a range. In vnodes, that's usually all hosts.
>
> Cleanup with LCS is very cheap. Cleanup with STCS/TWCS is a bit more work.
>
On Wed, Feb 13, 2019 at 7:47 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Wed, Feb 13, 2019 at 4:40 PM Jeff Jirsa wrote:
>
>> Some people who add new hosts rebalance the ring afterward - that
>> rebalancing can look a lot like a shrink.
>>
>
On Wed, Feb 13, 2019 at 4:40 PM Jeff Jirsa wrote:
> Some people who add new hosts rebalance the ring afterward - that
> rebalancing can look a lot like a shrink.
>
You mean by moving the tokens? That's only possible if one is not using
vnodes, correct?
Some people who add new hosts rebalance the ring afterward - that rebalancing
can look a lot like a shrink.
I also believe, but don’t have time to prove, that enough new hosts can
eventually give you a range back (moving it all the way around the ring) - less
likely but probably possible.
On Wed, Feb 13, 2019 at 5:31 AM Jeff Jirsa wrote:
The most likely result of not running cleanup is wasted disk space.
The second most likely result is resurrecting deleted data if you do a second
range movement (expansion, shrink, etc).
If this is bad for you, you should run cleanup now. For many use cases, it’s a
nonissue.
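For what it's worth, a minimal dry-run sketch of "run cleanup now" with the impact kept low. The 16 MB/s figure and the keyspace name are placeholders, not recommendations; drop the echo to actually execute.

```shell
# Dry-run sketch -- values are assumptions, adjust for your hardware.
throughput=16                # MB/s throttle while cleanup runs
keyspace=my_keyspace         # hypothetical keyspace name
echo nodetool setcompactionthroughput "$throughput"
echo nodetool cleanup "$keyspace"
```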
If you know you’
Hi,
I should have run cleanup after adding a few nodes to my cluster about 2
months ago; the TTL is 6 months. What happens now? Should I worry about
anything catastrophic?
Should I run the cleanup now?
Thanks in advance
I have created https://issues.apache.org/jira/browse/CASSANDRA-14701
Please adapt as needed. Thanks!
Thomas
From: Jeff Jirsa
Sent: Thursday, September 6, 2018 07:52
To: cassandra
Subject: Re: nodetool cleanup - compaction remaining time
Probably worth a JIRA (especially if you can repro in 3.0 or higher, since
2.1 is critical fixes only)
Alain,
compaction throughput is set to 32.
Regards,
Thomas
From: Alain RODRIGUEZ
Sent: Thursday, September 6, 2018 11:50
To: user@cassandra.apache.org
Subject: Re: nodetool cleanup - compaction remaining time
Hello Thomas.
Be aware that this behavior happens when the compaction throughput is set
to *0* (unthrottled/unlimited).
>
> As far as I can remember, if you have unthrottled compaction, then the
> message is different: it says "n/a".
Ah right!
I am now completely convinced this needs a JIRA as well (indeed, if it's
not fixed in C*3+, as Jeff mentioned).
Thanks for the feedback Alex.
On Thu, Sep 6, 2018 at 11:06 AM,
On Thu, Sep 6, 2018 at 11:50 AM Alain RODRIGUEZ wrote:
>
> Be aware that this behavior happens when the compaction throughput is set
> to *0 *(unthrottled/unlimited). I believe the estimate uses the speed
> limit for calculation (which is often very much wrong anyway).
>
As far as I can remember, if you have unthrottled compaction, then the
message is different: it says "n/a".
Hello Thomas.
Be aware that this behavior happens when the compaction throughput is set
to *0 *(unthrottled/unlimited). I believe the estimate uses the speed limit
for calculation (which is often very much wrong anyway).
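If that reading is right, the estimate is just remaining work divided by the configured limit. A sketch with placeholder numbers; the formula is our reading of the thread, not confirmed from the Cassandra source:

```shell
# Suspected remaining-time estimate: remaining bytes / throttle setting.
remaining_mib=$((120 * 1024))   # e.g. 120 GiB of SSTables left to rewrite
throughput=32                   # MB/s, as set via nodetool setcompactionthroughput
secs=$((remaining_mib / throughput))
printf 'Active compaction remaining time: %dh%02dm\n' \
  $((secs / 3600)) $(( (secs % 3600) / 60 ))
```

With the throughput set to 0 (unthrottled) the division is undefined, which would explain the "n/a" mentioned above.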
I just meant to say, you might want to make sure that it's due to cleanup
ty
Probably worth a JIRA (especially if you can repro in 3.0 or higher, since
2.1 is critical fixes only)
On Wed, Sep 5, 2018 at 10:46 PM Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:
> Hello,
>
>
>
> is it a known issue / limitation that cleanup compactions aren’t counted
> in the
Hello,
is it a known issue / limitation that cleanup compactions aren't counted in the
compaction remaining time?
nodetool compactionstats -H
pending tasks: 1
   compaction type   keyspace   table   completed   total   unit   progress
   Cleanup           XXX        YYY
> Hi Mikhail,
>
>
> Nodetool cleanup can add a fair amount of extra load (mostly IO) on your
> Cassandra nodes. Therefore it is recommended to run it during lower cluster
> usage, and one node at a time, in order to limit the impact on your
> cluster. There are no technical l
Hi Mikhail,
Nodetool cleanup can add a fair amount of extra load (mostly IO) on your
Cassandra nodes. Therefore it is recommended to run it during lower cluster
usage, and one node at a time, in order to limit the impact on your
cluster. There are no technical limitations that would prevent you
Hi,
In
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddNodeToCluster.html
there is a recommendation:
6) After all new nodes are running, run nodetool cleanup
<https://docs.datastax.com/en/cassandra/3.0/cassandra/tools/toolsCleanup.html>
on each of the previously existing
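That step (cleanup on each previously existing node, one at a time) could be scripted roughly as below. The host names and ssh access are assumptions; the echo keeps it a dry run.

```shell
# Serial rolling cleanup -- hypothetical host list; drop 'echo' to execute.
HOSTS="node1 node2 node3"          # the previously existing nodes
done_count=0
for h in $HOSTS; do
  echo ssh "$h" nodetool cleanup   # blocks until that node finishes
  done_count=$((done_count + 1))
done
```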
same file.
The compactions will be retriggered if necessary after cleanup has
completed.
On 22 January 2018 at 13:05, Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:
Hello,
when triggering a "nodetool cleanup" with Cassandra 3.11, the nodetool call
almost returns instantly and I see the following INFO log.
INFO [CompactionExecutor:54] 2018-01-22 12:59:53,903
CompactionManager.java:1777 - Compaction interrupted:
Compaction@fc9b0073-1008
possibly disabling
snapshots for many hours.
Regards,
Thomas
From: Peng Xiao [mailto:2535...@qq.com]
Sent: Wednesday, September 27, 2017 06:25
To: user
Subject: Re: nodetool cleanup in parallel
Thanks Kurt.
-- Original Message --
From: "kurt";mailto:k...@instaclustr.c
Thanks Kurt.
-- Original Message --
From: "kurt";
Date: Wednesday, Sep 27, 2017 11:57
To: "User";
Subject: Re: nodetool cleanup in parallel
correct. you can run it in parallel across many nodes if you have capacity.
generally see about the same time on a single node, and also
increase compaction throughput to speed the process up.
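A sketch of the parallel variant kurt describes (one cleanup per node at the same time, only if you have the I/O headroom). Hosts are placeholders and the echo keeps it a dry run:

```shell
# Parallel across nodes -- hypothetical hosts; remove 'echo' to execute.
HOSTS="cass1 cass2 cass3"
for h in $HOSTS; do
  echo ssh "$h" nodetool cleanup &   # each node cleans concurrently
done
wait   # block until every background job returns
```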
On 27 Sep. 2017 13:20, "Peng Xiao" <2535...@qq.com> wrote:
hi,
nodetool cleanup will only remove those keys which no longer belong to those
nodes, so theoretically we can run nodetool cleanup in parallel, right? The
documentation suggests running this one node at a time, but that's too slow.
Thanks,
Peng Xiao
Yes, I have many keyspaces which are not spread across all the data
centers (expected by design).
In this case, is it the expected behavior that cleanup will not work for all
the keyspaces (nodetool cleanup)? Is it going to be fixed in the latest
versions?
P.S.: Thanks for the tip, I can work around this
If you didn't explicitly remove a keyspace from one of your datacenters,
the next most likely cause is that you have one keyspace that's NOT
replicated to one of the datacenters. You can work around this by running
'nodetool cleanup ' on all of your other keyspaces individually.
On 2017-05-10 22:44 (-0700), Jai Bheemsen Rao Dhanwada
wrote:
Hello,
I am running into an issue where *nodetool cleanup *fails to cleanup data.
We are running 2.1.16 version of Cassandra.
[user@host ~]$ nodetool cleanup
Aborted cleaning up atleast one column family in keyspace user, check
server logs for more information.
Aborted cleaning up atleast one
re is not replicated, so it is indeed correct. I guess that the "Cleanup
cannot run before a node has joined the ring" message refers to this.
3. I have seen a few log lines from CompactionManager.java showing nodetool
cleanup doing work, so that's good. I guess that the "No ssta
I am running DSE 4.8.7 / Cassandra 2.1.14.
When I attempt to run nodetool cleanup on any node / any environment we are
managing, I get the following output:
Aborted cleaning up atleast one column family in keyspace ,
check server logs for more information.
error: nodetool failed, check server
The nodetool cleanup ran successfully after setting the CLASSPATH variable to
the kubernetes-cassandra.jar.
Thanks.
> On 09-Feb-2017, at 2:23 PM, Srinath Reddy wrote:
>
> Alex,
>
> Thanks for reply. I will try the workaround and post an update.
>
> Regards,
On Thu, Feb 9, 2017 at 6:13 AM, Srinath Reddy wrote:
> Hi,
>
> Trying to re-balance a Cassandra cluster after adding a new node and I'm
> getting this error when running nodetool cleanup. The Cassandra cluster
> is running in a Kubernetes cluster.
>
> Cassandra ver
Yes, I ran the nodetool cleanup on the other nodes and got the error.
Thanks.
> On 09-Feb-2017, at 11:12 AM, Harikrishnan Pillai
> wrote:
>
> The cleanup has to run on other nodes
>
> Sent from my iPhone
>
> On Feb 8, 2017, at 9:14 PM, Srinath Reddy <mailt
The cleanup has to run on other nodes
Sent from my iPhone
On Feb 8, 2017, at 9:14 PM, Srinath Reddy <ksre...@gmail.com> wrote:
Hi,
Trying to re-balance a Cassandra cluster after adding a new node and I'm
getting this error when running nodetool cleanup. The Cassandra cluster is
running in a Kubernetes cluster.
Cassandra version is 2.2.8
nodetool cleanup
error: io.k8s.cassandra.KubernetesSeedProvider
Hi All,
I use Cassandra 3.4. When running the 'nodetool cleanup' command, I see this error:
error: Expecting URI in variable: [cassandra.config]. Found[cassandra.yaml].
Please prefix the file with [file:///] for local files and [file:///]
for remote files. If you are executing this from a
big of a sstable.
>
> Regards,
> K F
>
> From: Robert Coli
> To: "user@cassandra.apache.org" ; K F
>
> Sent: Thursday, November 5, 2015 1:53 PM
> Subject: Re: Does nodetool cleanup clears tombstones in the CF?
>
>
>
> On Wed, Nov 4, 2015 at 12:56
Subject: Re: Does nodetool cleanup clears tombstones in the CF?
On Wed, Nov 4, 2015 at 12:56 PM, K F wrote:
Quick question, in order for me to purge tombstones on particular nodes if I
run nodetool cleanup will that help in purging
the tombstones from that node?
cleanup is for removing data from
On Wed, Nov 4, 2015 at 12:56 PM, K F wrote:
> Quick question, in order for me to purge tombstones on particular nodes if
> I run nodetool cleanup will that help in
> purging the tombstones from that node?
>
cleanup is for removing data from ranges the node no longer owns.
It is
Hi,
Quick question, in order for me to purge tombstones on particular nodes if I
run nodetool cleanup will that help in purging
the tombstones from that node?
Thanks.
day. Diverting traffic away from a DC just to run cleanup feels
>>> like overkill to me.
>>>
>>>
>>>
>>> On Thu, Oct 8, 2015 at 2:39 PM sai krishnam raju potturi <
>>> pskraj...@gmail.com> wrote:
>>>
>>>> hi;
>>>>
>>> our cassandra cluster currently uses DSE 4.6. The underlying
>>> cassandra version is 2.0.14.
>>>
>>> We are planning on adding multiple nodes to one of our datacenters. This
>>> requires "nodetool cleanup". The "nodetool cleanup" op
> version is 2.0.14.
>>
>> We are planning on adding multiple nodes to one of our datacenters. This
>> requires "nodetool cleanup". The "nodetool cleanup" operation takes
>> around 45 mins for each node.
>>
>> Datastax documentation recommends
ju potturi <
pskraj...@gmail.com> wrote:
> hi;
>our cassandra cluster currently uses DSE 4.6. The underlying cassandra
> version is 2.0.14.
>
> We are planning on adding multiple nodes to one of our datacenters. This
> requires "nodetool cleanup". The "n
hi;
our cassandra cluster currently uses DSE 4.6. The underlying cassandra
version is 2.0.14.
We are planning on adding multiple nodes to one of our datacenters. This
requires "nodetool cleanup". The "nodetool cleanup" operation takes around
45 mins for each node.
Da
On Wed, Oct 7, 2015 at 9:06 PM, Kevin Burton wrote:
> Let's say I have 10 nodes, I add 5 more, if I fail to run nodetool
> cleanup, is excessive data transferred when I add the 6th node? IE do the
> existing nodes send more data to the 6th node?
>
No. Streaming only streams
On Thu, Oct 8, 2015 at 12:06 AM, Kevin Burton wrote:
Let's say I have 10 nodes, I add 5 more, if I fail to run nodetool cleanup,
is excessive data transferred when I add the 6th node? IE do the existing
nodes send more data to the 6th node?
the documentation is unclear. It sounds like the biggest problem is that
the existing data causes thin
On Fri, Jul 24, 2015 at 5:03 PM, rock zhang wrote:
> It already 2 hours, only progress is 6%, seems it is very slow. Is there
> any way to speedup ?
>
Cleanup is a type of compaction; it obeys the compaction throttle.
> If I interrupted the process, what gonna happen ? Next time it just
> co
2%
> Active compaction remaining time : 0h00m00s
> On Jul 24, 2015, at 4:04 PM, Jeff Jirsa wrote:
>
>> You can check for progress using `nodetool compactionstats` (which will show
>> Cleanup tasks), or check for ‘Cleaned up’ messages in the log
>
You can check for progress using `nodetool compactionstats` (which will show
Cleanup tasks), or check for ‘Cleaned up’ messages in the log
(/var/log/cassandra/system.log).
However, `nodetool cleanup` has a very specific and limited task - it deletes
data no longer owned by the node, typically after adding nodes or moving tokens.
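Counting the 'Cleaned up' log lines is an easy way to track how far along cleanup is. Here is a self-contained sketch against a small inline sample; on a real node you would grep system.log itself, and the exact message format mimics the log lines quoted elsewhere in this thread:

```shell
# Count completed cleanups; the sample imitates the quoted log format.
sample='INFO CompactionManager.java (line 619) Cleaned up table ks/t1
INFO CompactionManager.java (line 619) Cleaned up table ks/t2'
cleaned=$(printf '%s\n' "$sample" | grep -c 'Cleaned up')
echo "sstables cleaned so far: $cleaned"
# On a node: grep -c 'Cleaned up' /var/log/cassandra/system.log
```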
Hi All,
After I added a node, I ran nodetool cleanup on the old nodes, but it takes
forever: no error message, and I don't see any space being freed.
What should I do? Repair first?
Thanks
Rock
It should work on 2.0.13. If it fails with that assertion, you should just
retry. If that does not work, and you can reproduce this, please file a
ticket
/Marcus
On Tue, Mar 31, 2015 at 9:33 AM, Amlan Roy wrote:
Hi,
Thanks for the reply. Since nodetool cleanup is not working even after
upgrading to 2.0.13, is it recommended to go to an older version (2.0.11 for
example; with 2.0.12 it also did not work)? Is there any other way of cleaning
data from existing nodes after adding a new node?
Regards
Looks like the issue is https://issues.apache.org/jira/browse/CASSANDRA-9070.
On Mon, Mar 30, 2015 at 6:25 PM, Robert Coli wrote:
> On Mon, Mar 30, 2015 at 4:21 PM, Amlan Roy wrote:
>>
>> Thanks for the reply. I have upgraded to 2.0.13. Now I get the following
>> error.
>
>
> If cleanup is still
On Mon, Mar 30, 2015 at 4:21 PM, Amlan Roy wrote:
> Thanks for the reply. I have upgraded to 2.0.13. Now I get the following
> error.
>
If cleanup is still excepting for you on 2.0.13 with some sstables you
have, I would strongly consider :
1) file a JIRA (http://issues.apache.org) and attach /
> Hi Amlan,
>
> On 30/03/15 22:12, Amlan Roy wrote:
>> Hi,
>>
>> I have added new nodes to an existing cluster and ran the “nodetool
>> cleanup”. I
>> am getting the following error. Wanted to know if there is any solution to
Code problem that was patched in
https://issues.apache.org/jira/browse/CASSANDRA-8716. Upgrade to 2.0.13.
> On Mar 30, 2015, at 1:12 PM, Amlan Roy wrote:
>
> Hi,
>
> I have added new nodes to an existing cluster
Hi Amlan,
On 30/03/15 22:12, Amlan Roy wrote:
Hi,
I have added new nodes to an existing cluster and ran the “nodetool cleanup”. I
am getting the following error. Wanted to know if there is any solution to it.
Regards,
Amlan
Error occurred during cleanup
Hi,
I have added new nodes to an existing cluster and ran the “nodetool cleanup”. I
am getting the following error. Wanted to know if there is any solution to it.
Regards,
Amlan
Error occurred during cleanup
java.util.concurrent.ExecutionException: java.lang.AssertionError: Memory was
freed
Thanks a lot for pointing this out! Yes, a workaround would be very much
appreciated, or also an ETA for 2.0.13, so that I could decide whether or
not to go for an officially unsupported 2.0.12 to 2.0.11 downgrade, since I
really need that cleanup.
Thanks
On Feb 27, 2015 10:53 PM, "Jeff Wehrwein"
We had the exact same problem, and found this bug:
https://issues.apache.org/jira/browse/CASSANDRA-8716. It's fixed in 2.0.13
(unreleased), but we haven't found a workaround for the interim. Please
share if you find one!
Thanks,
Jeff
On Fri, Feb 27, 2015 at 6:01 PM, Gianluca Borello
wrote:
>
Hello,
I have a cluster of four nodes running 2.0.12. I added one more node and
then went on with the cleanup procedure on the other four nodes, but I get
this error (the same error on each node):
INFO [CompactionExecutor:10] 2015-02-28 01:55:15,097
CompactionManager.java (line 619) Cleaned up t
Hi,
We have a small cluster running 2.0.12 and after adding a new node to it
running nodetool cleanup fails on every old node with "AssertionError:
Memory was freed". It seems to be fixed in 2.0.13, see
https://issues.apache.org/jira/browse/CASSANDRA-8716. Is there any
workaround f
After joining a node to an existing cluster, I run 'nodetool cleanup'
as recommended on existing nodes in the cluster. On one node, after
some time, I am getting an error starting with
Exception in thread "main" java.lang.AssertionError:
[SSTableReader(path='/data/cass
On Thu, Aug 7, 2014 at 2:46 PM, Viswanathan Ramachandran <
vish.ramachand...@gmail.com> wrote:
> I plan to have a multi data center Cassandra 2 setup with 2-4 nodes per
> data center and several 10s of data centers. My understanding is
> that nodetool cleanup removes data
, it does not clarify the procedure in a multi-data
center setup.
My understanding is that nodetool cleanup removes data which no longer
belongs to that node. When a new data center is being setup, we are
creating completely new replicas and AFAICT, it does not result in data
movement/rebalance
On Thu, Jan 30, 2014 at 3:23 AM, Edward Capriolo wrote:
> Is this only a ByteOrderPartitioner problem?
>
No, see the comments on
https://issues.apache.org/jira/browse/CASSANDRA-6638 for more details.
--
Sylvain
>
>
> On Wed, Jan 29, 2014 at 7:34 PM, Tyler Hobbs wrote:
>
>> Ignace,
>>
>> Thanks
Is this only a ByteOrderPartitioner problem?
On Wed, Jan 29, 2014 at 7:34 PM, Tyler Hobbs wrote:
> Ignace,
>
> Thanks for reporting this. I've been able to reproduce the issue with a
> unit test, so I opened
> https://issues.apache.org/jira/browse/CASSANDRA-6638. I'm not 100% sure
> if your f
Ignace,
Thanks for reporting this. I've been able to reproduce the issue with a
unit test, so I opened https://issues.apache.org/jira/browse/CASSANDRA-6638.
I'm not 100% sure if your fix is the correct one, but I should be able to
get it fixed quickly and figure out the full set of cases where a
Got into a problem when testing a vnode setup.
I'm using a byteordered partitioner, linux, code version 2.0.4, replication
factor 1, 4 machines.
All goes ok until I run cleanup, and gets worse when adding / decommissioning
nodes.
In my opinion the problem can be found in the SSTableScanner::
KeyS
> Is there some other mechanism for forcing expired data to be removed without
> also compacting? (major compaction having obvious problematic side effects,
> and user defined compaction being significant work to script up).
Tombstone compactions may help here
https://issues.apache.org/jira/brow
>
>
>> Is there some other mechanism for forcing expired data to be removed
> without also compacting? (major compaction having obvious problematic side
> effects, and user defined compaction being significant work to script up).
>
>
Online scrubs will, as a side effect, purge expired tombstones *w
On 01/07/2014 01:38 PM, Tyler Hobbs wrote:
On Tue, Jan 7, 2014 at 7:49 AM, Chris Burroughs
wrote:
This has not reached a consensus in #cassandra in the past. Does
`nodetool cleanup` also remove data that has expired from a TTL?
No, cleanup only removes rows that the node is not a replica
On Tue, Jan 7, 2014 at 7:49 AM, Chris Burroughs
wrote:
> This has not reached a consensus in #cassandra in the past. Does
> `nodetool cleanup` also remove data that has expired from a TTL?
No, cleanup only removes rows that the node is not a replica for.
--
Tyler Hobbs
DataStax
This has not reached a consensus in #cassandra in the past. Does
`nodetool cleanup` also remove data that has expired from a TTL?
or the repair. After the compaction this is
>> streamed to the different nodes in order to repair them.
>>
>> If you trigger this on every node simultaneously you basically take the
>> performance away from your cluster. I would expect cassandra still to
>> function, ju
slower then before. Triggering it node after node will
> leave your cluster with more resources to handle incoming requests.
>
>
> Cheers,
>
> Artur
> On 25/11/13 15:12, Julien Campan wrote:
>
> Hi,
>
> I'm working with Cassandra 1.2.2 and I have a question
:
>> Hi,
>>
>> I'm working with Cassandra 1.2.2 and I have a question about nodetool
>> cleanup.
>> In the documentation, it's written "Wait for cleanup to complete on one
>> node before doing the next"
>>
>> I would like to know why we can't perform a lot of cleanups at the same time?
>>
>>
>> Thanks
>>
>>
>
requests.
Cheers,
Artur
On 25/11/13 15:12, Julien Campan wrote:
Hi,
I'm working with Cassandra 1.2.2 and I have a question about nodetool
cleanup.
In the documentation, it's written "Wait for cleanup to complete on
one node before doing the next"
I would like to know, wh
Hi,
I'm working with Cassandra 1.2.2 and I have a question about nodetool
cleanup.
In the documentation, it's written "Wait for cleanup to complete on one
node before doing the next"
I would like to know why we can't perform a lot of cleanups at the same
time?
Thanks
nodetool setcompactionthroughput controls the speed of compaction, and cleanup
runs in the compaction manager.
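So one way to speed cleanup up is to raise (or remove) the throttle while it runs and restore it afterwards. A dry-run sketch, with the numbers as placeholders; drop the echo to actually execute:

```shell
# Dry run -- values are assumptions, pick for your hardware.
normal=16   # MB/s, the everyday throttle
echo nodetool setcompactionthroughput 0          # unthrottle (0 = unlimited)
echo nodetool cleanup
echo nodetool setcompactionthroughput "$normal"  # restore afterwards
```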
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 8/05/2013, at 8:59 AM, Michael Morris wrote:
> Not sur
Not sure about making things go faster, but you should be able to monitor
it with nodetool compactionstats.
Thanks,
Mike
On Tue, May 7, 2013 at 12:43 PM, Brian Tarbox wrote:
> I'm recovering from a significant failure and so am doing lots of nodetool
> move, removetoken, repair and cleanup.
>
I'm recovering from a significant failure and so am doing lots of nodetool
move, removetoken, repair and cleanup.
For most of these I can do "nodetool netstats" to monitor progress but it
doesn't show anything for cleanup...how can I monitor the progress of
cleanup? On a related note: I'm able to
used that much.
thx
On Tue, Oct 23, 2012 at 12:40 AM, aaron morton wrote:
> what is the internal memory model used? It sounds like it doesn't have a
> page manager?
>
> Nodetool cleanup is a maintenance process to remove data on disk that the
> node is no longer a replica for.
> what is the internal memory model used? It sounds like it doesn't have a page
> manager?
Nodetool cleanup is a maintenance process to remove data on disk that the node
is no longer a replica for. It is typically used after the token assignments
have been change
On 10/23/2012 01:25 AM, Peter Schuller wrote:
On Oct 22, 2012 11:54 AM, "B. Todd Burruss" <bto...@gmail.com> wrote:
>
> does "nodetool cleanup" perform a major compaction in the process of
> removing unwanted data?
No.
what is the interna
On Oct 22, 2012 11:54 AM, "B. Todd Burruss" wrote:
>
> does "nodetool cleanup" perform a major compaction in the process of
> removing unwanted data?
No.
does "nodetool cleanup" perform a major compaction in the process of
removing unwanted data?
i seem to remember this to be the case, but can't find anything definitive
got it! thanks a lot for the explanation!
On Wed, Aug 24, 2011 at 1:06 AM, Edward Capriolo wrote:
>
> On Tue, Aug 23, 2011 at 11:56 AM, Sam Overton wrote:
>
>> On 21 August 2011 12:34, Yan Chunlu wrote:
>>
>>> since "nodetool cleanup" could remove hin