Re: Urgent Problem - Disk full

2018-04-04 Thread Jürgen Albersdorfer
Thank you all for your hints on this. I added another data folder on the commitlog disk to relieve the immediate urgency. The next step will be to reorganize and deduplicate the data into a second table, then drop the original one, clear the snapshot, and consolidate all data files back away from the commitlog
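
A minimal sketch of those steps, assuming a hypothetical keyspace ks with the original table events and a deduplicated copy events_v2 (names and paths are illustrative, not from the thread):

    # cassandra.yaml: temporary extra entry under data_file_directories, e.g.
    #   - /var/lib/cassandra/data
    #   - /mnt/commitlog-disk/extra-data     # space borrowed from the commitlog disk
    # after the deduplicated copy (ks.events_v2) has been written and verified:
    cqlsh -e "DROP TABLE ks.events;"         # auto_snapshot keeps the old files on disk
    # then clear the auto snapshot (nodetool clearsnapshot) to actually reclaim the space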

RE: Urgent Problem - Disk full

2018-04-04 Thread Kenneth Brotman
Agreed that you tend to add capacity to nodes or add nodes once you know you have no unneeded data in the cluster. From: Alain RODRIGUEZ [mailto:arodr...@gmail.com] Sent: Wednesday, April 04, 2018 9:10 AM To: user cassandra.apache.org Subject: Re: Urgent Problem - Disk full Hi, When

Re: Urgent Problem - Disk full

2018-04-04 Thread Alain RODRIGUEZ
Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID] > Sent: Wednesday, April 04, 2018 7:28 AM > To: user@cassandra.apache.org > Subject: RE: Urgent Problem - Disk full > > Jeff, > > Just wondering: why wouldn't the answer be to: > 1. move anything you want to arc

RE: Urgent Problem - Disk full

2018-04-04 Thread Kenneth Brotman
There are also the old snapshots to remove, which could account for a significant amount of disk space. -Original Message- From: Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID] Sent: Wednesday, April 04, 2018 7:28 AM To: user@cassandra.apache.org Subject: RE: Urgent Problem - Disk full
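
For reference, a sketch of finding and reclaiming snapshot space on a node (keyspace name and snapshot tag are placeholders):

    nodetool listsnapshots                          # lists every snapshot and the space it holds
    nodetool clearsnapshot -t <tag> -- <keyspace>   # remove one snapshot
    nodetool clearsnapshot -- <keyspace>            # or remove all snapshots for a keyspace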

RE: Urgent Problem - Disk full

2018-04-04 Thread Kenneth Brotman
Jeff Jirsa [mailto:jji...@gmail.com] Sent: Wednesday, April 04, 2018 7:10 AM To: user@cassandra.apache.org Subject: Re: Urgent Problem - Disk full Yes, this works in TWCS. Note though that if you have tombstone compaction subproperties set, there may be sstables with newer filesystem timestamps

Re: Urgent Problem - Disk full

2018-04-04 Thread Jeff Jirsa
There is zero reason to believe a full repair would make this better and a lot of reason to believe it’ll make it worse. For casual observers following along at home, this is probably not the answer you’re looking for. -- Jeff Jirsa > On Apr 4, 2018, at 4:37 AM, Rahul Singh wrote: > > Nothi

Re: Urgent Problem - Disk full

2018-04-04 Thread Jeff Jirsa
Yes, this works in TWCS. Note though that if you have tombstone compaction subproperties set, there may be sstables with newer filesystem timestamps that actually hold older Cassandra data, in which case sstablemetadata can help finding the sstables with truly old timestamps Also if you’ve ex
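
A hedged illustration of the sstablemetadata check Jeff describes, assuming a 3.11-style layout with keyspace ks and table events (paths and names are placeholders):

    for f in /var/lib/cassandra/data/ks/events-*/mc-*-big-Data.db; do
        echo "== $f"
        sstablemetadata "$f" | grep -E 'Minimum timestamp|Maximum timestamp'
    done
    # SSTables whose maximum timestamp is older than the data you still need are
    # candidates to drop, regardless of the file's filesystem mtime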

RE: Urgent Problem - Disk full

2018-04-04 Thread Kenneth Brotman
, April 04, 2018 4:38 AM To: user@cassandra.apache.org; user@cassandra.apache.org Subject: Re: Urgent Problem - Disk full Nothing a full repair won’t be able to fix. On Apr 4, 2018, 7:32 AM -0400, Jürgen Albersdorfer , wrote: Hi, I have an urgent Problem. - I will run out of disk space in

Re: Urgent Problem - Disk full

2018-04-04 Thread Rahul Singh
Nothing a full repair won’t be able to fix. On Apr 4, 2018, 7:32 AM -0400, Jürgen Albersdorfer , wrote: > Hi, > > I have an urgent Problem. - I will run out of disk space in near future. > Largest Table is a Time-Series Table with TimeWindowCompactionStrategy (TWCS) > and default_time_to_live =

Urgent Problem - Disk full

2018-04-04 Thread Jürgen Albersdorfer
Hi, I have an urgent problem - I will run out of disk space in the near future. The largest table is a time-series table with TimeWindowCompactionStrategy (TWCS) and default_time_to_live = 0; keyspace replication factor RF=3. I run C* version 3.11.2. We have grown the cluster over time, so SSTable files h
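
Before deciding what to reclaim, a quick look at where the space actually sits (default package paths assumed; ks.events is a placeholder for the large time-series table):

    df -h /var/lib/cassandra                        # overall headroom on the data volume
    du -sh /var/lib/cassandra/data/*/* | sort -h    # per-table directories, snapshots included
    nodetool tablestats ks.events                   # live vs. total space used by the big table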

Re: Disk full during new node bootstrap

2017-02-06 Thread Alain RODRIGUEZ
ifics about what you're calling "groups" in a DC. Are these racks ? >> >> Thanks >> >> On Sat, Feb 4, 2017 at 10:41 AM laxmikanth sadula < >> laxmikanth...@gmail.com> wrote: >> >>> Yes .. same number of tokens... >>&

Re: Disk full during new node bootstrap

2017-02-04 Thread techpyaasa .
at 11:56 AM, Jonathan Haddad >> wrote: >> >> Are you using the same number of tokens on the new node as the old ones? >> >> On Fri, Feb 3, 2017 at 8:31 PM techpyaasa . wrote: >> >> Hi, >> >> We are using c* 2.0.17 , 2 DCs , RF=3. >> >&

Re: Disk full during new node bootstrap

2017-02-04 Thread Alexander Dejanovski
M techpyaasa . wrote: > > Hi, > > We are using c* 2.0.17 , 2 DCs , RF=3. > > When I try to add new node to one group in a DC , I got disk full. Can > someone please tell what is the best way to resolve this? > > Run compaction for nodes in that group(to which I'm going

Re: Disk full during new node bootstrap

2017-02-04 Thread laxmikanth sadula
2 DCs , RF=3. >> >> When I try to add new node to one group in a DC , I got disk full. Can >> someone please tell what is the best way to resolve this? >> >> Run compaction for nodes in that group(to which I'm going to add new >> node, as data streams to new no

Re: Disk full during new node bootstrap

2017-02-03 Thread Jonathan Haddad
Are you using the same number of tokens on the new node as the old ones? On Fri, Feb 3, 2017 at 8:31 PM techpyaasa . wrote: > Hi, > > We are using c* 2.0.17 , 2 DCs , RF=3. > > When I try to add new node to one group in a DC , I got disk full. Can > someone please tell what
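
One way to check this on a 2.0-era cluster (config path assumed; replace <node-ip> with the address of the node in question):

    grep -E '^(num_tokens|initial_token)' /etc/cassandra/cassandra.yaml
    nodetool ring | grep -c <node-ip>    # one line per token, so this counts the node's tokens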

Disk full during new node bootstrap

2017-02-03 Thread techpyaasa .
Hi, We are using c* 2.0.17, 2 DCs, RF=3. When I try to add a new node to one group in a DC, I got disk full. Can someone please tell me what is the best way to resolve this? Run compaction for nodes in that group (to which I'm going to add the new node, as data streams to new nodes from nodes of

Re: disk full and COMMIT-LOG-WRITER ?

2011-09-08 Thread Yang
ok, found past discussions: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/cassandra-server-disk-full-td6560725.html On Thu, Sep 8, 2011 at 11:00 PM, Yang wrote: > I found the reason of my server freeze: > > COMMIT-LOG-WRITER thread is gone, dead, so the blocking

disk full and COMMIT-LOG-WRITER ?

2011-09-08 Thread Yang
I found the reason for my server freeze: the COMMIT-LOG-WRITER thread is gone (dead), so the blocking queue in PeriodicCommitLogExecutorService is full and all mutationStage jobs are stuck on flushing mutations. The COMMIT-LOG-WRITER thread died because at one time the disk was full; I cleaned up

Re: cassandra server disk full

2011-08-03 Thread Ryan King
The last patch on that ticket is what we're running in prod. It's working well for us with disk_failure_mode: readwrite. In the case of filesystem errors the node shuts off thrift and gossip. While the gossip is propagating we can continue to serve some reads out of the caches. -ryan On Tue, Aug 2
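
For readers on stock builds: the mainline option that grew out of this line of work is disk_failure_policy in cassandra.yaml (the readwrite mode above is Twitter's patched build, not a stock value). A quick check, with the path assumed:

    grep -E '^\s*disk_failure_policy' /etc/cassandra/cassandra.yaml
    # stock values include stop, best_effort and ignore; see the yaml comments for
    # the exact set supported by your version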

Re: cassandra server disk full

2011-08-02 Thread Jim Ancona
On Mon, Aug 1, 2011 at 6:12 PM, Ryan King wrote: > On Fri, Jul 29, 2011 at 12:02 PM, Chris Burroughs > wrote: >> On 07/25/2011 01:53 PM, Ryan King wrote: >>> Actually I was wrong– our patch will disable gosisp and thrift but >>> leave the process running: >>> >>> https://issues.apache.org/jira/br

Re: cassandra server disk full

2011-08-01 Thread Ryan King
On Fri, Jul 29, 2011 at 12:02 PM, Chris Burroughs wrote: > On 07/25/2011 01:53 PM, Ryan King wrote: >> Actually I was wrong – our patch will disable gossip and thrift but >> leave the process running: >> >> https://issues.apache.org/jira/browse/CASSANDRA-2118 >> >> If people are interested in that

Re: cassandra server disk full

2011-07-29 Thread Chris Burroughs
On 07/25/2011 01:53 PM, Ryan King wrote: > Actually I was wrong – our patch will disable gossip and thrift but > leave the process running: > > https://issues.apache.org/jira/browse/CASSANDRA-2118 > > If people are interested in that I can make sure it's up to date with > our latest version. Thank

cassandra server disk full

2011-07-26 Thread Donna Li
@cassandra.apache.org Subject: Re: cassandra server disk full If the commit log or data disk is full it's not possible for the server to process any writes, the best it could do is perform reads. But reads may result in a write due to read repair and will also need to do some app logging, so IMHO it's r

cassandra server disk full

2011-07-26 Thread Donna Li
@cassandra.apache.org Subject: Re: cassandra server disk full Actually I was wrong – our patch will disable gossip and thrift but leave the process running: https://issues.apache.org/jira/browse/CASSANDRA-2118 If people are interested in that I can make sure it's up to date with our latest version. -ryan On

Re: cassandra server disk full

2011-07-25 Thread aaron morton
20:06, Donna Li wrote: > > All: > Could anyone help me? > > > Best Regards > Donna li > > -Original Message- > From: Donna Li [mailto:donna...@utstar.com] > Sent: July 22, 2011 11:23 > To: user@cassandra.apache.org > Subject: cassandra server disk full

Re: cassandra server disk full

2011-07-25 Thread Ryan King
it >> runs out of space. >> (https://issues.apache.org/jira/browse/CASSANDRA-809) >> >> Unfortunately dealing with disk-full conditions tends to be a low >> priority for many people because it's relatively easy to avoid with >> decent monitoring, but if it's

Re: cassandra server disk full

2011-07-25 Thread Ryan King
t; (https://issues.apache.org/jira/browse/CASSANDRA-809) > > Unfortunately dealing with disk-full conditions tends to be a low > priority for many people because it's relatively easy to avoid with > decent monitoring, but if it's critical for you, we'd welcome the > assista

cassandra server disk full

2011-07-25 Thread Donna Li
All: Could anyone help me? Best Regards Donna li -Original Message- From: Donna Li [mailto:donna...@utstar.com] Sent: July 22, 2011 11:23 To: user@cassandra.apache.org Subject: cassandra server disk full All: Is there an easy way to fix the bug by changing the server's code? Best Regards Don

cassandra server disk full

2011-07-21 Thread Donna Li
All: Is there an easy way to fix the bug by changing the server's code? Best Regards Donna li -Original Message- From: Donna Li [mailto:donna...@utstar.com] Sent: July 8, 2011 11:29 To: user@cassandra.apache.org Subject: cassandra server disk full Is CASSANDRA-809 resolved, or can any other patch re

cassandra server disk full

2011-07-07 Thread Donna Li
server disk full Yeah, ideally it should probably die or drop into read-only mode if it runs out of space. (https://issues.apache.org/jira/browse/CASSANDRA-809) Unfortunately dealing with disk-full conditions tends to be a low priority for many people because it's relatively easy to avoid with d

Re: cassandra server disk full

2011-07-07 Thread Jonathan Ellis
Yeah, ideally it should probably die or drop into read-only mode if it runs out of space. (https://issues.apache.org/jira/browse/CASSANDRA-809) Unfortunately dealing with disk-full conditions tends to be a low priority for many people because it's relatively easy to avoid with decent monit

cassandra server disk full

2011-07-07 Thread Donna Li
All: When the disk of one of the Cassandra servers is full, the cluster cannot work normally, even after I make space. Only after I reboot the server whose disk was full does the cluster work normally again. Best Regards Donna li

Re: disk full

2011-06-12 Thread aaron morton
Please provide some more information. In general, avoid using more than 50% of the available disk space. Cheers - Aaron Morton Freelance Cassandra Developer @aaronmorton http://www.thelastpickle.com On 11 Jun 2011, at 16:19, Donna Li wrote: > Hi, all: > When disk is f
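
A trivial way to check that guideline on a node (default data path assumed):

    df -h /var/lib/cassandra/data
    # keep Use% well under ~50% so compaction, repair streaming and snapshots have headroom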

disk full

2011-06-10 Thread Donna Li
Hi, all: When the disk is full, why must ddb be rebooted even after I clear the disk? Best Regards Donna li

Re: Commitlog Disk Full

2011-05-19 Thread Jonathan Ellis
That's basically the approach I want to take in https://issues.apache.org/jira/browse/CASSANDRA-2427. On Thu, May 19, 2011 at 12:00 PM, Mike Malone wrote: > Just noticed this thread and figured I'd chime in since we've had similar > issues with the commit log growing too large on our clusters. Tu
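
As far as I can tell, the knob that eventually came out of CASSANDRA-2427 is commitlog_total_space_in_mb; a sketch, with the value purely illustrative:

    grep -E '^#?\s*commitlog_total_space_in_mb' /etc/cassandra/cassandra.yaml
    # e.g. commitlog_total_space_in_mb: 4096 -- once the commit log exceeds this,
    # the oldest dirty memtables are flushed so their segments can be recycled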

Re: Commitlog Disk Full

2011-05-19 Thread Mike Malone
Just noticed this thread and figured I'd chime in since we've had similar issues with the commit log growing too large on our clusters. Tuning down the flush timeout wasn't really an acceptable solution for us since we didn't want to be constantly flushing and generating extra SSTables for no reaso

Re: Commitlog Disk Full

2011-05-17 Thread mcasandra
m the same node. I looked at the code and it looks like you should see something in the logs for those files. -- View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Commitlog-Disk-Full-tp6356797p6375353.html Sent from the cassandra-u...@incubator.apache.or

Re: Commitlog Disk Full

2011-05-17 Thread Sanjeev Kulkarni
4797. On Tue, May 17, 2011 at 10:49 AM, mcasandra wrote: > Do you see anything in log files? > > -- > View this message in context: > http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Commitlog-Disk-Full-tp6356797p6374234.html > Sent from the cassandra-u...@incubator.apache.org mailing list archive at > Nabble.com. >

Re: Commitlog Disk Full

2011-05-17 Thread mcasandra
Do you see anything in log files? -- View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Commitlog-Disk-Full-tp6356797p6374234.html Sent from the cassandra-u...@incubator.apache.org mailing list archive at Nabble.com.

Re: Commitlog Disk Full

2011-05-16 Thread Sanjeev Kulkarni
- min_compaction_threshold: Avoid minor compactions of less than this >>> number of sstable files >>>- max_compaction_threshold: Compact no more than this number of >>> sstable >>> files at once >>>- column_metadata: Metadata which describes columns of column family. >>>Supported format is [{ k:v, k:v, ... }, { ... }, ...] >>>Valid attributes: column_name, validation_class (see comparator), >>> index_type (integer), index_name. >>> >>> >>> -- >>> View this message in context: >>> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Commitlog-Disk-Full-tp6356797p6370913.html >>> Sent from the cassandra-u...@incubator.apache.org mailing list archive >>> at Nabble.com. >>> >> >> >

Re: Commitlog Disk Full

2011-05-16 Thread Sanjeev Kulkarni
}, { ... }, ...] >>Valid attributes: column_name, validation_class (see comparator), >> index_type (integer), index_name. >> >> >> -- >> View this message in context: >> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Commitlog-Disk-Full-tp6356797p6370913.html >> Sent from the cassandra-u...@incubator.apache.org mailing list archive at >> Nabble.com. >> > >

Re: Commitlog Disk Full

2011-05-16 Thread Sanjeev Kulkarni
umn_metadata: Metadata which describes columns of column family. >Supported format is [{ k:v, k:v, ... }, { ... }, ...] >Valid attributes: column_name, validation_class (see comparator), > index_type (integer), index_name. > > > -- > View this message in context:

Re: Commitlog Disk Full

2011-05-16 Thread mcasandra
/Commitlog-Disk-Full-tp6356797p6370913.html Sent from the cassandra-u...@incubator.apache.org mailing list archive at Nabble.com.

Re: Commitlog Disk Full

2011-05-16 Thread Sanjeev Kulkarni
see if that helps. > > -- > View this message in context: > http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Commitlog-Disk-Full-tp6356797p6362301.html > Sent from the cassandra-u...@incubator.apache.org mailing list archive at > Nabble.com. >

Re: Commitlog Disk Full

2011-05-13 Thread mcasandra
.nabble.com/Commitlog-Disk-Full-tp6356797p6362301.html Sent from the cassandra-u...@incubator.apache.org mailing list archive at Nabble.com.

Re: Commitlog Disk Full

2011-05-13 Thread Sanjeev Kulkarni
our writes happen in bursts. So oftentimes clients write data as fast as they can. Conceivably one can write 5G in one hour. The other setting that we have is that our replication factor is 3 and we write using QUORUM. Not sure if that will affect things. On Fri, May 13, 2011 at 12:04 AM, Peter S

Re: Commitlog Disk Full

2011-05-13 Thread mcasandra
Is there a way to look at the actual size of memtable? Would that help? -- View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Commitlog-Disk-Full-tp6356797p6360001.html Sent from the cassandra-u...@incubator.apache.org mailing list archive at

Re: Commitlog Disk Full

2011-05-13 Thread Peter Schuller
> I haven't explicitly set a value for the memtable_flush_after_mins parameter. > Looks like the default is 60 minutes. > I will try to play around with this value to see if that fixes things. Is the amount of data in the commit log consistent with what you might have been writing during 60 minutes? Incl

Re: Commitlog Disk Full

2011-05-12 Thread Sanjeev Kulkarni
Hi Peter, Thanks for the response. I haven't explicitly set a value for the memtable_flush_after_mins parameter. Looks like the default is 60 minutes. I will try to play around with this value to see if that fixes things. Thanks again! On Thu, May 12, 2011 at 11:41 AM, Peter Schuller < peter.schul...@inf

Re: Commitlog Disk Full

2011-05-12 Thread Peter Schuller
> I understand that cassandra periodically cleans up the commitlog directories > by generating sstables in datadir. Is there any way to speed up this > movement from commitog to datadir? commitlog_rotation_threshold_in_mb could cause problems if it was set very very high, but with the default of 1
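
A quick way to do that sanity check (commit log path assumed to be the cassandra.yaml default):

    du -sh /var/lib/cassandra/commitlog            # total commit log currently on disk
    ls -lht /var/lib/cassandra/commitlog | head    # segment sizes and ages, newest first
    # compare against roughly what the clients could have written since the oldest
    # dirty memtable was last flushed (the default memtable_flush_after_mins is 60)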

Commitlog Disk Full

2011-05-12 Thread Sanjeev Kulkarni
Hey guys, I have an EC2 Debian cluster consisting of several nodes running 0.7.5 on ephemeral disks. These are fresh installs and not upgrades. The commitlog is set to the smaller of the disks, which is around 10G in size, and the datadir is set to the bigger disk. The config file is basically the sam

Re: Disk Full Error on Cleanup

2010-11-26 Thread Peter Schuller
> I keep running into the following error while running a nodetool cleanup: Depending on version, maybe you're running into this: https://issues.apache.org/jira/browse/CASSANDRA-1674 But note though that independently of the above, if your 80 gb is mostly a single column family, you're in dan

Disk Full Error on Cleanup

2010-11-26 Thread Jake Maizel
Hello, I keep running into the following error while running a nodetool cleanup: ERROR [COMPACTION-POOL:1] 2010-11-26 12:36:38,383 CassandraDaemon.java (line 87) Uncaught exception in thread Thread[COMPACTION-POOL:1,5,main] java.lang.UnsupportedOperationException: disk full at

Re: disk full error while bootstrapping

2010-09-09 Thread Gurpreet Singh
I did set autobootstrap to true. It got the new token and even proceeded to print the message that it's bootstrapping; however, the source node just didn't show any activity. At a later point, when I tried again (after the other bootstrap from the other source was finished), it did proceed, however that

Re: disk full error while bootstrapping

2010-09-09 Thread Jonathan Ellis
On Thu, Sep 9, 2010 at 2:24 PM, Gurpreet Singh wrote: > D was once a part of the cluster, but had gone down because of disk issues. > It's back up, it still has the old data, however to bootstrap again, I > deleted the old Location db (is that a good practise?), and so I see it did > take a new tok

Re: disk full error while bootstrapping

2010-09-09 Thread Gurpreet Singh
Thanks Jonathan. I guess I need to be patient with the JVM GC :-) Two more things I was trying, and I wanted to check if they are supported. Now, I have a 2-node cluster (say A and B), and I am trying to bootstrap 2 more nodes (C and D). The first bootstrap started successfully. I see anticompaction happen

Re: disk full error while bootstrapping

2010-09-09 Thread Jonathan Ellis
On Thu, Sep 9, 2010 at 12:50 AM, Gurpreet Singh wrote: > 1. what is the purpose of this anticompacted file created during cleanup? That is all the data that still belongs to the node, post-bootstrap. Since you were just bringing the cluster back up to RF nodes, that's all the data it started with
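
For context, cleanup is the usual way to drop data a node no longer owns after ring changes; a minimal sketch:

    nodetool cleanup               # rewrite SSTables, discarding ranges the node no longer owns
    nodetool cleanup <keyspace>    # or limit it to one keyspace
    # note that cleanup needs temporary free space while it rewrites each SSTable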

Re: disk full error while bootstrapping

2010-09-08 Thread Gurpreet Singh
be stuck with the bootstrapping message, and did > not > > show any activity. > > Only after i checked the logs of the seed node, i realise there has been > an > > error: > > Caused by: java.lang.UnsupportedOperationException: disk full > > at > > > o

Re: disk full error while bootstrapping

2010-09-08 Thread Jonathan Ellis
ootstrapping message, and did not > show any activity. > Only after i checked the logs of the seed node, i realise there has been an > error: > Caused by: java.lang.UnsupportedOperationException: disk full > at > org.apache.cassandra.db.CompactionManager.doAntiCompaction(Co

disk full error while bootstrapping

2010-09-08 Thread Gurpreet Singh
: java.lang.UnsupportedOperationException: disk full at org.apache.cassandra.db.CompactionManager.doAntiCompaction(CompactionManager.java:345) at org.apache.cassandra.db.CompactionManager.access$500(CompactionManager.java:49) at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:143) at

Re: Handling disk-full scenarios

2010-06-08 Thread Jonathan Ellis
Sounds like you ran into https://issues.apache.org/jira/browse/CASSANDRA-1169. The only workaround until that is fixed is to re-run repair. On Tue, Jun 8, 2010 at 7:17 AM, Ian Soboroff wrote: > And three days later, AE stages are still running full-bore.  So I conclude > this is not a very good

Re: Handling disk-full scenarios

2010-06-08 Thread Ian Soboroff
And three days later, AE stages are still running full-bore. So I conclude this is not a very good approach. I wonder what will happen when I lose a disk (which is essentially the same as what I did -- rm the data directory). What happens if I lose a disk while the AE stages are running? Since

Re: Handling disk-full scenarios

2010-06-04 Thread Ian Soboroff
Story continued, in hopes this experience is useful to someone... I shut down the node, removed the huge file, restarted the node, and told everybody to repair. Two days later, AE stages are still running. Ian On Thu, Jun 3, 2010 at 2:21 AM, Jonathan Ellis wrote: > this is why JBOD configurat

Re: Handling disk-full scenarios

2010-06-02 Thread Jonathan Ellis
this is why JBOD configuration is contraindicated for cassandra. http://wiki.apache.org/cassandra/CassandraHardware On Tue, Jun 1, 2010 at 1:08 PM, Ian Soboroff wrote: > My nodes have 5 disks and are using them separately as data disks.  The > usage on the disks is not uniform, and one is nearly

Capacity planning and Re: Handling disk-full scenarios

2010-06-02 Thread Ian Soboroff
Reading some more (someone break in when I lose my clue ;-) Reading the streams page in the wiki about anticompaction, I think the best approach to take when a node gets its disks overfull is to set the compaction thresholds to 0 on all nodes, decommission the overfull node, wait for stuff to get
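
The sequence sketched above, expressed as commands (argument forms vary slightly by version, and the 0/0 thresholds are what disables minor compaction, not a recommendation):

    nodetool setcompactionthreshold <keyspace> <table> 0 0   # on all nodes: stop minor compactions
    nodetool decommission    # on the over-full node: stream its ranges to the remaining replicas
    nodetool netstats        # watch streaming progress
    # once it has left the ring, wipe its data directories and re-bootstrap it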

Re: Handling disk-full scenarios

2010-06-02 Thread Ian Soboroff
Ok, answered part of this myself. You can stop a node, move files around on the data disks, as long as they stay in the right keyspace directories, and all is fine. Now, I have a single Data.db file which is 900GB and is compacted. The drive it's on is only 1.5TB, so it can't anticompact at all.
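
A sketch of that manual move, assuming two data_file_directories entries (/data1 and /data2) and an old-style layout where SSTables live directly under the keyspace directory; file names are placeholders:

    # stop the node first (however you normally run it)
    mv /data1/MyKeyspace/MyCF-123-Data.db   /data2/MyKeyspace/
    mv /data1/MyKeyspace/MyCF-123-Index.db  /data2/MyKeyspace/
    mv /data1/MyKeyspace/MyCF-123-Filter.db /data2/MyKeyspace/
    # move every component of the same generation together, then start the node again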

Handling disk-full scenarios

2010-06-01 Thread Ian Soboroff
My nodes have 5 disks and are using them separately as data disks. The usage on the disks is not uniform, and one is nearly full. Is there some way to manually balance the files across the disks? Pretty much anything done via nodetool incurs an anticompaction, which obviously fails. system/ is no