Thank you all for your hints on this.
I added another folder on the commit log disk to relieve the immediate urgency.
The next step will be to reorganize and deduplicate the data into a second table,
then drop the original one, clear the snapshot, and consolidate all data files
back away from the commit log.
Agreed that you should only add capacity to nodes, or add nodes, once you know
you have no unneeded data in the cluster.
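The "reorganize and deduplicate into a second table" step can be sketched in miniature. This is plain Python over hypothetical row tuples, not the Cassandra driver; in practice you would page through the source table and INSERT into the new one, letting Cassandra's upsert semantics keep only the newest version of each key:

```python
# Minimal dedup sketch: keep only the newest write per primary key,
# which is what re-inserting rows into a fresh table achieves in Cassandra.
def deduplicate(rows):
    """rows: iterable of (key, write_time, value); newest write wins."""
    latest = {}
    for key, write_time, value in rows:
        if key not in latest or write_time > latest[key][0]:
            latest[key] = (write_time, value)
    return {k: v for k, (_, v) in latest.items()}

# Hypothetical sample data (names are made up for illustration).
rows = [
    ("sensor-1", 100, "a"),
    ("sensor-1", 200, "b"),   # newer duplicate of sensor-1 wins
    ("sensor-2", 150, "c"),
]
```

Once the copy is verified, dropping the original table and running `nodetool clearsnapshot` is what actually frees the disk space.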
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Wednesday, April 04, 2018 9:10 AM
To: user cassandra.apache.org
Subject: Re: Urgent Problem - Disk full
Hi,
When
Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID]
> Sent: Wednesday, April 04, 2018 7:28 AM
> To: user@cassandra.apache.org
> Subject: RE: Urgent Problem - Disk full
>
> Jeff,
>
> Just wondering: why wouldn't the answer be to:
> 1. move anything you want to arc
There's also the old snapshots to remove, which could free a significant amount
of disk space.
-Original Message-
From: Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID]
Sent: Wednesday, April 04, 2018 7:28 AM
To: user@cassandra.apache.org
Subject: RE: Urgent Problem - Disk full
Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Wednesday, April 04, 2018 7:10 AM
To: user@cassandra.apache.org
Subject: Re: Urgent Problem - Disk full
Yes, this works in TWCS.
Note though that if you have tombstone compaction subproperties set, there may
be sstables with newer filesystem timestamps that actually hold older Cassandra
data, in which case sstablemetadata can help find the sstables with truly old
timestamps.
There is zero reason to believe a full repair would make this better and a lot
of reason to believe it’ll make it worse
For casual observers following along at home, this is probably not the answer
you’re looking for.
--
Jeff Jirsa
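Jeff's caution can be sketched as follows. The idea is to select removal candidates by the newest *data* timestamp inside each sstable (what `sstablemetadata` reports), never by filesystem mtime, since tombstone compaction can rewrite an old-data sstable with a fresh mtime. Field names here are made up for illustration:

```python
# Sketch: pick sstables whose newest data is older than a cutoff.
# fs_mtime is deliberately ignored -- a rewritten sstable can have a
# new file timestamp while still holding only old Cassandra data.
from collections import namedtuple

SSTable = namedtuple("SSTable", "name fs_mtime max_data_ts")

def truly_expired(tables, cutoff):
    """Return names of sstables whose max data timestamp is before cutoff."""
    return [t.name for t in tables if t.max_data_ts < cutoff]

# Hypothetical example: same mtime, very different data ages.
tables = [
    SSTable("old-but-rewritten", fs_mtime=900, max_data_ts=100),
    SSTable("genuinely-new",     fs_mtime=900, max_data_ts=850),
]
```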
From: Rahul Singh
Sent: Wednesday, April 04, 2018 4:38 AM
To: user@cassandra.apache.org
Subject: Re: Urgent Problem - Disk full
Nothing a full repair won’t be able to fix.
On Apr 4, 2018, 7:32 AM -0400, Jürgen Albersdorfer wrote:
Hi,
I have an urgent problem - I will run out of disk space in the near future.
The largest table is a time-series table with TimeWindowCompactionStrategy (TWCS)
and default_time_to_live = 0.
Keyspace replication factor RF=3. I run C* version 3.11.2.
We have grown the Cluster over time, so SSTable files h
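For readers following along, the relevant TWCS behavior can be sketched like this: writes are bucketed into fixed time windows, each window's sstables compact only with each other, and a whole window can be dropped once all its data is past TTL. With `default_time_to_live = 0`, as in the poster's table, nothing ever expires, which is part of the problem. The one-day window size below is illustrative, not the poster's setting:

```python
# Conceptual TWCS sketch: bucket write timestamps into windows and find
# windows whose newest possible data is already past its TTL.
WINDOW = 86_400  # illustrative window size: one day, in seconds

def window_of(write_ts):
    return write_ts // WINDOW

def droppable_windows(write_timestamps, now, ttl):
    """Windows where even the latest possible write has expired."""
    windows = {window_of(ts) for ts in write_timestamps}
    return sorted(w for w in windows if (w + 1) * WINDOW + ttl <= now)
```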
ifics about what you're calling "groups" in a DC. Are these racks ?
>>
>> Thanks
>>
>> On Sat, Feb 4, 2017 at 10:41 AM laxmikanth sadula <
>> laxmikanth...@gmail.com> wrote:
>>
>>> Yes .. same number of tokens...
Are you using the same number of tokens on the new node as the old ones?
On Fri, Feb 3, 2017 at 8:31 PM techpyaasa . wrote:
Hi,
We are using C* 2.0.17, 2 DCs, RF=3.
When I try to add a new node to one group in a DC, I get "disk full". Can
someone please tell me the best way to resolve this?
Run compaction for nodes in that group (to which I'm going to add the new node,
as data streams to new nodes from nodes of
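A note for readers: after new nodes join, the old nodes still hold data for token ranges they no longer own, and that space is only reclaimed by `nodetool cleanup`. A conceptual sketch, with tokens simplified to integers rather than Cassandra's hashed ring:

```python
# Sketch of what cleanup does: rewrite local data keeping only keys whose
# token falls inside a range this node still owns after the topology change.
def cleanup(local_keys, owned_ranges):
    """local_keys: int tokens; owned_ranges: list of (lo, hi) half-open."""
    def owned(token):
        return any(lo <= token < hi for lo, hi in owned_ranges)
    return [k for k in local_keys if owned(k)]
```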
ok, found past discussions:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/cassandra-server-disk-full-td6560725.html
On Thu, Sep 8, 2011 at 11:00 PM, Yang wrote:
I found the reason of my server freeze:
COMMIT-LOG-WRITER thread is gone, dead, so the blocking queue in
PeriodicCommitLogExecutorService is full, then all mutationStage jobs
are stuck on the mutations flushing.
the COMMIT-LOG-WRITER thread died because at one time the disk was full,
I cleaned up
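The stall Yang describes can be sketched with a bounded queue whose consumer has died: once the queue fills, every producer (mutation) blocks, which from the outside looks like all writes hanging. Queue size and names here are illustrative, not Cassandra's actual classes:

```python
# Sketch of the failure mode: a bounded commit-log queue with no live
# consumer. Producers that refuse to block detect the stall immediately.
import queue

log_queue = queue.Queue(maxsize=2)  # bounded, like the commit-log executor's queue

def try_append(mutation):
    """Enqueue a mutation without blocking; False means writes are stalled."""
    try:
        log_queue.put_nowait(mutation)
        return True
    except queue.Full:  # nobody draining the queue -> every write stalls
        return False
```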
The last patch on that ticket is what we're running in prod. It's
working well for us with disk_failure_mode: readwrite. In the case of
filesystem errors the node shuts off thrift and gossip. While the
gossip is propagating we can continue to serve some reads out of the
caches.
-ryan
On Tue, Aug 2
On 07/25/2011 01:53 PM, Ryan King wrote:
> Actually I was wrong - our patch will disable gossip and thrift but
> leave the process running:
>
> https://issues.apache.org/jira/browse/CASSANDRA-2118
>
> If people are interested in that I can make sure its up to date with
> our latest version.
Thank
To: user@cassandra.apache.org
Subject: Re: cassandra server disk full
If the commit log or data disk is full it's not possible for the server to
process any writes, the best it could do is perform reads. But reads may result
in a write due to read repair and will also need to do some app logging, so
IMHO it's r
To: user@cassandra.apache.org
Subject: Re: cassandra server disk full
Actually I was wrong - our patch will disable gossip and thrift but
leave the process running:
https://issues.apache.org/jira/browse/CASSANDRA-2118
If people are interested in that I can make sure its up to date with
our latest version.
-ryan
On
20:06, Donna Li wrote:
>
> All:
> Could anyone help me?
>
>
> Best Regards
> Donna li
>
> -----Original Message-----
> From: Donna Li [mailto:donna...@utstar.com]
> Sent: July 22, 2011 11:23
> To: user@cassandra.apache.org
> Subject: cassandra server disk full
All:
Is there an easy way to fix the bug by changing the server's code?
Best Regards
Donna li
-----Original Message-----
From: Donna Li [mailto:donna...@utstar.com]
Sent: July 8, 2011 11:29
To: user@cassandra.apache.org
Subject: cassandra server disk full
Has CASSANDRA-809 been resolved, or is there another patch that can re
Yeah, ideally it should probably die or drop into read-only mode if it
runs out of space.
(https://issues.apache.org/jira/browse/CASSANDRA-809)
Unfortunately dealing with disk-full conditions tends to be a low
priority for many people because it's relatively easy to avoid with
decent monitoring, but if it's critical for you, we'd welcome the
assistance.
All:
When one of the Cassandra servers' disks is full, the cluster cannot work
normally, even after I make space. Only after I reboot the server whose disk
was full does the cluster work normally again.
Best Regards
Donna li
Please provide some more information.
In general, avoid using more than 50% of the available disk space.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
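The 50% guideline exists because a compaction may need to rewrite (nearly) all live data before the old files can be deleted, so worst-case disk usage briefly doubles. A small headroom check, with the threshold and byte figures purely illustrative:

```python
# Sketch: is there room for a worst-case full rewrite during compaction?
def safe_to_compact(used_bytes, total_bytes, headroom=0.5):
    """True if current usage leaves space to hold old + new copies at once."""
    return used_bytes <= total_bytes * headroom
```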
On 11 Jun 2011, at 16:19, Donna Li wrote:
Hi, all:
When the disk is full, why must ddb be rebooted even after I clear the
disk?
Best Regards
Donna li
That's basically the approach I want to take in
https://issues.apache.org/jira/browse/CASSANDRA-2427.
On Thu, May 19, 2011 at 12:00 PM, Mike Malone wrote:
Just noticed this thread and figured I'd chime in since we've had similar
issues with the commit log growing too large on our clusters. Tuning down
the flush timeout wasn't really an acceptable solution for us since we
didn't want to be constantly flushing and generating extra SSTables for no
reason.
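The direction the thread points toward (and roughly what CASSANDRA-2427 proposes) can be sketched like this: rather than per-column-family timed flushes, track total memtable memory and flush the largest memtables first when a global threshold is crossed. Sizes and names below are arbitrary:

```python
# Sketch: global memtable memory cap. Flush biggest memtables first
# until total usage is back under the limit, instead of timed flushes.
def pick_flush_victims(memtable_sizes, limit):
    """memtable_sizes: {name: bytes}; returns names to flush, largest first."""
    victims = []
    total = sum(memtable_sizes.values())
    for name, size in sorted(memtable_sizes.items(), key=lambda kv: -kv[1]):
        if total <= limit:
            break
        victims.append(name)
        total -= size
    return victims
```

This avoids the "constantly flushing tiny SSTables" problem: nothing is flushed at all until memory pressure actually demands it.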
m the same node. I looked at the code and it looks like
you should see something in the logs for those files.
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Commitlog-Disk-Full-tp6356797p6375353.html
Sent from the cassandra-u...@incubator.apache.or
Do you see anything in log files?
- min_compaction_threshold: Avoid minor compactions of less than this
>>> number of sstable files
>>>- max_compaction_threshold: Compact no more than this number of
>>> sstable
>>> files at once
>>>- column_metadata: Metadata which describes columns of column family.
>>>Supported format is [{ k:v, k:v, ... }, { ... }, ...]
>>>Valid attributes: column_name, validation_class (see comparator),
>>> index_type (integer), index_name.
see if that helps.
Our writes happen in bursts, so often clients write data as fast as
they can. Conceivably one can write 5G in one hour.
The other setting that we have is that our replication factor is 3 and we
write using QUORUM. Not sure if that will affect things.
On Fri, May 13, 2011 at 12:04 AM, Peter S
Is there a way to look at the actual size of memtable? Would that help?
Is the amount of data in the commit log consistent with what you might
have been writing during 60 minutes? Incl
Hi Peter,
Thanks for the response.
I haven't explicitly set a value for the memtable_flush_after_mins parameter.
Looks like the default is 60 minutes.
I will try to play around this value to see if that fixes things.
Thanks again!
On Thu, May 12, 2011 at 11:41 AM, Peter Schuller <
peter.schul...@inf
> I understand that cassandra periodically cleans up the commitlog directories
> by generating sstables in datadir. Is there any way to speed up this
> movement from commitlog to datadir?
commitlog_rotation_threshold_in_mb could cause problems if it was set
very very high, but with the default of 1
Hey guys,
I have an ec2 debian cluster consisting of several nodes running 0.7.5 on
ephemeral disks.
These are fresh installs and not upgrades.
The commitlog is set to the smaller of the disks, which is around 10G in size,
and the datadir is set to the bigger disk.
The config file is basically the sam
> I keep running into the following error while running a nodetool cleanup:
Depending on version, maybe you're running into this:
https://issues.apache.org/jira/browse/CASSANDRA-1674
But note though that independently of the above, if your 80 gb is
mostly a single column family, you're in dan
Hello,
I keep running into the following error while running a nodetool cleanup:
ERROR [COMPACTION-POOL:1] 2010-11-26 12:36:38,383 CassandraDaemon.java
(line 87) Uncaught exception in thread
Thread[COMPACTION-POOL:1,5,main]
java.lang.UnsupportedOperationException: disk full
at
I did set autobootstrap true. It got the new token, and even proceeded to
print the message that it's bootstrapping; however, the source node just didn't
show any activity. At a later point, when I tried again (after the other
bootstrap from the other source was finished), it did proceed, however that
On Thu, Sep 9, 2010 at 2:24 PM, Gurpreet Singh wrote:
> D was once a part of the cluster, but had gone down because of disk issues.
> It's back up and it still has the old data; however, to bootstrap again, I
> deleted the old Location db (is that a good practice?), and so I see it did
> take a new tok
Thanks Jonathan. I guess I need to be patient for JVM GC :-)
Two more things I was trying, and wanted to check if they are supported.
Now, I have a 2 node cluster (say A and B), and I am trying to bootstrap 2
more nodes (C and D).
The first bootstrap started successfully. I see anticompaction happen
On Thu, Sep 9, 2010 at 12:50 AM, Gurpreet Singh
wrote:
> 1. what is the purpose of this anticompacted file created during cleanup?
That is all the data that still belongs to the node, post-bootstrap.
Since you were just bringing the cluster back up to RF nodes, that's
all the data it started with
> be stuck with the bootstrapping message, and did not
> show any activity.
> Only after I checked the logs of the seed node, I realised there has been an
> error:
> Caused by: java.lang.UnsupportedOperationException: disk full
> at
> org.apache.cassandra.db.CompactionManager.doAntiCompaction(Co
: java.lang.UnsupportedOperationException: disk full
at
org.apache.cassandra.db.CompactionManager.doAntiCompaction(CompactionManager.java:345)
at
org.apache.cassandra.db.CompactionManager.access$500(CompactionManager.java:49)
at
org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:143)
at
Sounds like you ran into
https://issues.apache.org/jira/browse/CASSANDRA-1169. The only
workaround until that is fixed is to re-run repair.
On Tue, Jun 8, 2010 at 7:17 AM, Ian Soboroff wrote:
> And three days later, AE stages are still running full-bore. So I conclude
> this is not a very good
And three days later, AE stages are still running full-bore. So I conclude
this is not a very good approach.
I wonder what will happen when I lose a disk (which is essentially the same
as what I did -- rm the data directory). What happens if I lose a disk
while the AE stages are running? Since
Story continued, in hopes this experience is useful to someone...
I shut down the node, removed the huge file, restarted the node, and told
everybody to repair. Two days later, AE stages are still running.
Ian
On Thu, Jun 3, 2010 at 2:21 AM, Jonathan Ellis wrote:
this is why JBOD configuration is contraindicated for cassandra.
http://wiki.apache.org/cassandra/CassandraHardware
Reading some more (someone break in when I lose my clue ;-)
Reading the streams page in the wiki about anticompaction, I think the best
approach to take when a node gets its disks overfull, is to set the
compaction thresholds to 0 on all nodes, decommission the overfull node,
wait for stuff to get
Ok, answered part of this myself. You can stop a node, move files around on
the data disks, as long as they stay in the right keyspace directories, and
all is fine.
Now, I have a single Data.db file which is 900GB and is compacted. The
drive it's on is only 1.5TB, so it can't anticompact at all.
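Ian's bind generalizes to a simple feasibility check: rewriting an sstable (compaction or anticompaction) needs roughly the file's size again in free space on the same volume, because old and new copies coexist until the rewrite completes. Sizes below are in GB and illustrative:

```python
# Sketch: can a drive hold both the old sstable and its rewritten copy?
def can_rewrite(file_gb, drive_gb, other_used_gb=0):
    """True if free space on the volume can absorb a full second copy."""
    free = drive_gb - file_gb - other_used_gb
    return free >= file_gb
```

For a 900 GB file on a 1.5 TB drive the check fails, which matches the observed `disk full` anticompaction errors.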
My nodes have 5 disks and are using them separately as data disks. The
usage on the disks is not uniform, and one is nearly full. Is there some
way to manually balance the files across the disks? Pretty much anything
done via nodetool incurs an anticompaction, which obviously fails. system/ is
no