Mark <...@gmail.com> writes:
> Caused by: java.lang.RuntimeException: Insufficient disk space to flush
> at
> >
> On 12/7/10 8:44 PM, Mark wrote:
> > 3-node cluster, and I just ran a nodetool cleanup on node #3. Nodes 1 and 2
> > are now at 100% disk space. What should I do?
>
>
Is there files w
Hi Ryan,
Thanks for the swift response. I've tested your latest commit and it fixed
the problem.
Kind regards,
Joshua
On Wed, Dec 8, 2010 at 5:23 AM, Ryan King wrote:
> Please file this on github issues:
> https://github.com/fauna/cassandra/issues. And I'll get to it soon.
>
> -ryan
>
> On Tue
Hello,
Is there a definitive way to tell if a Decommission operation has
completed, such as a log message similar to what happens with a Drain
command?
Thanks.
--
Jake Maizel
Network Operations
Soundcloud
Mail & GTalk: j...@soundcloud.com
Skype: jakecloud
Rosenthaler Strasse 13, 10119 Berlin
Also, look for any snapshots that can be cleared with nodetool
clearsnapshot or just run the command to remove any that exist.
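As a rough sketch of checking and clearing snapshot space (the path assumes the
default data directory, and the host flag is spelled -host on 0.6 vs -h/--host
on 0.7):

    # how much space snapshots are holding on this node
    du -sh /var/lib/cassandra/data/*/snapshots 2>/dev/null

    # remove all snapshots on this node
    nodetool -h localhost clearsnapshot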
On Wed, Dec 8, 2010 at 9:04 AM, Oleg Anastasyev wrote:
>
> Mark <...@gmail.com> writes:
>
>> Caused by: java.lang.RuntimeException: Insufficient disk space to flush
>> a
Thanks for your answer Aaron,
I'm now on RC1. I no longer have the ActiveCount error; however, my nodes
are still dying under bulk insertion.
I have modified my nodes' configuration (all of them now have a 2GB heap size).
The nodes are still under heavy pressure and they die after a random timeout
(somet
Cut your memtable thresholds (throughput and ops) in half. See "describe
keyspace" and "update column family" in the cli.
On Wed, Dec 8, 2010 at 9:18 AM, Amin Sakka, Novapost wrote:
> Thanks for your answer Aaron,
>
> I'm now on the RC1, I have no longer the ActiveCount error, however my
> nodes st
On Tue, Dec 7, 2010 at 4:00 PM, Reverend Chip wrote:
> On 12/7/2010 1:10 PM, Jonathan Ellis wrote:
>> I'm inclined to think there's a bug in your client, then.
>
> That doesn't pass the smell test. The very same client has logged
> timeout and unavailable exceptions on other occasions, e.g. when
I believe the decommission call is blocking in both 0.6 and 0.7, so once it
returns it should have completed.
On Wed, Dec 8, 2010 at 3:10 AM, Jake Maizel wrote:
> Hello,
>
> Is there a definitive way to tell if a Decommission operation has
> completed, such as a log message similar to what happens
Did both but didn't seem to help. I have another drive on that machine
with some free space. If I add another directory to the
DataFileDirectory config and restart, will it start using that directory?
Anything else I can do?
This actually leads me to an important question. Should I always make
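For reference, adding a second data directory in 0.6's storage-conf.xml looks
roughly like this (the second path is just an example); on 0.7 the equivalent
is the data_file_directories list in cassandra.yaml. After a restart Cassandra
should start placing new SSTables on whichever listed directory has room, but
it will not move existing files for you.

    <DataFileDirectories>
        <DataFileDirectory>/var/lib/cassandra/data</DataFileDirectory>
        <DataFileDirectory>/mnt/disk2/cassandra/data</DataFileDirectory>
    </DataFileDirectories>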
Thanks all,
I have three questions:
1. Must the seed list be identical on all nodes?
2. If one seed node crashes, all nodes will keep trying to communicate with
the failed seed node. I think this is harmful for all nodes, isn't it?
3. If so, how can I replace the failed seed node on all the nodes?
2010/12/8 Jonathan E
Indeed, it is. Also, the node being decommissioned drops out of the
ring when it has completed. Trial and error. Thanks for following up.
On Wed, Dec 8, 2010 at 4:39 PM, Nick Bailey wrote:
> I believe the decommission call is blocking in both .6 and .7, so once it
> returns it should have compl
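A rough way to double-check, with placeholder host names (the host flag is
-host on 0.6, -h/--host on 0.7):

    # run against the node that is leaving; blocks until its data has streamed away
    nodetool -h node3 decommission

    # then, from any remaining node, node3 should no longer appear
    nodetool -h node1 ring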
I was in a similar situation and luckily had snapshots to clear to regain
space, but you are correct. I would be careful about filling the disk more
than 50%, as the anti-compaction during cleanup could fail.
I don't have any experience with adding a data directory on the fly.
On Wed, Dec 8, 2010 at 4:5
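A quick way to sanity-check headroom before a cleanup or major compaction
(paths assume the default layout, and Keyspace1 is just an example keyspace):

    # free space on the data volume
    df -h /var/lib/cassandra/data

    # on-disk size per keyspace; compaction/anti-compaction may temporarily
    # need up to this much extra space again
    du -sh /var/lib/cassandra/data/Keyspace1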
Does Cassandra suffer from this same issue in 0.7? You would think it
would at least warn, if not prevent anti-compaction, if it knows there is a
good chance of running out of space.
On 12/8/10 8:05 AM, Jake Maizel wrote:
I was in a similar situation and luckily had snapshots to clear and
gain sp
Interesting idea.
If it is like dividing the entire load on the system by 6, then the
effective load stays the same, and if we used SSDs for the commit volume we
could get away with one commitlog SSD. Even if these 6 instances can handle
80% of the load (compared to 1 on this machine), that might be acc
On Wed, Dec 8, 2010 at 1:19 AM, Arijit Mukherjee wrote:
> So how do you iterate over all records
You can iterate over your records with RandomPartitioner; they will just be
in the order of their hash, not the order of the keys.
> or try to find a list of all records matching a certain criter
Just a note that the README.txt file doesn't show using the ';' in the
command.
On Thu, Dec 2, 2010 at 10:46 AM, Yikuo Chan wrote:
> Hi Norman :
>
> it works, and thanks for your help.
>
> Kevin
>
>
> On Fri, Dec 3, 2010 at 1:43 AM, Norman Maurer wrote:
>
>> You need to terminate the command
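For anyone else tripping over this: every statement in cassandra-cli has to end
with a semicolon, e.g.:

    connect localhost/9160;
    show keyspaces;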
On Wed, Dec 8, 2010 at 12:09 PM, Ned Wolpert wrote:
> Just a note that the README.txt file doesn't show using the ';' in the
> command.
Fixed in RC2 (pending vote completion right now).
-Brandon
Please send this to the list rather than me personally. Aaron

Begin forwarded message:

From: Wenjun Che
Date: 08 December 2010 4:35:10 PM
To: aa...@thelastpickle.com
Subject: Re: NullPointerException in Beta3 and rc1

I created the CF on beta3 with:
create column family RecipientChat with gc_grace=5 and c
Jonathan suggested your cluster has multiple schemas, caused by
https://issues.apache.org/jira/browse/CASSANDRA-1824

Can you run the API command describe_schema_versions()? It's not listed on the
wiki yet, but it will tell you how many schema versions are out there. pycassa
supports it.

Aaron

On 09
1. Ideally yes, but the system will work if they are not.
2. No. Once the node is down they will stop sending requests to it, and gossip
is designed to test down nodes to see if they are back up.

Aaron

On 09 Dec, 2010, at 04:54 AM, lei liu wrote:

Thanks all,
I have three questions:
1. Must seed list be
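For the seed question, the relevant 0.7 setting is the seeds list in
cassandra.yaml (the addresses below are placeholders); keeping the same list on
every node is the simplest arrangement:

    seeds:
        - 10.0.0.1
        - 10.0.0.2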
On 12/8/2010 7:30 AM, Jonathan Ellis wrote:
> On Tue, Dec 7, 2010 at 4:00 PM, Reverend Chip wrote:
>> Full DEBUG level logs would be a space problem; I'm loading at least 1T
>> per node (after 3x replication), and these events are rare. Can the
>> DEBUG logs be limited to the specific modules hel
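On limiting DEBUG to specific modules: log4j lets you raise the level per
package instead of server-wide, e.g. in conf/log4j-server.properties on 0.7
(conf/log4j.properties on 0.6). The package chosen here is just an example:

    # keep the root logger at INFO and turn on DEBUG for streaming only
    log4j.logger.org.apache.cassandra.streaming=DEBUG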
Hi,
I am using EmbeddedCassandraService inside JUnit tests (@BeforeClass) and the
tests are running fine with no issues. The code to start Cassandra is something
like the following:
BUT the issue is that when I try to get the data using cassandra-cli, I am not
getting any results. The data cleanup happens o
I just pushed a 0.9.0 release of the fauna-cassandra ruby client. This
is our first release that includes support for Cassandra 0.7
(currently supporting RC1 and not earlier 0.7 releases).
code/download: https://rubygems.org/gems/cassandra
git: http://github.com/fauna/cassandra
File any bugs on g
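For the record, installing the release is the usual gem invocation:

    gem install cassandra --version 0.9.0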
Nice. Thanks for the hard work, Ryan. Will try this out tonight.
Cheers,
Joshua.
On Thu, Dec 9, 2010 at 8:44 AM, Ryan King wrote:
> I just pushed a 0.9.0 release of the fauna-cassandra ruby client. This
> is our first release that includes support for Cassandra 0.7
> (currently supporting RC1 and
What is this directory used for and how was it created?
You'll need to provide some more information. Is it under / or under something
else? What was in it?

These are the yaml settings that control where cassandra stores data...

# directories where Cassandra should store data on disk.
data_file_directories:
    - /var/lib/cassandra/data

# commit log
commitlog
On Wed, Dec 8, 2010 at 4:09 PM, Mark wrote:
> What is this directory used for and how was it created?
I believe you may be referring to the temp directory used, for example, as a
place to put SSTable files that are created as part of streaming?
I presume that, like other directories used by