On Tue, Sep 18, 2012 at 1:54 AM, aaron morton wrote:
> each with several large-capacity disks, totaling 10 - 12 TB. Is
> this (another) bad idea?
>
> Yes. Very bad.
> If you had 6TB on an average system with spinning disks, you would measure
> the duration of repairs and compactions in days.
>
> I
On Mon, Sep 17, 2012 at 1:19 AM, aaron morton wrote:
> 4 drives for data and 1 drive for commitlog,
>
> How are you configuring the drives? It's normally best to present one big
> data volume, e.g. using RAID 0, and put the commit log on, say, the system
> mirror.
>
>
Given the advice to use a single data volume
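For concreteness, the layout I have in mind in cassandra.yaml is roughly the
following (the mount points are my own placeholders, not anything prescribed):

data_file_directories:
    - /srv/cassandra/data       # one big RAID 0 volume across the data drives
commitlog_directory: /var/lib/cassandra/commitlog   # on the system mirror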
I'm building a new "cluster" (to replace the broken setup I've written
about in previous posts) that will consist of only two nodes. I understand
that I'll be sacrificing high availability of writes if one of the nodes
goes down, and I'm okay with that. I'm more interested in maintaining high
consistency.
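In pycassa terms, what I have in mind is roughly this (keyspace, hosts, and
column family names are placeholders; with RF=2 on two nodes, QUORUM is all
replicas, so reads and writes stay consistent but block if either node is
down):

import pycassa
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('MyKeySpace', ['node1:9160', 'node2:9160'])
cf = ColumnFamily(pool, 'Data',
                  read_consistency_level=pycassa.ConsistencyLevel.QUORUM,
                  write_consistency_level=pycassa.ConsistencyLevel.QUORUM)
# With RF=2, QUORUM means 2 of 2 replicas: consistent, but unavailable
# whenever one of the two nodes is down.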
On Thu, Aug 30, 2012 at 11:21 AM, Rob Coli wrote:
> On Thu, Aug 30, 2012 at 10:18 AM, Casey Deccio wrote:
> > I'm adding a new node to an existing cluster that uses
> > ByteOrderedPartitioner. The documentation says that if I don't
> > configure a
> > token, then one will be automatically generated to take load from an
> > existing node.
All,
I'm adding a new node to an existing cluster that uses
ByteOrderedPartitioner. The documentation says that if I don't configure a
token, then one will be automatically generated to take load from an
existing node. What I'm finding is that when I add a new node, (super)
column lookups begin
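For what it's worth, my fallback is to stop relying on token auto-generation
entirely and pin a token on the new node in cassandra.yaml; with
ByteOrderedPartitioner the token is hex-encoded key bytes, so something like
this (the value is a made-up midpoint of my key range):

initial_token: 6d6964706f696e74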
On Tue, May 15, 2012 at 5:41 PM, Dave Brosius wrote:
> The replication factor for a keyspace is stored in the
> system.schema_keyspaces column family.
>
> Since you can't view this with the CLI, as the server won't start, the only
> way to look at it that I know of is to use the
>
> sstable2json tool
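Presumably the invocation is along these lines, pointed at whatever sstable is
actually on disk (this filename is made up):

$ sstable2json /var/lib/cassandra/data/system/schema_keyspaces-hc-1-Data.db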
Sorry to reply to my own message (again). I took a closer look at the logs
and realized that the partitioner errors aren't what caused the daemon to
stop; those errors were in the logs even before I upgraded. This one seems
to be the culprit.
java.lang.reflect.InvocationTargetException
at s
> Did you check cassandra.yaml to make sure partitioner there matches what
> was in your old cluster?
>
> Regards,
> Oleg Dulin
> Please note my new office #: 732-917-0159
>
> On May 15, 2012, at 3:22 PM, Casey Deccio wrote:
>
> Here's something new in the logs:
>
at java.lang.Thread.run(Thread.java:662)
Casey
On Tue, May 15, 2012 at 12:08 PM, Casey Deccio wrote:
> I recently upgraded from cassandra 1.0.10 to 1.1. Everything worked fine
> in one environment, but after I upgraded in another, I can't find my
> keyspace. When I run, e.g., cassandra-cli with 'use KeySpace;', it tells
> me that the keyspace doesn't exist.
I recently upgraded from cassandra 1.0.10 to 1.1. Everything worked fine
in one environment, but after I upgraded in another, I can't find my
keyspace. When I run, e.g., cassandra-cli with 'use KeySpace;', it tells me
that the keyspace doesn't exist. In the log I see this:
ERROR [MigrationStage:
On Thu, Mar 1, 2012 at 9:33 AM, aaron morton wrote:
> What RF were you using, and had you been running repair regularly?
>
>
RF 1 *sigh*. Waiting until I have more/better resources to use RF > 1.
Hopefully soon.
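My understanding is that when the resources do show up, the change itself is
just a schema update followed by a repair; roughly, in cassandra-cli and
nodetool (the keyspace name is mine):

update keyspace MyKeySpace with strategy_options = {replication_factor:2};
$ nodetool -h localhost repair MyKeySpace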
In the meantime... Oddly (to me), when I removed the most recently added
node, all
On Thu, Mar 1, 2012 at 2:39 AM, aaron morton wrote:
> I am guessing you are running low on disk space. Can you check and try to
> free some up ?
>
>
Okay, I've freed some up and am trying again.
> Looks like a bug in CompactionTask.execute() see
> https://issues.apache.org/jira/browse/CASSANDRA-
On Wed, Feb 29, 2012 at 5:29 AM, Casey Deccio wrote:
> On Wed, Feb 29, 2012 at 5:25 AM, Casey Deccio wrote:
>
>> I recently had to do some shuffling with one of my cassandra nodes
>> because it was running out of disk space. I did a few things in the
>> process, and I'm not sure in the end which caused my problem.
Using cassandra 1.0.7, I got the following, as I was trying to rebuild my
sstables:
$ nodetool -h localhost upgradesstables
Error occured while upgrading the sstables for keyspace MyKeySpace
java.util.concurrent.ExecutionException: java.lang.NullPointerException
at
java.util.concurrent.Fut
On Wed, Feb 29, 2012 at 5:25 AM, Casey Deccio wrote:
> I recently had to do some shuffling with one of my cassandra nodes because
> it was running out of disk space. I did a few things in the process, and
> I'm not sure in the end which caused my problem. First I added a second
> file path to the data directory in cassandra.yaml.
I recently had to do some shuffling with one of my cassandra nodes because
it was running out of disk space. I did a few things in the process, and
I'm not sure in the end which caused my problem. First I added a second
file path to the data directory in cassandra.yaml. Things still worked
fine
On Tue, Jul 12, 2011 at 10:10 AM, Brandon Williams wrote:
> On Mon, Jul 11, 2011 at 11:51 PM, Casey Deccio wrote:
> > java.lang.RuntimeException: Cannot recover SSTable with version f
> > (current version g).
>
> You need to scrub before any streaming is performed.
>
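In other words, on the existing node(s), something like:

$ nodetool -h localhost scrub

before the new node starts bootstrapping.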
On Sat, Jul 9, 2011 at 4:47 PM, aaron morton wrote:
> Check the log on all the machines for ERROR messages. An error on any of
> the nodes could have caused the streaming to hang. nodetool netstats will
> let you know if there is a failed stream.
>
>
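For reference, the check is just:

$ nodetool -h localhost netstats

on each node, to see whether any stream has failed or is stuck at the same
percentage.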
Here's what I see in the logs on the node I'm s
I've got a node that is stuck "Leaving" the ring. Running "nodetool
decommission" never terminates. It's been in this state for about a week,
and the load has not decreased:
$ nodetool -h localhost ring
Address         DC          Rack        Status State   Load            Owns    Token
Token(bytes[de4075
> node and bring it back up with the new IP and It Just Works
> https://issues.apache.org/jira/browse/CASSANDRA-872
>
> I've not done it before, anyone else ?
>
> Aaron
>
> On 23 Mar 2011, at 07:53, Casey Deccio wrote:
>
> What is the process of changing the IP address for a node in a cluster?
>
> Casey
>
>
>
What is the process of changing the IP address for a node in a cluster?
Casey
On Sat, Mar 5, 2011 at 7:37 PM, aaron morton wrote:
> There is some additional memory usage in the JVM beyond the heap size, in
> the permanent generation. 900MB sounds like too much for that, but you can
> check by connecting with JConsole and looking at the memory tab. You can
> also check the
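If JConsole is awkward to attach, jmap should show the same breakdown from the
command line (the pid is a placeholder):

$ jmap -heap <cassandra-pid>

which reports perm gen usage alongside the heap generations.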
On Fri, Mar 4, 2011 at 11:03 AM, Chris Burroughs
wrote:
> What do you mean by "eating up the memory"? Resident set size, low
> memory available to page cache, excessive gc of the jvm's heap?
>
>
The JVM heap is set to half of the physical memory (1982 MB out of 4G), and
jsvc is using 2.9G (73%) of
I have a small ring of cassandra nodes that have somewhat limited memory
capacity for the moment. Cassandra is eating up all the memory on these
nodes. I'm not sure where to look first in terms of reducing the footprint.
Keys cached? Compaction?
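For instance, I could shrink the heap in conf/cassandra-env.sh and dial down
the per-CF key cache from cassandra-cli; the values here are guesses, not
recommendations:

MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"

update column family MyCF with keys_cached = 10000;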
Any hints would be greatly appreciated.
Regards,
On Wed, Feb 16, 2011 at 10:01 PM, Nate McCall wrote:
> See the following mail thread:
> http://www.mail-archive.com/user@cassandra.apache.org/msg10183.html
>
> In short, running nodetool compact should clear it up.
>
>
Thanks for the pointer! I ran nodetool compact on my nodes, and so far it's
l
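For the record, what I ran on each node was simply:

$ nodetool -h localhost compact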
I recently upgraded to 0.7.2 from 0.7.0, and now when I run my
multi-threaded app (using python/pycassa/thrift) I'm getting the following
NegativeArraySizeException. The traceback is below:
ERROR 21:08:33,187 Fatal exception in thread Thread[ReadStage:219,5,main]
java.lang.RuntimeException: java.
On Sun, Nov 14, 2010 at 12:50 AM, Jonathan Ellis wrote:
> As you may have guessed from the lack of a "reversed" option on the
> range slice api, backward scans are not supported. The standard thing
> to do is load the keys you are interested in as columns to a row.
>
That makes sense. Just need
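Something like this, if I understand the pattern; a rough pycassa sketch,
where the 'KeyIndex' column family and its LongType comparator are my
assumptions:

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('MyKeySpace')
index = ColumnFamily(pool, 'KeyIndex')  # one wide row; comparator=LongType

# On every write, also record the row key as a column name in the index row.
index.insert('all_keys', {3: ''})

# "Search on key 4, get row 3": reversed slice starting at 4, one column.
hit = index.get('all_keys', column_start=4, column_reversed=True, column_count=1)
nearest_key = list(hit.keys())[0]  # -> 3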
Hi,
I'm working with Cassandra 0.7.0-beta3. Given rows with keys 1, 2, 3,
I'd like to be able to search on key 4 (non-existent) and get row 3
(and possibly the n rows before in a range). I've tried several
options, but I keep getting an empty range. What am I missing?
Thanks,
Casey