Looks like that did it, thanks!
Scott
From: Brandon Williams [dri...@gmail.com]
Sent: Thursday, October 13, 2011 2:16 PM
To: user@cassandra.apache.org
Subject: Re: MapReduce with two ethernet cards
On Thu, Oct 13, 2011 at 1:17 PM, Scott Fines wrote:
> Whe
What does the Load column in nodetool ring mean? From the output
below it shows 101.62 GB, but if I check disk usage with du it is only
about 6 GB.
thanks
Ramesh
[root@CAP2-CNode1 cassandra]#
~root/apache-cassandra-1.0.0-rc2/bin/nodetool -h localhost ring
Address DC Rack Status St
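As an aside, the Load figure is just a column in the ring output, so it can be pulled out mechanically when comparing against du. The line below is made-up sample data in the same shape as the output above (the address and values are illustrative, not from the cluster in question):

```shell
# Extract the Load column from a sample 'nodetool ring' line with awk.
# Fields: address, DC, rack, status, state, load value, load unit, owns, token.
line="10.20.30.40  DC1  RAC1  Up  Normal  101.62 GB  100.00%  0"
load=$(echo "$line" | awk '{print $6, $7}')
echo "Load: $load"
```

Discrepancies between this figure and raw du can come from snapshots, files left over from compaction, or stale gossip information, so it is worth checking both sides before assuming data loss.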
On Thu, Oct 13, 2011 at 11:33 PM, Eric Czech wrote:
> Thanks Brandon! Out of curiosity, would making schema changes through a
> thrift interface (via hector) be any different? In other words, would using
> hector instead of the cli make schema changes possible without upgrading?
No, but if the
Thanks again. I have truncated certain CFs recently; the cli didn't
complain, and listings of the CF rows return nothing after truncation. Is
that data not actually deleted?
On Fri, Oct 14, 2011 at 1:28 PM, Brandon Williams wrote:
> On Thu, Oct 13, 2011 at 11:33 PM, Eric Czech
> wrote:
> >
On Fri, Oct 14, 2011 at 2:36 PM, Eric Czech wrote:
> Thanks again. I have truncated certain CFs recently; the cli didn't
> complain, and listings of the CF rows return nothing after truncation. Is
> that data not actually deleted?
Hmm, well, now I'm confused because if 3259 is your problem t
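One plausible explanation for truncated data still occupying disk: with auto_snapshot enabled (the default in this era of Cassandra), truncate snapshots the column family before removing its SSTables, so the rows disappear from reads while the bytes remain on disk until the snapshot is cleared. A sketch, assuming localhost and a default data path (both are assumptions, adjust for your install):

```shell
# Clearing snapshots frees the space held by truncated CFs.
# Note: this deletes the snapshot backups, so be sure you don't need them.
nodetool -h localhost clearsnapshot
# Re-check disk usage afterwards; the data path here is an assumption.
du -sh /var/lib/cassandra/data
```

If usage drops sharply after clearing snapshots, the "missing" deletion was just snapshot retention, not a truncation bug.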
Well, the schema versions are still apparently consistent across the nodes
that are actually part of the ring (according to "describe cluster"). I
could just upgrade, but I'm trying to hold out for DataStax Enterprise, or at
least Community, and would rather not have to upgrade to 0.8.7 and then 1.x
Hi, I posted this message last month and I promised to put up a public
repository with all of our configuration details.
You can find it at https://github.com/vCider/BenchmarksCassandra
We've built a completely automated system with Puppet that configures EC2
instances with Cassandra as well as
> Now it is true that it would be a shame to interrupt a compaction that has
> been running for a long time and is about to finish (so typically not one
> that
> has just been triggered by your drain), but you can always check the
> compaction manager in JMX to see if that's the case before killing
Continuing this conversation: if there was a long-running compaction
happening and I have to kill the node and start it again, will it pick up
that compaction immediately?
No.
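The JMX check on the compaction manager mentioned earlier can also be done from the command line; nodetool exposes it as compactionstats (the host flag here is an assumption):

```shell
# Lists active compactions with bytes completed/total, plus the pending count.
# If one is nearly finished, consider letting it complete before killing the
# node, since an interrupted compaction does not resume after a restart.
nodetool -h localhost compactionstats
```

The interrupted compaction's partial output SSTables are simply discarded, and the node will eventually re-select those files for compaction on its own schedule.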