Modify your Capistrano script to install an init script. If you use Debian
or Red Hat you can copy or adapt these:
https://github.com/Shimi/cassandra/blob/trunk/debian/init
https://github.com/Shimi/cassandra/blob/trunk/redhat/cassandra
and set up Capistrano to call /etc/init.d/cassandra stop/s
I'm seeing some strange behavior and I'm not sure how it's possible. We updated
some data using a Pig script that wrote back to Cassandra. We get the
value and list the value on the Cassandra CLI and it's the updated value - from
MARKET to market. However, when doing a pig script to filter b
See
http://www.datastax.com/dev/blog/deploying-cassandra-across-multiple-data-centers
and
http://www.datastax.com/docs/0.8/brisk/about_brisk#about-the-brisk-architecture
It's possible to run multi-DC and use the LOCAL_QUORUM consistency level in your
production centre to allow the prod code to get
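The LOCAL_QUORUM arithmetic behind that setup can be sketched as follows (the replication factors below are assumptions for illustration, not from the thread):

```python
# Sketch: quorum sizes in a multi-DC cluster. LOCAL_QUORUM only needs a
# majority of the replicas in the local data centre, so production reads
# and writes never block on the remote (e.g. analytics) data centre.
def quorum(replicas: int) -> int:
    """Smallest majority of `replicas`."""
    return replicas // 2 + 1

# Assumed replication factors: 3 in the production DC, 2 in the other DC.
prod_rf, other_rf = 3, 2
print(quorum(prod_rf))             # LOCAL_QUORUM in the prod DC -> 2
print(quorum(prod_rf + other_rf))  # plain QUORUM across both DCs -> 3
```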
When you run drain the node will log something like "node drained" when it is
done.
The commit log should be empty; any data in the log may be due to changes in
the system tables after the drain. Can you raise a ticket and include the
commit logs left behind and any relevant log messages?
The no
try nohup
It's not currently supported via the API. But I *think* it's technically
possible; the code could page backwards using the index sampling the same way
it does for columns.
The best advice is to raise a ticket on
https://issues.apache.org/jira/browse/CASSANDRA (maybe do a search first,
someone el
That advice is a little out of date, especially in the future world of 0.8
memory management; see
http://thelastpickle.com/2011/05/04/How-are-Memtables-measured/
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 6/07/2011, at 5:51 PM
Use move when you need to change the token distribution, e.g. to re-balance the
ring.
During decommission, writes that would go to the old node will also (I think;
maybe instead) go to the node that will later be responsible for the old
node's range. Same thing when a node enters the ring, i
I'm just setting up a Cassandra cluster for my company. For a variety of
reasons, we have the servers that run our hadoop jobs in our local office
and our production machines in a collocated data center. We don't want to
run hadoop jobs against cassandra servers on the other side of the US from
u
Hi
I'm using Capistrano with Cassandra and was wondering if anyone has a
recipe, in particular for starting Cassandra as a daemon. Running the
'bin/cassandra' shell script (without the '-f' switch) doesn't quite work, as
this only runs Cassandra in the background; logging out will kill it.
See http://wiki.apache.org/cassandra/FAQ#range_ghosts
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 6/07/2011, at 3:46 AM, karim abbouh wrote:
> i use get_range_slice to get the list of keys,
> then i call client.remove(keysp
If you still have problems send through some details of where you get incorrect
results.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 6/07/2011, at 3:23 AM, Anand Somani wrote:
> Hi,
>
> Using thrift and get_range_slices cal
A couple of questions about commitlogs and the nodetool drain operation:
* In 0.6, after invoking a drain, the commitlog directory would be empty.
In 0.8, it seems to contain two files, a 44-byte .header file and a 270-byte
.log file. Do these indicate a fully drained commitlog?
* I have a couple node
Hey folks, I am going to start prototyping our media tier using Cassandra as
a file system (meaning: upload video/audio/images to a web server, save them
in Cassandra, and then stream them out).
Has anyone done this before?
I was thinking Brisk's CassandraFS might be a fantastic implementation for
this b
That makes sense. The problem is I jumped directly to using pig, which is
abstracting some of the data flow from me. I guess I'll have to figure out
what it's doing under the covers, to know how to optimize/fix bottlenecks.
But for now, I'm taking this information to mean "I should run datanodes
On Wed, Jul 6, 2011 at 2:48 PM, William Oberman wrote:
> I have a few cassandra/hadoop/pig questions. I currently have things set
> up in a test environment, and for the most part everything works. But,
> before I start to roll things out to production, I wanted to check
> on/confirm some things
Hi all
Are there any active cassandra meet ups in southern California?
Thanks
Mike
I have a few cassandra/hadoop/pig questions. I currently have things set up
in a test environment, and for the most part everything works. But, before
I start to roll things out to production, I wanted to check on/confirm some
things.
When I originally set things up, I used:
http://wiki.apache.o
On Wed, Jul 6, 2011 at 3:06 PM, A J wrote:
> I wish to use the order preserving byte-ordered partitioner. How do I
> figure the initial token values based on the text key value.
> Say I wish to put all keys starting from a to d on N1. e to m on N2
> and n to z on N3. What would be the initial_toke
# The Index Interval determines how large the sampling of row keys
# is for a given SSTable. The larger the sampling, the more effective
# the index is at the cost of space.
index_interval: 128
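As a back-of-the-envelope check on that space trade-off, the sample holds roughly one key per index_interval; a sketch (the row counts are illustrative, not from the thread):

```python
import math

def index_sample_entries(row_keys: int, index_interval: int = 128) -> int:
    """Approximate in-memory index sample size for one SSTable:
    one entry is kept for every index_interval row keys."""
    return math.ceil(row_keys / index_interval)

# e.g. an SSTable with 10 million row keys at the default interval of 128
print(index_sample_entries(10_000_000))      # -> 78125 sampled keys
# halving the interval doubles the sample (and the memory cost)
print(index_sample_entries(10_000_000, 64))  # -> 156250 sampled keys
```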
2011/7/6 Héctor Izquierdo Seliva :
> Forcing a full gc doesn't help either. Now the node is stuck in a
I wish to use the order-preserving ByteOrderedPartitioner. How do I
figure out the initial token values based on the text key values?
Say I wish to put all keys starting from a to d on N1, e to m on N2,
and n to z on N3. What would be the initial_token values on each of
the 3 nodes to accomplish this?
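With the ByteOrderedPartitioner a token is just the hex encoding of the raw key bytes, so candidate boundary tokens can be computed directly. A sketch; note that which boundary (range start vs. end) each node should carry depends on Cassandra's (previous_token, token] ownership, so double-check before using these as initial_token values:

```python
def token_for(key: str) -> str:
    """Hex token for a text key under ByteOrderedPartitioner (sketch)."""
    return key.encode("utf-8").hex()

# Candidate boundaries for the a-d / e-m / n-z split
for boundary in ("a", "e", "n"):
    print(boundary, "->", token_for(boundary))  # a -> 61, e -> 65, n -> 6e
```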
Forcing a full gc doesn't help either. Now the node is stuck in an
endless loop of full gcs that don't free any memory.
Hi,
I am using get_range_slice and I get the results sorted by key. Is it possible
to have the results also sorted by key but in reverse (from the biggest to the
smallest)?
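If the server can't do it, one client-side workaround is to page forward through the range and reverse in memory. A sketch with a stubbed fetch function standing in for a real get_range_slices call (all names here are hypothetical):

```python
# Page forward through a key range and reverse client-side. fetch_page is a
# stand-in for a Thrift get_range_slices call that returns keys from `start`.
def keys_in_reverse(fetch_page, page_size=100):
    keys, start = [], ""
    while True:
        page = fetch_page(start, page_size)
        if not page:
            break
        # the first key of a follow-up page repeats the previous page's last key
        keys.extend(page if not keys else page[1:])
        if len(page) < page_size:
            break
        start = page[-1]
    return list(reversed(keys))

# Stubbed data source standing in for a Cassandra cluster
data = [f"key{i:03d}" for i in range(10)]
def fake_fetch(start, count):
    begin = 0 if start == "" else data.index(start)
    return data[begin:begin + count]

print(keys_in_reverse(fake_fetch, page_size=4))  # key009 down to key000
```

This trades memory for simplicity: it only works when the full key list fits on the client, which is why a server-side reverse iterator would be preferable.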
Hi all,
I don't seem to be able to complete a full repair on one of the nodes.
Memory consumption keeps growing until it starts complaining about not
having enough heap. I had to disable the automatic memtable flush, as it
was generating thousands of almost empty memtables.
My guess is that the key
> That would be a bug. Can you open a ticket with the exact version
> you're using and
> the circumstance where this happens.
>
> Thanks.
>
> --
> Sylvain
https://issues.apache.org/jira/browse/CASSANDRA-2863
2011/7/6 Héctor Izquierdo Seliva :
> I'm also finding a few of these while opening sstables that have been
> built by repair (SSTable build compactions)
>
> ERROR [CompactionExecutor:2] 2011-07-06 10:09:16,054
> AbstractCassandraDaemon.java (line 113) Fatal exception in thread
> Thread[CompactionE
Thanks Sylvain. I'll try option 3 when the current repair ends so I can
fix the remaining CFs.
I'm also finding a few of these while opening sstables that have been
built by repair (SSTable build compactions):
ERROR [CompactionExecutor:2] 2011-07-06 10:09:16,054
AbstractCassandraDaemon.java (line
2011/7/6 Héctor Izquierdo Seliva :
> Hi, I've been struggling to repair my failed node for the past few days,
> and I've seen these errors a few times.
>
> java.lang.RuntimeException: Cannot recover SSTable with version f
> (current version g).
>
> If it can read the sstables, why can't they be used
Hi, I've been struggling to repair my failed node for the past few days,
and I've seen these errors a few times.
java.lang.RuntimeException: Cannot recover SSTable with version f
(current version g).
If it can read the sstables, why can't they be used to repair? Is there
anything I can do besides r