Re: normal thread counts?

2013-04-30 Thread aaron morton
The issue below could result in abandoned threads under high contention, so we'll get that fixed. But we are not sure how/why it would be called so many times. If you could provide a full list of threads and the output from nodetool gossipinfo that would help. Cheers - Aaron

Re: SSTables not opened on new cluster

2013-04-30 Thread Philippe
Hi Aaron, thanks for the response. Permissions are correct: owner is cassandra (ubuntu) and permissions are drwxr-xr-x. When I created the schema, the KS were created as directories in the .../data/ directory. When I use cassandra-cli, set the CL to QUORUM, ensure two instances are up (nodetool ring
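A minimal Python sketch of the ownership/mode check Philippe describes (the `cassandra` owner name and the 0755/drwxr-xr-x expectation are taken from the thread; the function name is made up for illustration):

```python
import os
import pwd
import stat

def check_data_dir(path, expected_owner="cassandra"):
    """Sanity-check a Cassandra data directory: it should be a directory
    owned by `expected_owner` with mode drwxr-xr-x (0755), as described
    in the thread. Returns a list of problems; empty means all checks passed."""
    problems = []
    st = os.stat(path)
    if not stat.S_ISDIR(st.st_mode):
        problems.append("%s is not a directory" % path)
    mode = stat.S_IMODE(st.st_mode)
    if mode != 0o755:
        problems.append("mode is %o, expected 755" % mode)
    owner = pwd.getpwuid(st.st_uid).pw_name
    if owner != expected_owner:
        problems.append("owner is %s, expected %s" % (owner, expected_owner))
    return problems
```

Running this against each directory under `.../data/` would surface the kind of permission mismatch Aaron asked about.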

Re: SSTables not opened on new cluster

2013-04-30 Thread aaron morton
Double check the file permissions ? Write some data (using cqlsh or cassandra-cli) and flush to make sure the new files are created where you expect them to be. Cheers - Aaron Morton Freelance Cassandra Consultant New Zealand @aaronmorton http://www.thelastpickle.com On 1/05

Re: normal thread counts?

2013-04-30 Thread aaron morton
> Many many many of the threads are trying to talk to IPs that aren't in the > cluster (I assume they are the IPs of dead hosts). Are these IPs from before the upgrade? Are they IPs you expect to see? Cross-reference them with the output from nodetool gossipinfo to see why the node think

Re: nodetool status OWNS and multiple DCs

2013-04-30 Thread aaron morton
> I thought that each datacenter has 100% coverage of token range. What does > the value in the "Owns" field mean and how does it affect replication (for example, > with replication factors DC1:1, DC2:2)? Run the command and specify your keyspace, that will tell nodetool to use the Replication Strategy

Re: cassandra-shuffle time to completion and required disk space

2013-04-30 Thread aaron morton
> These are taken just before starting shuffle (ran repair/cleanup the day > before). > During shuffle disabled all reads/writes to the cluster. > > nodetool status keyspace: > > Load Tokens Owns (effective) Host ID > 80.95 GB 256 16.7% 754f9f4c-4ba7-4495-97e7-1f5b6755c

CfP 2013 Workshop on Middleware for HPC and Big Data Systems (MHPC'13)

2013-04-30 Thread MHPC 2013
we apologize if you receive multiple copies of this message === CALL FOR PAPERS 2013 Workshop on Middleware for HPC and Big Data Systems MHPC '13 as part of Euro-Par 2013, Aachen, Germany =

Re: Exporting all data within a keyspace

2013-04-30 Thread Chidambaran Subramanian
Thanks guys,both are good pointers Regards Chiddu On Tue, Apr 30, 2013 at 7:09 PM, Brian O'Neill wrote: > > You could always do something like this as well: > > http://brianoneill.blogspot.com/2012/05/dumping-data-from-cassandra-like.html > > -brian > > --- > > Brian O'Neill > > Lead Architect,

Re: Really odd issue (AWS related?)

2013-04-30 Thread Ben Chobot
We've also had issues with ephemeral drives in a single AZ in us-east-1, so much so that we no longer use that AZ. Though our issues tended to be obvious from instance boot - they wouldn't suddenly degrade. On Apr 28, 2013, at 2:27 PM, Alex Major wrote: > Hi Mike, > > We had issues with the ep

Re: Compaction, Slow Ring, and bad behavior

2013-04-30 Thread aaron morton
Check the logs for warnings from the GCInspector. If you see messages that correlate with compaction running, limit compaction to help stabilise things… * set concurrent_compactors to 2 * if you have wide rows reduce in_memory_compaction_limit * reduce compaction_throughput If you have a lot (
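Aaron's three suggestions map to settings in a 1.x-era cassandra.yaml; a hedged sketch (the option names are the standard 1.x ones, but the values here are purely illustrative — tune to your hardware and workload):

```yaml
# Limit how many compactions run in parallel so reads/writes stay responsive.
concurrent_compactors: 2
# Rows larger than this take a slower two-pass compaction path instead of
# being held fully in memory; lower it if you have wide rows.
in_memory_compaction_limit_in_mb: 32
# Throttle total compaction I/O across the node (default is 16).
compaction_throughput_mb_per_sec: 8
```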

Re: error casandra ring an hadoop connection ¿?

2013-04-30 Thread aaron morton
> java.lang.RuntimeException: UnavailableException() Looks like the pig script could talk to one node, but the coordinator could not process the request at the consistency level requested. Check all the nodes are up, that the RF is set to the correct value and the CL you are using. Cheers

What does a healthy node look like?

2013-04-30 Thread Steppacher Ralf
Hi, I am having trouble finding quantitative information as to what a healthy Cassandra node should look like (CPU usage, number of flushes, SSTables, compactions, GC), given a certain hardware spec and read/write load. I am having trouble gauging our first and only Cassandra node, whether it needs

SSTables not opened on new cluster

2013-04-30 Thread Philippe
Hello, I'm trying to bring up a copy of an existing 3-node cluster running 1.0.8 into a 3-node cluster running 1.0.11. The new cluster has been configured to have the same tokens and the same partitioner. Initially, I copied the files in the data directory of each node into their corresponding no

Re: normal thread counts?

2013-04-30 Thread William Oberman
I use phpcassa. I did a thread dump. 99% of the threads look very similar (I'm using 1.1.9 in terms of matching source lines). The thread names are all like this: "WRITE-/10.x.y.z". There are a LOT of duplicates (in terms of the same IP). Many many many of the threads are trying to talk to IPs

Re: Exporting all data within a keyspace

2013-04-30 Thread Brian O'Neill
You could always do something like this as well: http://brianoneill.blogspot.com/2012/05/dumping-data-from-cassandra-like.html -brian --- Brian O'Neill Lead Architect, Software Development Health Market Science The Science of Better Results 2700 Horizon Drive • King of Prussia, PA • 19406 M: 21

Re: Exporting all data within a keyspace

2013-04-30 Thread Kumar Ranjan
Try sstable2json and json2sstable. They work on a column family, so you can fetch all column families, iterate over the list of CFs, and use the sstable2json tool to extract the data. Remember this will only fetch on-disk data, so anything in the memtable/cache that is yet to be flushed will be missed. So run compactio
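The iterate-over-CFs loop Kumar describes can be sketched in Python. This only *builds* the `sstable2json` invocations (one per `-Data.db` file, assuming the flat `<data_dir>/<keyspace>/` layout of Cassandra 1.0); the function and keyspace names are illustrative, and since `sstable2json` writes JSON to stdout, the caller would redirect each command's output to the paired file:

```python
import os

def sstable_export_commands(data_dir, keyspace):
    """Return (command, output_filename) pairs: one `sstable2json <Data.db>`
    invocation per SSTable in a keyspace directory. Remember to flush
    (nodetool flush) first so memtable data reaches disk."""
    ks_dir = os.path.join(data_dir, keyspace)
    jobs = []
    for name in sorted(os.listdir(ks_dir)):
        if name.endswith("-Data.db"):
            src = os.path.join(ks_dir, name)
            out = name[:-len("-Data.db")] + ".json"
            jobs.append((["sstable2json", src], out))
    return jobs
```

Each pair could then be run with `subprocess`, redirecting stdout to the `.json` file, and re-imported later via `json2sstable`.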

Exporting all data within a keyspace

2013-04-30 Thread Chidambaran Subramanian
Is there any easy way of exporting all data for a keyspace (and, conversely, importing it)? Regards Chiddu

nodetool status OWNS and multiple DCs

2013-04-30 Thread Sergey Naumov
Hello. I have set up a test cluster of 2 DCs with 1 node in each DC. In each config I specified 256 virtual nodes and chose GossipingPropertyFileSnitch. For node1: ~/Cassandra$ cat /etc/cassandra/cassandra-rackdc.properties dc=DC1 rack=RAC1 For node2: ~/Cassandra$ cat /etc/cassandra/cassandra-ra

RE: Exception when setting tokens for the cassandra nodes

2013-04-30 Thread Rahul
Oh, my bad. Thanks mate, that worked. On Apr 29, 2013 10:03 PM, wrote: > For starters: If you are using the Murmur3 partitioner, which is the > default in cassandra.yaml, then you need to calculate the tokens using: > > python -c 'print [str(((2**64 / 2) * i) - 2**63) for i in range(2)]'
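The quoted one-liner generalises from 2 nodes to any cluster size: the Murmur3 partitioner's token range is [-2**63, 2**63 - 1], so evenly spaced initial tokens are just equal strides across that range. A small sketch (the function name is made up for illustration):

```python
def murmur3_tokens(node_count):
    """Evenly spaced initial_token values for the Murmur3 partitioner,
    whose token range is [-2**63, 2**63 - 1]. For node_count=2 this
    reproduces the one-liner quoted above."""
    stride = 2**64 // node_count
    return [i * stride - 2**63 for i in range(node_count)]

print(murmur3_tokens(2))
```

Assign token i to node i via `initial_token` in each node's cassandra.yaml (not needed when using vnodes, where `num_tokens` handles assignment automatically).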