Exceptions whenever compaction happens

2016-09-25 Thread Nikhil Sharma
Hi, We are not exactly sure what is causing this problem, but after compaction happens (after the 1-week TTL) we start getting this exception: WARN [SharedPool-Worker-1] 2016-09-26 04:07:19,849 AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-1,5,ma

How long (how many days) will 'nodetool gossipinfo' keep decommissioned nodes' info

2016-09-25 Thread Laxmikanth S
Hi, Recently we decommissioned nodes from a Cassandra cluster, but even after nearly 48 hours 'nodetool gossipinfo' still shows the removed nodes (as LEFT). I just wanted to recommission the same node again, so I wanted to know: will it create a problem if I recommission the same node(sam

Iterating over a table with multiple producers [Python]

2016-09-25 Thread Bhuvan Rawal
Hi, It's a common occurrence that a full scan of a Cassandra table is required. One of the most common requirements is to get the count of rows in a table. As Cassandra doesn't keep count information stored anywhere (a node may not have any clue about writes happening on other nodes), when we aggregate
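The usual pattern for this kind of parallel full scan is to split the token ring into contiguous subranges and have each producer count its own slice with a token-bounded query. A minimal sketch of the splitting and fan-out logic, assuming the Murmur3Partitioner token space of [-2^63, 2^63 - 1]; `count_range` is a hypothetical placeholder where a real implementation would run `SELECT COUNT(*) FROM ks.tbl WHERE token(pk) > ? AND token(pk) <= ?` through the DataStax Python driver:

```python
# Sketch: parallel row counting by splitting the Murmur3 token ring.
from multiprocessing import Pool

MIN_TOKEN = -2**63       # Murmur3Partitioner minimum token
MAX_TOKEN = 2**63 - 1    # Murmur3Partitioner maximum token

def split_ring(n_splits):
    """Divide the token ring into n_splits contiguous (start, end] ranges."""
    span = (MAX_TOKEN - MIN_TOKEN) // n_splits
    bounds = [MIN_TOKEN + i * span for i in range(n_splits)] + [MAX_TOKEN]
    return list(zip(bounds[:-1], bounds[1:]))

def count_range(bounds):
    start, end = bounds
    # Placeholder: a real producer would execute a token-bounded
    # SELECT COUNT(*) here and return the partial count for (start, end].
    return 0

def parallel_count(n_workers=4):
    """Fan the subranges out to worker processes and sum the partial counts."""
    with Pool(n_workers) as pool:
        return sum(pool.map(count_range, split_ring(n_workers)))
```

Because the ranges are contiguous and cover the whole ring, each row's token falls into exactly one subrange, so the partial counts sum to the true total (at the consistency level of the individual queries).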

Re: regarding drain process

2016-09-25 Thread jason zhao yang
Hi Varun, It looks like a scheduled job that runs "nodetool drain". Zhao Yang. Varun Barala wrote on Sun, 2016-09-25 at 7:45 PM: > Jeff Jirsa thanks for your reply!! > > We are not using any chef/puppet and it happens only at one node; other > nodes are working fine. And all machines are using the same AMI ima
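If a scheduler is indeed invoking "nodetool drain", it should show up in the usual cron locations or in systemd timers on the affected node. A small sketch of that check (the paths are the conventional cron directories; adjust for your distribution):

```shell
# List files that mention "nodetool drain" under the given paths.
search_for_drain() {
    # grep exits non-zero when nothing matches, so swallow that case.
    grep -rl "nodetool drain" "$@" 2>/dev/null || true
}

# Typical places to look on the one misbehaving node:
# search_for_drain /etc/cron* /var/spool/cron
# systemctl list-timers --all   # also check systemd timers
```

Since only one node out of an otherwise identical AMI fleet is affected, a node-local crontab (/var/spool/cron) is a likely place for the difference to hide.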

Re: regarding drain process

2016-09-25 Thread Varun Barala
Jeff Jirsa, thanks for your reply!! We are not using any chef/puppet, and it happens only at one node; other nodes are working fine. And all machines are using the same AMI image. Did anybody face such a situation or have any suggestions? Thank you. On Wed, Jul 27, 2016 at 10:27 PM, Jeff Jirsa wrot