Re: Bringing a dead node back up after fixing hardware issues

2012-07-30 Thread Eran Chinthaka Withana
Thanks Brandon.

Thanks,
Eran Chinthaka Withana

On Thu, Jul 26, 2012 at 12:46 PM, Brandon Williams wrote:
> On Wed, Jul 25, 2012 at 6:16 PM, Eran Chinthaka Withana wrote:
> > Alright, let's assume I want to go on this route. I have RF=2 in the data
> > center and I believe I need at least R

unsubscribe

2012-07-30 Thread Alvin UW
thx

Re: Practical node size limits

2012-07-30 Thread Tyler Hobbs
On Mon, Jul 30, 2012 at 2:04 PM, Dustin Wenz wrote:
> CFStats reports that the bloom filter size is currently several gigabytes

Just so you know, you can control bloom filter sizes now with the per-CF bloom_filter_fp_chance attribute.

--
Tyler Hobbs
DataStax
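
The size/accuracy trade-off behind bloom_filter_fp_chance can be sketched with the standard bloom-filter sizing formula (bits per key ≈ -ln(p) / ln²2). This is the textbook formula, not Cassandra's exact implementation, so treat the numbers as ballpark only:

```python
import math

def bloom_bits_per_key(fp_chance):
    """Approximate bits per key for an optimally sized bloom filter
    with the given false-positive probability (textbook formula)."""
    return -math.log(fp_chance) / (math.log(2) ** 2)

# Raising the allowed false-positive chance from 1% to 10%
# roughly halves the filter's memory footprint:
for p in (0.01, 0.1):
    print(p, round(bloom_bits_per_key(p), 1))  # ~9.6 and ~4.8 bits/key
```

The practical consequence for the thread above: a higher bloom_filter_fp_chance trades a few extra disk seeks on reads for a much smaller resident filter on large nodes.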

Re: Practical node size limits

2012-07-30 Thread Dustin Wenz
Thanks for the pointer! It sounds likely that's what I'm seeing. CFStats reports that the bloom filter size is currently several gigabytes. Is there any way to estimate how much heap space a repair would require? Is it a function of simply adding up the filter file sizes, plus some fraction of n

"All time blocked" FlushWriter

2012-07-30 Thread Mohit Agarwal
Hi guys, Can anyone explain what the "All time blocked" and "Blocked" columns mean in tpstats? I have a 2-node 1.1.2 cluster, on which one of the nodes has the following output for FlushWriter. The other node is at 15 'All time blocked'. Do I need to 'unblock' this?

Pool Name    Acti
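
In tpstats, "Blocked" is the number of tasks blocked right now because the pool's queue is full, while "All time blocked" is a cumulative counter since startup; it only ever grows and is not something you "unblock". A toy analogy (not Cassandra code) with a bounded queue:

```python
import queue

# Toy analogy: a bounded work queue standing in for a tpstats thread pool.
# "All time blocked" corresponds to the cumulative count of submissions
# that found the queue full and would have had to wait.
work = queue.Queue(maxsize=2)
all_time_blocked = 0

for task in range(5):
    try:
        work.put_nowait(task)
    except queue.Full:
        all_time_blocked += 1  # the producer would have stalled here

print(all_time_blocked)  # 3 of the 5 submissions found the queue full
```

A steadily climbing FlushWriter "All time blocked" usually just means flushes occasionally queued up faster than they could be written, not a stuck node.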

Re: increased RF and repair, not working?

2012-07-30 Thread Tamar Fraenkel
Thanks!

*Tamar Fraenkel*
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736 | Mob: +972 54 8356490 | Fax: +972 2 5612956

On Mon, Jul 30, 2012 at 5:30 PM, Tim Wintle wrote:
> On Mon, 2012-07-30 at 15:16 +0300, Tamar Fraenkel wrote:
> > Ho

unsubscribe

2012-07-30 Thread Qisheng Zhu

Introspecting table with CQL3

2012-07-30 Thread Thierry Templier
Hello, I wonder if it's possible to have access to metadata regarding a table/column family (its column names, types...) using CQL3. Thanks very much for your help! Thierry

Problem when deleting a keyspace

2012-07-30 Thread Thierry Templier
Hello, I have a problem when trying to delete a keyspace using cqlsh. It seems to remain even after executing a drop keyspace command:

cqlsh> use teststore;
cqlsh:teststore> drop keyspace teststore;
cqlsh:teststore> describe keyspace teststore;
CREATE KEYSPACE teststore WITH strategy_

Re: increased RF and repair, not working?

2012-07-30 Thread Tim Wintle
On Mon, 2012-07-30 at 15:16 +0300, Tamar Fraenkel wrote:
> How do you make this calculation?

It seems I did make a mistake somewhere before (or I mistyped it) - it should have been 2.7%, not 2.8%. You're sending read requests to RF servers, and hoping for a response from CL of them within the t
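
The kind of calculation Tim describes can be sketched as a binomial model: a read at consistency level CL against RF replicas fails when fewer than CL respond before the timeout. The per-replica slowness probability below is a made-up illustrative number, not the one from this thread, and the model assumes replicas fail independently:

```python
from math import comb

def p_read_timeout(rf, cl, p_slow):
    """Probability that fewer than `cl` of `rf` replicas respond in time,
    assuming each replica independently misses the deadline with
    probability `p_slow` (idealised model, not Cassandra internals)."""
    p_ok = sum(comb(rf, k) * (1 - p_slow) ** k * p_slow ** (rf - k)
               for k in range(cl, rf + 1))
    return 1 - p_ok

# With RF=3 and CL=ONE, all three replicas must be slow for the read
# to fail, so a 3% per-replica miss rate gives roughly 0.03**3:
print(p_read_timeout(3, 1, 0.03))  # ~2.7e-05
```

The same function shows why higher CL hurts availability: p_read_timeout(3, 2, 0.03) is about a thousand times larger than the CL=ONE case.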

Re: increased RF and repair, not working?

2012-07-30 Thread Tamar Fraenkel
How do you make this calculation?

Thanks,

*Tamar Fraenkel*
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736 | Mob: +972 54 8356490 | Fax: +972 2 5612956

On Mon, Jul 30, 2012 at 3:14 PM, Tim Wintle wrote:
> On Mon, 2012-07-30 at 14:40 +03

Re: increased RF and repair, not working?

2012-07-30 Thread Tim Wintle
On Mon, 2012-07-30 at 14:40 +0300, Tamar Fraenkel wrote:
> Hi!
> To clarify it a bit more,
> Let's assume the setup is changed to
> RF=3
> W_CL=QUORUM (or two for that matter)
> R_CL=ONE
> The setup will now work for both read and write in case of one node
> failure.
> What are the disadvantages,
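
One disadvantage of the proposed RF=3, W=QUORUM, R=ONE setup is consistency: reads are only guaranteed to overlap the most recent write when W + R > RF, and QUORUM(2) + ONE(1) = 3 is not greater than RF=3. A minimal sketch of that rule (the standard quorum-overlap condition, not code from this thread):

```python
def quorum(rf):
    """Quorum size for a given replication factor: a strict majority."""
    return rf // 2 + 1

def strongly_consistent(rf, w, r):
    """Reads see the latest write only when write and read replica
    sets must overlap, i.e. W + R > RF."""
    return w + r > rf

rf = 3
print(strongly_consistent(rf, quorum(rf), 1))           # QUORUM writes, ONE reads   -> False
print(strongly_consistent(rf, quorum(rf), quorum(rf)))  # QUORUM writes, QUORUM reads -> True
```

So the R_CL=ONE setup gains read latency and availability, but a read can land on the one replica that has not yet received a given write.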

Re: Cassandra 1.0 hangs during GC

2012-07-30 Thread Nikolay Kоvshov
You mean using swap memory? I have a total of 48G of RAM and Cassandra never used more than 2G; swap is disabled. But as I have few other clues, I can give this a try. Is there any fresh instruction on running Cassandra with JNA?

30.07.2012, 16:01, "Mateusz Korniak":
> On Monday 30 of July 2012, N

Re: Cassandra 1.0 hangs during GC

2012-07-30 Thread Mateusz Korniak
On Monday 30 of July 2012, Nikolay Kоvshov wrote:
> - JNA is not installed on both machines

So your GC times may be strongly [1] affected by swapping. IIRC, snapshotting is also more expensive and may trigger more swapping. I would start with turning JNA mlockall on [2].

[1]: Not sure if up to

Re: Cassandra 1.0 hangs during GC

2012-07-30 Thread Nikolay Kоvshov
- JNA is not installed on both machines

30.07.2012, 14:44, "Mateusz Korniak":
> On Monday 30 of July 2012, Nikolay Kоvshov wrote:
>> What I plan to compare between 'bad' cluster and 'good' cluster:
>>
>> - Configs, schemas, data etc: same
>> - java version: same
>> - RAM and CPU: 'bad'

Re: increased RF and repair, not working?

2012-07-30 Thread Tamar Fraenkel
Hi! To clarify it a bit more, let's assume the setup is changed to:

RF=3
W_CL=QUORUM (or two, for that matter)
R_CL=ONE

The setup will now work for both read and write in case of one node failure. What are the disadvantages, other than the disk space needed to replicate everything thrice instead of t

Re: Cassandra 1.0 hangs during GC

2012-07-30 Thread Mateusz Korniak
On Monday 30 of July 2012, Nikolay Kоvshov wrote:
> What I plan to compare between 'bad' cluster and 'good' cluster:
>
> - Configs, schemas, data etc: same
> - java version: same
> - RAM and CPU: 'bad' cluster has more
> - Ubuntu version
> - Networking
> - What else???

- JNA. If it's working an

unsubscribe

2012-07-30 Thread Andrew Knox
unsubscribe

Re: Cassandra 1.0 hangs during GC

2012-07-30 Thread Nikolay Kоvshov
There is one peculiar thing about this problem. When I copy the cluster to two other servers (much weaker in terms of CPU and RAM) and run Cassandra there, I see no massive CPU load and no huge GC times. Thus I suppose it has something to do with the hardware or software installed on the machi

Re: Cassandra 1.0.6 nodetool drain gives lots of batch_mutate exceptions

2012-07-30 Thread Tamil selvan R.S
Isn't that obvious? http://wiki.apache.org/cassandra/NodeTool - check drain.

On Mon, Jul 30, 2012 at 11:07 AM, Roshan wrote:
> Hi
>
> As part of the Cassandra upgrade to 1.1.2 from 1.0.6, I am running
> *nodetool drain* node by node to empty the commit logs. When draining a
> particular node, that