Unavailable exception with CL.ANY

2010-12-14 Thread Rajat Chopra
Hi! I have a 16-node cluster with two keyspaces and several column families in each, all with RF=2. Reads and writes work for all column families except one, which gives me an UnavailableException, even at CL.ANY consistency. The nodetool ring shows that three of the nodes of t

Dual NIC server problems

2010-12-14 Thread Arjan van Ham
I have a cluster with three nodes, version 0.7.0 RC2. Each node has dual NICs: eth0 to the internet and eth1 to a private network (192.168.1.xxx). The outside NIC on each node is firewalled using iptables; only port 22 is allowed through. My cassandra.yaml configuration file refers only to the

Re: Unavailable exception with CL.ANY

2010-12-14 Thread aaron morton
The first thing you are going to need is some log messages, from the machine the client was connected to when it returned the UnavailableException. At CL.ANY even Hinted Handoff counts towards meeting the CL for a write (http://wiki.apache.org/cassandra/HintedHandoff), so as long as the client can c
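
As a sketch of the rule Aaron describes: the number of nodes a write must reach depends on the consistency level and the replication factor. This is a simplified model of the 0.6/0.7-era behavior (CL.ANY can be satisfied by a hint holder), not client code:

```python
# Simplified model of how many nodes must acknowledge a write at each
# consistency level. Assumption: mirrors the rules described on the
# era's Cassandra wiki; CL.ANY counts a hinted write toward the CL.

def replicas_required(cl: str, rf: int) -> int:
    """Return the number of nodes that must acknowledge a write."""
    cl = cl.upper()
    if cl == "ANY":
        return 1          # any node, including a hint holder
    if cl == "ONE":
        return 1          # one actual replica
    if cl == "QUORUM":
        return rf // 2 + 1
    if cl == "ALL":
        return rf
    raise ValueError(f"unknown consistency level: {cl}")

print(replicas_required("QUORUM", 2))  # 2 -> with RF=2, QUORUM behaves like ALL
```

Note that with RF=2 a quorum is 2 nodes, so losing one replica fails QUORUM writes; and if hinted handoff is disabled, even CL.ANY needs a live node for the range, which would explain the UnavailableException in this thread.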

RE: Unavailable exception with CL.ANY

2010-12-14 Thread Rajat Chopra
Okay, got that. Thanks. Bringing back just one of the nodes solved it. What I was keen on knowing was whether there exists a way to know which keys reside on which node. Like a 'nodetool column.path' that prints the node list the column path resides on :). I have HH disabled for some other bene

Re: Dual NIC server problems

2010-12-14 Thread aaron morton
The code for nodetool appears to just pass the host value through to the NodeProbe. Was there anything else in the stack trace ? If you use the host name of the machine rather than ip what happens? cassandra-env.sh includes a link to this page about getting JMX running with firewalls http://bl

Re: Dynamic Snitch / Read Path Questions

2010-12-14 Thread Daniel Doubleday
On Dec 14, 2010, at 2:29 AM, Brandon Williams wrote: > On Mon, Dec 13, 2010 at 6:43 PM, Daniel Doubleday > wrote: > Oh - well but I see that the coordinator is actually using its own score for > ordering. I was only concerned that dropped messages are ignored when > calculating latencies but

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Jedd Rashbrooke
Peter, Jonathan - thank you for your replies. I should probably have repeated myself in the body, but as I mentioned in the subject line, we're running Sun Java 1.6. On 10 December 2010 18:37, Peter Schuller wrote: > Memory-mapped files will account for both virtual and, to the extent > that

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Jedd Rashbrooke
On 10 December 2010 18:37, Peter Schuller wrote: > Memory-mapped files will account for both virtual and, to the extent > that they are resident in memory, to the resident size of the process. To clarify - in our storage-conf we have: mmap_index_only I know it's a matter of degree, but

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 12, 2010, at 17:21, Jonathan Ellis wrote: > http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dying-with-oom-errors I can rule out the first 3. I was running cassandra with default settings, i.e. 1GB heap and 256M memtable. So, with 3 memtables+1GB the JVM should run with
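
The heap-sizing rule of thumb from the MemtableThresholds wiki page can be sketched as arithmetic. This is a rough, era-specific heuristic (roughly three concurrent memtable copies per actively written column family, plus fixed overhead), not an exact model:

```python
def min_heap_mb(memtable_mb: int, hot_column_families: int,
                overhead_mb: int = 1024) -> int:
    """Rough lower bound on heap size: each actively written column
    family can hold up to ~3 copies of its memtable (active, flushing,
    overhead), plus a fixed base for caches, indexes, and the JVM."""
    return memtable_mb * 3 * hot_column_families + overhead_mb

# The setup in this thread: 1 GB heap, 256 MB memtable, one hot CF
print(min_heap_mb(256, 1))  # 1792 -> well above the 1024 MB heap in use
```

By this estimate a 256 MB memtable threshold already wants ~1.75 GB of heap, which matches the ">1.75G" figure in the message above.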

Cassandra-Pig keyspace not found issue

2010-12-14 Thread Peter Davies
Pig seems to think my keyspace doesn't exist. I'm connecting to a remote cassandra instance configured via the environment variables PIG_RPC_PORT and PIG_INITIAL_ADDRESS (an IP address). I get the following backend logged output... ** org.apache.pig.backend.executionengine.E

Re: cassandra database viewer

2010-12-14 Thread Amin Sakka, Novapost
Thanks for your answers. I have checked out the 0.7 branch but am still having trouble: *__init__() takes at least 3 arguments (2 given)* 2010-12-14 14:08:36+0100 [Uninitialized] will retry in 2 seconds 2010-12-14 14:08:36+0100 [Uninitialized] Stopping factory 2010-12-14 14:08:37+0100 [Uninitialized] Un

OOM error while starting Cassandra 0.7.0 rc1

2010-12-14 Thread Donal Zang
Hi, I'm using apache-cassandra-0.7.0-rc1 with Java 1.6.0_17. The node collapsed because of java.lang.OutOfMemoryError: Java heap space, and now it can't be restarted, because every time it replays the commit logs it collapses with an OOM error. I have 4G memory and I tried to set the bi

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Jonathan Ellis
This is "A row has grown too large" section from that troubleshooting guide. On Tue, Dec 14, 2010 at 5:27 AM, Timo Nentwig wrote: > > On Dec 12, 2010, at 17:21, Jonathan Ellis wrote: > > > > http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dying-with-oom-errors > > I can rule out t

Re: Running multiple instances on a single server --micrandra ??

2010-12-14 Thread Gary Dusbabek
On Tue, Dec 7, 2010 at 20:25, Edward Capriolo wrote: > I am quite ready to be stoned for this thread but I have been thinking > about this for a while and I just wanted to bounce these ideas of some > guru's. > > ... > > The upsides ? > 1) Since disk/instance failure only degrades the overall perf

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 14, 2010, at 14:41, Jonathan Ellis wrote: > This is "A row has grown too large" section from that troubleshooting guide. Why? This is what a typical "row" (?) looks like: [defa...@test] list tracking limit 1; --- RowKey: 123 => (column=key, value=foo, timestamp=129223800

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Clint Byrum
On Tue, 2010-12-14 at 11:06 +, Jedd Rashbrooke wrote: > JNA is something I'd read briefly about a while back, but now > it might be something I need to explore further. We're using > Cassandra 0.6.6, and our Ubuntu version offers a packaged > release of libjna 3.2.3-1 .. rumours on the Int

Fauna Questions

2010-12-14 Thread Alberto Velandia
Hi, has anyone noticed that the documentation for the Cassandra class is gone from the website? http://blog.evanweaver.com/2010/12/06/cassandra-0-8/ I was wondering if there's a way for me to count how many rows exist inside a Column Family, and a way to erase the contents of that Column Family

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 14, 2010, at 15:31, Timo Nentwig wrote: > On Dec 14, 2010, at 14:41, Jonathan Ellis wrote: > >> This is "A row has grown too large" section from that troubleshooting guide. > > Why? This is what a typical "row" (?) looks like: > > [defa...@test] list tracking limit 1; > -

Re: Consistency question caused by Read_all and Write_one

2010-12-14 Thread Alvin UW
Thanks. It is very helpful. I think I'd like to write to the same column. Would you please give me more details about your last sentence? For example, why can't I use locking mechanism inside of cassandra? Thanks. Alvin 2010/12/13 Aaron Morton > In your example is a little unclear. > > If yo

Re: Unavailable exception with CL.ANY

2010-12-14 Thread Robert Coli
On Tue, Dec 14, 2010 at 1:08 AM, Rajat Chopra wrote: > What I was keen on knowing was if there exists a way to know which keys > reside on which node. Like a 'nodetool column.path' and it prints the > nodelist that the column path resides on :). I have HH disabled for some > other benevolent reason
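
There is no nodetool command in this era that maps a column path to nodes, but placement is deterministic, so it can be computed by hand. A hypothetical sketch for RandomPartitioner with SimpleStrategy (the token arithmetic is simplified; real tokens live in the 0..2^127 MD5 space, and the ring and key below are made up for illustration):

```python
import hashlib
from bisect import bisect_right

def md5_token(key: bytes) -> int:
    # RandomPartitioner derives a token from the MD5 hash of the key
    return int.from_bytes(hashlib.md5(key).digest(), "big") % (2 ** 127)

def replicas_for(key: bytes, ring: list, rf: int) -> list:
    """ring is a sorted list of (token, node) pairs. SimpleStrategy
    places replicas on the token's owner and the next rf-1 nodes
    clockwise around the ring (simplified: exact-token edge cases
    are ignored)."""
    tokens = [t for t, _ in ring]
    owner = bisect_right(tokens, md5_token(key)) % len(ring)
    return [ring[(owner + i) % len(ring)][1] for i in range(rf)]

# A made-up 3-node ring with RF=2
ring = [(0, "node-a"), (2 ** 125, "node-b"), (2 ** 126, "node-c")]
print(replicas_for(b"some-key", ring, 2))
```

With RF=2 this returns two distinct nodes; a key is unavailable for CL.ONE reads only when both of its replicas are down.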

Re: OOM error while starting Cassandra 0.7.0 rc1

2010-12-14 Thread Peter Schuller
> I have 4G memory and I tried to set the binary_memtable_throughput_in_mb to > 128M and 64M; it still doesn't work. > I do not have big rows, and no row cache, use default Consistency_level... > Any ideas? Assuming you truly don't have large rows: what are your memtable thresholds set to? Make sure it

Re: Running multiple instances on a single server --micrandra ??

2010-12-14 Thread Edward Capriolo
On Tue, Dec 14, 2010 at 8:52 AM, Gary Dusbabek wrote: > On Tue, Dec 7, 2010 at 20:25, Edward Capriolo wrote: >> I am quite ready to be stoned for this thread but I have been thinking >> about this for a while and I just wanted to bounce these ideas of some >> guru's. >> >> ... >> >> The upsides ?

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Nate McCall
Timo, Apologies if I missed it above, but how big is the batch size you are sending to batch_mutate? On Tue, Dec 14, 2010 at 9:15 AM, Timo Nentwig wrote: > On Dec 14, 2010, at 15:31, Timo Nentwig wrote: > >> On Dec 14, 2010, at 14:41, Jonathan Ellis wrote: >> >>> This is "A row has grown too larg

Re: Consistency question caused by Read_all and Write_one

2010-12-14 Thread Tyler Hobbs
Could you give more details on what you're trying to do? This sounds like a case where a UUID will give you what you need without needing to lock. - Tyler On Tue, Dec 14, 2010 at 10:24 AM, Alvin UW wrote: > Thanks. > It is very helpful. > > I think I'd like to write to the same column. > > Wou
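
One common lock-free pattern Tyler may be alluding to: give each write a unique, time-ordered column name using a version 1 (time-based) UUID, so concurrent writers never contend for the same column. A minimal sketch using a plain dict in place of a column family:

```python
import uuid

# Version 1 UUIDs embed a timestamp (and a node/clock sequence), so
# each writer can append under its own unique column name instead of
# overwriting a shared column; no locking layer is needed.
events = {}
for payload in ("first", "second", "third"):
    events[uuid.uuid1()] = payload

# Reading the columns back sorted by the embedded timestamp
# reconstructs insertion order (Python's uuid1 guarantees strictly
# increasing timestamps within one process).
ordered = [events[k] for k in sorted(events, key=lambda u: u.time)]
print(ordered)  # ['first', 'second', 'third']
```

In Cassandra terms, this corresponds to using TimeUUID column names under a row, which is why the "write to the same column" requirement is worth questioning.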

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
>> Memory-mapped files will account for both virtual and, to the extent >> that they are resident in memory, to the resident size of the process. > > This bears further investigation. Would you consider a 3GB overhead > on a 4GB heap a possibility? (From a position of some naivety, this > seem

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
> I can rule out the first 3. I was running cassandra with default settings, > i.e. 1GB heap and 256M memtable. So, with 3 memtables+1GB the JVM should run > with >1.75G (although http://wiki.apache.org/cassandra/MemtableThresholds > considers to increase heap size only gently). Did so. 4GB mach

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
> java.lang.OutOfMemoryError: Java heap space > at java.nio.HeapByteBuffer.&lt;init&gt;(HeapByteBuffer.java:39) > at java.nio.ByteBuffer.allocate(ByteBuffer.java:312) > at > org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtilities.java:261) > at > org.apache.cassandra.db.C

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 14, 2010, at 18:45, Nate McCall wrote: > Timo, > Apologies if I missed it above, but how big is the batch size you are > sending to batch_mutate? Actually only a single row at a time for now (hector 0.7-21): final Cluster cluster = HFactory.getOrCreateCluster("Test", cassandraHo

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 14, 2010, at 19:45, Peter Schuller wrote: >> java.lang.OutOfMemoryError: Java heap space >> at java.nio.HeapByteBuffer.&lt;init&gt;(HeapByteBuffer.java:39) >> at java.nio.ByteBuffer.allocate(ByteBuffer.java:312) >> at >> org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtil

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 14, 2010, at 19:38, Peter Schuller wrote: > For debugging purposes you may want to switch Cassandra to "standard" > IO mode instead of mmap. This will have a performance-penalty, but the > virtual/resident sizes won't be polluted with mmap():ed data. Already did so. It *seems* to run more

org.apache.cassandra.service.ReadResponseResolver question

2010-12-14 Thread Daniel Doubleday
Hi, I'm sorry - I don't want to be a pain in the neck with source questions, so please just ignore me if this is stupid: isn't org.apache.cassandra.service.ReadResponseResolver supposed to throw a DigestMismatchException if it receives a digest which does not match the digest of a read message? If

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
>> The stack trace doesn't make sense relative to what I get checking out >> 0.6.6. Are you *sure* this is 0.6.6, without patches or other changes? > > Oh, sorry, the original poster of this thread was/is actually using 0.6, I am > (as mentioned in other posts) actually on 0.7rc2. Sorry that I did

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
> I just uncommented the GC JVMOPTS from the shipped cassandra start script and > use Sun JVM 1.6.0_23. Hmm, but these "GC tuning options" are also > uncommented. I'll comment them again and try again. Maybe I was just too quick trying to mentally parse it, given the jumbled line endings. You

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
>> For debugging purposes you may want to switch Cassandra to "standard" >> IO mode instead of mmap. This will have a performance-penalty, but the >> virtual/resident sizes won't be polluted with mmap():ed data. > > Already did so. It *seems* to run more stable, but it's still far off from > being

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
> I posted mostly as a heads up for others using similar profiles (4GB > heap on ~8GB boxes) to keep an eye out for. I expect a few people, > particularly if they're on Amazon EC2, are running this type of setup. > > On the other hand, mum always said I was unique. ;) So, now that I get that

Re: org.apache.cassandra.service.ReadResponseResolver question

2010-12-14 Thread Jonathan Ellis
Correct. https://issues.apache.org/jira/browse/CASSANDRA-1830 is open to fix that. If you'd like to review the patch there, that would be very helpful. :) On Tue, Dec 14, 2010 at 1:55 PM, Daniel Doubleday wrote: > Hi > > I'm sorry - don't want to be a pain in the neck with source questions. So

Re: Dynamic Snitch / Read Path Questions

2010-12-14 Thread Peter Schuller
> We are entirely IO-bound. What killed us last week were to many reads > combined with flushes and compactions. Reducing compaction priority helped > but it was not enough. Them main problem why we could not add nodes though > had to do with the quorum reads we are doing: I'm going to respond

Re: cassandra database viewer

2010-12-14 Thread Brandon Williams
On Tue, Dec 14, 2010 at 7:11 AM, Amin Sakka, Novapost <amin.sa...@novapost.fr> wrote: > > Thanks for your answers, I have checkout the 0.7 branch but still having > troubles: > *__init__() takes at least 3 arguments (2 given)* Are you using the 0.7 branch of telephus too? -Brandon

Insertion batch stopping for some reason at 100 records

2010-12-14 Thread Alberto Velandia
Hi, I'm using Cassandra 0.6.8 and Fauna. I'm running a batch to populate my db, and for some reason every time it gets to 100 records it stops; no error report or anything. Apparently it keeps storing, but every time I count the number of records it stays at a hundred. It is updating two Column Fam

Re: Insertion batch stopping for some reason at 100 records

2010-12-14 Thread Peter Schuller
> Hi, I'm using Cassandra 0.6.8 and Fauna. I'm running a batch to populate my db > and for some reason every time it gets to 100 records it stops; no error > report or anything. Apparently it keeps storing, but every time I count the > number of records it stays at a hundred. It is updating two Co

Re: Insertion batch stopping for some reason at 100 records

2010-12-14 Thread Alberto Velandia
Makes perfect sense, thanks. How can I set the count limit for a specific Column Family? On Dec 14, 2010, at 3:47 PM, Peter Schuller wrote: >> Hi i'm using Cassandra 0.6.8 and Fauna, I'm running a batch to populate my >> db and for some reason every time it gets to a 100 records it stops no er

Re: Insertion batch stopping for some reason at 100 records

2010-12-14 Thread Peter Schuller
(Btw I said "row count" in my response; that was a poor choice of words given that "row" has a specific meaning in Cassandra. I meant column count.) > Makes perfect sense thanks, how can I set the Count limit for an specific > Column Family? Looks like you can pass a :count option to get() (I ju

Re: Insertion batch stopping for some reason at 100 records

2010-12-14 Thread Peter Schuller
>> Makes perfect sense thanks, how can I set the Count limit for an specific >> Column Family? > > Looks like you can pass a :count option to get() (I just did a quick > check, I haven't used the client myself). See cassandra.rb's > documentation at the top. ... and that seems to apply to count_c
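
The thread's workaround generalizes to paging: fetch in chunks and restart each chunk from the last item seen, rather than raising a single count. A sketch with a fake in-memory store standing in for the client (function and option names are illustrative, not the actual Fauna API):

```python
# Clients of this era default the count to 100, which makes a naive
# count stop there. Paging avoids picking any fixed ceiling.

def count_all(fetch_page, page_size=100):
    total, start = 0, None
    while True:
        page = fetch_page(start=start, count=page_size)
        if start is not None:
            page = page[1:]  # the start item is returned again; skip it
        if not page:
            return total
        total += len(page)
        start = page[-1]

store = [f"col{i:04d}" for i in range(250)]  # 250 sorted column names

def fake_fetch(start=None, count=100):
    # Mimics an inclusive-start slice, like a Cassandra slice range
    begin = 0 if start is None else store.index(start)
    return store[begin:begin + count]

print(count_all(fake_fetch))  # 250
```

The inclusive-start-then-skip step mirrors real slice semantics, where the start column is included in each page.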

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
> Memory-mapped files will account for both virtual and, to the extent > that they are resident in memory, to the resident size of the process. > However, your graph: Correcting myself in the interest of providing correct information, this doesn't seem to be true - at least not always. I don't kno

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 14, 2010, at 21:07, Peter Schuller wrote: > In that case, based on the strack trace, I wonder if you're hitting > what I was hitting just yesterday/earlier today: > > https://issues.apache.org/jira/browse/CASSANDRA-1860 > > Which is suspected (currently being tested that it's gone with

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
> If it helps, I also found quite a few of these in the logs > > org.apache.cassandra.db.UnserializableColumnFamilyException: Couldn't find > cfId=224101 > > However a single cassandra instance locally (OSX, 1.6.0_22, mmap) runs just > perfect for > hours. No exceptions, no OOM. Given that these

Re: Fauna Questions

2010-12-14 Thread Aaron Morton
There is a truncate() function in the Ruby API; if you require Cassandra/0.7, this can truncate all the data in a CF. It calls the truncate function of the Thrift API. I do not know of a precise way to get a count of rows. There is a function to count the number of columns, see count_columns

Re: Fauna Questions

2010-12-14 Thread Tyler Hobbs
There's an estimateKeys() function exposed via JMX that will give you an approximate row count for the node. In jconsole this shows up under o.a.c.db -> ColumnFamilies -> Keyspace -> CF -> Operations. There's not a "precise" way to count rows other than to do a get_range_slices() over the entire

How to get columns in a super column in cassandra-cli ?

2010-12-14 Thread Hayarobi Park
Hello, I'm using cassandra 0.7.0-rc2. When I tried to get column contents in a super column of a Super CF like below; ] get myCF['key']['scName']; the client replies "supercolumn parameter is not optional for super CF user" It seemed to work in cassandra-0.7.0-beta2, if my memory is not wrong. The clu
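
Based on the error message above, the 0.7-rc2 cli appears to require the full path down to a subcolumn when reading from a super column family. A hedged sketch of the workaround (the subcolumn name 'colName' is illustrative, not from the original post):

```
get myCF['key']['scName']['colName'];
```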

Re: Dual NIC server problems

2010-12-14 Thread Oleg Anastasyev
This is probably because the RMI code that JMX uses to listen detected the wrong address. To fix this, add the following to the startup options of each Cassandra node: -Djava.rmi.server.hostname=127.0.0.1 (change 127.0.0.1 to the actual internal address of the Cassandra node)
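
As a sketch, Oleg's suggestion corresponds to a line like the following in the node's startup script (cassandra-env.sh in 0.7-era layouts; the JVM_OPTS variable name assumes the stock script, and 192.168.1.10 is a placeholder for this node's private IP):

```shell
# Pin the address the RMI/JMX server advertises, so nodetool and
# jconsole connect back to the private interface rather than a
# wrongly auto-detected one.
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.1.10"
```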