jamm - memory meter
1. Does jamm still not work on OpenJDK?

   WARN [MutationStage:368] 2011-10-01 18:21:39,695 Memtable.java (line 156)
   MemoryMeter uninitialized (jamm not specified as java agent); assuming
   liveRatio of 10.0. Usually this means cassandra-env.sh disabled jamm
   because you are using a buggy JRE; upgrade to the Sun JRE instead

2. Does Cassandra need jamm only until the first CF flush is done, since it
   learns the live-to-serialized ratio from the flush?
Repair in Cassandra 0.8.4 taking too long
I had 3 nodes with strategy_options (DC1=3) in one DC. I added one more DC and 3 more nodes. I didn't set the initial token, but I ran nodetool move on the new nodes (adding 1 to the tokens of the nodes in DC1). I updated the keyspace to strategy_options (DC1=3, DC2=3) and then started running nodetool repair on each of the nodes.

Before I started repair, each node had around 5 GB of data. I started on the new nodes. Two of the nodes completed the repair in 2 hours each. During the repair I saw the data grow to almost 25 GB, but when the repair was done it settled at around 9 GB. Is this normal?

The 3rd node has been running repair for a long time. It eventually stopped, throwing an exception:

Exception in thread "main" java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is:
        java.io.EOFException
        at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:209)
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142)
        at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
        at javax.management.remote.rmi.RMIConnectionImpl_Stub.invoke(Unknown Source)
        at javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:993)
        at javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:288)
        at $Proxy0.forceTableRepair(Unknown Source)
        at org.apache.cassandra.tools.NodeProbe.forceTableRepair(NodeProbe.java:192)
        at org.apache.cassandra.tools.NodeCmd.optionalKSandCFs(NodeCmd.java:773)
        at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:669)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readByte(DataInputStream.java:250)
        at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:195)

I started repair again, since it's safe to do so. Now the GCInspector complains of not enough heap:

WARN [ScheduledTasks:1] 2011-10-01 13:08:16,227 GCInspector.java (line 149) Heap is 0.7598414264960864 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2011-10-01 13:08:16,227 StorageService.java (line 2398) Unable to reduce heap usage since there are no dirty column families

nodetool ring shows 48 GB of data on the node. My Xmx is 2G. I rely on OS caching more than row caching or key caching, hence the column families were created with default settings.

Any help would be appreciated.

Thanks
-Raj
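For what it's worth, the temporary data growth during repair is expected: repair streams over whole ranges that later compact away. One way to bound how much is in flight per session is to repair one column family at a time, which the forceTableRepair call in the stack trace above already supports (host, keyspace, and column family names below are placeholders):

```
# Repair a single column family on one node at a time
nodetool -h 10.0.0.1 repair MyKeyspace MyColumnFamily
```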
UUID cli output
hi all,

I am using a UUID as my row key, and when examining it in the CLI, I was expecting something like this to get printed:

b2f0da40-ec2c-11e0--242d50cf1fbf

instead, I am seeing something like this:

633866363838343065626462313165303030303032343264353063663166

How does this get transformed?

thanks
Ruby
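What the CLI is printing is the hex encoding of the raw key bytes. A minimal sketch (Python, with a made-up UUID) of the difference between a key stored as the 16 raw UUID bytes and the same value inserted as a text string:

```python
import uuid

# Hypothetical TimeUUID key (any version-1 UUID works for the demonstration)
u = uuid.uuid1()

raw = u.bytes                  # 16 bytes: a key stored under a real UUID type
text = str(u).encode("ascii")  # 36 bytes: the same key inserted as plain text

print(raw.hex())   # 32 hex digits -- the UUID's own digits, minus the dashes
print(text.hex())  # 72 hex digits of ASCII codes -- a long opaque string,
                   # much like what the CLI shows for an untyped key
```

Either way, without a key type in the schema the CLI has only bytes to show you.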
Re: mmap segment underflow
Scrub seems to have worked. Thanks again! Will a major compaction delete the "tmp" sstables generated, though? Scrub seems to have generated a lot of them and they're taking up an unnerving amount of disk space.

On Mon, Sep 19, 2011 at 5:34 PM, Eric Czech wrote:
> Ok then I'll shut down the server, change the access mode, restart, and
> run scrub (and then change the access mode back).
>
> Thanks for the pointers and I'll let you know how it goes one way or the other.
>
> On Tue, Sep 20, 2011 at 12:29 AM, aaron morton wrote:
> > I've also found it useful to disable memmapped file access until the
> > scrub is complete by adding this to the yaml:
> >
> > disk_access_mode: standard
> >
> > Cheers
> >
> > -
> > Aaron Morton
> > Freelance Cassandra Developer
> > @aaronmorton
> > http://www.thelastpickle.com
> >
> > On 20/09/2011, at 6:55 AM, Jonathan Ellis wrote:
> >
> >> You should start with scrub.
> >>
> >> On Mon, Sep 19, 2011 at 1:04 PM, Eric Czech wrote:
> >>> I'm getting a lot of errors that look something like "java.io.IOError:
> >>> java.io.IOException: mmap segment underflow; remaining is 348268797
> >>> but 892417075 requested" on one node in a 10 node cluster. I'm
> >>> currently running version 0.8.4 but this is data that was carried over
> >>> from much earlier versions. Should I try to run scrub or are there
> >>> any other general guidelines for dealing with this sort of error?
> >>>
> >>> Thanks everyone!
> >>
> >> --
> >> Jonathan Ellis
> >> Project Chair, Apache Cassandra
> >> co-founder of DataStax, the source for professional Cassandra support
> >> http://www.datastax.com
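On the leftover tmp sstables: in my experience Cassandra also cleans up temporary sstables when a node starts, so a restart is usually enough. If they must go sooner, they can be removed by hand with the node stopped; the path below is an assumption (check data_file_directories in cassandra.yaml):

```
# With the node stopped; keyspace/CF names and data path are placeholders
rm /var/lib/cassandra/data/MyKeyspace/MyColumnFamily-tmp-*
```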
Re: UUID cli output
You don't have a key type defined in your schema, so the cli is showing you the bytes in hex notation. Look at "help update column family" for how to add the uuid key validator.

On Sat, Oct 1, 2011 at 5:31 PM, Ruby Stevenson wrote:
> hi all
>
> I am using UUID as my row key, and when examine it in CLI, I was expecting
> something like this that get printed:
>
> b2f0da40-ec2c-11e0--242d50cf1fbf
>
> instead, I am seeing something like this:
>
> 633866363838343065626462313165303030303032343264353063663166
>
> How does this get transformed?
>
> thanks
>
> Ruby

--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
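Concretely, setting the key validator from the cli looks roughly like this (column family name is a placeholder; use TimeUUIDType for version-1 UUIDs, LexicalUUIDType otherwise):

```
[default@MyKeyspace] update column family Users with key_validation_class = 'TimeUUIDType';
```

After that, the cli knows how to render the key bytes as a UUID instead of raw hex.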
Re: jamm - memory meter
Have a look at this: https://issues.apache.org/jira/browse/CASSANDRA-2787

Thanks,
Nehal Mehta

2011/10/1 Radim Kolar
> 1.
> jamm still do not works on openJDK?
>
> WARN [MutationStage:368] 2011-10-01 18:21:39,695 Memtable.java (line 156)
> MemoryMeter uninitialized (jamm not specified as java agent); assuming
> liveRatio of 10.0. Usually this means cassandra-env.sh disabled jamm
> because you are using a buggy JRE; upgrade to the Sun JRE instead
>
> 2.
> Cassandra needs jamm just until first CF flush is done because it knows
> live to serialized ratio from flush?
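The check the WARN message refers to lives in cassandra-env.sh. A rough sketch of the 0.8-era logic (the exact detection command and jar version vary by release):

```
# Sketch of the cassandra-env.sh guard that disables jamm on OpenJDK
# (jamm is only attached on non-OpenJDK JVMs because of an instrumentation
#  bug in early OpenJDK builds -- see CASSANDRA-2787)
JVM_VENDOR=`java -version 2>&1 | awk 'NR==2 {print $1}'`
if [ "$JVM_VENDOR" != "OpenJDK" ]; then
    JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.2.jar"
fi
```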
Re: anyway to disable row/key cache on single node while starting it?
Thanks a lot! Will try it.

Sent from my iPhone

On Sep 28, 2011, at 12:03 AM, Peter Schuller wrote:
>> Again I was doing repair on a single CF and it crashed because of OOM,
>> leaving 286 GB of data (should be 40 GB). The problem here is that it takes
>> very, very long to bring the node back to life, seemingly because it was
>> loading the row cache. The last time I encountered this, people suggested
>> deleting everything in the saved_cache directory and updating the schema to
>> set row/key cache to 0, but that seems to be cluster-wide and affects other
>> nodes. So is there any way to stop the node from loading the cache while
>> starting?
>
> Just removing the saved_cache stuff on the node accomplishes that.
>
> You can also change the cache sizes transiently on a single node by
> using nodetool, but there is no need to do that if all you're after is
> to avoid reading in row cache on startup. Simply removing the saved
> caches is enough.
>
> --
> / Peter Schuller (@scode on twitter)
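In shell terms, the per-node fix suggested above amounts to something like this (the path is an assumption; it is whatever saved_caches_directory points at in that node's cassandra.yaml):

```
# On the affected node only, with Cassandra stopped:
rm /var/lib/cassandra/saved_caches/*
```

Because this only touches local files, it affects just this node's next startup and nothing cluster-wide.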