Sorry for the delay in replying. I'd like to stress that I've been working on this cluster for many months and this was the first, and so far only, time I've seen this error, so I can't guess how to reproduce it. Sorry I can't be more help.
Anyway, here are the details requested: row caching is enabled, and at the time the error occurred it was using ConcurrentLinkedHashCacheProvider. It's the Apache packaged version with JNA pulled in as a dependency when I installed, so yes, JNA is installed. We're using Hector 1.0.1. I'm not sure what was happening at the time the error occurred, although the empty super columns are expected, assuming my understanding of super column deletion is correct: deleting a super column from a row tombstones it and deletes its data.

The schema for PlayerCity is as follows:

create column family PlayerCity
  with column_type = 'Super'
  and comparator = 'UTF8Type'
  and subcomparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and rows_cached = 400.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'ConcurrentLinkedHashCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';
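In case it's useful, this is roughly how our code issues those super column deletes through Hector. It's a minimal sketch rather than our actual code: the class, cluster name, host, and key here are illustrative, and superDelete is the Hector 1.0.x mutator call as I understand it.

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class DeleteOwnedItem {
        public static void main(String[] args) {
            // Placeholder cluster name and host, not our real topology.
            Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
            Keyspace keyspace = HFactory.createKeyspace("Player", cluster);

            StringSerializer ss = StringSerializer.get();
            Mutator<String> mutator = HFactory.createMutator(keyspace, ss);

            // Deleting the whole super column writes a tombstone for it and
            // drops its subcolumns; the tombstone hangs around until gc_grace
            // (864000 seconds in the schema above) has passed.
            String playerKey = "...";  // hex row key, elided
            mutator.superDelete(playerKey, "PlayerCity", "owneditem_1019", ss);
        }
    }

That tombstoning is why I'd expect to see empty SuperColumn entries like the ones in the error below.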
On Fri, Feb 24, 2012 at 10:07 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
> I've filed https://issues.apache.org/jira/browse/CASSANDRA-3957 as a
> bug. Any further light you can shed here would be useful. (Is row
> cache enabled? Is JNA installed?)
>
> On Mon, Feb 20, 2012 at 5:43 AM, Thomas van Neerijnen
> <t...@bossastudios.com> wrote:
> > Hi all
> >
> > I am running the Apache packaged Cassandra 1.0.7 on Ubuntu 11.10.
> > It has been running fine for over a month however I encountered the below
> > error yesterday which almost immediately resulted in heap usage rising
> > quickly to almost 100% and client requests timing out on the affected node.
> > I gave up waiting for the init script to stop Cassandra and killed it myself
> > after about 3 minutes, restarted it and it has been fine since. Anyone seen
> > this before?
> >
> > Here is the error in the output.log:
> >
> > ERROR 10:51:44,282 Fatal exception in thread
> > Thread[COMMIT-LOG-WRITER,5,main]
> > java.lang.AssertionError: Final buffer length 4690 to accomodate data size
> > of 2347 (predicted 2344) for RowMutation(keyspace='Player',
> > key='36336138643338652d366162302d343334392d383466302d356166643863353133356465',
> > modifications=[ColumnFamily(PlayerCity [SuperColumn(owneditem_1019
> > []),SuperColumn(owneditem_1024 []),SuperColumn(owneditem_1026
> > []),SuperColumn(owneditem_1074 []),SuperColumn(owneditem_1077
> > []),SuperColumn(owneditem_1084 []),SuperColumn(owneditem_1094
> > []),SuperColumn(owneditem_1130 []),SuperColumn(owneditem_1136
> > []),SuperColumn(owneditem_1141 []),SuperColumn(owneditem_1142
> > []),SuperColumn(owneditem_1145 []),SuperColumn(owneditem_1218
> > [636f6e6e6563746564:false:5@1329648704269002,
> > 63757272656e744865616c7468:false:3@1329648704269006,
> > 656e64436f6e737472756374696f6e54696d65:false:13@1329648704269007,
> > 6964:false:4@1329648704269000,6974656d4964:false:15@1329648704269001,
> > 6c61737444657374726f79656454696d65:false:1@1329648704269008,
> > 6c61737454696d65436f6c6c6563746564:false:13@1329648704269005,
> > 736b696e4964:false:7@1329648704269009,78:false:4@1329648704269003,
> > 79:false:3@1329648704269004,]),SuperColumn(owneditem_133
> > []),SuperColumn(owneditem_134 []),SuperColumn(owneditem_135
> > []),SuperColumn(owneditem_141 []),SuperColumn(owneditem_147
> > []),SuperColumn(owneditem_154 []),SuperColumn(owneditem_159
> > []),SuperColumn(owneditem_171 []),SuperColumn(owneditem_253
> > []),SuperColumn(owneditem_422 []),SuperColumn(owneditem_438
> > []),SuperColumn(owneditem_515 []),SuperColumn(owneditem_521
> > []),SuperColumn(owneditem_523 []),SuperColumn(owneditem_525
> > []),SuperColumn(owneditem_562 []),SuperColumn(owneditem_61
> > []),SuperColumn(owneditem_634 []),SuperColumn(owneditem_636
> > []),SuperColumn(owneditem_71 []),SuperColumn(owneditem_712
> > []),SuperColumn(owneditem_720 []),SuperColumn(owneditem_728
> > []),SuperColumn(owneditem_787 []),SuperColumn(owneditem_797
> > []),SuperColumn(owneditem_798 []),SuperColumn(owneditem_838
> > []),SuperColumn(owneditem_842 []),SuperColumn(owneditem_847
> > []),SuperColumn(owneditem_849 []),SuperColumn(owneditem_851
> > []),SuperColumn(owneditem_852 []),SuperColumn(owneditem_853
> > []),SuperColumn(owneditem_854 []),SuperColumn(owneditem_857
> > []),SuperColumn(owneditem_858 []),SuperColumn(owneditem_874
> > []),SuperColumn(owneditem_884 []),SuperColumn(owneditem_886
> > []),SuperColumn(owneditem_908 []),SuperColumn(owneditem_91
> > []),SuperColumn(owneditem_911 []),SuperColumn(owneditem_930
> > []),SuperColumn(owneditem_934 []),SuperColumn(owneditem_937
> > []),SuperColumn(owneditem_944 []),SuperColumn(owneditem_945
> > []),SuperColumn(owneditem_962 []),SuperColumn(owneditem_963
> > []),SuperColumn(owneditem_964 []),])])
> >         at org.apache.cassandra.utils.FBUtilities.serialize(FBUtilities.java:682)
> >         at org.apache.cassandra.db.RowMutation.getSerializedBuffer(RowMutation.java:279)
> >         at org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:122)
> >         at org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:599)
> >         at org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:49)
> >         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> >         at java.lang.Thread.run(Thread.java:662)
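Reading that trace, my understanding is that the commit log first asks the mutation for its serialized size, sizes a buffer to match, writes the mutation, and then asserts that the bytes actually written equal the prediction; here it predicted 2344 bytes, wrote 2347, and the backing buffer had grown to 4690. A rough standalone sketch of the shape of that check, using simplified stand-ins rather than the actual Cassandra classes:

    import java.io.ByteArrayOutputStream;

    // Simplified stand-in for the serializer plumbing; only the two calls
    // relevant to the failing check are modelled.
    interface SizedSerializer<T> {
        int serializedSize(T value);                        // the up-front prediction
        void serialize(T value, ByteArrayOutputStream out); // the actual write
    }

    final class SerializeCheck {
        // Predict the size, write the bytes, then assert the two agree.
        // The quoted log is this kind of assertion failing: predicted
        // 2344 bytes, actually wrote 2347.
        static <T> byte[] serializeChecked(T value, SizedSerializer<T> serializer) {
            int predicted = serializer.serializedSize(value);
            ByteArrayOutputStream out = new ByteArrayOutputStream(predicted);
            serializer.serialize(value, out);
            assert out.size() == predicted
                : String.format("wrote %d bytes but predicted %d", out.size(), predicted);
            return out.toByteArray();
        }
    }

If that reading is right, the mutation grew by three bytes between the size calculation and the write, which might be why the row cache setup is relevant.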
> > WARN 10:51:54,302 Heap is 0.764063958911146 full. You may need to reduce
> > memtable and/or cache sizes. Cassandra will now flush up to the two largest
> > memtables to free up memory. Adjust flush_largest_memtables_at threshold in
> > cassandra.yaml if you don't want Cassandra to do this automatically
> > WARN 10:51:54,303 Flushing CFS(Keyspace='Player',
> > ColumnFamily='PlayerDetail') to relieve memory pressure
> > INFO 11:00:41,162 Started hinted handoff for token:
> > 121529416757478022665490931225631504090 with IP: /10.16.96.212
> > INFO 11:00:41,163 Finished hinted handoff of 0 rows to endpoint
> > /10.16.96.212
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor192]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor165]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor202]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor232]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor146]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor181]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor190]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor214]
> > ...
> > [Unloading class sun.reflect.GeneratedConstructorAccessor19]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor209]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor227]
> > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor205]
> > [Unloading class sun.reflect.GeneratedMethodAccessor202]
> >
> > The "Unloading" messages continued, with occasional mentions of hinted
> > handoffs in between, until I killed Cassandra.
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com