Hi Ahmed,

For that you should increase the flush queue size setting
(memtable_flush_queue_size) in your cassandra.yaml.
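
For example, the relevant lines in a 2.0.x cassandra.yaml look roughly
like this (the values are only illustrative defaults, not a tuning
recommendation; check your own file):

    memtable_flush_writers: 1       # often set to the number of data directories/disks
    memtable_flush_queue_size: 4    # raise this when FlushWriter shows "All time blocked"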

kind regards,
Christian



On Thu, Jul 17, 2014 at 10:54 AM, Kais Ahmed <k...@neteck-fr.com> wrote:

> Thanks Christian,
>
> I'll check on my side.
>
> Do you have an idea about the FlushWriter 'All time blocked' count?
>
> Thanks,
>
>
> 2014-07-16 16:23 GMT+02:00 horschi <hors...@gmail.com>:
>
>> Hi Ahmed,
>>
>> This exception is caused by creating rows with a key longer than 64KB.
>> Your key seems to be 394920 bytes long.
>>
>> Keys and column names are limited to 64KB. Only values may be larger.
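>>
>> For example (purely illustrative, not part of any driver API), a
>> client-side guard before writing could look like this:
>>
>>     # Hypothetical helper: reject keys Cassandra cannot store as row keys.
>>     MAX_SHORT_LENGTH = 65535  # keys/column names are length-prefixed with an unsigned short
>>
>>     def checked_key(key):
>>         data = key.encode('utf-8') if isinstance(key, str) else bytes(key)
>>         if len(data) > MAX_SHORT_LENGTH:
>>             raise ValueError("key is %d bytes, limit is %d" % (len(data), MAX_SHORT_LENGTH))
>>         return data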
>>
>> I cannot say for sure if this is the cause of your high
>> MemtablePostFlusher pending count, but I would say it is possible.
>>
>> kind regards,
>> Christian
>>
>> PS: I still use good old Thrift lingo.
>>
>>
>> On Wed, Jul 16, 2014 at 3:14 PM, Kais Ahmed <k...@neteck-fr.com> wrote:
>>
>>> Hi Chris, Christian,
>>>
>>> Thanks for the reply, I'm not using DSE.
>>>
>>> In the log files I have this error, which appears two times.
>>>
>>> ERROR [FlushWriter:3456] 2014-07-01 18:25:33,607 CassandraDaemon.java (line 196) Exception in thread Thread[FlushWriter:3456,5,main]
>>> java.lang.AssertionError: 394920
>>>         at org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:342)
>>>         at org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:201)
>>>         at org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:188)
>>>         at org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:133)
>>>         at org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:202)
>>>         at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:187)
>>>         at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:365)
>>>         at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:318)
>>>         at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>>>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:744)
>>>
>>>
>>> It's the same error as in this link
>>> http://mail-archives.apache.org/mod_mbox/cassandra-user/201305.mbox/%3cbay169-w52699dd7a1c0007783f8d8a8...@phx.gbl%3E
>>> with the same configuration: 2 nodes, RF 2, SimpleStrategy.
>>>
>>> Hope this helps.
>>>
>>> Thanks,
>>>
>>>
>>>
>>> 2014-07-16 1:49 GMT+02:00 Chris Lohfink <clohf...@blackbirdit.com>:
>>>
>>>> The MemtablePostFlusher is also used for flushing non-CF-backed (Solr)
>>>> indexes.  Are you using DSE and Solr by chance?
>>>>
>>>> Chris
>>>>
>>>> On Jul 15, 2014, at 5:01 PM, horschi <hors...@gmail.com> wrote:
>>>>
>>>> I have seen this behaviour when commitlog files got deleted (or their
>>>> permissions were set to read-only).
>>>>
>>>> MemtablePostFlusher is the stage that marks the commitlog as flushed.
>>>> When its tasks fail, it usually means there is something wrong with the
>>>> commitlog files.
>>>>
>>>> Check your log files for any commitlog-related errors.
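>>>>
>>>> For example, something like this (paths assume a default package
>>>> install; adjust them to your setup):
>>>>
>>>>     grep -i commitlog /var/log/cassandra/system.log
>>>>     ls -l /var/lib/cassandra/commitlog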
>>>>
>>>> regards,
>>>> Christian
>>>>
>>>>
>>>> On Tue, Jul 15, 2014 at 7:03 PM, Kais Ahmed <k...@neteck-fr.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I have a small cluster (2 nodes, RF 2) running C* 2.0.6 on I2 Extra
>>>>> Large (AWS) instances with SSD disks.
>>>>> nodetool tpstats shows many MemtablePostFlusher pending tasks and a
>>>>> high FlushWriter 'All time blocked' count.
>>>>>
>>>>> The two nodes have the default configuration. All CFs use the
>>>>> size-tiered compaction strategy.
>>>>>
>>>>> There are 10 times more reads than writes (1300 reads/s and 150
>>>>> writes/s).
>>>>>
>>>>>
>>>>> ubuntu@node1:~$ nodetool tpstats
>>>>> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
>>>>> MemtablePostFlusher               1      1158         159590         0                 0
>>>>> FlushWriter                       0         0          11568         0              1031
>>>>>
>>>>> ubuntu@node1:~$ nodetool compactionstats
>>>>> pending tasks: 90
>>>>> Active compaction remaining time :        n/a
>>>>>
>>>>>
>>>>> ubuntu@node2:~$ nodetool tpstats
>>>>> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
>>>>> MemtablePostFlusher               1      1020          50987         0                 0
>>>>> FlushWriter                       0         0           6672         0               948
>>>>>
>>>>>
>>>>> ubuntu@node2:~$ nodetool compactionstats
>>>>> pending tasks: 89
>>>>> Active compaction remaining time :        n/a
>>>>>
>>>>> I think there is something wrong. Thank you for your help.
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>
