Right, I think there must be some other factor here, because neither
the flush nor the discard of obsolete commitlog segments blocks writes.
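
To illustrate what I mean, here's a minimal Java sketch of the pattern
(hypothetical names, not the actual Cassandra code): the write path only
appends to the active memtable; a flush swaps in a fresh memtable and hands
the frozen one to a background thread; and obsolete commitlog segments are
discarded only after that background flush completes.  None of it needs to
block new writes.

import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch only; names and structure do not come from Cassandra.
public class NonBlockingFlushSketch {

    // The "active" memtable that the write path appends to.
    private final AtomicReference<ConcurrentSkipListMap<String, String>> active =
            new AtomicReference<>(new ConcurrentSkipListMap<>());

    // Single background thread that performs the flush I/O.
    private final ExecutorService flushExecutor = Executors.newSingleThreadExecutor();

    // Write path: append to the active memtable and return immediately.
    public void write(String key, String value) {
        active.get().put(key, value);
    }

    // Flush path: swap in a fresh memtable, then write the frozen one to disk
    // on the background thread.  Writes never wait on the flush I/O.
    public void triggerFlush() {
        ConcurrentSkipListMap<String, String> frozen =
                active.getAndSet(new ConcurrentSkipListMap<>());
        flushExecutor.submit(() -> {
            writeSSTable(frozen);        // expensive I/O happens off the write path
            discardObsoleteSegments();   // commitlog data covered by the flush is now safe to drop
        });
    }

    private void writeSSTable(Map<String, String> memtable) {
        // Placeholder for the sequential on-disk write of the sorted memtable contents.
        System.out.println("flushed " + memtable.size() + " rows");
    }

    private void discardObsoleteSegments() {
        // Placeholder: a real system marks the flushed data clean in each segment
        // header and deletes segments that no longer contain any dirty data.
        System.out.println("discarded obsolete commit log segments");
    }

    public static void main(String[] args) {
        NonBlockingFlushSketch store = new NonBlockingFlushSketch();
        store.write("k1", "v1");
        store.triggerFlush();            // returns immediately; flush runs in the background
        store.write("k2", "v2");         // lands in the new active memtable
        store.flushExecutor.shutdown();
    }
}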

On Wed, Jun 23, 2010 at 7:05 PM, Sean Bridges <sean.brid...@gmail.com> wrote:
> I see about 3000 lines of:
>
> INFO [COMMIT-LOG-WRITER] 2010-06-23 16:40:29,107 CommitLog.java (line
> 412) Discarding obsolete commit
> log:CommitLogSegment(/data1/cass/commitlog/CommitLog-1277302220723.log)
>
> Then, http://pastebin.com/YQA0mpRG
>
> It's around 16:50 that Cassandra writes stop timing out.  Some writes
> are getting through during these 10 minutes, but they shouldn't be
> enough to cause the index memtables to flush.
>
> Thanks,
>
> Sean
>
>
>
>
>
>
> On Wed, Jun 23, 2010 at 3:30 PM, Benjamin Black <b...@b3k.us> wrote:
>> Are you seeing any sort of log messages from Cassandra at all?
>>
>> On Wed, Jun 23, 2010 at 2:26 PM, Sean Bridges <sean.brid...@gmail.com> wrote:
>>> We were running a load test against a single 0.6.2 Cassandra node.  24
>>> hours into the test, Cassandra appeared to be nearly frozen for 10
>>> minutes.  Our write rate went to almost 0, and we had a large number
>>> of write timeouts.  We weren't swapping or gc'ing at the time.
>>>
>>> It looks like the problems were caused by our memtables flushing after
>>> 24 hours (we have MemtableFlushAfterMinutes=1440).  Some of our column
>>> families are written to infrequently so that they don't hit the flush
>>> thresholds in MemtableOperationsInMillions and MemtableThroughputInMB.
>>>  After 24 hours we had ~3000 commit log files.
>>>
>>> Is this flushing causing Cassandra to become unresponsive?  I would
>>> have thought Cassandra could flush in the background without blocking
>>> new writes.
>>>
>>> Thanks,
>>>
>>> Sean
>>>
>>
>
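
For reference, here's a rough sketch (again hypothetical names, not the 0.6
code) of how the three per-column-family thresholds Sean mentions above
interact: a column family that is written to infrequently never hits the
operations or throughput limits, so it only flushes when
MemtableFlushAfterMinutes expires, and until it flushes, the commitlog
segments containing its data cannot be discarded.  That is consistent with
~3000 segments accumulating over 24 hours.

import java.util.concurrent.TimeUnit;

// Illustrative only: hypothetical class, not the 0.6 Memtable code.  It mirrors
// the three per-column-family thresholds from the original mail.
public class FlushThresholdSketch {

    private final long maxOperations;   // MemtableOperationsInMillions, in absolute ops
    private final long maxBytes;        // MemtableThroughputInMB, in bytes
    private final long maxAgeMillis;    // MemtableFlushAfterMinutes, in ms
    private final long createdAt = System.currentTimeMillis();

    private long operations;
    private long bytes;

    public FlushThresholdSketch(double opsInMillions, int throughputMB, int flushAfterMinutes) {
        this.maxOperations = (long) (opsInMillions * 1_000_000);
        this.maxBytes = throughputMB * 1024L * 1024L;
        this.maxAgeMillis = TimeUnit.MINUTES.toMillis(flushAfterMinutes);
    }

    public void recordWrite(int valueSize) {
        operations++;
        bytes += valueSize;
    }

    // A rarely written column family never trips the first two checks, so it sits
    // in memory (keeping its commitlog segments alive) until the age check fires.
    public boolean shouldFlush() {
        return operations >= maxOperations
                || bytes >= maxBytes
                || System.currentTimeMillis() - createdAt >= maxAgeMillis;
    }

    public static void main(String[] args) {
        // e.g. the 1440-minute setting from the original mail
        FlushThresholdSketch quietCf = new FlushThresholdSketch(0.3, 64, 1440);
        quietCf.recordWrite(100);
        System.out.println("flush now? " + quietCf.shouldFlush()); // false until 24h elapse
    }
}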



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
