I upgraded Cassandra to 2.0.5, and these issues have not occurred so far.

Thanks
Mahesh


On Mon, Feb 17, 2014 at 1:43 PM, mahesh rajamani
<rajamani.mah...@gmail.com> wrote:

> Christian,
>
> There are two use cases that are failing, and both look like the same
> underlying issue: both involve column families where columns are set
> with a TTL.
>
> Case 1) I maintain an index for specific data as a single row in a
> column family. When an entry needs to be removed from the index row, I
> set its TTL to 1 second. In some scenarios, get and count for that row
> key return different column counts. In the application, a get returns
> the correct set of columns (expired columns are not returned), but a
> slice query reading 100 columns at a time still returns the columns
> that were set with a TTL. I am not able to understand what triggers
> this issue.
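>
> For illustration (the column family, row key, and column names here
> are made up), the index maintenance in cassandra-cli looks roughly
> like:
>
>   set IndexCF['index-row']['entry1'] = 'docid-1';
>
> and later, to drop entry1 from the index row:
>
>   set IndexCF['index-row']['entry1'] = 'docid-1' with ttl = 1;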
>
> Case 2) I have a column family for managing locks. I insert a lock
> column with a default TTL of 15 seconds. If the transaction completes
> before then, I remove the column by rewriting it with a TTL of 1
> second.
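>
> For illustration (names are hypothetical), the lock handling in
> cassandra-cli is roughly:
>
>   set LocksCF['resource-1']['owner'] = 'node-a' with ttl = 15;
>
> and on completion, rather than deleting the column, I overwrite it
> with a 1-second TTL:
>
>   set LocksCF['resource-1']['owner'] = 'node-a' with ttl = 1;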
>
> In this case, when I run flush, the flush hangs with the following
> AssertionError:
>
> ERROR [FlushWriter:1] 2014-02-17 11:49:29,349 CassandraDaemon.java (line
> 187) Exception in thread Thread[FlushWriter:1,5,main]
> java.lang.AssertionError
>         at
> org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:198)
>         at
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:186)
>         at
> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:360)
>         at
> org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:315)
>         at
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>         at
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:722)
>
>
> Thanks
> Mahesh
>
>
>
> On Mon, Feb 17, 2014 at 12:43 PM, horschi <hors...@gmail.com> wrote:
>
>> Hi Mahesh,
>>
>> the problem is that a column overwritten with a TTL only shadows the
>> older value for as long as the overwriting column itself is valid.
>>
>> So if the last update was only valid for 1 second, then the tombstone
>> will also only be valid for 1 second! If the previous value was valid
>> for a longer time, that old value may reappear.
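>>
>> A minimal sketch of the failure case (cassandra-cli, hypothetical
>> names):
>>
>>   set MyCF['row1']['col1'] = 'v1' with ttl = 86400;
>>   set MyCF['row1']['col1'] = 'v1' with ttl = 1;
>>
>> Once the second write expires, its 1-second tombstone goes with it,
>> and the earlier long-lived write can become readable again until it
>> is compacted away.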
>>
>> Maybe you can explain why you are doing this?
>>
>> kind regards,
>> Christian
>>
>>
>>
>> On Mon, Feb 17, 2014 at 6:18 PM, mahesh rajamani <
>> rajamani.mah...@gmail.com> wrote:
>>
>>> Christian,
>>>
>>> Yes. Is it a problem?  Can you explain what happens in this scenario?
>>>
>>> Thanks
>>> Mahesh
>>>
>>>
>>> On Fri, Feb 14, 2014 at 3:07 PM, horschi <hors...@gmail.com> wrote:
>>>
>>>> Hi Mahesh,
>>>>
>>>> is it possible that you are creating columns with a long TTL and
>>>> then updating these columns with a smaller TTL?
>>>>
>>>> kind regards,
>>>> Christian
>>>>
>>>>
>>>> On Fri, Feb 14, 2014 at 3:45 PM, mahesh rajamani <
>>>> rajamani.mah...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am using Cassandra version 2.0.2. On a wide row (approx. 10,000
>>>>> columns), I expire a few columns by setting their TTL to 1 second.
>>>>> At times these columns still show up in slice queries.
>>>>>
>>>>> When I hit this issue, running the count and get commands for that
>>>>> row in cassandra-cli gives different column counts.
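>>>>>
>>>>> Concretely, the check looks like this in cassandra-cli (column
>>>>> family and row key are made up):
>>>>>
>>>>>   count MyCF['row1'];
>>>>>   get MyCF['row1'];
>>>>>
>>>>> and the two commands report different numbers of columns for the
>>>>> same row.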
>>>>>
>>>>> But once I run flush and compact, the issue goes away and the
>>>>> expired columns no longer show up.
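>>>>>
>>>>> The flush and compact I run are (keyspace and column family names
>>>>> hypothetical):
>>>>>
>>>>>   nodetool flush MyKeyspace MyCF
>>>>>   nodetool compact MyKeyspace MyCF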
>>>>>
>>>>> Can someone provide some help with this issue?
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Mahesh Rajamani
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Mahesh Rajamani
>>>
>>
>>
>
>
> --
> Regards,
> Mahesh Rajamani
>



-- 
Regards,
Mahesh Rajamani
