Hi Ian,

The issue here, which relates to both normal and index column families, is
that scanning over a large number of tombstones can cause Cassandra to fall
over due to increased GC pressure. This pressure arises because each
tombstone creates a DeletedColumn object that consumes heap. These
DeletedColumn objects also have to be serialized and sent back to the
coordinator, further increasing your response times. Take for example a row
that sees many deletes, queried with a limit of 100. In a worst case
scenario you could end up reading, say, 50k tombstones to reach the 100
'live' column limit, all of which have to be put on heap and then sent over
the wire to the coordinator. This is considered a Cassandra
anti-pattern.[1]
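
To make that concrete, here is a minimal sketch of the queue-like pattern
described in [1]. The table, names and values are hypothetical, purely for
illustration:

CREATE TABLE queues (
  queue_id ascii,
  position timeuuid,
  message blob,
  PRIMARY KEY (queue_id, position)
);

-- each consumed message is deleted, leaving a tombstone behind
DELETE FROM queues
WHERE queue_id = 'q1' AND position = 645e7d3c-aef7-11e4-a117-0242ac110002;

-- this read must skip every tombstone at the head of the partition before
-- it finds 100 live columns; with heavy delete traffic that can mean tens
-- of thousands of DeletedColumn objects on the heap for a single query
SELECT * FROM queues WHERE queue_id = 'q1' LIMIT 100;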

With that in mind, a debug warning was added in 1.2 to inform the user when
a query reads more than 1,000 tombstones in a row [2]. Then in 2.0 this was
escalated: requests that reach 100k tombstones are dropped outright [3]
rather than just logging a warning. This is a safety measure, as it is not
advisable to perform such a query, and most people who hit the limit are
'doing it wrong'.

For those who understand the risk of scanning over large numbers of
tombstones, there is a configuration option in cassandra.yaml to raise this
threshold: tombstone_failure_threshold.[4]
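
For reference, the 2.0 defaults look like this (the comments are mine, not
verbatim from the shipped cassandra.yaml; see [4] for the real ones):

# warn in the log when a single query reads more than this many tombstones
tombstone_warn_threshold: 1000
# abort any query that reads more than this many tombstones
tombstone_failure_threshold: 100000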


Mark

[1]
http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets
[2] https://issues.apache.org/jira/browse/CASSANDRA-6042
[3] https://issues.apache.org/jira/browse/CASSANDRA-6117
[4]
https://github.com/jbellis/cassandra/blob/4ac18ae805d28d8f4cb44b42e2244bfa6d2875e1/conf/cassandra.yaml#L407-L417



On Sun, Aug 10, 2014 at 7:19 PM, Ian Rose <ianr...@fullstory.com> wrote:

> Hi -
>
> On this page (
> http://www.datastax.com/documentation/cql/3.0/cql/ddl/ddl_when_use_index_c.html),
> the docs state:
>
> Do not use an index [...] On a frequently updated or deleted column
>
>
> and
>
>
>> *Problems using an index on a frequently updated or deleted column*
>> <http://www.datastax.com/documentation/cql/3.0/cql/ddl/ddl_when_use_index_c.html?scroll=concept_ds_sgh_yzz_zj__upDatIndx>
>
>> Cassandra stores tombstones in the index until the tombstone limit reaches
>> 100K cells. After exceeding the tombstone limit, the query that uses the
>> indexed value will fail.
>
>
>
> I'm afraid I don't really understand this limit from its (brief)
> description.  I also saw this recent thread
> <http://mail-archives.apache.org/mod_mbox/cassandra-user/201403.mbox/%3CCABNXB2Bf4aeoDVpMNOxJ_e7aDez2EuZswMJx=jWfb8=oyo4...@mail.gmail.com%3E>
>  but
> I'm afraid it didn't help me much...
>
>
> *SHORT VERSION*
>
> If I have tens or hundreds of thousands of rows in a keyspace, where every
> row has an indexed column that is updated O(10) times during the lifetime
> of each row, is that going to cause problems for me?  If that 100k limit
> is *per row* then I should be fine, but if that 100k limit is *per
> keyspace* then I'd definitely exceed it quickly.
>
>
> *FULL EXPLANATION*
>
> In our system, items are created at a rate of ~10/sec.  Each item is
> updated ~10 times over the next few minutes (although in rare cases the
> number of updates and the duration might both be several times greater).
> Once
> the last update is received for an item, we select it from Cassandra,
> process the data, then delete the entire row.
>
> The tricky bit is that sometimes (maybe 30-40% of the time) we don't
> actually know when the last update has been received so we use a timeout:
> if an item hasn't been updated for 30 minutes, then we assume it is done
> and should process it as before (select, then delete).  So I am trying to
> design a schema that will allow for efficient queries of the form "find me
> all items that have not been updated in the past 30 minutes."  We plan to
> call this query once a minute.
>
> Here is my tentative schema:
>
> CREATE TABLE items (
>   item_id ascii,
>   last_updated timestamp,
>   item_data list<blob>,
>   PRIMARY KEY (item_id)
> )
> plus an index on last_updated.
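> (in CQL, something like: CREATE INDEX ON items (last_updated);)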
>
> So updates to an existing item would just be "lookup by item_id, append
> new data to item_data, and set last_updated to now".  And queries to find
> items that have timed out would use the index on last_updated: "find all
> items where last_updated < [now - 30 minutes]".
>
> Assuming, that is, that the aforementioned 100k tombstone limit won't
> bring this index crashing to a halt...
>
> Any clarification on this limit and/or suggestions on a better way to
> model/implement this system would be greatly appreciated!
>
> Cheers,
> Ian
>
>
