Thank you for your time!

Our replication factor is 'DC1': '2', 'DC2': '2'.
Consistency is set to LOCAL_ONE for these queries.

Indeed, timeouts might be a problem, as some of the nodes in DC2 are under
high load from time to time.
Is there a counter (e.g. via JMX) I could monitor to verify this
assumption?
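
In case it helps others, here is where I would tentatively look first,
assuming the Cassandra 2.1 metric names are as documented:

```shell
# Dropped MUTATION messages on the coordinator -- timed-out increments
# should show up in the "dropped" column of the output:
nodetool tpstats

# The same counters via JMX (metric names as of Cassandra 2.1, if I
# read the documentation correctly):
#   org.apache.cassandra.metrics:type=DroppedMessage,scope=MUTATION,name=Dropped
#   org.apache.cassandra.metrics:type=Storage,name=TotalHints
#   org.apache.cassandra.metrics:type=Storage,name=TotalHintsInProgress
```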

>
>> - What could we do to investigate the cause of this issue deeper?
>>
>
> Are the hints being successfully delivered? It sounds like not..
>

No, I do not think so. Actually, we are not really interested in this data
in DC2; we only replicate it because the table is in that keyspace for
historic reasons.
It seems we need to migrate that table to a different keyspace, doesn't it?
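
In case it is useful, a rough sketch of what that migration might look like
(keyspace and table names are made up for illustration; counter tables may
not round-trip through COPY on every version, in which case sstableloader
would be the fallback):

```shell
# 1. Create a keyspace that replicates to DC1 only, so DC2 no longer
#    receives these mutations (names are placeholders):
cqlsh -e "CREATE KEYSPACE counters_ks WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': 2};"

# 2. Recreate the table schema in the new keyspace, then copy the data
#    out of the old table and into the new one:
cqlsh -e "COPY old_ks.page_counters TO '/tmp/page_counters.csv';"
cqlsh -e "COPY counters_ks.page_counters FROM '/tmp/page_counters.csv';"

# 3. Switch the application over to the new keyspace, then drop the
#    old table.
```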

Kind regards
Björn


2015-09-23 22:56 GMT+02:00 Robert Coli <rc...@eventbrite.com>:

> On Wed, Sep 23, 2015 at 7:28 AM, Björn Hachmann <
> bjoern.hachm...@metrigo.de> wrote:
>
>> Today I realized that one of the nodes in our Cassandra cluster (2.1.7)
>> is storing a lot of hints (>80GB) and I fail to see a convincing way to
>> deal with them.
>> ...
>> We had a look into the table system.hints and from there we learnt that
>> most hints
>> are for one of the nodes in our 2nd datacenter and most of the mutations
>> are
>> increments to one of our counter tables which are very frequent.
>>
>
> This is probably timeouts on the increment creating your hints.
>
>
>> We have several questions:
>> - What could be the reason that only one of the nodes has hints for only
>> one target node, although every other node should be coordinator for these
>> queries sometimes also?
>>
>
> That sounds unexpected, I don't have a good answer.
>
>
>> - Is there a way to turn off hinted handoff on a table level or on data
>> center level?
>>
>
> No.
>
>> - What could we do to investigate the cause of this issue deeper?
>>
>
> Are the hints being successfully delivered? It sounds like not..
>
> =Rob
>
>
