We use counters heavily; we will upgrade and check whether that solves the
issue. The current cluster version is 2.0.14.
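
For reference, a quick way to confirm what each node is actually running before and after the upgrade (a rough sketch; the host names are placeholders for our six nodes):

# Report the Cassandra release version on every node
for host in node1 node2 node3 node4 node5 node6; do
    echo "$host: $(nodetool -h "$host" version)"
done

# Check that all nodes agree on the schema version after the rolling upgrade
nodetool describecluster

# After moving to 2.1, rewrite the on-disk sstables into the new format on each node
nodetool upgradesstables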

We do a lot of delete operations and run major compactions to remove the
tombstones. Is there a better way?
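
As a possible alternative, tombstone eviction can often be left to the regular compaction path by tuning gc_grace_seconds and the tombstone-related compaction subproperties per table, rather than forcing major compactions. A minimal sketch, assuming size-tiered compaction (keyspace/table names and values below are placeholders, not recommendations):

cqlsh <<'CQL'
-- Shrink the tombstone retention window only if repairs/hints reliably
-- finish well inside the new value (default is 864000 s = 10 days)
ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 86400;

-- Allow single-sstable tombstone compactions even when sstables overlap,
-- and trigger them at a lower estimated tombstone ratio
ALTER TABLE my_ks.my_table WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'unchecked_tombstone_compaction': 'true',
    'tombstone_threshold': '0.1'};
CQL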

On 1 July 2015 at 20:02, Sebastian Estevez
<sebastian.este...@datastax.com> wrote:
> Looks like CASSANDRA-6405 (ReplicateOnWrite is the counter thread pool).
> Upgrade to the latest 2.1 version and let us know if the situation improves.
>
> Major compactions are usually a bad idea by the way. Do you really want one
> huge sstable?
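>
> If a node does end up with one giant sstable and it becomes a problem, a
> rough sketch of how to inspect it and, if needed, split it back up
> (keyspace/table names and paths are placeholders; sstablesplit must only
> be run while Cassandra is stopped on that node):
>
> # SSTable count and on-disk sizes for the table in question
> nodetool cfstats my_ks.my_table    # or plain 'nodetool cfstats' on older builds
> ls -lh /var/lib/cassandra/data/my_ks/my_table*/*-Data.db
>
> # With the node stopped, split the big sstable into ~50 MB pieces
> sstablesplit -s 50 /var/lib/cassandra/data/my_ks/my_table*/*-Data.db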
>
> On Jul 1, 2015 10:03 AM, "Jayapandian Ponraj" <pandian...@gmail.com> wrote:
>>
>> Hi, I have a 6-node cluster. I ran a major compaction on node 1, but I
>> found that the load reached very high levels on node 2. Is this
>> explainable?
>>
>> Attaching tpstats and metrics:
>>
>> cassandra-2 ~]$ nodetool tpstats
>> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
>> MutationStage                     0         0      185152938         0                 0
>> ReadStage                         0         0        1111490         0                 0
>> RequestResponseStage              0         0      168660091         0                 0
>> ReadRepairStage                   0         0          21247         0                 0
>> ReplicateOnWriteStage            32      6186       88699535         0              7163
>> MiscStage                         0         0              0         0                 0
>> HintedHandoff                     0         1           1090         0                 0
>> FlushWriter                       0         0           2059         0                13
>> MemoryMeter                       0         0           3922         0                 0
>> GossipStage                       0         0        2246873         0                 0
>> CacheCleanupExecutor              0         0              0         0                 0
>> InternalResponseStage             0         0              0         0                 0
>> CompactionExecutor                0         0          12353         0                 0
>> ValidationExecutor                0         0              0         0                 0
>> MigrationStage                    0         0              1         0                 0
>> commitlog_archiver                0         0              0         0                 0
>> AntiEntropyStage                  0         0              0         0                 0
>> PendingRangeCalculator            0         0             16         0                 0
>> MemtablePostFlusher               0         0          10932         0                 0
>>
>> Message type           Dropped
>> READ                     49051
>> RANGE_SLICE                  0
>> _TRACE                       0
>> MUTATION                   269
>> COUNTER_MUTATION           185
>> BINARY                       0
>> REQUEST_RESPONSE             0
>> PAGED_RANGE                  0
>> READ_REPAIR                  0
>>
>>
>>
>> Also, I saw in OpsCenter that native transport requests were at 23
>> active and 23 pending.
>>
>>
>> Are there any settings I can change to keep the load under control?
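>>
>> For what it's worth, the knobs most often pointed at for compaction-driven
>> load are the compaction throughput throttle and the number of concurrent
>> compactors; a rough sketch (values are examples only, not recommendations):
>>
>> # Throttle compaction I/O on a node at runtime (MB/s; 0 = unthrottled)
>> nodetool setcompactionthroughput 16
>>
>> # Persistent equivalents in cassandra.yaml (restart required):
>> #   compaction_throughput_mb_per_sec: 16
>> #   concurrent_compactors: 2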
>> Appreciate any help. Thanks.
