Do you perform a lot of deletes or updates on your database?
On restart, it performs a major compaction, which can reduce the load on your
node by removing stale data.
Try configuring compaction in your conf to perform minor compactions, i.e.
compactions at a regular interval.
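
As a rough sketch (not a prescription): the keyspace/table names below are
placeholders, the throughput value is only a starting point, and
SizeTieredCompactionStrategy is assumed because it is the 3.0 default.

# Check whether compaction is keeping up and how much work is queued.
nodetool compactionstats
nodetool tpstats | grep CompactionExecutor

# Optionally raise the compaction throughput cap (MB/s; 0 = unthrottled)
# if the node has spare disk I/O.
nodetool setcompactionthroughput 64

# Per-table compaction behaviour is tuned via CQL subproperties, e.g.
# (my_keyspace.my_table is a placeholder for your own schema):
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': '4', 'max_threshold': '32'};"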

Thanks,
Anuja

On Wed, Apr 12, 2017 at 3:02 PM, Osman YOZGATLIOGLU <
osman.yozgatlio...@krontech.com> wrote:

> Hello,
>
> Here are the problem loads: the first node shows 206 TB of data. After a
> Cassandra restart it shows 51 TB, which matches what df shows.
>
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address       Load       Tokens       Owns (effective)  Host ID  Rack
> UN  x.x.x.1  206 TB     256          50.6%             xx  rack1
> UN  x.x.x.2  190.77 TB  256          49.9%             yy  rack1
> ..
>
> --  Address       Load       Tokens       Owns (effective)  Host ID  Rack
> UN  x.x.x.1  51.01 TB   256          50.6%             xx  rack1
> UN  x.x.x.2  49.84 TB   256          49.9%             yy  rack1
> ..
>
>
> nodetool tpstats;
> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
> MutationStage                     2         1    75536494778         0                 0
> ViewMutationStage                 0         0              0         0                 0
> ReadStage                         0         0          41402         0                 0
> RequestResponseStage              0         0    35515109625         0                 0
> ReadRepairStage                   0         0              3         0                 0
> CounterMutationStage              0         0              0         0                 0
> MiscStage                         0         0              0         0                 0
> CompactionExecutor                5         5         732161         0                 0
> MemtableReclaimMemory             0         0         198602         0                 0
> PendingRangeCalculator            0         0             11         0                 0
> GossipStage                       0         0        3854373         0                 0
> SecondaryIndexManagement          0         0              0         0                 0
> HintsDispatcher                   1         7              6         0                 0
> MigrationStage                    0         0              6         0                 0
> MemtablePostFlush                 0         0         200265         0                 0
> ValidationExecutor                0         0              0         0                 0
> Sampler                           0         0              0         0                 0
> MemtableFlushWriter               0         0         198602         0                 0
> InternalResponseStage             0         0        5209219         0                 0
> AntiEntropyStage                  0         0              0         0                 0
> CacheCleanupExecutor              0         0              0         0                 0
> Native-Transport-Requests         0         0    15910719923         0         192131887
>
> Message type           Dropped
> READ                         0
> RANGE_SLICE                  0
> _TRACE                       0
> HINT                         0
> MUTATION               1000085
> COUNTER_MUTATION             0
> BATCH_STORE                  0
> BATCH_REMOVE                 0
> REQUEST_RESPONSE             0
> PAGED_RANGE                  0
> READ_REPAIR                  0
>
> sar values;
> 05:10:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
> 05:20:01        all     26.96     16.09      3.73      2.23      0.00     50.99
> 05:30:02        all     26.99     16.83      3.82      2.86      0.00     49.50
> 05:40:01        all     27.17     18.19      3.83      0.89      0.00     49.91
> 05:50:01        all     27.16     18.74      3.80      0.28      0.00     50.02
> 06:00:01        all     26.30     19.88      3.88      0.29      0.00     49.64
> 06:10:01        all     28.02     21.11      3.91      0.28      0.00     46.68
> 06:20:01        all     28.37     19.64      3.98      0.40      0.00     47.61
> 06:30:01        all     29.56     19.51      4.08      0.45      0.00     46.40
> 06:40:01        all     29.28     20.56      4.08      0.34      0.00     45.74
> 06:50:01        all     29.46     19.15      3.99      0.19      0.00     47.20
> 07:00:01        all     29.45     21.09      4.07      0.26      0.00     45.13
> 07:10:01        all     29.23     21.59      4.18      0.29      0.00     44.71
> 07:20:01        all     30.78     21.24      4.09      0.48      0.00     43.40
> 07:30:01        all     29.06     21.63      4.09      0.27      0.00     44.94
> 07:40:01        all     28.84     21.85      4.13      1.76      0.00     43.41
> 07:50:01        all     29.22     21.35      4.14      2.53      0.00     42.76
> 08:00:01        all     30.10     21.66      4.24      2.39      0.00     41.60
> 08:10:01        all     28.63     21.69      4.22      2.57      0.00     42.88
> 08:20:01        all     28.63     20.78      4.08      2.61      0.00     43.91
> 08:30:01        all     30.46     20.08      3.83      2.58      0.00     43.05
> 08:40:01        all     27.71     21.31      4.06      2.60      0.00     44.33
> 08:50:01        all     28.87     21.49      4.15      2.58      0.00     42.91
> 09:00:01        all     29.61     21.38      3.86      2.51      0.00     42.64
> 09:10:01        all     28.85     21.74      4.16      2.46      0.00     42.79
> 09:20:01        all     30.15     20.79      4.31      2.44      0.00     42.31
> Average:        all     22.78     15.21      3.34      0.79      0.00     57.88
>
>
> Regards,
> Osman
>
> On 12-04-2017 11:53, Bhuvan Rawal wrote:
> Try nodetool tpstats - it can point you to where your threads are stuck.
> There could be various reasons for the load factor to go high, such as the
> disk or CPU getting choked; you'll probably need to check dstat and iostat
> output along with the Cassandra thread-pool stats to get a decent idea.
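>
> For example, one quick pass could look like this (the flags and intervals
> are just one reasonable choice, not the only one):
>
> nodetool tpstats   # pending/blocked tasks per thread pool
> iostat -x 5 3      # extended per-device stats: utilization, await, queue size
> dstat -cdngy 5     # rolling CPU / disk / network / paging / system view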
>
> On Wed, Apr 12, 2017 at 1:48 PM, Osman YOZGATLIOGLU <
> osman.yozgatlio...@krontech.com>
> wrote:
> Hello,
>
> Nodetool status shows much more than the actual data size.
> When I restart a node, it shows a normal value for a while and then the load
> increases over time. Where should I look?
>
> Cassandra 3.0.8, jdk 1.8.121
>
> Regards,
> Osman
>
