Hi,

> Nodetool repair always list lots of data and never stays repaired. I think.

This might be the reason:

"incremental: true"


Incremental repair is the default in your version. It marks data as
repaired so that each piece of data is only repaired once. It is a clever
feature, but it comes with caveats. I would read up on it, as its impact
is not trivial to understand, and in some cases it can create issues and
incremental repair may not be such a good idea. Make sure to run a full
repair instead when a node goes down, for example.
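For reference, here is a sketch of how to request a full (non-incremental) repair explicitly with the Cassandra 3.x nodetool syntax (in 2.1 and earlier, full repair was the default, so no flag was needed):

```shell
# Run a full, non-incremental repair on this node (Cassandra 3.x syntax):
nodetool repair --full

# Limit the repair to a single keyspace, e.g. system_traces:
nodetool repair --full system_traces

# Repair only this node's primary token ranges, so that running the
# command on every node does not repair each range once per replica:
nodetool repair --full -pr
```

These commands need a running Cassandra node to act on; adjust the keyspace name to your own schema.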

C*heers,
-----------------------
Alain Rodriguez - @arodream - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com



2017-01-11 15:21 GMT+01:00 Cogumelos Maravilha <cogumelosmaravi...@sapo.pt>:

> Nodetool repair always list lots of data and never stays repaired. I think.
>
> Cheers
>
>
> On 01/11/2017 02:15 PM, Hannu Kröger wrote:
> > Just to understand:
> >
> > What exactly is the problem?
> >
> > Cheers,
> > Hannu
> >
> >> On 11 Jan 2017, at 16.07, Cogumelos Maravilha <
> cogumelosmaravi...@sapo.pt> wrote:
> >>
> >> Cassandra 3.9.
> >>
> >> nodetool status
> >> Datacenter: dc1
> >> ===============
> >> Status=Up/Down
> >> |/ State=Normal/Leaving/Joining/Moving
> >> --  Address       Load         Tokens  Owns (effective)  Host ID                               Rack
> >> UN  10.0.120.145  1.21 MiB     256     49.5%             da6683cd-c3cf-4c14-b3cc-e7af4080c24f  rack1
> >> UN  10.0.120.179  1020.51 KiB  256     48.1%             fb695bea-d5e8-4bde-99db-9f756456a035  rack1
> >> UN  10.0.120.55   1.02 MiB     256     53.3%             eb911989-3555-4aef-b11c-4a684a89a8c4  rack1
> >> UN  10.0.120.46   1.01 MiB     256     49.1%             8034c30a-c1bc-44d4-bf84-36742e0ec21c  rack1
> >>
> >> nodetool repair
> >> [2017-01-11 13:58:27,274] Replication factor is 1. No repair is needed
> >> for keyspace 'system_auth'
> >> [2017-01-11 13:58:27,284] Starting repair command #4, repairing keyspace
> >> system_traces with repair options (parallelism: parallel, primary range:
> >> false, incremental: true, job threads: 1, ColumnFamilies: [],
> >> dataCenters: [], hosts: [], # of ranges: 515)
> >> [2017-01-11 14:01:55,628] Repair session
> >> 82a25960-d806-11e6-8ac4-73b93fe4986d for range
> >> [(-1278992819359672027,-1209509957304098060],
> >> (-2593749995021251600,-2592266543457887959],
> >> (-6451044457481580778,-6438233936014720969],
> >> (-1917989291840804877,-1912580903456869648],
> >> (-3693090304802198257,-3681923561719364766],
> >> (-380426998894740867,-350094836653869552],
> >> (1890591246410309420,1899294587910578387],
> >> (6561031217224224632,6580230317350171440],
> >> ... 4 pages of data
> >> , (6033828815719998292,6079920177089043443]] finished (progress: 1%)
> >> [2017-01-11 13:58:27,986] Repair completed successfully
> >> [2017-01-11 13:58:27,988] Repair command #4 finished in 0 seconds
> >>
> >> nodetool gcstats
> >> Interval (ms)  Max GC Elapsed (ms)  Total GC Elapsed (ms)  Stdev GC Elapsed (ms)  GC Reclaimed (MB)  Collections  Direct Memory Bytes
> >>        360134                   23                     23                      0          333975216            1                   -1
> >>
> >> (wait)
> >> nodetool gcstats
> >> Interval (ms)  Max GC Elapsed (ms)  Total GC Elapsed (ms)  Stdev GC Elapsed (ms)  GC Reclaimed (MB)  Collections  Direct Memory Bytes
> >>         60016                    0                      0                    NaN                  0            0                   -1
> >>
> >> nodetool repair
> >> [2017-01-11 14:00:45,888] Replication factor is 1. No repair is needed
> >> for keyspace 'system_auth'
> >> [2017-01-11 14:00:45,896] Starting repair command #5, repairing keyspace
> >> system_traces with repair options (parallelism: parallel, primary range:
> >> false, incremental: true, job threads: 1, ColumnFamilies: [],
> >> dataCenters: [], hosts: [], # of ranges: 515)
> >> ... 4 pages of data
> >> , (94613607632078948,219237792837906432],
> >> (6033828815719998292,6079920177089043443]] finished (progress: 1%)
> >> [2017-01-11 14:00:46,567] Repair completed successfully
> >> [2017-01-11 14:00:46,576] Repair command #5 finished in 0 seconds
> >>
> >> nodetool gcstats
> >> Interval (ms)  Max GC Elapsed (ms)  Total GC Elapsed (ms)  Stdev GC Elapsed (ms)  GC Reclaimed (MB)  Collections  Direct Memory Bytes
> >>          9169                   25                     25                      0          330518688            1                   -1
> >>
> >>
> >> Always in loop, I think!
> >>
> >> Thanks in advance.
> >>
>
>
