ONLY tar this exact snapshot.
>
> Short (high-level) overview of our backup script:
>
> 1. Check if a repair is running - if yes, exit
> 2. Dump the current db-schema
> 3. nodetool snapshot -t $(date)
> 4. Wait 60 seconds
> 5. Create a tar of all snapshot folders with the date we just created
> 6. Copy that away to a remote server
>
> Kind regards
>
> Reynald
>
> On 01/06/2016 13:27, Paul Dunkler wrote:
>>> I guess this might come from the incremental repairs...
>>> The repair time is stored in the sstable (RepairedAt timestamp) [...]
[...] in snapshot directories. I feel like it's something to do with flushing /
compaction. But no clue what... :(
>
> Cheers,
> Reynald
>
> On 31/05/2016 11:03, Paul Dunkler wrote:
>> Hi there,
>>
>> i am sometimes running into very strange errors while [...]
[...] is done?
Probably it would be a better idea not to do manual flushes when saving the
incremental_backups (because then compactions won't happen at the same time as
the snapshot), right?
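For background on why snapshot contents are "supposed" to be immutable: nodetool snapshot hard-links the live sstables into the snapshots/<tag>/ directory, so a compaction that deletes the live file leaves the snapshot copy intact. A tiny filesystem demo of that hard-link behaviour (plain files only, no Cassandra involved, all names made up):

```shell
# Simulate a snapshot: hard-link a "live sstable" into a snapshot directory,
# then delete the live copy the way compaction would.
mkdir -p demo/live demo/snapshots/tag1
echo "sstable-contents" > demo/live/mc-1-big-Data.db
ln demo/live/mc-1-big-Data.db demo/snapshots/tag1/mc-1-big-Data.db
rm demo/live/mc-1-big-Data.db            # compaction removes the live file
cat demo/snapshots/tag1/mc-1-big-Data.db # snapshot copy still prints: sstable-contents
rm -r demo
```

The flip side: anything that mutates a file in place (rather than deleting it and writing a new one) is visible through both hard links at once, which could be one way snapshotted data appears to change after the fact.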
> Regards,
>
> Mike Yeap
>
>
>
>
> On Tue, May 31, 2016 at 5:21 PM, Paul Dunkler wrote:
> [...] changes
> in between?
> Or is there a way to "pause" the incremental repairs?
>
>
>> Cheers,
>> Reynald
>>
>> On 31/05/2016 11:03, Paul Dunkler wrote:
>>> Hi there,
>>>
>>> i am sometimes running into very strange errors [...]
>>> [...] rare case with running repair operations or whatsoever which can
>>> change snapshotted data?
>> I already searched through cassandra jira but couldn't find a bug which
>> looks related to this behaviour.
>>
>> Would love to get some help on this.
>>
>> —
>> Paul Dunkler
>
—
Paul Dunkler
[...] rare case with running repair operations or whatsoever which can change
snapshotted data?
I already searched through cassandra jira but couldn't find a bug which looks
related to this behaviour.
Would love to get some help on this.
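One way to see whether files under a snapshot directory actually change between snapshot time and tar time is a checksum manifest (a hypothetical sketch; the data path and tag are assumptions):

```shell
SNAP_DIR="/var/lib/cassandra/data"   # assumed default data directory
TAG="backup-2016-05-31"              # hypothetical snapshot tag

# Record checksums of everything in the snapshot right after taking it.
find "${SNAP_DIR}" -path "*/snapshots/${TAG}/*" -type f \
    -exec md5sum {} + > /tmp/manifest.md5

sleep 60   # the wait from the backup script

# Re-verify just before tarring; any mismatch means the snapshot changed.
if ! md5sum -c --quiet /tmp/manifest.md5; then
    echo "snapshot contents changed since snapshot time" >&2
fi
```

Diffing two manifests taken at those two points would also show exactly which files (e.g. Statistics.db vs. Data.db components) were touched, which might help narrow down whether repair or compaction is involved.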
—
Paul Dunkler