Hi Martin, List,
Just an update to let ye know how things went and what we learned.
We did the force-replace procedure to bring the new node into the cluster
in place of the old one. I attached to the riak erlang shell and, with a
little hacking, was able to get all the bitcask handles and then do
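For anyone else who ends up in the same spot: the general shape of it, from
riak attach, is to work out which partitions the node owns and then poke at
each one's bitcask directory. A rough sketch only (not the exact commands we
ran, and the data path below is whatever your install uses):

    %% Rough sketch -- adjust the bitcask data path for your install.
    {ok, Ring} = riak_core_ring_manager:get_my_ring(),
    Mine = [Idx || {Idx, Owner} <- riak_core_ring:all_owners(Ring),
                   Owner =:= node()],
    %% Ask bitcask to merge each partition's directory so dead entries
    %% get compacted away.
    [bitcask:merge("/var/lib/riak/bitcask/" ++ integer_to_list(Idx))
     || Idx <- Mine].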
Hi Martin,
Thanks for taking the time.
Yes, by "size of the bitcask directory" I mean I did a "du -h --max-depth=1
bitcask", so I think that would cover all the vnodes. We don't use any
other backends.
Those answers are helpful; I'll get back to this in a few days and see what
I can determine about
Based on a quick read of the code, compaction in bitcask is performed only
on "readable" files, and the current active file for writing is excluded
from that list. With default settings, that active file can grow to 2GB.
So it is possible that, if objects had been replaced/deleted many times
within that active file, the dead copies would still be sitting on disk and
inflating the directory size.
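If that is what is happening, the knob to look at is bitcask's max_file_size,
which controls when the active file is closed and becomes eligible for
merging. A minimal sketch of lowering it via advanced.config (the 512MB
figure is just an example, not a recommendation; in riak.conf the equivalent
setting is bitcask.max_file_size):

    %% advanced.config -- sketch only. Smaller files roll over sooner, so
    %% their dead entries become eligible for merging earlier.
    %% The bitcask default is 2GB (2147483648 bytes).
    [
     {bitcask, [
         {max_file_size, 536870912}   %% value in bytes (512MB)
     ]}
    ].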
Sean,
Some partial answers to your questions.
I don't believe force-replace itself will sync anything up - it just
reassigns ownership (hence handoff happens very quickly).
Read repair would synchronise a portion of the data. So if 10% of your data
is read regularly, this might explain some of what you're seeing.
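If you want to push it further than whatever gets read organically, one crude
option is to read every key so each replica is compared and repaired along
the way. A sketch with the Erlang PB client (riakc) -- the bucket name is
made up, and list_keys walks the whole keyspace, so only do this when the
cluster is quiet:

    %% Sketch: touch every key in a bucket so read repair can fix stale or
    %% missing replicas on the new node.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    {ok, Keys} = riakc_pb_socket:list_keys(Pid, <<"mybucket">>),
    lists:foreach(
      fun(K) -> _ = riakc_pb_socket:get(Pid, <<"mybucket">>, K) end,
      Keys),
    riakc_pb_socket:stop(Pid).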
Hi All,
A few questions on the procedure here to recover a failed node:
http://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/failed-node/
We lost a production riak server when AWS decided to delete a node and we
plan on doing this procedure to replace it with a newly built node. A
practice r