> This might be helpful, an OmniTI article:
> https://omniti.com/seeds/migrating-riak-do-it-live
Thanks for this article, it will be valuable for managing Riak in the cluster going forward.

> As to fixing this specific error: that IIRC can be done by doing a name
> change in the ring to match your new node name. Renaming the node will
> make that orddict lookup succeed.
> There's a supplied admin utility for that.

The problem is that there is no cluster anymore, just single nodes, because Kubernetes tore all the nodes down. And each single node won't start on its own because of the orddict error. And IIRC, riak-admin cluster replace only works on a node that is already running, right? (I have sketched the offline rename as I understand it below.)

For now my problem seems to be solved. After 13 hours of fixing our cluster I had missed reintegrating one node, but it was already a load-balancing target.

Can I assume that the /var/lib/riak/ring folder is not data-critical? That is, after clustering all new nodes that still hold all the old data, is data integrity preserved?
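For what it's worth, if the supplied admin utility you mean is riak-admin reip, my understanding is that it rewrites the node name inside the on-disk ring file while the node is stopped, so it would not need a running cluster. A minimal sketch (riak@old-host and riak@new-host are placeholder node names; please correct me if I have this wrong):

    # The node must be stopped; reip edits the ring file on disk.
    riak stop

    # First point the node at its new name in the config
    # (vm.args: -name riak@new-host, or riak.conf: nodename = riak@new-host).

    # Rewrite the node name in the ring: old name -> new name.
    riak-admin reip riak@old-host riak@new-host

    # On startup the orddict lookup should now find the new name.
    riak start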
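And regarding the ring folder: my understanding is that /var/lib/riak/ring only holds cluster-membership metadata, while the actual objects live in the backend directories (e.g. bitcask/ or leveldb/). So to bring up a node that refuses to start, I would expect something like the following to work (riak@existing-node is a placeholder for any reachable cluster member; note that rebuilding the ring changes partition ownership, so handoff and AAE/read repair still have to move data around afterwards):

    riak stop

    # Keep a copy of the old ring metadata, just in case.
    mv /var/lib/riak/ring /var/lib/riak/ring.bak

    # The node now starts with a fresh single-node ring.
    riak start

    # Rejoin the cluster and commit the plan.
    riak-admin cluster join riak@existing-node
    riak-admin cluster plan
    riak-admin cluster commit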
Greetings,
Jan