For future reference, here is how we backed out of a search-enabled 1.0.0
cluster and moved to 1.0.3.
Step 1:
Disable search on each of the 1.0.0 nodes and restart them.
Note that disabling search while the search pre-commit hook is still active on
a bucket will result in write failures.
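(Concretely, that meant two things on our side; a rough sketch only, with
placeholder bucket and host names, assuming the stock app.config layout:)

    %% app.config on every node: turn riak_search off before restarting
    {riak_search, [
        {enabled, false}
    ]},

    # drop the search pre-commit hook from each indexed bucket first,
    # e.g. via the HTTP props API ("mybucket" and the host are examples).
    # This clears the whole precommit list, which is fine if the search
    # hook was the only one installed.
    curl -X PUT http://127.0.0.1:8098/riak/mybucket \
         -H "Content-Type: application/json" \
         -d '{"props":{"precommit":[]}}'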
Step 2:
Execute
>> I'm not sure what to make of the output from riak-admin
>> transfers:
>> 't...@qbkpxadmin01.ad.qnet.local' waiting to handoff 62 partitions
>> 'qbkp...@qbkpx03.ad.qnet.local' waiting to handoff 42 partitions
>> 'qbkp...@qbkpx01.ad.qnet.local' waiting to handoff 42 partitions
>>
>> Our second node states that the new node (test) wants to handoff 62 partitions although
>> it is the owner of 0 partitions.
>>
>> riak-admin ring_status lists various pending ownership handoffs, all of
>> them are between our 3 original nodes. The new node is not mentioned
>> at all.
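(For anyone retracing this: the ring's own view of ownership and pending
transfers can also be pulled from an attached console; a rough sketch using
riak_core calls:)

    $ riak attach
    1> {ok, Ring} = riak_core_ring_manager:get_my_ring().
    2> riak_core_ring:pending_changes(Ring).   %% pending ownership transfers
    3> riak_core_ring:all_owners(Ring).        %% current partition/owner list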
------
From: Aphyr [ap...@aphyr.com]
Sent: Wednesday, January 18, 2012 11:15 PM
To: Fredrik Lindström
Cc: riak-users@lists.basho.com
Subject: Re: Pending transfers when joining 1.0.3 node to 1.0.0 cluster
Did you try riak_core_ring_manager:force_update() and force_handoffs() on the
old partition owner as well as the new?
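(In practice that amounts to attaching to each node in turn, the old partition
owner and then the new one, and evaluating both calls; a rough sketch, node
names omitted:)

    $ riak attach        # repeat on every node involved in the stuck handoff
    1> riak_core_ring_manager:force_update().
    2> riak_core_vnode_manager:force_handoffs().
    %% detach with Ctrl-D, not Ctrl-C, which can take the node down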
> ...footprint is so small I doubt any data has been transferred.
>
> /F
From: Aphyr [ap...@aphyr.com]
Sent: Wednesday, January 18, 2012 10:46 PM
To: Fredrik Lindström
Cc: riak-users@lists.basho.com
Subject: Re: Pending transfers when joining 1.0.3 node to 1.0.0 cluster
https://github.com/basho/riak/blob/riak-1.0.2/RELEASE-NOTES.org
If partition transfer is blocked awaiting [] (as opposed to [kv_vnode] or
whatever), there's a snippet in there that might be helpful.
--Kyle
On Jan 18, 2012, at 1:43 PM, Fredrik Lindström wrote:
After some digging I found a suggestion from Joseph Blomstedt in an earlier
mail thread
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-January/007116.html
Run the following in the riak console:
riak_core_ring_manager:force_update().
riak_core_vnode_manager:force_handoffs().
The symptoms would appear to match.