Riak Large Database restore
Hi All,

I have a 5-node Riak cluster. I have taken a backup of this database and am trying to restore it with riak-admin, but it is taking far too long: 7 days have already passed and the restore is still running. Is there any command or option to restore it more quickly?

I am restoring it onto a single-node server locally, for backup testing. I also tried a higher-capacity AWS EC2 instance (m3.2xlarge), but the result is the same.

Thanks
Gaurav

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
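On the speed question: `riak-admin restore` replays objects one at a time, so for a whole database a file-level copy of the backend data directories is usually far faster. A rough sketch only, with paths and the node name as assumptions for a default Ubuntu package install; a file-level copy is only valid when the restored node keeps the same node name and ring as the backup, so restoring a 5-node cluster's data onto a single node still needs `riak-admin restore` (which rewrites ownership) rather than this shortcut:

```shell
# Sketch only: file-level restore of one node's backend data.
# SRC_BACKUP, RIAK_DATA and the node name are assumptions -- adjust
# to your install before running anything.
SRC_BACKUP=/backups/riak-node1      # assumed location of the backup
RIAK_DATA=/var/lib/riak             # Ubuntu package default data dir

riak stop                           # the node must be down during the copy
rsync -a "$SRC_BACKUP/bitcask/" "$RIAK_DATA/bitcask/"
rsync -a "$SRC_BACKUP/ring/"    "$RIAK_DATA/ring/"
riak start
riak-admin wait-for-service riak_kv riak@127.0.0.1
```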
[no subject]
Hi All,

I am getting the error below when writing anything to the Riak database. The issue is intermittent: a write succeeds after 3-4 attempts, which leaves null-value entries in the bucket. Please help me resolve this problem. I have restored the live database of a 5-node cluster onto a single-node server.

Riak version: 1.4.7, installed on Ubuntu 12.04

Error:

2014-05-23 12:28:51.701 [error] <0.19748.55> CRASH REPORT Process <0.19748.55> with 10 neighbours exited with reason: no match of right hand value {error,{badmatch,{error,eexist}}} in bitcask:do_put/5 line 1232 in gen_fsm:terminate/7 line 611
2014-05-23 12:28:51.702 [error] <0.19749.55> Supervisor {<0.19749.55>,poolboy_sup} had child riak_core_vnode_worker started with riak_core_vnode_worker:start_link([{worker_module,riak_core_vnode_worker},{worker_args,[890602560248518965780370444936484965102833893376,...]},...]) at undefined exit with reason no match of right hand value {error,{badmatch,{error,eexist}}} in bitcask:do_put/5 line 1232 in context shutdown_error

Thanks & Regards
Gaurav
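For context, `{error,eexist}` from `bitcask:do_put/5` means bitcask tried to create a file that already exists, which can happen when a node writes into data directories populated by a file-level restore. A hedged sketch for inspecting the bitcask directories; the data path is the Ubuntu package default and the lock-file pattern is an assumption:

```shell
# Sketch: look for leftovers in the bitcask data directories that
# could collide with files the running node tries to create.
RIAK_DATA=/var/lib/riak/bitcask     # assumed default path

# Newest data file per partition; bitcask appends to the
# highest-numbered file and creates the next one on rollover.
for dir in "$RIAK_DATA"/*/; do
  ls -t "$dir"*.bitcask.data 2>/dev/null | head -n 1
done

# Stale lock files left behind by a crashed or copied node
# (name pattern is an assumption for this bitcask version):
find "$RIAK_DATA" -name '*.lock' -print
```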
Riak One partition handoff stall
Hi All - Good Day!

I have a 7-node Riak KV cluster. I recently upgraded it from 1.4.2 to 1.4.12 on Ubuntu 16.04. Since the upgrade, whenever I make a node leave the cluster, one partition handoff stalls every time and the transfer status shows "waiting to handoff 1 partitions". To complete the process I have to restart the riak service on every node, one by one.

I am not sure whether it is a configuration problem. Here is the current state of the cluster.

#output of riak-admin member-status
================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
leaving     0.0%      --      'riak@192.168.2.10'
valid      14.1%      --      'riak@192.168.2.11'
valid      14.1%      --      'riak@192.168.2.12'
valid      15.6%      --      'riak@192.168.2.13'
valid      14.1%      --      'riak@192.168.2.14'
valid      14.1%      --      'riak@192.168.2.15'
valid      14.1%      --      'riak@192.168.2.16'
valid      14.1%      --      'riak@192.168.2.17'
-------------------------------------------------------------------------------
Valid:7 / Leaving:1 / Exiting:0 / Joining:0 / Down:0

#output of riak-admin transfers
'riak@192.168.2.10' waiting to handoff 1 partitions

Active Transfers:
(nothing here)

#output of riak-admin ring_status
================================== Claimant ===================================
Claimant:  'riak@192.168.2.10'
Status:     up
Ring Ready: true

============================== Ownership Handoff ==============================
No pending changes.

============================== Unreachable Nodes ==============================
All nodes are up and reachable

The current transfer limit is 2.

Thanks
Gaurav
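When one handoff sticks like this, a lighter-weight sequence than restarting every node is sometimes enough. A sketch, assuming a 1.4.x-era `riak-admin` run on or against the stuck node; the limit value of 4 is arbitrary:

```shell
# Confirm which partition is stuck and what the current limits are.
riak-admin transfers
riak-admin transfer-limit

# Temporarily raise the limit on the leaving node.
riak-admin transfer-limit riak@192.168.2.10 4

# Ask the vnode manager to retry outstanding handoffs from the
# Erlang console instead of restarting the whole node:
riak attach
# 1> riak_core_vnode_manager:force_handoffs().
# (Ctrl-D to detach without stopping the node)
```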
Re: Riak One partition handoff stall
Thanks Bryan,

Below is the output of `riak-admin vnode-status`. Maybe data transfer has stopped on the claimant node. The output of these commands has not changed over time.

1) Sample vnode:

VNode: 342539446249430371453988632667878832731859189760
Backend: riak_kv_eleveldb_backend
Status: [{stats,<<"  Compactions\nLevel Files Size(MB) Time(sec) Read(MB) Write(MB)\n------\n 010 0 0 0\n">>},
         {read_block_error,<<"0">>},
         {fixed_indexes,true}]

2) About 30 GB of data per server.

3) I am not sure about the number of objects. Is there any way to get a count of objects?

On Mon, May 28, 2018 at 4:57 PM, Bryan Hunt wrote:
> Are you constantly executing a particular riak command, in your system
> monitoring scripts, for example `riak-admin vnode-status`?
>
> What size is your data per server?
>
> How many objects are you storing?
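On the object-count question: Riak has no direct "count objects" command; the usual workaround is to stream the key list over HTTP and count it, which is expensive and best avoided on a loaded production cluster. A minimal sketch, with host and bucket names as placeholders:

```shell
# Hedged sketch: count objects in one bucket by streaming list-keys.
count_keys() {
  # Counts quoted key names in the {"keys":[...]} stream chunks read
  # from stdin. Crude: assumes keys contain no embedded quote marks.
  tr '{' '\n' | grep -o '"[^"]*"' | grep -v '"keys"' | wc -l
}

# Usage against a live node (host and bucket are placeholders):
#   curl -s 'http://127.0.0.1:8098/buckets/mybucket/keys?keys=stream' | count_keys
```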