Riak won't start when configured for Riak-CS
Hi,

I am following the directions for configuring Riak with Riak-CS:
http://docs.basho.com/riakcs/latest/cookbooks/configuration/Configuring-Riak/

When I follow the directions, riak console reports a problem when starting:

22:43:33.174 [error] Loading of /usr/lib/riak-cs/lib/riak_cs-1.4.1/ebin/riak_cs_kv_multi_backend.beam failed: badfile
22:43:33.174 [error] Failed to start riak_cs_kv_multi_backend Reason: {undef,[{riak_cs_kv_multi_backend,start,[0,[{async_folds,true},[{vnode_vclocks,true},{included_applications,[]},{allow_strfun,false},{reduce_js_vm_count,6},{storage_backend,riak_cs_kv_multi_backend},{legacy_keylisting,false},{pb_ip,"127.0.0.1"},{hook_js_vm_count,2},{listkeys_backpressure,true},{mapred_name,"mapred"},{stats_urlpath,"stats"},{legacy_stats,true},{js_thread_stack,16},{multi_backend,[{be_default,riak_kv_eleveldb_backend,[{max_open_files,50},{data_root,"/var/lib/riak/leveldb"}]},{be_blocks,riak_kv_bitcask_backend,[{data_root,"/var/lib/riak/bitcask"}]}]},{multi_backend_prefix_list,[{<<"0b:">>,be_blocks}]},{riak_kv_stat,true},{add_paths,["/usr/lib/riak-cs/lib/riak_cs-1.4.1/ebin"]},{http_url_encoding,on},{map_js_vm_count,8},{pb_port,8087},{multi_backend_default,be_default},{mapred_2i_pipe,true},{mapred_system,pipe},{js_max_vm_mem,8}]]]},{riak_kv_vnode,init,1},{riak_core_vnode,init,1},{gen_fsm,init_it,6},{proc_lib,init_p_do_apply,3}]}
22:43:33.250 [error] beam/beam_load.c(1365): Error loading module riak_cs_kv_multi_backend: use of opcode 153; this emulator supports only up to 152
22:43:33.330 [notice] "backend module failed to start."

I am on Ubuntu 13.04.

Thanks for any tips.
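For reference, the riak_kv settings that the Configuring-Riak guide has you add can be read back out of the error dump above. Below is a minimal sketch of that section of /etc/riak/app.config, shown in isolation; the add_paths and data_root values are the ones from this particular machine and will differ elsewhere.

    %% Sketch of the riak_kv section of /etc/riak/app.config for Riak CS,
    %% reconstructed from the settings echoed in the error above.
    {riak_kv, [
        %% load the CS multi-backend module shipped with the Riak CS package
        {add_paths, ["/usr/lib/riak-cs/lib/riak_cs-1.4.1/ebin"]},
        {storage_backend, riak_cs_kv_multi_backend},
        %% keys prefixed "0b:" (file blocks) go to bitcask; everything else
        %% falls through to the default eleveldb backend
        {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
        {multi_backend_default, be_default},
        {multi_backend, [
            {be_default, riak_kv_eleveldb_backend, [
                {max_open_files, 50},
                {data_root, "/var/lib/riak/leveldb"}
            ]},
            {be_blocks, riak_kv_bitcask_backend, [
                {data_root, "/var/lib/riak/bitcask"}
            ]}
        ]}
    ]}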
Re: Riak won't start when configured for Riak-CS
Hi!

I did build Riak-CS from source, but there is no 'install' target in the Makefile, so I wasn't sure how that step was supposed to be done; there was no install README either. I then installed the .deb instead. I installed Erlang from the Ubuntu 13.04 repo.

On 09/23/2013 11:16 PM, Dave Parfitt wrote:

Hello -

Sounds like you have an older version of Erlang trying to load code compiled with a newer version of Erlang.

Cheers,
Dave
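If it helps confirm the mismatch, here is a small check you can run from an Erlang shell on the node (for example via riak attach). It prints the OTP release the node is running and the compiler version recorded in the beam that refuses to load; the beam path is the one from the error message. This is only a sketch, and the compiler version is not literally the OTP release, but a large gap between the two is a good sign the file was built on a newer Erlang.

    %% Compare the running OTP release with the compiler version recorded
    %% in the failing beam. Path taken from the error message; adjust as needed.
    Beam = "/usr/lib/riak-cs/lib/riak_cs-1.4.1/ebin/riak_cs_kv_multi_backend.beam",
    io:format("running OTP release: ~s~n", [erlang:system_info(otp_release)]),
    case beam_lib:chunks(Beam, [compile_info]) of
        {ok, {_Mod, [{compile_info, Info}]}} ->
            io:format("beam compiled with compiler: ~p~n",
                      [proplists:get_value(version, Info)]);
        {error, beam_lib, Reason} ->
            io:format("could not read beam: ~p~n", [Reason])
    end.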
Re: Riak won't start when configured for Riak-CS
Brian,

Thanks. I will try that again. Perhaps there is a way for the package scripts or build scripts to detect this situation in the future?

thanks,
Darren

On 09/24/2013 09:42 AM, Brian Sparrow wrote:

Hey Darren,

As Dave said, the reason for a badfile error is that the beam was compiled with one version of the Erlang VM and is being run under a different version. My suggestion would be to completely remove all Riak CS installations you have (from source and from package) and re-install from the .deb package. It has the correct version of Erlang packaged within it, so you shouldn't have a problem.

Thanks!
Brian

--
Brian Sparrow
Developer Advocate
Basho Technologies

Sent with Sparrow
Re: Riak on SAN
That's typically called multi-datacenter replication. It's beyond the scope of Riak, but a dilemma for deployments all the same. Good point. Some similar products (e.g. MongoDB) provide this, but I think only in their enterprise paid version or upper-tier offerings.

On 10/02/2013 04:12 PM, Alexander Sicular wrote:

Cluster "redundancy" is no safeguard against the unknown. The only truly reliable protection is a complete offline backup in a separate facility not run by the same provider as your primary facility. I'm not saying everyone is running at that level of paranoia, but it is something to consider against the value of your data. What if you get rooted and someone runs something like "for node in nodes: rm -rf myriakdata"?

-Alexander Sicular
@siculars

On Oct 2, 2013, at 4:03 PM, "Victor" wrote:

Excuse me if I misunderstood something, but why would you even want a backup of a single node if you are running a 5-node cluster? Assuming your W value for put requests is higher than the number of vnodes on each physical node, a scenario where you lose data because of a single node failure does not seem possible. And restoring a failed node should not require a data backup, as hinted handoff should do the work for you and get the failed system back to its state prior to the failure. Sure, a backup of the initial state would be helpful, as it would save plenty of time on node re-setup, but redundancy at the cluster level seems reliable enough.

From: riak-users [mailto:riak-users-boun...@lists.basho.com] On Behalf Of John E. Vincent
Sent: Wednesday, October 02, 2013 3:12 PM
To: riak-users
Subject: Re: Riak on SAN

I'm going to take a competing view here. SAN is a bit of an overloaded term at this point. Nothing precludes a SAN from being performant or having SSDs. Yes, the cost of Fibre Channel is overkill, but iSCSI is much more realistic; alternately you can even do ATAoE. From a hardware perspective, if I have 5 pizza boxes as Riak nodes, I can only fit so many disks in them. Meanwhile I can add another shelf to my SAN and expand as needed. Additionally, backup of a SAN is MUCH easier than backup of a Riak node itself: it's a snapshot and you're done. Mind you, nothing precludes you from doing LVM snapshots in the OS, but you still need to get the data OFF that system for it to be truly backed up. I love Riak and other distributed stores, but backing them up is NOT a solved problem. Walking all keys, or coordinating the take-down of all your nodes in a given order, or whatever your strategy is, is a serious pain point. Using a SAN or local disk also doesn't excuse you from watching I/O performance. With a SAN I get multiple redundant paths to a block device, and I don't necessarily get that with local storage. Just my two bits.

On Wed, Oct 2, 2013 at 2:18 AM, Jeremiah Peschka wrote:

Could you do it? Sure. Should you do it? No.
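To make Victor's quorum point concrete, here is a hedged sketch using the Riak Erlang client (riakc), assuming a node listening on the PB port from the earlier config; the bucket and key names are made up. The put only returns ok once W replicas have acknowledged the write, which is what makes the loss of a single node survivable without a per-node backup.

    %% Sketch: require acknowledgement from 3 replicas (w=3, dw=3) before
    %% the put returns. Host, port, bucket and key are illustrative only.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    Obj = riakc_obj:new(<<"my_bucket">>, <<"my_key">>, <<"my_value">>),
    ok = riakc_pb_socket:put(Pid, Obj, [{w, 3}, {dw, 3}]),
    riakc_pb_socket:stop(Pid).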
Re: Riak on SAN
Sweet!

On 10/02/2013 05:51 PM, Steve Vinoski wrote:

Hi Darren,

Riak Enterprise supports multi-datacenter replication:
http://docs.basho.com/riakee/latest/cookbooks/Multi-Data-Center-Replication-Architecture/

--steve
Re: Riak consumes too much memory
Sounds nice. And then the question is: what happens when that limit is reached on a node?

On 10/18/2013 02:21 PM, Matthew Von-Maszewski wrote:

The user has the option of setting a default memory limit in the app.config / riak.conf file (either an absolute number or a percentage of total system memory). There is a default percentage (which I am still adjusting) if the user takes no action. The single memory value is then dynamically partitioned across each Riak vnode (and the AAE vnodes) as the server takes on more or fewer vnodes throughout normal operations and node failures. No human interaction is required once the memory limit is established.

Matthew

On Oct 18, 2013, at 2:08 PM, darren wrote:

Is it smart enough to manage itself? Or does it require human babysitting?

Sent from my Verizon Wireless 4G LTE Smartphone

-------- Original message --------
From: Matthew Von-Maszewski
Date: 10/18/2013 1:48 PM (GMT-05:00)
To: Dave Martorana
Cc: darren, riak-users@lists.basho.com
Subject: Re: Riak consumes too much memory

Dave,

flexcache will be a new feature in Riak 2.0. There are some subscribers to this mailing list who like to download and try things early; I was directing those subscribers to the GitHub branch that contains the work-in-progress code.

flexcache is a new method for sizing / accounting the memory used by leveldb. It replaces the current method completely. flexcache is therefore not an option, but an upgrade to the existing logic.

Again, the detailed discussion is here: https://github.com/basho/leveldb/wiki/mv-flexcache

Matthew

On Oct 18, 2013, at 12:33 PM, Dave Martorana wrote:

Matthew,

For those of us who don't quite understand, can you explain: does this mean mv-flexcache is a feature that just comes with 2.0, or is it something that will need to be turned on, etc.?

Thanks!
Dave

On Thu, Oct 17, 2013 at 9:45 PM, Matthew Von-Maszewski wrote:

It is already in test and available for download now: https://github.com/basho/leveldb/tree/mv-flexcache
Discussion is here: https://github.com/basho/leveldb/wiki/mv-flexcache
This code is slated for Riak 2.0. Enjoy!!

Matthew

On Oct 17, 2013, at 20:50, darren wrote:

But why isn't riak smart enough to adjust itself to the available memory or la
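For anyone wanting to experiment with the branch, here is a hedged sketch of what the memory limit could look like in app.config terms. The option names below are an assumption based on the eleveldb settings of that era, so check the mv-flexcache wiki page linked above for the authoritative names and defaults.

    %% Assumed eleveldb settings for a global leveldb memory limit; the
    %% names are a guess, consult the mv-flexcache wiki for the real ones.
    {eleveldb, [
        %% give leveldb (all vnodes plus AAE) at most 60% of system memory ...
        {total_leveldb_mem_percent, 60}
        %% ... or an absolute cap in bytes instead:
        %% {total_leveldb_mem, 4294967296}
    ]}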