Re: Cloning a cluster / copying all cluster data

2012-09-11 Thread Matthew Tovbin
Matt, We copied all the data key by key, since we had used an incorrect 'ring_creation_size' value. ( http://riak-users.197444.n3.nabble.com/Cluster-migration-due-to-incorrect-quot-ring-creation-size-quot-value-td4024509.html ) So you can use this copy tool - https://github.com/tovbinm/riak-tools, which
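
For illustration only, the key-by-key copy described above can be sketched against the plain Riak HTTP API roughly as follows; the linked riak-tools repository is the actual tool, and the host names, port and bucket below are made-up placeholders. The sketch ignores binary values, vector clocks and secondary-index metadata for brevity.

    // Sketch: copy every key of one bucket from a source to a target cluster
    // over the Riak HTTP API. Hosts and bucket name are assumptions.
    var http = require('http');

    var SOURCE = { host: 'old-riak.example.com', port: 8098 };
    var TARGET = { host: 'new-riak.example.com', port: 8098 };
    var BUCKET = 'test';

    // List all keys in the bucket (acceptable for an offline migration,
    // but expensive on a busy production cluster).
    http.get({ host: SOURCE.host, port: SOURCE.port,
               path: '/buckets/' + BUCKET + '/keys?keys=true' }, function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () { JSON.parse(body).keys.forEach(copyKey); });
    });

    function copyKey(key) {
      var path = '/buckets/' + BUCKET + '/keys/' + encodeURIComponent(key);
      // Read the value from the source cluster...
      http.get({ host: SOURCE.host, port: SOURCE.port, path: path }, function (res) {
        var value = '';
        res.on('data', function (chunk) { value += chunk; });
        res.on('end', function () {
          // ...and write it, unchanged, to the same bucket/key on the target cluster.
          var put = http.request({ host: TARGET.host, port: TARGET.port, path: path,
                                   method: 'PUT',
                                   headers: { 'Content-Type': res.headers['content-type'] } });
          put.end(value);
        });
      });
    }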

Re: 'forward_preflist' error in MapReduce

2012-08-30 Thread Matthew Tovbin
No, 1.1.2 -Matthew On Thu, Aug 30, 2012 at 5:20 PM, Mark Phillips wrote: > Hi Matthew, > > Riak 1.2? > > Mark > > > On Aug 31, 2012, at 0:58, Matthew Tovbin wrote: >> Howdy Riak Users, >> >> Recently I discovered that my MapReduce job, which

'forward_preflist' error in MapReduce

2012-08-30 Thread Matthew Tovbin
Howdy Riak Users, Recently I discovered that my MapReduce job, which used to run successfully, stopped working. Here is the error: {"phase":0,"error":"[timeout]","input":"{<<\"test\">>,<<\"123\">>}","type":"forward_preflist","stack":"[]"} Is anyone familiar with this problem? How can I make it to

Re: LevelDB

2012-08-21 Thread Matthew Tovbin
thanks!! -Matthew On Tue, Aug 21, 2012 at 7:29 PM, David Yu wrote: > > > On Wed, Aug 22, 2012 at 5:33 AM, Alexander Sicular wrote: > >> I was in the Riak 1.2 webinar earlier today and asked a leveldb question >> about insertion order and durability vs. bitcask's WOL architecture. Joe >> was

Re: Riak Recap for August 4 - 15

2012-08-21 Thread Matthew Tovbin
Mark, Thanks for the script! We've been using it for a couple of months already and it seems to be doing ok. But my current concern is - how can we measure the performance impact of calling "/stats/" on Riak nodes prior to 1.2? Maybe it is better to use the suggested scripts ( https://github.com/

Re: Large buckets with Secondary Index

2012-07-30 Thread Matthew Tovbin
Yousuf, Thanks for the update! Did you try to reproduce it with 1.2.X? -Matthew On Sun, Jul 1, 2012 at 1:08 PM, Mik Quinlan wrote: > Hi, do the LevelDB buffer size settings write_buffer_size_min and > write_buffer_size_max make a difference to point 7? >

Re: Intermittent MapReduce crashes (reserve_vm)

2012-07-18 Thread Matthew Tovbin
},{workers,0}]], []} {"`",[]} {"@",[]} While I have the following in my app.config file: map_js_vm_count: 96 reduce_js_vm_count: 64 hook_js_vm_count: 16 js_max_vm_mem: 8 js_thread_stack: 16 -Matthew On Wed, Jul 18, 2012 at 2:43 PM, Matthew Tovbin wrote:
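
For reference, those settings are given in app.config as Erlang terms inside the riak_kv section rather than in the colon form quoted above; with the values from this message it would look roughly like this:

    {riak_kv, [
        %% Number of JavaScript VMs available to map, reduce and hook phases
        {map_js_vm_count, 96},
        {reduce_js_vm_count, 64},
        {hook_js_vm_count, 16},
        %% Per-VM memory limit and thread stack size, in MB
        {js_max_vm_mem, 8},
        {js_thread_stack, 16}
    ]}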

Re: Intermittent MapReduce crashes (reserve_vm)

2012-07-18 Thread Matthew Tovbin
Hi Brian, I've also started getting the same error lately. Were you able to solve it? Does anyone have a clue what to do with it?! -Matthew On Fri, Apr 27, 2012 at 10:38 PM, Brian Conway wrote: > Any ideas? Using JS for MapReduce, I'm currently unable to do anything > except trivial tasks wi

Re: Pagination with Ripple

2012-07-18 Thread Matthew Tovbin
Martin, You may try to use range queries for this purpose, coupled with LevelDB as the backend: .../buckets//$key// And in case you are ready to sacrifice performance, you may use Solr, which is available through Riak Search
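
The range-query URL above was mangled by the archive. Assuming the eleveldb backend and the secondary-index endpoint with the special $key index, the request would look something like the following, where the bucket name and key range are placeholders:

    GET /buckets/users/index/$key/user1000/user1999

The response is the list of keys falling in that range, which the client can then walk through a page at a time.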

Re: Cluster migration due to incorrect "ring_creation_size" value

2012-07-03 Thread Matthew Tovbin
w much data do > you have in your five nodes? > > Mark > > On Mon, Jul 2, 2012 at 11:00 AM, Matthew Tovbin wrote: > >> Hi, >> >> Do you have any suggestion for me?! >> >> >> -Matthew >> >> >> >> On Tue, Jun 26, 201

Re: Cluster migration due to incorrect "ring_creation_size" value

2012-07-02 Thread Matthew Tovbin
Hi, Do you have any suggestions for me?! -Matthew On Tue, Jun 26, 2012 at 1:36 PM, Matthew Tovbin wrote: > Hi Basho, > > We have a running cluster of 5 nodes which was accidentally configured > with the default setting > "{ ring_creation_size: 64 }". > > A

Cluster migration due to incorrect "ring_creation_size" value

2012-06-26 Thread Matthew Tovbin
Hi Basho, We have a running cluster of 5 nodes which was accidentally configured with the default setting "{ ring_creation_size: 64 }". As suggested in the documentation, we cannot scale this cluster beyond 6 machines, so the only option is to migrate to a new cluster with a larger ring_creation_size
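
For anyone running into the same mistake: ring_creation_size lives in the riak_core section of app.config and is only honoured when the ring is first created, so it has to be set before the new cluster is started; a sketch (256 is just an example value):

    {riak_core, [
        %% Read only when the ring is first created; it cannot be changed
        %% on a cluster that already holds data.
        {ring_creation_size, 256}
    ]}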

Re: Very (very) slow handoff, how to investigate?

2012-05-24 Thread Matthew Tovbin
Guys, Thanks for the tips!! Helpful indeed. -Matthew On Tue, Jan 31, 2012 at 2:32 AM, Gal Barnea wrote: > Guys > Thanks a lot for the helpful pointers > > I decided to focus more on speeding the process of joining servers to the > cluster, where it is easier to monitor disk space during the

Re: Riak Map / Reduce / Map capacity limit reached at 20000 keys / Doubt about reduce step

2012-05-22 Thread Matthew Tovbin
return > [v.reduce(function(acc,value){return acc + value;},0)]}") > > Output: 0 - 0 > > Thanks in advance for clarification. > > Regards, > Claude > * > Claude Falbriard > Certified IT Specialist L2 - Middleware > AMS Hortolândia / SP - Brazil >
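
For context, a complete Riak JavaScript reduce phase that sums its inputs looks roughly like the sketch below. Riak may call reduce several times on partial results, so the function must return output that it can safely consume again on a later pass:

    // Sum all numeric inputs; returning a one-element list keeps the result
    // valid as input to a re-reduce pass.
    function(values) {
        return [values.reduce(function(acc, v) { return acc + v; }, 0)];
    }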

Re: Riak Map / Reduce / Map capacity limit reached at 20000 keys

2012-05-21 Thread Matthew Tovbin
Claude, 1. Try increasing the JS VM counts as follows: {riak_kv, [ ... {map_js_vm_count, 24 }, {reduce_js_vm_count, 18 }, ...] 2. Optimize your JS function: query.map("""function(v) {if (v.values[0].data.search( '"""+search_string+"""') != -1) { retu
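
The map function in point 2 is cut off by the archive; reconstructed as a sketch (with the search string hard-coded instead of spliced in from the client code, and a guessed return value, since the original return statement is truncated), it would read roughly:

    function(v) {
        // Emit the object's data only when it contains the search string.
        if (v.values[0].data.search('some-search-string') != -1) {
            return [v.values[0].data];
        }
        return [];
    }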

Efficient way of passing multiple arguments to mapreduce functions

2012-05-08 Thread Matthew Tovbin
Hi Riak-users, I'm looking for an efficient way of passing multiple arguments to MapReduce functions, instead of passing a JSON string (or another custom string representation of the arguments) and being forced to parse it on every map/reduce function call, i.e.: Now: //Passing: arg = {"a":"0", "b":"1"} //Ex
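
To illustrate the pattern being described: Riak hands the phase's static 'arg' to a JavaScript map function as its third parameter, so the current approach looks roughly like the sketch below (whether the argument arrives pre-parsed or as a string that still needs JSON.parse depends on how the job is submitted; the phase spec shown is an assumption for the example):

    // Phase spec submitted with the job, carrying both arguments in one object:
    //   {"map": {"language": "javascript", "source": "...", "arg": {"a": "0", "b": "1"}}}

    function(value, keyData, arg) {
        // 'arg' is the static phase argument; with the spec above it holds
        // both parameters, which are unpacked on every invocation.
        var a = arg.a, b = arg.b;
        return [a + ':' + b];
    }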

Re: preflist_exhausted error

2012-04-24 Thread Matthew Tovbin
Sati, Check if you see the following error in your log file: crasher: initial call: riak_kv_js_vm:init/1 pid: <0.404.0> registered_name: [] exception exit: {{badmatch,{error ,enoent}},[{riak_kv_js_vm,load_mapred_builtins,1},{js_driver,new,3},{riak_kv_js_vm,init,1},{gen_server,init

Re: Riak Adoption - What can we do better?

2012-04-20 Thread Matthew Tovbin
Riak is one of the simplest KV stores I have used so far. An almost zero learning curve and ease of setup make it a great tool to start working with. Moreover, later you get rewarded with outstanding stability and minimal maintenance, even at large scale. So, the only obstacle I see in the adoption of Riak

Re: preflist_exhausted with map-reduce and riak-js

2012-04-04 Thread Matthew Tovbin
>> Not having any VMs available would explain your preflist exhausted issue. >> >> >> Could you restart one of the nodes and check the count, preferably before >> running any JS MapReduce jobs? Then if they are still zero we'll have to >> work out why they aren't

Re: preflist_exhausted with map-reduce and riak-js

2012-04-04 Thread Matthew Tovbin
ork out why they aren't starting up. > > Cheers, Jon. > > > On Tue, Apr 3, 2012 at 5:10 PM, Matthew Tovbin wrote: > >> Jon, >> >> We dont have any MR jobs running at all :)) Here you go: >> >> 1>rpc:multicall(supervisor, count_children, [riak_pi

Re: preflist_exhausted with map-reduce and riak-js

2012-04-03 Thread Matthew Tovbin
the two managers, > that gives how many JS VMs are running per-node. > > If the number of concurrent requests is low, maybe we can find a higher > bandwidth way of investigating. > > Jon > > On Mon, Apr 2, 2012 at 4:21 PM, Matthew Tovbin wrote: > >>

preflist_exhausted with map-reduce and riak-js

2012-04-02 Thread Matthew Tovbin
d_terminated Reason: {sink_died,shutdown} Offender: [{pid,<0.22121.162>},{name,undefined},{mfargs,{riak_pipe_builder,start_link,undefined}},{restart_type,temporary},{shutdown,brutal_kill},{child_type,worker}] -Matthew On Mon, Apr 2, 2012 at 12:27, Jon Meredith wrote: >

Re: preflist_exhausted with map-reduce and riak-js

2012-04-02 Thread Matthew Tovbin
p_js_vm_count, 24 }, > {reduce_js_vm_count, 18 }, > ...] > > If you were affected by it and changing this does not resolve your issue, > I'll keep digging. > > Cheers, Jon. > > On Thu, Mar 29, 2012 at 10:29 AM, Matthew Tovbin wrote: > >>

Re: preflist_exhausted with map-reduce and riak-js

2012-03-29 Thread Matthew Tovbin
Guys, Any updates on the issue?! -Matthew On Tue, Mar 13, 2012 at 18:29, Matthew Tovbin wrote: > Here is a log from one of the servers: > > ==> /mnt/dataraid/log/crash.log <== > 2012-03-13 18:24:44 =CRASH REPORT > crasher: > initial call: riak_pipe_

Re: preflist_exhausted with map-reduce and riak-js

2012-03-13 Thread Matthew Tovbin
error] <0.175.0> Supervisor riak_pipe_builder_sup had child undefined started with {riak_pipe_builder,start_link,undefined} at <0.20949.24> exit with reason {sink_died,shutdown} in context child_terminated -Matthew On Tue, Mar 13, 2012 at 18:17, Matthew Tovbin wrote: > Hi,

Re: preflist_exhausted with map-reduce and riak-js

2012-03-13 Thread Matthew Tovbin
Hi, I got the same problem today on 1.1.0. As suggested, I updated all the nodes to 1.1.1. The error remains the same: { stack: [Getter/Setter], arguments: undefined, type: undefined, message: 'HTTP error 500: {"phase":0,"error":"[preflist_exhausted]","input":"{ok,{r_object ."}}} -Ma

Re: Solving "bad_utf8_character_code" error returned by Riak MR

2012-03-09 Thread Matthew Tovbin
know that (JavaScript funargs are exceptionally lazy anyway). It will still > try to serialize the object into JSON to marshal it into the VM. > That's where the error is generated. > > On Fri, Mar 9, 2012 at 12:20 PM, Matthew Tovbin wrote: > >> Good day! >> >

Solving "bad_utf8_character_code" error returned by Riak MR

2012-03-09 Thread Matthew Tovbin
Good day! As previously discussed here ( https://github.com/basho/riak_kv/pull/252#issuecomment-4415767 ), I am unable to perform MR on my cluster due to incorrect encoding in some of my values. What are the suggestions for solving this issue, other than cleaning the data before putting it into Riak? Why

Re: [ANN] Riaknostic, your Riak Doctor

2012-03-02 Thread Matthew Tovbin
Hi Sean, Seems like a nice tool, but it would be nice to have more examples on the homepage, because currently I have a problem understanding whether the tool is working at all: no connected-nodes info is printed. See below. The same happens with other diag commands, such as 'diag disk', 'diag dumps', etc. #~/