Setting read quorum for map reduce jobs

2010-05-04 Thread Johnathan Loggie
Hi,

I'm struggling to find how to set the read quorum for a map-reduce job
via the REST API.

The documentation doesn't mention that this is even possible.
I'm using Ripple, and tried forcing r=1 onto the end of the query string
(e.g. with @client = Riak::Client.new(:mapred => '/mapred?r=1')) to see if
this resource worked like the others, but had no luck.
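
[Editor's note: for context, the r parameter does apply to ordinary key/value
reads over Riak's REST interface; it's the /mapred resource that ignores it.
A minimal sketch of the URL shape for such a read - the host, port, bucket,
and key here are made-up values for a local node:]

```ruby
require 'uri'

# Build the Riak REST URL for a single-key GET with an explicit read quorum.
# r applies to key/value reads; the /mapred resource ignores it.
def riak_object_url(bucket, key, r: 1, host: '127.0.0.1', port: 8098)
  URI::HTTP.build(
    host: host, port: port,
    path: "/riak/#{bucket}/#{key}",
    query: "r=#{r}"
  ).to_s
end

puts riak_object_url('invoices', 'inv-42')
# http://127.0.0.1:8098/riak/invoices/inv-42?r=1
```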

I have two Riak nodes and have deliberately stopped one, as if it were down
or disconnected.
In this state my map-reduce jobs fail to run on the node that is still up
and running.

Can anybody help?
Have I completely got the wrong end of the stick on this one?

Johnno
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Setting read quorum for map reduce jobs

2010-05-04 Thread Johnathan Loggie
Hi,

OK, thanks, I thought as much. It was more of an academic question really.

Johnno
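
[Editor's note: the fallback behaviour Sean describes below can be sketched
roughly like this - not Riak internals; the preflist entries are hypothetical
{node, alive} stubs for illustration:]

```ruby
# Rough sketch: map-reduce input fetching tries the first vnode in the
# preference list, falls back to the others, and fails the job only when
# no replica holding the input is reachable.
def fetch_input(preflist)
  preflist.each do |vnode|
    return vnode[:value] if vnode[:alive]
  end
  raise "input unavailable: map-reduce job fails"
end

up   = { alive: true,  value: "doc-1" }
down = { alive: false, value: nil }

puts fetch_input([down, up])  # falls back to the live replica
```

With only two nodes and N=3, some inputs may end up with all of their
replicas on the stopped node, which would match the failures described above.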


On 04/05/2010 15:27, "Sean Cribbs"  wrote:

> The read quorum doesn't apply to map-reduce - the value from the first vnode
> in the preflist will be tried, followed by the others if it is not available.
> If an input is completely unavailable (as may be the case with 1/2 of the
> nodes down), the job will fail.  Two nodes is a bit of a degenerate case
> anyway - especially if you use the default N value of 3.  When the size of
> the cluster is smaller than N, there's a chance that some data could become
> unavailable when a node goes down.
> 
> Sean Cribbs 
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
> 
> On May 4, 2010, at 10:13 AM, Johnathan Loggie wrote:
> 
>> Hi,
>> 
>> I'm struggling to find how to set the read quorum for a map-reduce job
>> via the REST API.
>> 
>> The documentation doesn't mention that this is even possible.
>> I'm using Ripple, and tried forcing r=1 onto the end of the query string
>> (e.g. with @client = Riak::Client.new(:mapred => '/mapred?r=1')) to see if
>> this resource worked like the others, but had no luck.
>> 
>> I have two Riak nodes and have deliberately stopped one, as if it were down
>> or disconnected.
>> In this state my map-reduce jobs fail to run on the node that is still up
>> and running.
>> 
>> Can anybody help?
>> Have I completely got the wrong end of the stick on this one?
>> 
>> Johnno 



Caching of anonymous javascript reduce phase results

2010-12-16 Thread Johnathan Loggie
Hiya,

I'm seeing a problem where, when running a map-reduce job on some newly
created data, Riak returns [] for the final reduce phase.

Riak::MapReduce.new(client).
  add(self.class.bucket_name, self.key).
  link(:tag => "batches", :keep => false).
  link(:tag => "objects", :keep => false).
  map("function(v) { return [v]; }", :keep => false).
  reduce("function(valueList, arg) {
    return [valueList.reduce(
      function(acc, value) {
        if (value.not_found) return acc;
        return acc.concat(value);
      }, [])];
  }", :keep => true)

When setting :keep => true on the map phase, I can see that the objects are
there as the input to the reduce phase.
Now here's the weird part:

1. If I add a line like ' // #{rand} #{Time.now} ' to the reduce phase, it
works.

2. If I wait a while (read: several minutes) between writing the data and
running the unmodified reduce-phase code, it works.

So it looks like Riak is perhaps using a cached result for the reduce phase,
which would make no sense, as the inputs to the whole map-reduce job are
different (as I said, they are for newly created documents).

I've fixed the issue for now using the solution shown in point 1 above, but
am I correct in thinking that this is a bug?
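
[Editor's note: the point-1 workaround can be sketched like this - appending a
unique comment makes each anonymous reduce source distinct, so any caching
keyed on the function body sees it as new code. The helper name is made up;
the reduce body is the one from the post above:]

```ruby
# Workaround sketch: a throwaway comment with random/time-based content
# defeats caching of anonymous JavaScript reduce functions keyed on source.
def fresh_reduce_source
  <<~JS
    function(valueList, arg) {
      return [valueList.reduce(function(acc, value) {
        if (value.not_found) return acc;
        return acc.concat(value);
      }, [])];
    } // #{rand} #{Time.now.to_f}
  JS
end

puts fresh_reduce_source != fresh_reduce_source  # sources differ each call
```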

Thanks

Johnno
