Re: Buckets with different expiry_secs value

2011-05-17 Thread Mathias Meyer
Dmitry,

the Protocol Buffers interface doesn't yet support setting most of the available
bucket properties, which is why your set_bucket call has no effect. Support for
doing that was added just recently, but hasn't shipped in a release yet.

For now the easiest workaround is to set the property through the REST API
instead. Both binaries and atoms work as backend aliases, but through the REST
API you'll always end up with binaries, since JSON strings are decoded as
binaries, not atoms.
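Untested sketch, but setting the backend through the REST API should look
something like this, assuming the default HTTP port and the bucket name from
your snippet below:

curl -X PUT http://127.0.0.1:8098/riak/test \
  -H "Content-Type: application/json" \
  -d '{"props": {"backend": "cache"}}'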

Mathias Meyer
Developer Advocate, Basho Technologies

On Tuesday, May 17, 2011 at 08:58, Dmitry Demeshchuk wrote: 
> Greetings.
> 
> I'm looking for a way to set expiry_secs for specific buckets. For
> example, if some buckets need to have object expiration time and some
> buckets need to have unlimited lifetime.
> 
> I've tried to do this using multi_backend:
> 
> {storage_backend, riak_kv_multi_backend},
> {multi_backend_default, <<"default">>},
> {multi_backend, [
>   {<<"default">>, riak_kv_bitcask_backend, []},
>   {<<"cache">>, riak_kv_bitcask_backend, [
>     {expiry_secs, 60}
>   ]}
> ]},
> 
> But when I call
> 
> riakc_pb_socket:set_bucket(<<"test">>, [{backend, <<"cache">>}]),
> 
> this setting isn't applied to the bucket. I also tried setting different data
> directories for the two backends, with the same result.
> 
> Is such a thing possible at all, and if so, what am I doing wrong?
> 
> Also, it looks like the documentation in riak_kv_multi_backend.erl differs
> a bit from the wiki: the wiki uses binaries for backend aliases, while the
> module's docs use atoms. Probably both work, but neither worked for me :(
> 
> -- 
> Best regards,
> Dmitry Demeshchuk
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Mapreduce crosstalk

2011-05-17 Thread Aphyr
I was writing a new mapreduce query to look at users over time, and ran 
it over a single user in production. After that, other mapreduce jobs 
over users started returning results from my new map phase, some of the 
time. After five minutes of this, I had to restart every node in the 
cluster to get it to stop.


Every node has {map_cache_size, 0} in riak_kv.
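For reference, that's the entry in the riak_kv section of app.config; sketch
below, with everything else omitted:

{riak_kv, [
    %% disable caching of map results on the vnodes
    {map_cache_size, 0}
]}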

The map phase that screwed things up was:

function(v) {
  var o = JSON.parse(v.values[0].data);

  // Age of account in days
  var age = Math.round(
    (Date.now() - Date.iso8601(o.created_at)) /
    (1000 * 60 * 60 * 24)
  );

  // Emit a [bucket, key, keydata] triple for the next phase
  return [['t_user_scores', v.key, age]];
}

It looks like one node started running that phase instead of the 
requested phase for subsequent jobs. It *should* have run this one, but 
didn't.


function(v) {
  var o = JSON.parse(v.values[0].data);
  // Return a plain object per user as the job result
  return [{
    key: v.key,
    name: o.name,
    thumbnail: o.thumbnail
  }];
}

Now I'm scared to run MR jobs. Could it be an issue with returning 
keydata? Anybody else seen this before?


--Kyle

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


MapReduce with Haskell

2011-05-17 Thread Nick Partridge
Hey all,

I'm playing with Riak via the Haskell client, and I've run into a problem
with mapreduce. The gist of the problem is that once I've run one mapreduce
job on a connection, I only ever see results from that job.

I've distilled it down to a small example here (https://gist.github.com/977611),
which exhibits slightly different behaviour from my application code. In the
small example, I'm seeing results from the second query twice. I think there
might be something going on with laziness, but it's beyond my ability to
debug.
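
For what it's worth, here's a self-contained illustration of the kind of
lazy-IO pitfall I suspect: a plain file handle stands in for the Riak
connection, and none of this is the client's actual API. If the client hands
back responses hGetContents-style, forcing the first job's results before
issuing the second job might change the behaviour.

import Control.Exception (evaluate)
import System.IO

main :: IO ()
main = do
  writeFile "job1.txt" "results of job 1"
  h  <- openFile "job1.txt" ReadMode
  s1 <- hGetContents h        -- lazy: no bytes are actually read yet
  _  <- evaluate (length s1)  -- force the whole string while the handle is live
  hClose h                    -- closing (or reusing) the handle before the
                              -- force would leave s1 empty or truncated
  putStrLn s1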

Any pointers? This is somewhat specific to the Haskell client; I hope this
is the right place to ask :)

Cheers,
Nick
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com