Re: speeding up riaksearch precommit indexing

2011-06-21 Thread Les Mikesell
I'd like to have fully redundant feeds with no single point of failure, but avoid the work of indexing the duplicate copy and writing it to a bitcask, even if it would eventually be cleaned up. On 6/21/2011 4:43 PM, Sylvain Niles wrote: Why not write to a queue bucket with a timestamp a

RE: Re: Riak crash on 0.14.2 riak_kv_stat terminating

2011-06-21 Thread David Mitchell
As a follow-up to my earlier post, I just reran all 208 MapReduce jobs, and this time I got four timeouts. This time, riak03 was the culprit (rather than riak02). The first timeout wrote to the error log after seven seconds. The second and third wrote to the error log after five seconds. Th

Re: speeding up riaksearch precommit indexing

2011-06-21 Thread Sylvain Niles
Why not write to a queue bucket with a timestamp and have a queue processor move writes to the "final" bucket once they're over a certain age? It can dedup/validate at that point too. On Tue, Jun 21, 2011 at 2:26 PM, Les Mikesell wrote: > Where can I find the redis hacks that get close to cluste
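A minimal sketch of that queue-bucket pattern using the Ruby riak-client (the bucket names, the 60-second age threshold, and the enqueue/drain helpers are made up for illustration; listing keys is expensive on large buckets):

    require 'riak'

    client = Riak::Client.new(:host => '127.0.0.1')
    queue  = client.bucket('ingest_queue')  # no search precommit hook on this bucket
    final  = client.bucket('documents')     # the search-indexed "final" bucket

    MAX_AGE = 60 # seconds a write sits in the queue before being promoted

    # Producer: stash the raw feed item along with an ingest timestamp.
    def enqueue(queue, key, payload)
      obj = queue.new(key)
      obj.data = { 'received_at' => Time.now.to_i, 'payload' => payload }
      obj.store
    end

    # Queue processor: promote aged entries once, dropping duplicates, then delete them.
    def drain(queue, final)
      queue.keys do |chunk|                  # streams key lists in chunks
        chunk.each do |key|
          obj = queue.get(key)
          next if Time.now.to_i - obj.data['received_at'] < MAX_AGE
          unless final.exists?(key)          # redundant copies are deduped here
            doc = final.new(key)
            doc.data = obj.data['payload']
            doc.store                        # indexing work happens once, at promotion
          end
          obj.delete
        end
      end
    end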

Re: speeding up riaksearch precommit indexing

2011-06-21 Thread Les Mikesell
Where can I find the redis hacks that get close to clustering? Would membase work with synchronous replication on a pair of nodes for a reliable atomic 'check and set' operation to dedup redundant data before writing to riak? Conceptually I like the 'smart client' fault tolerance of memcache/
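One way to get that check-and-set dedup in front of Riak, sketched with the Dalli memcached client (the server address, TTL, and key scheme are assumptions; memcached's add only stores a key that does not already exist, so a truthy return is treated as "first writer wins"):

    require 'dalli'
    require 'riak'

    cache = Dalli::Client.new('membase-host:11211')  # membase speaks the memcached protocol
    riak  = Riak::Client.new(:host => '127.0.0.1')
    docs  = riak.bucket('documents')

    def write_once(cache, docs, key, payload)
      # add() fails if another feed already claimed this key, so only one of
      # the redundant copies ever gets written (and indexed) in Riak.
      return unless cache.add("seen:#{key}", '1', 3600)
      obj = docs.new(key)
      obj.data = payload
      obj.store
    end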

Re: Riak crash on 0.14.2 riak_kv_stat terminating

2011-06-21 Thread David Mitchell
Erlang: R13B04 Riak: 0.14.2 I am having the same issue as Jeremy. I just did 208 MapReduce jobs using anonymous JavaScript functions in the map and reduce phases. I am sending the MapReduce jobs to a single node, riak01. Out of the 208 jobs, I got two "mapexec_error" {error,timeout} on riak02

Re: Puppet Module for Riak

2011-06-21 Thread Jacques
Oops, forgot to include list. Sorry. -- Forwarded message -- From: Jacques Date: Tue, Jun 21, 2011 at 1:22 PM Subject: Re: Puppet Module for Riak To: Ken Perkins I'm very new to using puppet but I'll give you what I have. It is just a basic example. The init references two e

Re: Newbie Ripple

2011-06-21 Thread Aphyr
2. You could write
    x = Klass.find(key)
    if x.nil?
      x = Klass.new
      x.save
    end
get_or_new doesn't save, so perhaps Klass.find(key) || Klass.new(key). Risky (another Ruby Riak model layer) offers Klass.get_or_new(key). 3. control the bucket on which the document is stored/retri

Riak async PB client based on Grizzly

2011-06-21 Thread Jon Brisbin
I'm trying to get my feet wet with Grizzly on another project, so I've been spending some late(ish) nights also working on a Grizzly-based asynchronous PB client for Riak. I'm dropping all the way down to the protobuf level and using Grizzly's NIO abstractions to implement a completely non-block

Puppet Module for Riak

2011-06-21 Thread Ken Perkins
I've been searching and have had little success in finding a Riak module for Puppet, so I wanted to turn to the community. Are any of you using Puppet to manage your Riak boxes? Please share anything if you do! It'd be great to have something to start from. Thanks! --Ken clipboard, inc.

Re: Newbie Ripple

2011-06-21 Thread Thomas Fee
1. The configuration for Ripple goes like this...
    require 'ripple'
    Ripple.client.host = 'hostname-or-ip-address'
2. You could write
    x = Klass.find(key)
    if x.nil?
      x = Klass.new
      x.save
    end
Unless you meant that you need an atomic read-or-create operation. I don't know if such a th
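Putting those two pieces together for a plain (non-Rails) Ruby script, as a sketch (the Note model, its property, and the key are placeholders, not anything from the thread):

    require 'ripple'

    Ripple.client.host = 'hostname-or-ip-address'

    class Note
      include Ripple::Document
      property :body, String
    end

    # find-or-create along the lines discussed above
    note = Note.find('some-key')
    if note.nil?
      note = Note.new
      note.key = 'some-key'
    end
    note.body = 'hello from a non-Rails app'
    note.save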

Re: Newbie Ripple

2011-06-21 Thread Jeremiah Peschka
Hi Pablo, welcome! In answer to question 1: I put together an introduction to Riak a while back; you can find it on GitHub: https://github.com/peschkaj/riak_intro/ Specifically, https://github.com/peschkaj/riak_intro/blob/master/src/mr_filter.rb has an example of how to configure and get Riak up and

Newbie Ripple

2011-06-21 Thread Pablo Chacin
Hello, I'm new to Ripple and Ruby in general, so my questions probably have obvious answers. I'm trying to use the Document Model, but there are a few things that are not clear from the documentation: 1. How to configure Ripple when used in a Ruby (not Rails) application. 2. I want to do something s

Re: Riak crash on 0.14.2 riak_kv_stat terminating

2011-06-21 Thread Dan Reverri
Hi Jeremy, The flow_timeout error would not cause a node to crash. Supervisor and error reports are normal log entries and do not usually correspond to a node crash. Can you provide all the log files from the crashing node? Also, can you look for an erl_crash.dump file? Thanks, Dan Daniel Reverr

Re: Riak crash on 0.14.2 riak_kv_stat terminating

2011-06-21 Thread Jeremy Raymond
Ok thanks. Where do I increase the timeout? Would the timeout cause the node to crash? - Jeremy On Tue, Jun 21, 2011 at 9:27 AM, Mathias Meyer wrote: > Jeremy, > > looks like you're hitting the timeout for the MapReduce job you're running, > so the issue isn't memory-related. You could either

Re: Benchmarks of backends

2011-06-21 Thread Justin Sheehy
Hi, Anthony. Most people using Riak today use either Bitcask or Innostore, as I suspect you know. Bitcask has excellent performance, but, as you're aware, it has a hard limit on the number of keys per unit of available RAM. Innostore does not have that limitation, but is much harde
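For rough capacity planning, the Bitcask keydir RAM scales as roughly keys * (static per-key overhead + key material); a back-of-the-envelope estimate in Ruby, where every number is an assumption to be checked against Basho's Bitcask capacity-planning notes:

    keys_per_node    = 50_000_000
    avg_key_bytes    = 36   # bucket name + key as stored
    per_key_overhead = 40   # assumed static keydir overhead per key

    keydir_bytes = keys_per_node * (avg_key_bytes + per_key_overhead)
    puts 'approx keydir RAM: %.1f GB' % (keydir_bytes / 1024.0 ** 3)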

Re: Riak crash on 0.14.2 riak_kv_stat terminating

2011-06-21 Thread Mathias Meyer
Jeremy, looks like you're hitting the timeout for the MapReduce job you're running, so the issue isn't memory-related. You could either increase the timeout for the whole job, or split the single MapReduce request into multiple smaller ones, each with a shorter runtime, then collecting the r
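For the first option, the job-level timeout can be raised when the MapReduce request is built; a sketch with the Ruby riak-client (the bucket name, functions, and 120-second value are placeholders, and the timeout accessor is assumed to be present in your client version; the same setting can also go in the "timeout" field of the raw job JSON):

    require 'riak'

    client = Riak::Client.new(:host => 'riak01')

    mr = Riak::MapReduce.new(client)
    mr.add('my_bucket')
    mr.map("function(v) { return [JSON.parse(v.values[0].data)]; }")
    mr.reduce("function(values) { return values; }", :keep => true)
    mr.timeout = 120_000  # per-job timeout in milliseconds
    results = mr.run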

Re: Riak crash on 0.14.2 riak_kv_stat terminating

2011-06-21 Thread Jeremy Raymond
I increased the memory to 3GB on the VMs I'm using for Riak and also replaced the one JavaScript reduce function I had missed converting with its Erlang version. Monitoring the memory on the machines indicates that Riak is not running out of memory. There is plenty of disk space on the machines (