I'd like to have fully redundant feeds with no single point of failure,
but avoid the work of indexing the duplicate copy and having it written
to a bitcask even if it would eventually be cleaned up.
On 6/21/2011 4:43 PM, Sylvain Niles wrote:
Why not write to a queue bucket with a timestamp and have a queue processor
move writes to the "final" bucket once they're over a certain age?
As a follow-up to my earlier post, I just reran all 208 MapReduce jobs, and
got four timeouts. This time riak03 was the culprit (rather than riak02).
The first timeout wrote to the error log after seven seconds. The second and
third wrote to the error log after five seconds. Th
Why not write to a queue bucket with a timestamp and have a queue
processor move writes to the "final" bucket once they're over a
certain age? It can dedup/validate at that point too.
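That pattern might look something like the sketch below with the Ruby
riak-client; the bucket names, key scheme, and age threshold are all made up
for illustration, not part of the original suggestion:

require 'riak'
require 'digest/sha1'

# Sketch of the queue-bucket idea; "queue" and "final" are hypothetical buckets.
client = Riak::Client.new(:host => 'riak01')
queue  = client.bucket('queue')
final  = client.bucket('final')

# Writer side: key each entry by timestamp plus a digest of the payload,
# so redundant copies of the same feed item collide on the same digest.
def enqueue(queue, payload)
  key = "#{Time.now.to_i}-#{Digest::SHA1.hexdigest(payload)}"
  obj = queue.new(key)
  obj.data = { 'payload' => payload }
  obj.store
end

# Queue processor: promote entries older than MAX_AGE to the final bucket,
# deduping (and optionally validating) on the digest half of the key.
MAX_AGE = 60 # seconds
queue.keys.each do |key|
  ts, digest = key.split('-', 2)
  next unless Time.now.to_i - ts.to_i > MAX_AGE
  entry = queue.get(key)
  unless final.exists?(digest)
    copy = final.new(digest)
    copy.data = entry.data
    copy.store
  end
  queue.delete(key)
end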
On Tue, Jun 21, 2011 at 2:26 PM, Les Mikesell wrote:
> Where can I find the redis hacks that get close to clustering?
Where can I find the redis hacks that get close to clustering? Would
membase work with synchronous replication on a pair of nodes for a
reliable atomic 'check and set' operation to dedup redundant data before
writing to riak? Conceptually I like the 'smart client' fault
tolerance of memcache/
Erlang: R13B04
Riak: 0.14.2
I am having the same issue as Jeremy.
I just did 208 MapReduce jobs using anonymous JavaScript functions in the map
and reduce phases. I am sending the MapReduce jobs to a single node, riak01.
Out of the 208 jobs, I got two "mapexec_error" {error,timeout} on riak02
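For context, a job of that general shape might look like this with the Ruby
riak-client; the bucket name and function bodies here are placeholders, not
the originals from those 208 jobs:

require 'riak'

# Placeholder job: full-bucket input, anonymous JavaScript map and reduce.
client = Riak::Client.new(:host => 'riak01')
mr = Riak::MapReduce.new(client)
mr.add('logs')
mr.map('function(v) { return [1]; }')
mr.reduce('function(values) { return [values.length]; }', :keep => true)
results = mr.run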
Oops, forgot to include list. Sorry.
-- Forwarded message --
From: Jacques
Date: Tue, Jun 21, 2011 at 1:22 PM
Subject: Re: Puppet Module for Riak
To: Ken Perkins
I'm very new to using puppet but I'll give you what I have. It is just a
basic example.
The init references two e
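Since the attachment itself isn't in the archive, here is a minimal sketch of
what such an init.pp often looks like; the package name, template paths, and
require/notify wiring are assumptions, not the module from this mail:

# Hypothetical init.pp sketch for managing a Riak node with Puppet.
class riak {
  package { 'riak':
    ensure => installed,
  }

  file { '/etc/riak/app.config':
    ensure  => file,
    content => template('riak/app.config.erb'),
    require => Package['riak'],
    notify  => Service['riak'],
  }

  file { '/etc/riak/vm.args':
    ensure  => file,
    content => template('riak/vm.args.erb'),
    require => Package['riak'],
    notify  => Service['riak'],
  }

  service { 'riak':
    ensure  => running,
    enable  => true,
    require => File['/etc/riak/app.config'],
  }
}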
2. You could write
x = Klass.find(key)
if x.nil?
  x = Klass.new
  x.save
end
get_or_new doesn't save, so perhaps Klass.find(key) || Klass.new(key)
Risky (another Ruby Riak model layer) offers Klass.get_or_new(key)
3. control the bucket on which the document is stored/retrieved
I'm trying to get my feet wet with Grizzly on another project, so I've been
spending some late(ish) nights also working on a Grizzly-based asynchronous PB
client for Riak. I'm dropping all the way down to the protobuf level and using
Grizzly's NIO abstractions to implement a completely non-blocking client.
I've been searching and have had little success in finding a Riak Module for
puppet, so I wanted to turn to the community.
Are any of you using puppet to manage your riak boxes? Please share anything if
you do! It'd be great to have something to start from.
Thanks!
--Ken
clipboard, inc.
1. The configuration for Ripple goes like this...
require 'ripple'
Ripple.client.host = 'hostname-or-ip-address'
2. You could write
x = Klass.find(key)
if x.nil?
  x = Klass.new
  x.save
end
Unless you meant that you need an atomic read-or-create operation. I don't
know if such a thing exists.
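Putting both answers together for a plain Ruby script, a minimal sketch
assuming Ripple's Document mixin; the Person class and its properties are
made up for illustration:

require 'ripple'

# Point Ripple at a node directly (no Rails, no config/ripple.yml).
Ripple.client.host = '127.0.0.1'
Ripple.client.port = 8098

# Hypothetical document model.
class Person
  include Ripple::Document
  property :name,  String
  property :email, String
end

# Read-or-create, with the caveat above that this is not atomic.
person = Person.find('bob') || Person.new(:name => 'Bob')
person.key = 'bob'
person.save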
Hi Pablo, welcome!
In answer to question 1: I put together an introduction to Riak a while
back, you can find it on github: https://github.com/peschkaj/riak_intro/
Specifically,
https://github.com/peschkaj/riak_intro/blob/master/src/mr_filter.rb has an
example of how to configure and get Riak up and running.
Hello
I'm new to Ripple and Ruby in general, so my questions probably have obvious
answers.
answers.
I'm trying to use the Document Model, but there are a few things that are not
clear from the documentation:
1. How to configure ripple when used in a ruby (not rails) application.
2. I want to do something s
Hi Jeremy,
The flow_timeout error would not cause a node to crash. Supervisor and error
reports are normal log entries and do not usually correspond to a node
crash. Can you provide all the log files from the crashing node? Also, can
you look for an erl_crash.dump file?
Thanks,
Dan
Daniel Reverri
Ok thanks. Where do I increase the timeout? Would the timeout cause the node
to crash?
- Jeremy
On Tue, Jun 21, 2011 at 9:27 AM, Mathias Meyer wrote:
> Jeremy,
>
> looks like you're hitting the timeout for the MapReduce job you're running,
> so the issue isn't memory-related. You could either increase the timeout for
> the whole job, or split the single MapReduce request into multiple smaller ones.
Hi, Anthony.
Most people using Riak today use either Bitcask or Innostore, as I suspect you
know. Bitcask has excellent performance, but also the limitation you're aware
of: a hard limit on the number of keys per unit of available RAM. Innostore
does not have that limitation, but is much harder
Jeremy,
looks like you're hitting the timeout for the MapReduce job you're running, so
the issue isn't memory-related. You could either increase the timeout for the
whole job, or split the single MapReduce request into multiple smaller ones,
ensuring a shorter runtime for each job, then collecting the results.
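On the first option: the MapReduce request body accepts a top-level "timeout"
field in milliseconds (the default is 60000). A sketch using just the Ruby
standard library against the HTTP interface; the bucket and functions are
placeholders:

require 'net/http'
require 'json'

# Placeholder job; only the top-level "timeout" (milliseconds) matters here.
job = {
  'inputs'  => 'logs',
  'query'   => [
    { 'map'    => { 'language' => 'javascript',
                    'source'   => 'function(v) { return [1]; }' } },
    { 'reduce' => { 'language' => 'javascript',
                    'source'   => 'function(vs) { return [vs.length]; }',
                    'keep'     => true } }
  ],
  'timeout' => 300_000   # five minutes instead of the default
}

uri  = URI.parse('http://riak01:8098/mapred')
http = Net::HTTP.new(uri.host, uri.port)
resp = http.post(uri.path, job.to_json, 'Content-Type' => 'application/json')
puts resp.body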
I increased the memory to 3GB on the VMs I'm using for Riak and also
replaced a JavaScript reduce function I had missed converting to Erlang with
the Erlang version. Monitoring the memory on the machines indicates that
Riak is not running out of memory. There is lots of disk space on the
machines (