Hello Riak-Java users.
Just trying to figure out how I can register the Scala Jackson module with
Jackson so that when Riak goes to convert objects to be saved in the
database it will pick up on the included module. In the README in
https://github.com/FasterXML/jackson-module-scala, it's rather si
Hi, I tested it, and since ultimately only wiping leveldb and the ring helped, I
modified my wipe script to:
riak stop
rm -rf /var/lib/riak/ring/*
rm -rf /var/lib/riak/leveldb/*
riak start
riak-admin wait-for-service riak_kv riak@127.0.0.1
Which worked like a charm.
Regards
Jonas
On 1 March 2013 20:46, Jo
Hi,
we changed the n_val of a bucket from 3 to 12. If we are now doing this:
riak:8098/riak/config?keys=true
or
riak:8098/buckets/config/keys?keys=true
we get some keys multiple times. Getting the content works
well but we can't rely on the output (or have to sort/uniq the output).
Is this a no
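As a workaround, the sort/uniq can be done client-side. A minimal sketch, assuming
the Erlang PB client (riakc) and the "config" bucket from the URLs above:
    %% List the bucket's keys and de-duplicate; lists:usort/1 sorts and
    %% removes duplicates in one pass.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    {ok, Keys} = riakc_pb_socket:list_keys(Pid, <<"config">>),
    UniqueKeys = lists:usort(Keys).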
I've been asking our engineers for help with this, and here's what I've found...
In theory, you could use the standard OTP mechanism[1] via the console to stop
and restart the riak_kv application to stop a (non-MapReduce) query, but that
has not worked across all versions of Riak. The rolling cl
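For reference, a minimal sketch of that console sequence, using the standard OTP
application calls; as noted above, this has not worked across all versions of Riak,
so treat it as a last resort:
    %% From the node's console (e.g. via `riak attach`): stop and then
    %% restart the riak_kv OTP application.
    application:stop(riak_kv).
    application:start(riak_kv).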
Just a thought but we've been working on disabling certain API
operations at the proxy level. We have a subsystem that uses riak that
should NEVER see a DELETE call ever and we're planning on guaranteeing
that by blocking it at the proxy level.
Combined with the actual nodes being inaccessible in
Hey Simon,
A few questions for starters:
How many Riak nodes are in your cluster?
Is your R still the default of 2?
Out of curiosity, what's the use case for an n_val of 12? I think the
highest I've ever seen is 5 or 6. :)
Mark
On Wed, Mar 6, 2013 at 10:28 AM, Simon Effenberg
wrote:
> Hi,
>
>
Hello Riak Users,
Today I'm pleased to announce the 0.4.0 release of Yokozuna. This is a
small release in terms of features (there are no new ones), but an important
release for the reasons enumerated below.
* Performance improvements to Solr's distributed search thus improving
performan
I did not.
uname -r
11.4.2
I.
On Fri, Mar 1, 2013 at 9:09 AM, Mark Phillips wrote:
> Hey Istvan,
>
> Did you happen to figure this out? I may have missed this, but what
> version of osx?
>
> Mark
>
>
> On Thursday, February 28, 2013, István wrote:
>
>> Hi Riak Users,
>>
>> Here is what I have
Thanks Johns...
Our main interface into Riak is via Erlang; we just like the HTTP interface for the
big green check mark that tells us the cluster is OK.
What we'll probably do for now then is turn off the HTTP(S) interfaces and
write a little clone of sorts for the status.
It would be good if looking at
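A minimal sketch of what such a status check could look like from the Erlang side;
riak_core_status:ringready/0 is assumed here (it is what riak-admin ringready
calls), so verify the exact return values against your Riak version:
    %% Ask the node whether the ring is settled and all members agree,
    %% as a console-side stand-in for the HTTP green check mark.
    cluster_ok() ->
        case riak_core_status:ringready() of
            {ok, _Nodes}    -> green;
            {error, Reason} -> {red, Reason}
        end.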
So I should expect {error, notfound} inputs to map jobs while ownership
handoff is in progress? Are the not found items actually unavailable during
handoff or is this just not found on the old node, but will be picked up by
the new node during the same mapreduce job?
--
Jeremy
On Thu, Feb 28, 20
I'm sorry this went unanswered, Jeremy. Thanks for the follow up.
Your code needs to be able to handle notfound errors and tombstones[1]
regardless of ownership handoff. The coverage for the 2i or listkeys input will
be calculated up front, with the work distributed to a node where the key is
Thanks for the response. To handle notfound results and tombstones, do I need this
in every map function?
map_something({error, notfound}, _, _) ->
    [];
map_something(RiakObj, _, _) ->
    Metadata = riak_object:get_metadata(RiakObj),
    case dict:is_key(<<"X-Riak-Deleted">>, Metadata) of
        true  -> [];
        false -> [riak_object:get_value(RiakObj)]  %% your actual map logic here
    end.
Hi Everyone,
I am building a new 5 node cluster with 1.3.0 and am transitioning from
Bitcask to LevelDB (or perhaps a Multi backend with LevelDB as the main),
which is all well understood. My question is regarding image data, and
other large binary data: Is one better than the other in terms of the
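For reference, a rough app.config sketch of the Multi backend setup mentioned
above, assuming Riak 1.x style settings; the backend names and data_root paths are
placeholders to adapt:
    %% riak_kv section: LevelDB as the default backend, with a Bitcask
    %% backend also available for assignment per bucket.
    {riak_kv, [
        {storage_backend, riak_kv_multi_backend},
        {multi_backend_default, <<"eleveldb_backend">>},
        {multi_backend, [
            {<<"eleveldb_backend">>, riak_kv_eleveldb_backend,
                [{data_root, "/var/lib/riak/leveldb"}]},
            {<<"bitcask_backend">>, riak_kv_bitcask_backend,
                [{data_root, "/var/lib/riak/bitcask"}]}
        ]}
    ]}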
Disclaimer: I haven't done this myself, or even attempted to compile the code
below, but in hopes it makes your life a little easier...
Here's some Erlang that should allow you to abstract away the checks. Assuming
that you have a function named "real_map_fun" that takes a single Riak object
as
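A rough sketch of that kind of wrapper, using real_map_fun/1 as the placeholder
name from above (an illustration, not the original attachment):
    %% Wrapper map phase: filters notfound results and tombstoned objects,
    %% then hands live objects to real_map_fun/1.
    map_with_checks({error, notfound}, _KeyData, _Arg) ->
        [];
    map_with_checks(RiakObj, _KeyData, _Arg) ->
        Metadata = riak_object:get_metadata(RiakObj),
        case dict:is_key(<<"X-Riak-Deleted">>, Metadata) of
            true  -> [];
            false -> real_map_fun(RiakObj)
        end.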
Just curious, what is the typical size and the overall range of sizes for your
image data?
Matthew
On Mar 6, 2013, at 6:08 PM, Bryan Hughes wrote:
> Hi Everyone,
>
> I am building a new 5 node cluster with 1.3.0 and am transitioning from
> Bitcask to LevelDB (or perhaps a Mulit with LevelDB
Hi Matthew,
Right now we are storing images captured from mobile phones in Riak. So
far it works really well as the image sizes range from a few hundred K
to a few MB. But, as camera resolutions increase, so does the data
size, plus, users are actually interested in the higher resolution
im
Bryan,
My job is to test and optimize leveldb. I do not have a test case with your
size objects. Hence I asked the specifics.
I do not have any data at this time to help with your question. But I will add
your parameters to my standard testing and see what I can find / optimize. I
am not c
Hi Tiago,
Mikhail Sobelov wrote an Erlang function a while back that'll take the
output of a map and store it to a bucket and key.
http://contrib.basho.com/save_reduce.html
Fair warning: it was written a few years back, so you might have to
revise the code a bit, but it's a good starting point.
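Roughly, the idea is a reduce phase that writes its input back to Riak. A sketch
along those lines; riak:local_client/0, Client:put/2, and the <<"results">> /
<<"latest">> bucket and key are assumptions to check against the linked code:
    %% Reduce phase that stores its input under a fixed bucket/key using
    %% the node-local client, then passes the values through unchanged.
    save_reduce(Values, _Arg) ->
        Obj = riak_object:new(<<"results">>, <<"latest">>, term_to_binary(Values)),
        {ok, Client} = riak:local_client(),
        Client:put(Obj, 1),
        Values.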