Hello Riak-Java users.
Just trying to figure out how I can register the Scala Jackson module with
Jackson so that when Riak goes to convert objects to be saved in the
database it will pick up the module. The README at
https://github.com/FasterXML/jackson-module-scala makes registering the
module itself look rather simple.
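For reference, the registration step from that README looks something like the
sketch below in plain Scala; the open question is still how to get the Riak
client's internal converter to use a mapper configured this way (the User case
class is just a made-up example):

    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.module.scala.DefaultScalaModule

    // Hypothetical domain class, only for the round-trip check below.
    case class User(name: String, tags: List[String])

    object ScalaModuleCheck extends App {
      // The registration step the jackson-module-scala README describes:
      val mapper = new ObjectMapper()
      mapper.registerModule(DefaultScalaModule)

      // Round-trip a Scala case class to confirm the module is active.
      val json = mapper.writeValueAsString(User("justin", List("riak", "scala")))
      val back = mapper.readValue(json, classOf[User])
      println(json + " -> " + back)
    }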
Hello everyone,
Our Riak cluster has failed after what seems to be an issue in LevelDB. We noticed
that a process running a segment compaction has started throwing errors non-stop.
I opened a Stack Overflow question here, where you will find a lot of log data:
http://stackoverflow.com/questions/201728
p,[298,304]}],1383756713657753},{<<"31715058">>,[{p,[325]}],1383611996352193}]},4}}}
[{riak_core_handoff_sender,start_fold,5,[{file,"src/riak_core_handoff_sender.erl"},{line,161}]}]
--
I am aware that values that
.
>
> Search index repair is documented here:
> http://docs.basho.com/riak/1.4.0/cookbooks/Repairing-Search-Indexes/
> However, you would first need to modify your extractor so that it does not
> produce search keys larger than 32k, or the corruption issues will recur.
>
> Joe Caswell
>
t;>>,<<32897-byte string>>}
>
> Hope this helps.
>
> Joe
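To make the 32k advice above concrete on the application side: one defensive
option is to check (or clamp) string fields before they ever reach the search
precommit hook. A rough sketch, assuming the offending values are plain string
fields on your own objects and that the limit is 32 KiB of encoded bytes (both
assumptions, not something the client enforces for you):

    import java.nio.charset.StandardCharsets

    object SearchKeyGuard {
      // 32k limit on individual search keys, per the advice above.
      val MaxKeyBytes: Int = 32 * 1024

      // True if the value would exceed the limit once encoded as UTF-8.
      def tooLarge(value: String): Boolean =
        value.getBytes(StandardCharsets.UTF_8).length > MaxKeyBytes

      // Crude clamp: keep the first 32k bytes and decode back to a String.
      // A real version should avoid cutting a multi-byte character in half.
      def clamp(value: String): String =
        if (!tooLarge(value)) value
        else new String(value.getBytes(StandardCharsets.UTF_8).take(MaxKeyBytes),
                        StandardCharsets.UTF_8)
    }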
> From: Justin Long
> Date: Sunday, November 24, 2013 5:17 PM
> To: Joe Caswell
> Cc: Richard Shaw, riak-users
> Subject: Re: Runaway "Failed to compact" errors
>
> Thanks Joe
fields. Checking this Javadoc
http://basho.github.io/riak-java-client/1.0.5/com/basho/riak/client/bucket/WriteBucket.html
shows that search=true is written for the bucket.
Would that cause the entire object to be indexed, not just the explicit fields?
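For what it's worth, this is roughly how a bucket ends up with search=true
through the 1.x client; the enableForSearch() call below is an assumption from
memory (check the WriteBucket Javadoc above for the exact method), and as far
as I understand the stock JSON extractor then walks every field of the stored
document, which would explain all fields being indexed:

    import com.basho.riak.client.{IRiakClient, RiakFactory}

    object SearchBucketSketch extends App {
      // Host/port and bucket name are made up for the example.
      val client: IRiakClient = RiakFactory.pbcClient("127.0.0.1", 8087)

      // Assumption: enableForSearch() is the WriteBucket call that writes
      // search=true (i.e. installs the search precommit hook) -- confirm
      // against the Javadoc linked above for your client version.
      val bucket = client.createBucket("users").enableForSearch().execute()

      client.shutdown()
    }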
On Nov 24, 2013, at 3:02 PM, Justin Long wrote:
Hi everyone,
Experiencing some weird behaviour in a cluster of 3 nodes. As of this morning,
we noticed a massive performance lag, and when checking the logs we saw that
Active Anti-Entropy Exchange had suddenly begun initiating a huge series of transfers.
Here’s what we see in the logs:
2014-01-02 19:26:34.396
are less than 32kb in size). Thanks for your help.
FYI, we're using Riak 1.4.6.
On Jan 2, 2014, at 11:31 AM, Justin Long wrote:
> Hi everyone,
>
> Experiencing some weird behaviour in a cluster of 3 nodes. As of this
> morning, we noticed a massive performance lag and when checkin