By the way,
I think that the number of repaired keys is pretty high.
2014-01-03 06:33:42.857 [info] <0.31440.2586>@riak_kv_exchange_fsm:key_exchange:206 Repaired 1491787 keys during active anti-entropy exchange of {468137243207554840987117797979434404733540892672,3} between {4738462339783786805113
This is the only thing related to AAE that exists in my app.config. I
haven't changed any default values...
%% Enable active anti-entropy subsystem + optional debug messages:
%% {anti_entropy, {on|off, []}},
%% {anti_entropy, {on|off, [debug]}},
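As far as I can tell, with those lines commented out the 1.4.x default keeps AAE on; spelled out explicitly in the riak_kv section it would look roughly like this (the data dir path below is just illustrative, not from my actual file):

    %% riak_kv section of app.config -- rough sketch, values are illustrative
    {riak_kv, [
        %% active anti-entropy on, no debug output
        {anti_entropy, {on, []}},
        %% where the AAE hash trees (the anti_entropy directory) live
        {anti_entropy_data_dir, "/var/lib/riak/anti_entropy"},
        ...
    ]}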
Edgar:
Could you attach the AAE section of your app.config? I’d like to look into
this issue further for you. Something I think you might be running into is
https://github.com/basho/riak_core/pull/483.
The issue of concern is that the LevelDB bloom filter is not enabled properly
for the inst
Hey guys!
Nothing on this one?
Btw: Happy new year :)
On 27 December 2013 22:35, Edgar Veiga wrote:
> This is a du -hs * of the riak folder:
>
> 44G anti_entropy
> 1.1M kv_vnode
> 252G leveldb
> 124K ring
>
> It's a 6 machine cluster, so ~1512G of levelDB.
>
> Thanks for the tip, I'll upgrade
This is a du -hs * of the riak folder:
44G anti_entropy
1.1M kv_vnode
252G leveldb
124K ring
It's a 6-machine cluster, so ~1512G of leveldb in total.
Thanks for the tip, I'll upgrade in the near future!
Best regards
On 27 December 2013 21:41, Matthew Von-Maszewski wrote:
> I have a query out to the de
That'd be version 1.4.6.
Sent from my iPhone
> On Dec 27, 2013, at 4:42 PM, Matthew Von-Maszewski wrote:
>
> P.S. Unrelated to your question: Riak 1.4.4 is available for download. It
> has a couple of nice bug fixes for leveldb.
I have a query out to the developer who can better respond to your follow-up
questions. It might be Monday before we get a reply due to the holidays.
Do you happen to know how much data is in the leveldb dataset and/or one vnode?
Not sure it will change the response, but it might be nice to have.
Ok, thanks for confirming!
Is it normal that this action affects the overall state of the cluster? On
the 26th it started the regeneration, and the response times of the cluster
rose to values we had never seen before. It was a day of heavy traffic, but
everything was going quite OK until it started the re
Yes. Confirmed.
There are options available in app.config to control how often this occurs and
how many vnodes rehash at once: defaults are every 7 days and two vnodes per
server at a time.
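For anyone searching later: if I recall the 1.4.x defaults correctly, the knobs described above live in the riak_kv section of app.config and look roughly like this (times are in milliseconds; treat the values as illustrative rather than copied from this cluster):

    {riak_kv, [
        %% expire and rebuild each vnode's hash trees after 7 days
        {anti_entropy_expire, 604800000},      %% 7 days in ms
        %% at most two trees built/exchanged at once per node
        {anti_entropy_concurrency, 2},
        %% throttle tree builds to one per hour
        {anti_entropy_build_limit, {1, 3600000}}
    ]}

Raising anti_entropy_expire spreads rebuilds further apart, and lowering anti_entropy_concurrency reduces how much rebuild work runs at once, at the cost of trees staying stale longer.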
Matthew Von-Maszewski
On Dec 27, 2013, at 13:50, Edgar Veiga wrote:
> Hi!
>
> I've been trying to