What version of riak_kv is behind this riak_cs install, please? Is it really
2.1.3 as stated below? This looks and sounds like sibling explosion, which is
fixed in Riak 2.0 and above. Are you sure that you are using the DVV-enabled
setting for the riak_cs bucket properties? Can you post your bucket properties?
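For reference, one hedged way to check this against a running node (host, port, and bucket name here are placeholders for your environment, and these commands assume the Riak 2.x HTTP API and riak-admin tooling):

```shell
# Inspect the properties of a bucket; look for "dvv_enabled": true
# and the sibling-related settings such as "allow_mult".
curl -s 'http://127.0.0.1:8098/buckets/example-bucket/props'

# On Riak 2.x, bucket-type properties can also be shown directly:
riak-admin bucket-type status default
```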
Hm, definitely not a cronjob. I'll look at our app and see if there's
anything that does something like that there.
On Wed, Jun 15, 2016 at 9:10 PM, Luke Bakken wrote:
> Hi Johnny,
>
> Since this seems to happen regularly on one node on your cluster (not
> necessarily the same node), do you have
Hello.
I see a very interesting and confusing thing.
From my previous letter you can see that the sibling count on manifest objects
is about 100 (actually it is in the range 100-300). Unfortunately my problem is
that almost all PUT requests are failing with a 500 Internal Server Error.
Today I've tried to set
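As a side note, the sibling count on a given manifest object can be inspected from the command line. This is a hedged sketch against the Riak HTTP API; BUCKET and KEY are hypothetical placeholders, and it assumes allow_mult is true on that bucket:

```shell
BUCKET=example-bucket
KEY=example-key

# When siblings exist, a plain GET returns "300 Multiple Choices"
# with a body listing one vtag per sibling, so counting the vtag
# lines gives the sibling count.
curl -s "http://127.0.0.1:8098/buckets/${BUCKET}/keys/${KEY}" \
  | tail -n +2 | wc -l
```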
I want to add one thing to my letter.
Yesterday (maybe the day before yesterday) I joined 3 nodes to the cluster
plan. Then I reviewed it (to understand how my cluster would be
rebalanced). Then I cleared the plan without committing it.
riak-admin member-status does not show the new nodes
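For clarity, the join/review/clear sequence described above corresponds to the staged-clustering commands (node names here are hypothetical):

```shell
riak-admin cluster join riak@newnode1   # stage a join for each new node
riak-admin cluster plan                 # review the staged plan and ring changes
riak-admin cluster clear                # discard the staged changes without committing
riak-admin member-status                # cleared (never-committed) nodes do not appear here
```

Since `cluster clear` discards the staged changes before any commit, the new nodes never became cluster members, which would explain their absence from member-status.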
Hello,
I thought I knew the meaning of the command riak-admin down NODE pretty
well. Until this evening.
So the question is: what is the exact meaning of this command?
My opinion was the following: during ownership handoff (maybe hinted
handoff too, I am not sure), when one of the members of riak
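For context, this is the usual shape of the command being asked about, as a hedged sketch (the node name is hypothetical, and it assumes the target node is already stopped or unreachable, with the command run from a healthy member):

```shell
riak-admin down riak@failednode   # mark an unreachable member as down
riak-admin ring-status            # confirm the node now shows as "down"
```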
Definitely an excellent response from Alexander, describing the overall state
My suggestion would be to try at least two approaches and develop a
way to compare them.
Do you want to do statistics? Are the entries for those statistics
fairly immutable? It seems TS would be a good choice.
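To make the TS suggestion concrete, a sketch of what mostly-immutable statistics entries might look like as a Riak TS table, entered via riak-shell; the table and column names are hypothetical:

```sql
CREATE TABLE user_stats (
    user_id   VARCHAR   NOT NULL,
    metric    VARCHAR   NOT NULL,
    time      TIMESTAMP NOT NULL,
    value     DOUBLE,
    PRIMARY KEY (
        (user_id, metric, QUANTUM(time, 15, 'm')),
         user_id, metric, time
    )
);
```

The quantum in the partition key groups rows into 15-minute blocks, which suits write-once statistics that are queried by time range.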
User v
Hi Alexander, thanks for your response.
2016-06-16 6:19 GMT+02:00 Alexander Sicular :
>
> My question to you is what is your
> use case?
>
This is the problem: I don't know it clearly. Some use cases are very
clear. For example, I need to verify username and password for a big bunch of
users,