It happened again today, though I was not available to watch it at the time.
Three nodes each showed riak_kv being stopped for one minute:
2013-04-02 11:10:57.923 [info] <0.2833.1447>@riak_kv_app:check_kv_health:239
Disabling riak_kv due to large message queues. Offending vnodes:
[{319703483166
If your n_val is still three, then three sad nodes is a suspicious
number. My first guess would be a very large value being put in and
other requests backing up behind it. That would explain the
health-check failures (especially if you're normally doing a lot of
small/fast reads and writes).
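If you want to sanity-check that guess, one quick test is to pull a suspect
key over Riak's HTTP interface and look at its size. A minimal sketch; the
host, port, bucket, and key below are placeholders, not values from your
cluster:

    # Rough check of the "one very large value" hypothesis: fetch a suspect
    # object over Riak's HTTP API and print its size. Host/port, bucket, and
    # key are placeholders for your environment.
    import urllib.request

    url = "http://127.0.0.1:8098/riak/mybucket/suspect_key"
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    print("stored value is", len(body), "bytes")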
Howe
On Mon, Mar 4, 2013 at 9:36 AM, vvsanil wrote:
> Is there any way to set the default search schema analyzer_factory to
> 'standard_analyzer_factory' for all future buckets (i.e. without having to
> manually set the schema each time a new bucket is created)?
>
Yes, look for the file `default.def` und
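For reference, in the legacy Riak Search schema format the analyzer is set at
the top level of the schema (and can be repeated per field). This is a sketch
from memory; the version number and field layout are illustrative, not taken
from your install:

    {schema,
     [
      {version, "1.1"},
      {n_val, 3},
      {default_field, "value"},
      {analyzer_factory, {erlang, text_analyzers, standard_analyzer_factory}}
     ],
     [
      {dynamic_field,
       [
        {name, "*"},
        {analyzer_factory, {erlang, text_analyzers, standard_analyzer_factory}}
       ]}
     ]}.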
Tony,
Riak Search is treating the '&' the same as 'AND'. If you encode it as
%5C%26 it should work.
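In other words, escape the ampersand with a backslash and then URL-encode the
whole term before it goes into the query string. A quick illustration in
Python; the search term here is made up:

    # Escape '&' with a backslash so Riak Search treats it as a literal
    # character rather than the AND operator, then percent-encode it for
    # the HTTP query string. "foo\&bar" is a made-up example term.
    from urllib.parse import quote

    raw_term = r"foo\&bar"
    print(quote(raw_term, safe=""))   # -> foo%5C%26bar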
-Z
Hi Dev,
This approach *should* work, but a few caveats to be aware of before you go
this route:
* You're relying on the OS to handle the CPU/IO task sharing for the two
different services
* Related to the above: depending on the cluster load/task, you could have
resource contention that could resu
Hi Daniel,
Sorry for the delay here.
It looks like it's going to take more investigation to nail down the
cause. Engel just opened an issue you can keep an eye on:
https://github.com/basho/riak/issues/305
Thanks for reporting. Feel free to add any relevant information to the
issue.
Mark
On
Hi all, I am new to this list. Thanks for taking the time to read my
questions! I just want to know if the data throughput I am seeing is
expected for the bitcask backend or if it is too low.
I am doing a preliminary feasibility study to decide whether we should
implement a Riak data store. Our appli
Hi Mark
Thanks for the info. For a moment I thought we were the only ones
experiencing this.
At the moment the only workaround for us is to wait until the vnode transfer
is finished.
I suspect attaching a node will also trigger that behaviour. Considering
that expanding the cluster from 3 to 4 nodes can tak