Technically speaking, /search runs against the cluster plan, not all nodes, right?
We actually steal the cluster plan every n searches, then use that to
search directly against the solr nodes...
That being said, banging on solr will have an effect on kv from our
experience... YMMV.
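A rough sketch of the difference being described, assuming Riak 2.x's
/search/query path and Yokozuna's internal Solr listener on port 8093 (host,
ports, and paths are assumptions and vary by version; the "svan" index name is
borrowed from later in the thread):

import requests

RIAK_HTTP = "http://127.0.0.1:8098"   # assumed Riak HTTP listener
SOLR_HTTP = "http://127.0.0.1:8093"   # assumed Yokozuna-managed Solr port
INDEX = "svan"                        # index name taken from the report below

# Distributed query: Riak builds a coverage plan and merges per-node results.
dist = requests.get(f"{RIAK_HTTP}/search/query/{INDEX}",
                    params={"q": "*:*", "wt": "json", "rows": 0}).json()

# Direct query: only what this one Solr instance holds locally.
local = requests.get(f"{SOLR_HTTP}/internal_solr/{INDEX}/select",
                     params={"q": "*:*", "wt": "json", "rows": 0}).json()

print("via coverage plan:", dist["response"]["numFound"])
print("single solr node: ", local["response"]["numFound"])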
On May 15, 2016
We've got an expiry worker rig I can likely pass over offline. It's not
overly clever.
Basic idea: stream a feed of keys into a pool of workers that spin off
delete calls.
We feed this based on continuous searches of an expiry TTL field in all
keys.
It'd likely be better to run this from within the E
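A minimal sketch of that worker-pool idea, not the actual rig: the index name,
bucket, and the expiry_ts field below are assumptions, and it talks to Riak's
plain HTTP API.

import time
import requests
from concurrent.futures import ThreadPoolExecutor

RIAK = "http://127.0.0.1:8098"   # assumed Riak HTTP listener
INDEX = "expiry_idx"             # assumed search index
BUCKET = "data"                  # assumed bucket

def expired_keys(now):
    """Yield (bucket, key) pairs whose expiry_ts field is in the past."""
    resp = requests.get(f"{RIAK}/search/query/{INDEX}",
                        params={"q": f"expiry_ts:[* TO {now}]",
                                "wt": "json", "rows": 1000,
                                "fl": "_yz_rb,_yz_rk"}).json()
    for doc in resp["response"]["docs"]:
        yield doc["_yz_rb"], doc["_yz_rk"]

def delete_key(pair):
    bucket, key = pair
    requests.delete(f"{RIAK}/buckets/{bucket}/keys/{key}")

with ThreadPoolExecutor(max_workers=8) as pool:
    while True:                                   # "continuous" search loop
        list(pool.map(delete_key, expired_keys(int(time.time()))))
        time.sleep(30)                            # arbitrary sweep interval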
Quick question...
Is it safe to assume these two metrics endpoints are expected to display the
same stats?
watch -n 1 riak-admin stat show **get_fsm_time**
shows 0s for node_get_fsm_time_99/95 (occasionally stats will build up
for them, but then vanish back to 0).
The stats interface via :
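If the second endpoint is the HTTP /stats interface (an assumption on my
part), a quick way to compare the two is something like:

import requests

stats = requests.get("http://127.0.0.1:8098/stats").json()   # host/port assumed
for name in ("node_get_fsm_time_95", "node_get_fsm_time_99"):
    print(name, "=", stats.get(name))
# Compare against:  riak-admin stat show '**get_fsm_time**'
# Note: the get_fsm_time percentiles are computed over a recent window,
# which would explain them dropping back to 0 when no gets are happening.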
I'd lean towards AAE issues on Yokozuna... Same problems we were having
with our 'spaces-in-keys' issue... Once we cleaned those up, things were
great again.
On Wed, Mar 4, 2015 at 8:36 PM, Santi Kumar wrote:
> Riak 2.0.0
> On Mar 5, 2015 12:31 AM, "Christopher Meiklejohn"
> wrote:
>
>>
Scratch the above. The error is maintained appropriately. I assume then
the only way to change n-vals after the fact is to entirely drop the
search index and reindex? Works for me, I guess.
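For what it's worth, a hedged sketch of that drop-and-reindex path over Riak
2.x's HTTP admin API; the index name, bucket name, schema, and n_val below are
placeholders, and the index has to be detached from any bucket before it can
be deleted.

import requests

RIAK = "http://127.0.0.1:8098"   # assumed Riak HTTP listener
INDEX = "svan"                   # placeholder index name

# Detach the index from the bucket that uses it (bucket name is a placeholder).
requests.put(f"{RIAK}/buckets/svan_bucket/props",
             json={"props": {"search_index": "_dont_index_"}})

# Drop the old index, then recreate it with the desired n_val.
requests.delete(f"{RIAK}/search/index/{INDEX}")
requests.put(f"{RIAK}/search/index/{INDEX}",
             json={"schema": "_yz_default", "n_val": 3})

# Re-attach the index, then rewrite (or let AAE repair) the objects to reindex.
requests.put(f"{RIAK}/buckets/svan_bucket/props",
             json={"props": {"search_index": INDEX}})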
>> On Tue, Jan 21, 2014 at 1:46 PM, John O'Brien wrote:
>>>
>>> Issue:
are you using?
>
>
> -Z
>
>
> On Tue, Jan 21, 2014 at 1:46 PM, John O'Brien wrote:
>>
>> Issue:
>>
>> When running searches against a single dev node cluster, pre-populated
>> with 1000 keys, bitcask backend, search=on and a /search/svan?q=*
Issue:
When running searches against a single dev node cluster, pre-populated
with 1000 keys, bitcask backend, search=on and a /search/svan?q=*
search URI, the solr response is coming back with three different
result sets: one with 330 values, another 354, another 345. The range of
keys 0-1000 are split
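To see the behaviour being reported, something like this sketch (host and
port assumed; newer releases use /search/query/<index> rather than the
/search/<index> URI above) runs the same match-all query a few times and
compares the counts:

import requests

for i in range(5):
    resp = requests.get("http://127.0.0.1:8098/search/svan",
                        params={"q": "*:*", "wt": "json"}).json()
    print(f"run {i}: numFound = {resp['response']['numFound']}")

With all replicas indexed, the count should hold steady at 1000; counts that
jump around like 330/354/345 usually mean some partitions are missing
documents, so each coverage plan returns a different subset.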