Hi there,
I'm currently testing a custom component in my Riak Search system. As I
need a suggestion mechanism from the Solr index, I implemented the
Suggester component (https://wiki.apache.org/solr/Suggester).
It seems to work correctly, yet I have some questions regarding the usage
of custom So
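For context, a minimal Suggester setup in `solrconfig.xml` typically looks something like the sketch below (newer SuggestComponent style; the component name `mySuggester` and field `name_s` are placeholders, not taken from this thread — adjust to your own schema):

```xml
<!-- Hypothetical example; verify element names against your Solr version -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">name_s</str>
    <str name="suggestAnalyzerFieldType">string</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">mySuggester</str>
    <str name="suggest.count">10</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```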
A little follow-up for you guys, since I went offline for quite some time.
As suggested, it was a Solr performance issue; we were able to prove
that my old 5 hosts were able to handle the load without Solr/Yokozuna.
The fact was that I lacked CPU on my hosts, as well as RAM. Since Solr is
pretty re
Awesome! Yeah, Solr likes resources. If you're on 3 nodes now, consider
adjusting your n_val from default 3 to 2. With default ring_size of 64 and
n_val of 3 and a cluster size less than 5 you are not guaranteed to have
all copies of your data on distinct physical nodes. Some nodes will receive
2 copi
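For anyone wanting to try the n_val change: in Riak 2.x, bucket properties can be set via a bucket type (commands from memory — verify against your version's docs; `n2` is a made-up type name). Note that lowering n_val on buckets that already hold data does not automatically clean up the extra replicas, so it's safest on fresh buckets:

```
riak-admin bucket-type create n2 '{"props":{"n_val":2}}'
riak-admin bucket-type activate n2
riak-admin bucket-type status n2
```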
The docs (http://docs.basho.com/riak/kv/2.1.4/configuring/basic/#ring-size)
seem to imply that there's no easy, non-destructive way to change a
cluster's ring size live in Riak 1.4.x.
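For what it's worth, ring size has to be chosen before the cluster is first started. In 2.x it lives in riak.conf; on 1.4.x it's `ring_creation_size` in app.config (sketch below — double-check the key name for your version):

```
## riak.conf -- set before the node first joins a cluster; changing it
## afterwards on 1.4.x effectively means standing up a new cluster and
## migrating the data
ring_size = 128
```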
I thought about replacing one node at a time, but you can't join a new node
or replace an existing one with a node t
What's the problem currently?
Something not working on FreeBSD?
On Thursday, May 26, 2016, Seema Jethani wrote:
> Hello All,
> Basho is no longer directly supporting FreeBSD due to low levels of
> adoption amongst our commercial customers, but we are looking for members
> of the community to he
I have a Riak cluster (of 3 nodes, with 64 partitions and n_val = 3) but I
find that for some objects, their hosting partitions / vnodes are not
spread out across the 3 nodes. In some cases, 2 of them are on 1 node and
the third is on a second node. That runs contrary to my understanding (link
here
reiterating my last email
> there's no guarantee vnodes are assigned to unique servers..
referencing the docs
> Nodes *attempt* to claim their partitions at intervals around the ring
such that there is an even distribution amongst the member nodes and that
no node is responsible for more than one
Ok got it thanks.
On May 27, 2016 4:02 PM, "DeadZen" wrote:
> reiterating my last email
>
> > there's no guarantee vnodes are assigned to unique servers..
>
> referencing the docs
>
> > Nodes *attempt* to claim their partitions at intervals around the ring
> such that there is an even distribution
np, number of nodes and ring size play a lot into that,
as do your r/w settings.
might be fun to create a visualization one day ;)
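In that spirit, even a toy one-liner can visualize ring ownership. This is a deliberately simplified round-robin model, not Riak's actual claim algorithm, but it already shows the wraparound problem:

```python
RING_SIZE = 16  # small ring for readability; Riak's default is 64
NODES = "ABC"   # three physical nodes, one letter each

# Simplified model: partitions handed out round-robin around the ring.
ring = "".join(NODES[i % len(NODES)] for i in range(RING_SIZE))
print(ring)  # ABCABCABCABCABCA -- note the wraparound: 'A' owns both ends
```

Because 16 isn't divisible by 3, the last partition and the first belong to the same node, which is exactly where adjacent-replica trouble starts.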
On Friday, May 27, 2016, Vikram Lalit wrote:
> Ok got it thanks.
> On May 27, 2016 4:02 PM, "DeadZen" wrote:
>
>> reiterating my last email
>>
>> > there's no gu
Yes, I am now beginning to realize the configs are much more important than
I originally thought!
Thanks again!!!
On Fri, May 27, 2016 at 4:28 PM, DeadZen wrote:
> np, number of nodes and ring size play a lot into that.
> as do your r/w settings.
> might be fun to create a visualization one day
There is a reason Basho's minimum production deployment recommendation is 5
machines. It is to ensure that, when operating with default settings, each
replica of any key is stored on distinct physical nodes. It has to do with
the allocation of virtual nodes to physical machines and replica sets.
Re
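Alexander's point can be illustrated with a small simulation. The round-robin claim below is a simplified model — Riak's real claim algorithm tries harder to spread replicas — but it shows why 3 nodes with ring_size 64 and n_val 3 can land two replicas of a key on one machine:

```python
RING_SIZE = 64  # default ring size
N_VAL = 3       # default replication factor
NODES = 3       # physical nodes in the cluster

# Simplified model: partitions handed out round-robin. Riak's real claim
# algorithm is smarter, but with fewer than 5 nodes it still cannot
# guarantee distinct physical nodes for every replica.
owner = [i % NODES for i in range(RING_SIZE)]

def preflist(start):
    """Preference list: owners of the n_val consecutive partitions."""
    return [owner[(start + j) % RING_SIZE] for j in range(N_VAL)]

overlapping = [p for p in range(RING_SIZE) if len(set(preflist(p))) < N_VAL]
print(len(overlapping), "of", RING_SIZE, "preference lists reuse a node")
for p in overlapping:
    print("partition", p, "-> nodes", preflist(p))
```

In this toy model the wraparound partitions (62 and 63) get preference lists that repeat a physical node, matching what Vikram observed on his 3-node cluster.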
Thanks Alexander ... makes complete sense...
On Fri, May 27, 2016 at 4:39 PM, Alexander Sicular
wrote:
> There is a reason Basho's minimum production deployment recommendation is
> 5 machines. It is to ensure that, when operating with default settings,
> each replica of any key is stored on dist
Vikram:
To understand how Riak objects are 'routed' to a specific set of
partitions, have a look at [1].
A Little Riak Book [2] is a good introduction to the concepts. I found
it useful when I started using Riak.
I found [3], [4], [5] and [6] very useful for understanding the
intricacies of diffe