Thanks very much Sam... much appreciated.
On Fri, May 27, 2016 at 4:55 PM, Vikram Lalit wrote:
> Thanks Alexander ... makes complete sense...
>
> On Fri, May 27, 2016 at 4:39 PM, Alexander Sicular
> wrote:
>
>> There is a reason Basho's minimum production deployment recommendation is
>> 5 machines. It is to ensure that, when operating with default settings,
>> each replica of any key is stored on distinct physical nodes. [...]
Vikram:
To understand how Riak Objects are 'routed' to a specific set of
partitions, have a look at [1].
A Little Riak Book [2] is a good introduction to the concepts. I found
it useful when I started using Riak.
I found [3], [4], [5] and [6] very useful for understanding the
intricacies of diffe[...]
Thanks Alexander ... makes complete sense...
On Fri, May 27, 2016 at 4:39 PM, Alexander Sicular
wrote:
> There is a reason Basho's minimum production deployment recommendation is
> 5 machines. It is to ensure that, when operating with default settings,
> each replica of any key is stored on distinct physical nodes. [...]
There is a reason Basho's minimum production deployment recommendation is 5
machines. It is to ensure that, when operating with default settings, each
replica of any key is stored on distinct physical nodes. It has to do with
the allocation of virtual nodes to physical machines and replica sets.
[...]
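The wrap-around effect behind this recommendation can be sketched with a toy model. This is not Riak's actual claim algorithm (riak_core's claim function is more sophisticated), just a hypothetical naive round-robin assignment of 64 partitions to 3 nodes, showing how a preference list can put two replicas on the same physical machine:

```python
# Toy model of a Riak ring: 64 partitions claimed round-robin by
# 3 physical nodes. Hypothetical -- Riak's real claim algorithm is
# more involved -- but the wrap-around effect is the same.
RING_SIZE = 64
NODES = ["node1", "node2", "node3"]
N_VAL = 3

# Each partition's owner under a naive round-robin claim.
owner = {p: NODES[p % len(NODES)] for p in range(RING_SIZE)}

def preflist(start, n_val=N_VAL):
    """Owners of the n_val consecutive partitions that would hold
    the replicas of a key hashing into partition `start`."""
    return [(p % RING_SIZE, owner[p % RING_SIZE])
            for p in range(start, start + n_val)]

# Mid-ring, the three replicas land on three distinct nodes:
print(preflist(10))   # [(10, 'node2'), (11, 'node3'), (12, 'node1')]

# But 64 % 3 != 0, so a preference list crossing the wrap repeats a node:
print(preflist(62))   # [(62, 'node3'), (63, 'node1'), (0, 'node1')]
```

With more physical nodes than n_val, consecutive partitions are far less likely to share an owner, which is the intuition behind the 5-machine recommendation.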
Yes, I'm now beginning to realize the configs are much more important than I
originally thought!
Thanks again!!!
On Fri, May 27, 2016 at 4:28 PM, DeadZen wrote:
> np, number of nodes and ring size play a lot into that,
> as do your r/w settings.
> might be fun to create a visualization one day
np, number of nodes and ring size play a lot into that,
as do your r/w settings.
might be fun to create a visualization one day ;)
On Friday, May 27, 2016, Vikram Lalit wrote:
> Ok got it thanks.
> On May 27, 2016 4:02 PM, "DeadZen" > wrote:
>
>> reiterating my last email
>>
>> > there's no guarantee vnodes are assigned to unique servers.. [...]
Ok got it thanks.
On May 27, 2016 4:02 PM, "DeadZen" wrote:
> reiterating my last email
>
> > there's no guarantee vnodes are assigned to unique servers..
>
> referencing the docs
>
> > Nodes *attempt* to claim their partitions at intervals around the ring
> > such that there is an even distribution amongst the member nodes and that
> > no node is responsible for more than one replica of a key.
reiterating my last email
> there's no guarantee vnodes are assigned to unique servers..
referencing the docs
> Nodes *attempt* to claim their partitions at intervals around the ring
> such that there is an even distribution amongst the member nodes and that
> no node is responsible for more than one replica of a key.
I have a Riak cluster (of 3 nodes, with 64 partitions and n_val = 3) but I
find that for some objects, their hosting partitions / vnodes are not
spread out across the 3 nodes. In some cases, 2 of them are on 1 node and
the third is on a second node. That runs contrary to my understanding (link
here [...]