> RE: large ring size warning in the release notes, is the performance
> degradation linear below 256? That is, until the major release that fixes
> this, is it best to keep ring sizes at 64 for best performance?

The large ring size warning in the release notes is predominantly
related to an issue with Riak's ring gossip functionality.
Adding/removing nodes, changing bucket properties, and setting cluster
metadata all result in a brief period (usually a few seconds) where
gossip traffic increases significantly. The size of the ring
determines both the number of gossip messages that occur during this
window, as well as the size of each message. With large rings, it is
possible that messages are generated faster than they can be handled,
resulting in large message queues that impact cluster performance and
tie up system memory until the message queues are fully drained. In
general, there is no problem as long as your hardware is fast enough
to process the brief spike in gossip traffic in close to real time.
Concerning this specific issue, choosing a ring size smaller than the
maximum you can handle does not provide any additional performance
gains.
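
To put rough numbers on that, here is a toy model (plain Python,
purely illustrative; the per-partition constants are invented and are
not Riak internals) of why the burst grows so quickly when both the
message count and the per-message size scale with the ring:

    # Toy model only: per the description above, both the number of
    # gossip messages in the burst and the size of each message are
    # assumed to grow linearly with the ring size. The constants are
    # made up for illustration.
    def gossip_burst_bytes(ring_size, msgs_per_partition=1,
                           bytes_per_partition=100):
        messages = ring_size * msgs_per_partition        # more partitions -> more messages
        message_size = ring_size * bytes_per_partition   # each message carries ring state
        return messages * message_size                   # burst grows roughly quadratically

    for size in (64, 256, 1024):
        print(size, gossip_burst_bytes(size))

Under those assumptions a 1024-partition ring produces a burst roughly
256 times larger than a 64-partition ring, which is why having
hardware fast enough to absorb the spike matters more than picking an
artificially small ring.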

However, unrelated to this issue, there are general performance
considerations related to ring size versus the number of nodes in your
cluster. Given a fixed number of nodes, a larger ring results in more
vnodes per node. This allows more process concurrency, which may
improve performance. However, each vnode runs its own backend
instance with its own set of on-disk files, and competing
reads/writes to different files may result in more I/O contention
than having fewer vnodes per node would. The overall performance
is going to depend largely on your OS, your file system, and your
traffic pattern, so it's hard to give hard-and-fast rules.
The other issue is 2I performance. A secondary index query sends
requests to a covering set of vnodes, which works out to roughly
ring_size / N requests; therefore increasing ring_size without
increasing N means more requests per 2I query. Again, the right
answer depends largely on your use case.
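
As a rough back-of-the-envelope sketch of those two ratios (plain
Python, not Riak code; the covering-set size is the approximate
ring_size / N figure from above, and N=3 is simply the default n_val):

    import math

    # Back-of-the-envelope only: vnodes per node and the approximate
    # covering-set size for a 2I query, per the ratios described above.
    def layout(ring_size, nodes, n_val=3):
        vnodes_per_node = ring_size // nodes            # backend instances per host
        coverage_vnodes = math.ceil(ring_size / n_val)  # vnodes hit by one 2I query
        return vnodes_per_node, coverage_vnodes

    print(layout(64, 4))     # -> (16, 22)
    print(layout(1024, 4))   # -> (256, 342)

Going from 64 to 1024 partitions on the same 4 nodes means 16x more
backend instances per host and roughly 16x more vnodes touched by each
2I query.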

Overall, I believe we normally recommend between 10 and 50 vnodes per
node, with the ring size rounded up to the next power of two.
Personally, I think 16 vnodes per node is a good number, which matches
the common 64-partition, 4-node Riak cluster. Thus, choosing a ring
size based on that ratio and your expected number of future nodes is a
reasonable choice. Just be sure to stay under 1024 until the issue
with gossip overloading the cluster is resolved.
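
For what it's worth, here is that sizing rule as a quick sketch (plain
Python; the 16 vnodes per node target is just my personal preference
from above, and the 1024 cap reflects the gossip issue):

    # Sketch of the rule of thumb above: target vnodes/node times the
    # number of nodes you expect to grow to, rounded up to the next
    # power of two, and capped at 1024 until the gossip issue is fixed.
    def suggested_ring_size(expected_nodes, vnodes_per_node=16, cap=1024):
        size = 1
        while size < expected_nodes * vnodes_per_node:
            size *= 2
        return min(size, cap)

    print(suggested_ring_size(4))     # -> 64
    print(suggested_ring_size(12))    # -> 256
    print(suggested_ring_size(100))   # -> 1024 (capped)

Whatever value you land on goes in ring_creation_size (in the
riak_core section of app.config) before the cluster is first started,
since the ring size cannot be changed afterward without rebuilding the
cluster.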

-Joe

On Sat, Nov 5, 2011 at 9:49 AM, Jim Adler <jim.ad...@comcast.net> wrote:
> RE: large ring size warning in the release notes, is the performance
> degradation linear below 256? That is, until the major release that fixes
> this, is it best to keep ring sizes at 64 for best performance?
> Jim
>
> Sent from my phone. Please forgive the typos.
> On Nov 4, 2011, at 7:20 PM, Jared Morrow <ja...@basho.com> wrote:
>
> As we've mentioned earlier, the 1.0.2 release of Riak is coming soon. Like
> we did with Riak 1.0.0, we are providing a release candidate for testing
> before we release 1.0.2 final.
> You can find the packages here:
> http://downloads.basho.com/riak/riak-1.0.2rc1/
> The release notes have been updated and can be found here:
>  https://github.com/basho/riak/blob/1.0/RELEASE-NOTES.org
> Thank you, as always, for continuing to provide bug reports and feedback.
> -Jared
>



-- 
Joseph Blomstedt <j...@basho.com>
Software Engineer
Basho Technologies, Inc.
http://www.basho.com/

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
