Re: Riak Recap for Dec. 13 - 14

2010-12-17 Thread Jeremiah Peschka
Daniel - You should be using some kind of load balancer to point clients at the Riak cluster since all nodes can handle communication. There's information in the wiki about how the keys are hashed: http://wiki.basho.com/pages/viewpage.action?pageId=1245320 and there is also info about how routing

Re: Riak Recap for Dec. 13 - 14

2010-12-17 Thread Daniel Woo
Thanks for your patience :-) So, when the client makes a call, it can just round-robin across nodes since they all gossip? But would it be better to guess which node holds the partition before making the call? Where can I find the documentation about how the partition distributions are shared between n
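The round-robin approach Daniel asks about can be sketched in a few lines. This is a minimal illustration under the assumption stated elsewhere in the thread (any node can serve any key); the node addresses are made up.

```python
# Minimal round-robin dispatch over cluster nodes. Since any Riak node
# can serve any request, the client does not need to guess the owning
# partition; a simple rotation (or a load balancer) is enough.
import itertools

nodes = ["riak1:8098", "riak2:8098", "riak3:8098", "riak4:8098"]
_cycle = itertools.cycle(nodes)

def pick_node():
    """Return the next node to send a request to."""
    return next(_cycle)

first_five = [pick_node() for _ in range(5)]
print(first_five)  # wraps back to riak1 after riak4
```

Guessing the owning partition client-side can save one internal hop, but it requires the client to track ring state; plain rotation keeps the client stateless.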

Re: Riak Recap for Dec. 13 - 14

2010-12-16 Thread Dan Reverri
Yes. Daniel Reverri Developer Advocate Basho Technologies, Inc. d...@basho.com On Thu, Dec 16, 2010 at 7:56 PM, Daniel Woo wrote: > Hi Daniel, > > So, the hashing algorithm is still consistent, but partitions (vnodes) are > re-distributed to nodes when new nodes are added, and the nodes gossip

Re: Riak Recap for Dec. 13 - 14

2010-12-16 Thread Daniel Woo
Hi Daniel, So, the hashing algorithm is still consistent, but partitions (vnodes) are re-distributed to nodes when new nodes are added, and the nodes gossip and share the knowledge of the partition distribution, right? So the client can query any of the nodes, if the node doesn't know where the pa

Re: Riak Recap for Dec. 13 - 14

2010-12-16 Thread Dan Reverri
Hi Daniel, Clients do not specify the partition when making a request. A client can request any key from any node in the cluster and Riak will return the associated value. Thanks, Dan Daniel Reverri Developer Advocate Basho Technologies, Inc. d...@basho.com On Thu, Dec 16, 2010 at 6:01 PM, Dan

Re: Riak Recap for Dec. 13 - 14

2010-12-16 Thread Daniel Woo
Hi Mark, Thanks for your explanation, so in this case the partitions would be re-distributed *from* Node1: p1 ~ p16 Node2: p17 ~ p32 Node3: p33 ~ p48 Node4: p49 ~ p64 *to* Node1: p1 ~ p13 (remove 3 partitions) Node2: p17 ~ p29 (remove 3 partitions) Node3: p33 ~ p45 (remove 3 partitions) Node4:

Re: Riak Recap for Dec. 13 - 14

2010-12-16 Thread Mark Phillips
Hey Daniel, [snip] > So, I guess Riak would have to re-hash the whole partitions into all the 5 > nodes, right? Is this done lazily when the node finds the requested data is > missing? > Or is there a way to handle this with consistent re-hashing so we can avoid > moving data around when new node

Re: Riak Recap for Dec. 13 - 14

2010-12-16 Thread Daniel Woo
Hi guys, If we have 64 partitions with 4 nodes, each of the nodes has 16 partitions like this Node1: p1 ~ p16 Node2: p17 ~ p32 Node3: p33 ~ p48 Node4: p49 ~ p64 Now if I add a new node Node 5, each node should handle 12.8 partitions, so the partitions should look like this Node1: p1 ~ p13 Node2

Riak Recap for Dec. 13 - 14

2010-12-15 Thread Mark Phillips
Evening, Morning, Afternoon to All, For today's Recap: A few blog posts, new Riak slides, new code and functionality for Ripple, new documentation for the Python client, and more. Enjoy, Mark Community Manager Basho Technologies wiki.basho.com twitter.com/pharkmillups ---- Riak Recap fo