It should all be under 400GB on each node.

My question is: is there additional overhead when replicas make requests to one another for keys they don't have? How much overhead is that?
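As a back-of-the-envelope way to think about it: if keys are evenly distributed and requests hit the 4 nodes uniformly, the chance that the coordinator node does not itself hold a replica of the requested key is roughly 1 - RF/N, and those reads pay an extra network hop. This is illustrative arithmetic under those assumptions, not a measurement or a Cassandra API:

```python
# Back-of-the-envelope: probability a random read needs an extra network
# hop because the coordinator does not hold a replica of the key.
# Assumes uniform key distribution and uniformly distributed requests
# (illustrative only; real overhead also depends on consistency level,
# caches, and network latency).

def extra_hop_probability(replication_factor: int, num_nodes: int) -> float:
    """Chance the coordinator node does not own a replica of the key."""
    return 1.0 - replication_factor / num_nodes

for rf in (1, 2):
    p = extra_hop_probability(rf, 4)
    print(f"RF={rf}: {p:.0%} of random reads need a remote fetch")
```

So with RF=1 roughly 75% of random reads involve a second node, versus about 50% at RF=2; the per-request cost of that hop is mostly one intra-cluster round trip.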

On 2012-11-05 17:00:37 +0000, Michael Kjellman said:

Rule of thumb is to try to keep nodes under 400GB.
Compactions, repairs, move operations, etc. become a nightmare otherwise. How
much data do you expect to have on each node? It also depends on caches,
bloom filters, etc.

On 11/5/12 8:57 AM, "Oleg Dulin" <oleg.du...@gmail.com> wrote:

I have 4 nodes at my disposal.

I can configure them like this:

1) RF=1, each node has 25% of the data. On random reads, how big is the
performance penalty if a node needs to look for data on another replica?

2) RF=2, each node has 50% of the data. Same question?



--
Regards,
Oleg Dulin
NYC Java Big Data Engineer
http://www.olegdulin.com/




--
Regards,
Oleg Dulin
NYC Java Big Data Engineer
http://www.olegdulin.com/

