>> With the slicing, I'm not sure off the top of my head. I'm sure
>> someone else can chime in. For e.g. a multi-get, they end up as
>> independent tasks.
>
> So if I multiget 10 keys, they are fetched in //, consolidated by the
> coordinator and then sent back?
Took me a while to figure out that // == "parallel" :)

I'm pretty sure (but not entirely, I'd have to check the code) that the
request is forwarded as one request to the necessary node(s); what I was
saying, rather, was that the individual gets get queued up as individual
tasks to be executed internally in the different stages. That does lead to
parallelism locally on the node (subject to the concurrent reader setting).

> Agreed, I followed someone's suggestion some time ago to reduce my batch
> sizes and it has helped tremendously. I'm now doing multigetslices in
> batches of 512 instead of 5000 and I find I no longer have Pendings up so
> high. The most I see now is a couple hundred.

In general, the best balance will depend on the situation. For example, the
benefit of batching increases as the latency to the cluster (and within it)
increases, and the negative effects increase as you have higher demands of
low latency on other traffic to the cluster.

--
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
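
[Editorial sketch, not part of the original thread: a minimal client-side
illustration of the batching idea discussed above, splitting a large key
list into smaller multiget requests. The column_family.multiget call and
the batch_size of 512 are assumptions standing in for whatever client API
and tuning the poster was using (e.g. pycassa's ColumnFamily.multiget).]

def chunked(seq, size):
    """Yield successive slices of seq containing at most size items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def batched_multiget(column_family, keys, batch_size=512):
    """Fetch keys in smaller batches so no single request floods the
    read stage (keeps the Pending count on the nodes lower)."""
    results = {}
    for batch in chunked(list(keys), batch_size):
        # Hypothetical client call; substitute your client's
        # multiget / multiget_slice equivalent here.
        results.update(column_family.multiget(batch))
    return results

The right batch_size is the balance discussed above: larger batches
amortize round-trip latency to the cluster, smaller ones keep pending
read tasks from piling up and hurting other low-latency traffic.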