Re: GET vs map reduce

2011-08-19 Thread Will Moss
It seems like this is a problem that can be solved without too much headache by the client library. If you're working in an asynchronous environment, then forking off n green threads, one per get, and collecting the results is no problem (this is what we do, with great results). In a traditionally thr
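A minimal sketch of that client-side approach, in Python for illustration only (the fetch helper, bucket name, and pool size are placeholders, not from the post; adapt the body of fetch to whatever single-key GET call your Riak client actually exposes):

    from concurrent.futures import ThreadPoolExecutor

    def fetch(client, bucket_name, key):
        # Single-key GET via your client library; stands in for whatever
        # GET call your Riak client exposes (illustrative assumption).
        return client.bucket(bucket_name).get(key)

    def multi_get(client, bucket_name, keys, max_workers=16):
        # Issue one GET per key in parallel and join on the results,
        # mirroring the fork-threads-and-collect pattern described above.
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            futures = {k: pool.submit(fetch, client, bucket_name, k) for k in keys}
            return {k: f.result() for k, f in futures.items()}

The same shape works with green threads (gevent, Erlang processes, etc.); the point is only that the gets are issued concurrently and then joined, rather than serialized.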

Re: GET vs map reduce

2011-08-19 Thread Wilson MacGyver
This is certainly a tricky problem for us, because there are times we need to perform 9 to 100 GETs on a per-page-render basis. Some built-in support would certainly be useful. On Fri, Aug 19, 2011 at 10:12 AM, Jacques wrote: > This begs the question, is there much efficiency to gain by creating a t

Re: GET vs map reduce

2011-08-19 Thread Jacques
This begs the question, is there much efficiency to gain by creating a true multi-get? It seems like a number of people are trying to get the most efficient multi-get possible. If I remember, it was closed as a wontfix a couple years ago. Did you guys at Basho find that it just didn't have that

Re: GET vs map reduce

2011-08-19 Thread David Smith
As Matt and Jacques noted, M/R is really not intended to be used in this manner (i.e., for multi-get), particularly if you're interested in latency. Generally, you will wind up involving more nodes on an M/R request and shipping the "get" results across those nodes as the system tries to distribute M
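For contrast, the "multi-get via M/R" pattern being discussed usually means posting an explicit list of bucket/key inputs to the map/reduce endpoint and mapping a built-in over the values. A rough sketch over the HTTP interface (host, port, bucket, and keys are placeholders; /mapred and Riak.mapValuesJson are the stock endpoint and built-in, but verify against your version):

    import json
    import requests

    # Explicit bucket/key inputs: the "multi-get via M/R" pattern.
    keys = ["user:1", "user:2", "user:3"]
    job = {
        "inputs": [["users", k] for k in keys],
        "query": [
            {"map": {"language": "javascript",
                     "name": "Riak.mapValuesJson",
                     "keep": True}}
        ],
    }

    # Each listed object is fetched and its value shipped around as the
    # job is coordinated, which is the extra overhead described above.
    resp = requests.post("http://127.0.0.1:8098/mapred",
                         data=json.dumps(job),
                         headers={"Content-Type": "application/json"})
    print(resp.json())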

Re: GET vs map reduce

2011-08-18 Thread Jacques
FYI, when I did something similar, I found that running separate parallel get requests was substantially faster than a basic map reduce. That was even the case when I ran the map reduce as an Erlang function instead of JavaScript. You should be able to find my previous post at the end of java cli
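For reference, "running the map reduce as an Erlang function" typically means pointing the map phase at a compiled module/function instead of JavaScript source. A sketch of just that phase spec, using the stock riak_kv_mapreduce:map_object_value built-in (standard in Riak KV, but check your release; inputs as in the earlier example):

    # Same job shape as before, but the map phase names a compiled Erlang
    # function, so no JavaScript VM is involved per object.
    erlang_map_phase = {
        "map": {
            "language": "erlang",
            "module": "riak_kv_mapreduce",
            "function": "map_object_value",
            "keep": True,
        }
    }

Even with the Erlang built-in, the job still has to route inputs and results between nodes, which is consistent with parallel single-key GETs coming out ahead.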

Re: GET vs map reduce

2011-08-18 Thread Matt Ranney
On Thu, Aug 18, 2011 at 10:35 AM, Wilson MacGyver wrote: > It's been working fine, but lately, as traffic began to increase, we started seeing timeout errors on the map reduce call. The strange thing is, if I issue a GET on each key, results are coming back without any problem. We've fou