MapReduce tends to be used for massive (re)indexing. See
http://search-lucene.com/?q=hadoop+mapreduce&fc_project=Solr&fc_project=Lucene
for how Lucene/Solr people are using MapReduce. For example, in a recent
project we used MapReduce (streaming with JRuby, actually) together with Solr
(Embed...
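The excerpt breaks off, but the pattern it describes, a MapReduce job pushing documents into Solr, looks roughly like the sketch below in plain Java with the SolrJ client. This is not the poster's JRuby streaming code; the Solr core URL, the field names, and the tab-separated input format are all assumptions for illustration.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

// Rough sketch: each map task sends its share of the input records to a Solr core.
public class SolrIndexMapper extends Mapper<LongWritable, Text, NullWritable, NullWritable> {

    private SolrClient solr;

    @Override
    protected void setup(Context context) {
        // Assumed core URL; a real job would read this from the job configuration.
        solr = new HttpSolrClient.Builder("http://solrhost:8983/solr/docs").build();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumed input format: one "id<TAB>body" record per line.
        String[] parts = value.toString().split("\t", 2);
        if (parts.length < 2) {
            return; // skip malformed lines
        }
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", parts[0]);
        doc.addField("text", parts[1]);
        try {
            solr.add(doc);
        } catch (SolrServerException e) {
            throw new IOException(e);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException {
        try {
            solr.commit();   // one commit per map task, not per document
        } catch (SolrServerException e) {
            throw new IOException(e);
        } finally {
            solr.close();
        }
    }
}

Each map task opens one SolrClient in setup() and commits once in cleanup(), so documents flow into the index in parallel across tasks without paying for a commit per document.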
As I understand it, Google may use mapred to build the index, but it won't use
mapred in the search phase, because the search phase needs to be low latency,
which is not mapred's strength.
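In other words, the heavy MapReduce work happens offline to produce the index, and the query path is just a direct index lookup with no job in the request path. A rough sketch of that serving side with plain Lucene; the index path and field names here are assumptions, and the calls are the Lucene 8.x-era API:

import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class ServeQuery {
    public static void main(String[] args) throws Exception {
        // Open an index that was built offline (e.g. by a nightly MapReduce job).
        DirectoryReader reader =
                DirectoryReader.open(FSDirectory.open(Paths.get("/data/search/index")));
        IndexSearcher searcher = new IndexSearcher(reader);

        // Serving a query is a single in-process lookup: milliseconds, no job submission.
        Query query = new QueryParser("text", new StandardAnalyzer()).parse(args[0]);
        TopDocs hits = searcher.search(query, 10);
        for (ScoreDoc sd : hits.scoreDocs) {
            Document doc = searcher.doc(sd.doc);
            System.out.println(sd.score + "\t" + doc.get("id"));
        }
        reader.close();
    }
}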
On Fri, Jul 30, 2010 at 7:06 AM, Saikat Kanjilal wrote:
Hello Yuhendar, I'll add as much as I can at a high level from what I have
learned so far about map-reduce to answer your questions:
1) The goal behind map-reduce is to perform a distributed computation: it
breaks a large, computation-intensive problem into smaller chunks and solves
those chunks in parallel across many machines.
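The classic illustration of 1) is word count: each mapper processes one input split and emits (word, 1) pairs, the framework shuffles all pairs for a given word to one reducer, and the reducer sums them. A minimal sketch against the standard Hadoop mapreduce API (input and output paths come from the command line; nothing here is specific to any particular cluster):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: each mapper handles one chunk (input split) independently.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);  // emit (word, 1) for every token
            }
        }
    }

    // Reduce phase: the framework groups all counts for one word into one reduce call.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // combine locally before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

You would run it with something like: hadoop jar wordcount.jar WordCount /input /output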