> No, Hadoop does not use Lucene.

I have studied articles like this one:
http://highscalability.com/how-rackspace-now-uses-mapreduce-and-hadoop-query-terabytes-data

Given that, here is how the article describes it (a rough sketch of the
Lucene indexing step follows the excerpt):

 The way the current Hadoop-based system works is:
 - Raw logs get streamed from hundreds of mail servers to the Hadoop
   Distributed File System ("HDFS") in real time.
 - MapReduce jobs are scheduled to run to index the new data using Apache
   Lucene and Solr.
 - Once the indexes have been built, they are compressed and stored away in
   HDFS.
 - Each Hadoop datanode runs a Tomcat servlet container, which hosts a number
   of Solr instances that pull and merge the new indexes, and provide really
   fast search results to our support team.
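
To make the indexing step (the second item above) concrete, here is a rough
sketch of a Hadoop reducer that writes each log line into a Lucene index,
assuming a Lucene 3.x-era API. The class name, field names, index path and
key/value types are my own assumptions for illustration, not Rackspace's
actual code; the finished index would then be compressed and copied into
HDFS as the article describes.

import java.io.File;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class LogIndexReducer extends Reducer<Text, Text, Text, Text> {

    private IndexWriter writer;

    @Override
    protected void setup(Context context) throws IOException {
        // Build the index on the datanode's local disk first.
        writer = new IndexWriter(
                FSDirectory.open(new File("/tmp/mail-log-index")),
                new StandardAnalyzer(Version.LUCENE_30),
                true,                                  // create a fresh index
                IndexWriter.MaxFieldLength.UNLIMITED);
    }

    @Override
    protected void reduce(Text msgId, Iterable<Text> logLines, Context context)
            throws IOException, InterruptedException {
        // One Lucene document per raw log line, keyed by a hypothetical
        // message id.
        for (Text line : logLines) {
            Document doc = new Document();
            doc.add(new Field("msgid", msgId.toString(),
                    Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.add(new Field("body", line.toString(),
                    Field.Store.YES, Field.Index.ANALYZED));
            writer.addDocument(doc);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException {
        writer.optimize();   // merge segments before the index is shipped
        writer.close();
    }
}

The job driver that schedules this, and the step that compresses the index
and moves it into HDFS, are left out of the sketch.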

> And do you mean Solr combines Lucene and Hadoop?

No. What I mean is that Solr (the search server) uses Lucene (the library)
to do the actual searching. Solr needs Lucene to perform full-text indexing,
searching and so on. Am I right?
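
For example, using Lucene directly as a library looks roughly like this
(again assuming a Lucene 3.x-era API; the index path, field name and sample
text are made up for the example). Solr wraps this same IndexWriter /
IndexSearcher machinery behind an HTTP server, schemas and configuration.

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class LuceneOnly {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("/tmp/demo-index"));
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);

        // Index one document.
        IndexWriter writer = new IndexWriter(dir, analyzer, true,
                IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        doc.add(new Field("body", "delivery failed for user@example.com",
                Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();

        // Search it back with a full-text query.
        IndexSearcher searcher = new IndexSearcher(dir, true);
        Query q = new QueryParser(Version.LUCENE_30, "body", analyzer)
                .parse("failed");
        TopDocs hits = searcher.search(q, 10);
        for (ScoreDoc sd : hits.scoreDocs) {
            System.out.println(searcher.doc(sd.doc).get("body"));
        }
        searcher.close();
    }
}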


>
>
>
> On Fri, Feb 26, 2010 at 2:52 PM, <gs...@tce.edu> wrote:
>
>> Hi all,
>>    while studying how the Hadoop framework works I noticed that
>> MapReduce in turn uses Apache Lucene for creating indexes for scheduled
>> new data and Solr for creating instances. Is that right?
>> thanks
>> sujitha
>>
>>
>>
>>
>
>
> --
> Best Regards
>
> Jeff Zhang
>


-- 
Suji


-----------------------------------------
This email was sent using TCEMail Service.
Thiagarajar College of Engineering
Madurai-625 015, India
