I have built a distributed index using the source code of
hadoop/contrib/index, but I found that when the input files become big (for
example, one file is 16 GB), an OOM exception is thrown. The cause is that
the combiner's call to "writer.addIndexesNoOptimize()" uses a lot of memory,
which leads to the OOM; the memory is consumed inside Lucene.
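For reference, here is a minimal standalone sketch (not the actual
hadoop/contrib/index combiner code) of how addIndexesNoOptimize() is
typically invoked on a Lucene 3.x IndexWriter, with setRAMBufferSizeMB()
used to bound the writer's in-memory buffer. The class name MergeShards,
the shard-directory arguments, and the 64 MB limit are my own assumptions
for illustration:

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class MergeShards {
      public static void main(String[] args) throws Exception {
        // args[0] = target index directory, args[1..] = shard directories
        Directory target = FSDirectory.open(new File(args[0]));
        IndexWriter writer = new IndexWriter(target,
            new StandardAnalyzer(Version.LUCENE_30),
            IndexWriter.MaxFieldLength.UNLIMITED);

        // Cap the writer's own document buffer so it does not grow
        // with the size of the input.
        writer.setRAMBufferSizeMB(64);

        Directory[] shards = new Directory[args.length - 1];
        for (int i = 1; i < args.length; i++) {
          shards[i - 1] = FSDirectory.open(new File(args[i]));
        }

        // Merge the existing shard segments into the target index.
        // The segment merges this triggers can still need a lot of heap.
        writer.addIndexesNoOptimize(shards);
        writer.close();
      }
    }

Note that capping the RAM buffer only bounds the writer's own buffering;
the merges triggered by addIndexesNoOptimize() can still need substantial
heap, so in the combiner case raising the task heap (for example via
mapred.child.java.opts) or producing smaller intermediate shards may also
be necessary.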
I'm learning MapReduce. Please recommend some materials for analyzing the
MapReduce source code. Blogs, papers, or books and so on, all of them are OK! Thanks!