You should probably send this question to the nutch user mailing list (or
perhaps the hadoop user mailing list) ... this is the mailing list for the
Lucene Java API that is used by Nutch ... nothing in your stack trace
seems to indicate any problems in any Lucene Java code.
When I run Nutch, I always hit this error in the reduce task, and it runs
very slowly after the error occurs.
Does anyone know how to solve this problem?
Here is the log:
java.io.IOException: Insufficient space
at
org.apache.hadoop.fs.InMemoryFileSystem$RawInMemoryFileSystem$InMemoryOutputStream.write(In
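The "Insufficient space" is thrown by Hadoop's in-memory filesystem, which buffers map outputs on the reduce side during the shuffle. One knob worth trying is enlarging that buffer in hadoop-site.xml; as a sketch only, the property name `fs.inmemory.size.mb` is assumed from 0.x-era Hadoop defaults (matching the `InMemoryFileSystem` class in the trace) and should be verified against the hadoop-default.xml of your version:

```xml
<!-- hadoop-site.xml: enlarge the in-memory FS used to buffer
     map outputs during the reduce-side shuffle.
     Property name assumed from 0.x-era hadoop-default.xml;
     verify it exists in your Hadoop version before relying on it. -->
<property>
  <name>fs.inmemory.size.mb</name>
  <value>200</value>
</property>
```

If the reduce still spills and slows down, the hadoop user list is the right place to ask, since the failure is entirely inside Hadoop's shuffle code rather than Lucene.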