That number is around 40K (I think). I am not sure whether you have the 
configurations in place to clean up user task logs periodically. We have solved 
this problem in MAPREDUCE-2415, which is part of 0.20.204.


But if you clean up the task logs periodically, you will not run into this problem.
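
For example, something along these lines in mapred-site.xml enables periodic 
cleanup. This is only a sketch, not a complete config; mapred.userlog.retain.hours 
controls how long the TaskTracker keeps per-attempt userlogs, and 24 is just an 
illustrative value:

    <!-- Illustrative snippet for mapred-site.xml (not a full configuration).
         Keep per-attempt task logs for 24 hours, then let the TaskTracker
         delete them. -->
    <property>
      <name>mapred.userlog.retain.hours</name>
      <value>24</value>
    </property>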

-Bharath




________________________________
From: Michael Hu <mesolit...@gmail.com>
To: common-dev@hadoop.apache.org
Sent: Sunday, July 10, 2011 8:14 PM
Subject: "java.lang.Throwable: Child Error " And " Task process exit with 
nonzero status of 1."

Hi, all,
    Hadoop is set up. Whenever I run a job, I always get the same error.
The error is:

    micah29@nc2:/usr/local/hadoop/hadoop$ ./bin/hadoop jar
hadoop-mapred-examples-0.21.0.jar wordcount test testout

11/07/11 10:48:59 INFO mapreduce.Job: Running job: job_201107111031_0003
11/07/11 10:49:00 INFO mapreduce.Job:  map 0% reduce 0%
11/07/11 10:49:11 INFO mapreduce.Job: Task Id :
attempt_201107111031_0003_m_000002_0, Status : FAILED
java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:249)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:236)

11/07/11 10:49:11 WARN mapreduce.Job: Error reading task
outputhttp://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stdout
11/07/11 10:49:11 WARN mapreduce.Job: Error reading task
outputhttp://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stderr

    I googled "Task process exit with nonzero status of 1." They say it's
an OS limit on the number of sub-directories that can be created inside
another directory. But I can still create sub-directories inside another
directory without any problem.
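
    As a rough check (the path below is only a guess based on my installation
under /usr/local/hadoop/hadoop), I could count how many per-attempt log
directories the TaskTracker is currently holding:

    # count entries under the TaskTracker userlog directory
    # (path assumed from my install; adjust if HADOOP_LOG_DIR points elsewhere)
    ls /usr/local/hadoop/hadoop/logs/userlogs | wc -l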

    Please, could anybody help me solve this problem? Thanks
-- 
Yours sincerely
Hu Shengqiu
