[ https://issues.apache.org/jira/browse/HADOOP-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HADOOP-953.
--------------------------------

    Resolution: Fixed

> huge log files
> --------------
>
>                 Key: HADOOP-953
>                 URL: https://issues.apache.org/jira/browse/HADOOP-953
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 0.10.1
>         Environment: N/A
>            Reporter: Andrew McNabb
>
> On our system, it's not uncommon to get 20 MB of logs with each MapReduce 
> job.  It would be very helpful to be able to configure the Hadoop daemons 
> to log only significant events, but the only conf options I could find 
> increase the amount of output.  The disk is a real bottleneck for us, and 
> I believe short jobs would run much more quickly with less disk usage.  We 
> also suspect that the heavy disk usage is triggering a kernel bug on some 
> of our machines, causing them to crash.  If the 20 MB of logs went down to 
> 20 KB, we would probably still have all of the information we needed.
> Thanks!
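
For reference, a minimal sketch of how verbosity can be dialed down on a
deployment of this vintage: Hadoop's logging goes through log4j, so raising
the threshold in conf/log4j.properties (or via the HADOOP_ROOT_LOGGER
override honored by the launch scripts) should cut output to warnings and
errors. The appender names below are assumed from stock configs and may not
match a given install; check the log4j.properties that ships with yours.

    # conf/log4j.properties -- raise the default level from INFO to WARN
    # so routine per-task messages are no longer written to disk.
    hadoop.root.logger=WARN,console

    # conf/hadoop-env.sh -- equivalent override for the daemons; the DRFA
    # (daily rolling file) appender name is an assumption, verify it in
    # your own config before relying on it.
    export HADOOP_ROOT_LOGGER=WARN,DRFA

Individual noisy packages can also be quieted selectively, e.g.
log4j.logger.org.apache.hadoop.mapred=WARN, while leaving the root logger
at INFO.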

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira