Hi Robert,

I understand your confusion.

If HADOOP_ROOT_LOGGER hasn't been set to anything, it defaults to
"INFO,console", so log messages are written to the console itself.
This is true for any client command you run. For example: "hdfs dfs -ls /"
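
If you want more verbose output from a client command, you can override that
console default before running it. For example (DEBUG is just an
illustrative level, any log4j level works):

   export HADOOP_ROOT_LOGGER="DEBUG,console"
   hdfs dfs -ls /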

But for the server scripts (hadoop-daemon.sh, yarn-daemon.sh, etc.)
HADOOP_ROOT_LOGGER will be set to "INFO,RFA" if the HADOOP_ROOT_LOGGER env
variable is not defined, so that all log messages of the server daemons go
to log files maintained by the RollingFileAppender. If you want to override
these defaults and set your own log level, then define it yourself in the
HADOOP_ROOT_LOGGER env variable.

For example:

   export HADOOP_ROOT_LOGGER="DEBUG,RFA"

Export the env variable above and then start the server scripts or execute
client commands; all logs will go to files maintained by the
RollingFileAppender.
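
For reference, the daemon scripts apply the "INFO,RFA" default themselves;
in hadoop-daemon.sh (2.x) the line looks roughly like this (a sketch only,
check the script shipped with your version):

   export HADOOP_ROOT_LOGGER=${HADOOP_ROOT_LOGGER:-"INFO,RFA"}

so a value you have already exported wins over the script's default. For
instance, to start a daemon with debug logging going to the rolling log
files (the NameNode is used only as an illustration):

   export HADOOP_ROOT_LOGGER="DEBUG,RFA"
   sbin/hadoop-daemon.sh start namenode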


Regards,
Vinay


On Wed, May 21, 2014 at 6:42 PM, Robert Rati <rr...@redhat.com> wrote:

> I noticed in hadoop-config.sh there is this line:
>
> HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}"
>
> which is setting a root logger if HADOOP_ROOT_LOGGER isn't set.  Why is
> this needed?  There is a log4j.properties file provided that defines a
> default logger.  I believe the line above will result in overriding
> whatever is set for the root logger in the log4j.properties file.  This has
> caused some confusion and hacks to work around this.
>
> Is there a reason not to remove the above code and just have all the
> logger definitions in the log4j.properties file?  Is there maybe a
> compatibility concern?
>
> Rob
>



-- 
Regards,
Vinay
