[ https://issues.apache.org/jira/browse/HDFS-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HDFS-8162.
----------------------------------
    Resolution: Cannot Reproduce

Hadoop logs to wherever log4j tells it to.

The standard Hadoop log4j.properties does route console output to stderr:
{code}
log4j.appender.console.target=System.err
{code}

If you are seeing it go to stdout, it's your program that's causing it.
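For reference, the console appender section of a stock Hadoop log4j.properties looks roughly like this (a sketch; the {{target}} line is the one quoted above, the layout pattern may differ between releases):

{code}
# Console appender (log4j 1.x) — routes console logging to stderr
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
{code}

If a caller program sees log output on stdout, check which log4j.properties is actually on its classpath and whether {{target}} has been overridden to {{System.out}}.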

see also http://wiki.apache.org/hadoop/InvalidJiraIssues

> Stack trace routed to standard out
> ----------------------------------
>
>                 Key: HDFS-8162
>                 URL: https://issues.apache.org/jira/browse/HDFS-8162
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: libhdfs
>    Affects Versions: 2.5.2
>            Reporter: Rod
>            Priority: Minor
>
> Calling hdfsOpenFile() can generate a stacktrace printout to standard out, 
> which can be problematic for a caller program that is making use of standard 
> out. libhdfs stacktraces should be routed to standard error instead.
> Example of stacktrace:
> WARN  [main] hdfs.BlockReaderFactory (BlockReaderFactory.java:getRemoteBlockReaderFromTcp(693)) - I/O error constructing remote block reader.
> org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
>       at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
>       at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
>       at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
>       at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
>       at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
>       at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
>       at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
>       at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
>       at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
> 2015-04-16 10:32:13,946 WARN  [main] hdfs.DFSClient (DFSInputStream.java:blockSeekTo(612)) - Failed to connect to /x.x.x.10:50010 for block, add to deadNodes and continue. org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
> org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
>       at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
>       at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
>       at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
>       at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
>       at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
>       at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
>       at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
>       at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
>       at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
