This message indicates that you might have a problem with HDFS:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
/hbase/WALs/zjdx107,60020,1418269148759/zjdx107%2C60020%2C1418269148759.1419977176935
could only be replicated to 0 nodes instead of minReplication (=1).
There are 12 datanode(s) running and no node(s) are excluded in this
operation.
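Since no datanodes are excluded, the namenode could not place even a
single replica of the new WAL block. That usually means the datanodes
are full (or below the dfs.datanode.du.reserved threshold), out of
transfer threads, or temporarily unreachable; "hdfs fsck /" and
"hdfs dfsadmin -report" will show live/dead nodes and remaining
capacity. To rule HBase out, you can also try a plain HDFS write from
the regionserver host. A minimal sketch (the class name and test path
below are made up for illustration, and it assumes your
core-site.xml/hdfs-site.xml are on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteTest {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from core-site.xml on the classpath;
        // otherwise set it explicitly, e.g.
        // conf.set("fs.defaultFS", "hdfs://<namenode-host>:8020");
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical test path; any location you can write to works.
        Path p = new Path("/tmp/hdfs-write-test");

        // Ask for a single replica, matching minReplication (=1)
        // from the error message above.
        try (FSDataOutputStream out = fs.create(p, (short) 1)) {
            out.writeBytes("hdfs write test\n");
        }
        System.out.println("Write succeeded, cleaning up " + p);
        fs.delete(p, false);
        fs.close();
    }
}

Compile it against the Hadoop client jars and run it with
java -cp $(hadoop classpath):. HdfsWriteTest. If it fails with the same
"could only be replicated to 0 nodes" exception, the problem is in HDFS
itself rather than in HBase.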
On Thu, Jan 8, 2015 at 8:10 AM, Jean-Marc Spaggiari <[email protected]> wrote:

> Hi,
>
> How is your HDFS doing? Have you looked at FSCK, the Namenode
> interface, etc.? Sounds like HBase is not able to write to it...
>
> JM
>
> 2015-01-08 3:13 GMT-05:00 gao <[email protected]>:
>
> > Hi:
> >
> > I am getting constant stability problems with the HBase RegionServer;
> > it dies randomly every day or every other day. It normally dies
> > shortly after printing the following:
> >
> > 2014-12-30 23:06:17,091 ERROR [regionserver60020.logRoller]
> > wal.ProtobufLogWriter: Got IOException while writing trailer
> > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> > /hbase/WALs/zjdx107,60020,1418269148759/zjdx107%2C60020%2C1418269148759.1419977176935
> > could only be replicated to 0 nodes instead of minReplication (=1).
> > There are 12 datanode(s) running and no node(s) are excluded in this
> > operation.
> >   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1430)
> >   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2659)
> >   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:569)
> >   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
> >   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> >   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> >   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
> >   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
> >   at java.security.AccessController.doPrivileged(Native Method)
> >   at javax.security.auth.Subject.doAs(Subject.java:415)
> >   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> >   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
> >
> >   at org.apache.hadoop.ipc.Client.call(Client.java:1409)
> >   at org.apache.hadoop.ipc.Client.call(Client.java:1362)
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> >   at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
> >   at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >   at java.lang.reflect.Method.invoke(Method.java:606)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> >   at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
> >   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
> >   at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >   at java.lang.reflect.Method.invoke(Method.java:606)
> >   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> >   at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
> >   at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1437)
> >   at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1260)
> >   at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)
> >
> > 2014-12-30 23:06:17,092 ERROR [regionserver60020.logRoller]
> > wal.FSHLog: Failed close of HLog writer
> > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> > /hbase/WALs/zjdx107,60020,1418269148759/zjdx107%2C60020%2C1418269148759.1419977176935
> > could only be replicated to 0 nodes instead of minReplication (=1).
> > There are 12 datanode(s) running and no node(s) are excluded in this
> > operation.
> >   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1430)
> >   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2659)
> >   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:569)
> >   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
> >   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> >   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> >   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
> >   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
> >   at java.security.AccessController.doPrivileged(Native Method)
> >   at javax.security.auth.Subject.doAs(Subject.java:415)
> >   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)

--
Thanks & Regards,
Anil Gupta
