Hi

Yes, you are right. refreshNamenodes works after switching to the right RPC
port.
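
For the archives, the call that works, as a sketch: it assumes the datanode's
IPC address was left at the Hadoop 2.x default port 50020
(dfs.datanode.ipc.address); check hdfs-site.xml if yours differs.

    # assumes the default dfs.datanode.ipc.address port 50020; verify in hdfs-site.xml
    $HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNamenodes host-10-19-92-100:50020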

Thanks a ton

Regards
Ajith

-----Original Message-----
From: Kihwal Lee [mailto:kih...@yahoo-inc.com.INVALID] 
Sent: 03 April 2015 12:30 AM
To: hdfs-dev@hadoop.apache.org
Subject: Re: [Federation setup] Adding a new name node to federated cluster

You might be issuing the refresh command against the dataXfer port, not the rpc 
port of the datanode.
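For reference, the two settings involved: dfs.datanode.address is the data
transfer (dataXfer) port, default 50010, while dfs.datanode.ipc.address is the
RPC endpoint that dfsadmin -refreshNamenodes must target, default 50020. A
quick way to check the live values (a sketch; assumes the hdfs command picks up
the datanode's config):

    # data transfer port -- not what dfsadmin wants (default 0.0.0.0:50010)
    $HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.datanode.address
    # datanode RPC port -- the one -refreshNamenodes needs (default 0.0.0.0:50020)
    $HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.datanode.ipc.address

The "Version Mismatch" number in your datanode log points the same way: 26738
is 0x6872, the ASCII bytes 'h' and 'r', i.e. the first two bytes of the "hrpc"
magic the Hadoop IPC client writes when it opens a connection, which the
DataXceiver misreads as a protocol version when an RPC call lands on the data
transfer port. A quick check:

    $ printf '%d\n' 0x6872    # 'h' = 0x68, 'r' = 0x72
    26738
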
-Kihwal
      From: Ajith shetty <ajith.she...@huawei.com>
 To: "hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org>
 Sent: Wednesday, April 1, 2015 1:43 AM
 Subject: [Federation setup] Adding a new name node to federated cluster
   
Hi all,


Use case: I am trying to add a new namenode to an already running federated
cluster, following
https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/Federation.html#Adding_a_new_Namenode_to_an_existing_HDFS_cluster
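
The documented flow, roughly, as a sketch (property names per the page above;
real values elided):

    # 1. add the new nameservice id to dfs.nameservices in hdfs-site.xml on all
    #    nodes, and set dfs.namenode.rpc-address.<new_ns> (and friends) for it
    # 2. format the new namenode with the existing cluster id, then start it
    $HADOOP_PREFIX/bin/hdfs namenode -format -clusterId <existing_cluster_id>
    $HADOOP_PREFIX/sbin/hadoop-daemon.sh start namenode
    # 3. point every datanode at the new namenode
    $HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNamenodes <datanode_host_name>:<datanode_rpc_port>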

When I execute the refreshNamenodes step ($HADOOP_PREFIX/bin/hdfs dfsadmin
-refreshNamenodes <datanode_host_name>:<datanode_rpc_port>), I get an exception.
HOST-10-19-92-85 is the new namenode to be added to the cluster;
host-10-19-92-100 is a datanode already in the federated cluster.

At the new namenode:
java.io.EOFException: End of File Exception between local host is: "HOST-10-19-92-85/10.19.92.85"; destination host is: "host-10-19-92-100":50010; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
        at org.apache.hadoop.ipc.Client.call(Client.java:1480)
        at org.apache.hadoop.ipc.Client.call(Client.java:1407)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy13.refreshNamenodes(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.refreshNamenodes(ClientDatanodeProtocolTranslatorPB.java:195)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.refreshNamenodes(DFSAdmin.java:1919)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:1825)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:1959)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1079)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:974)

At the datanode being refreshed:
2015-04-01 16:11:31,720 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: host-10-19-92-100:50010:DataXceiver error processing unknown operation  src: /10.19.92.85:43802 dst: /10.19.92.100:50010
java.io.IOException: Version Mismatch (Expected: 28, Received: 26738 )
                at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:60)
                at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
                at java.lang.Thread.run(Thread.java:745)

I have cross-verified the installation and the jar versions.
So the refreshNamenodes command is not working in my setup, but as a workaround
I found that restarting the datanode registers it with the newly added namenode
(see the sketch below).
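
The restart, as a sketch (assumes the stock Hadoop 2.6 sbin scripts):

    # run on the datanode host; it picks up the new nameservice from hdfs-site.xml
    $HADOOP_PREFIX/sbin/hadoop-daemon.sh stop datanode
    $HADOOP_PREFIX/sbin/hadoop-daemon.sh start datanode
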
Please help me out.

Regards
Ajith

  
