Hi, I just deleted the 0.17.2 files and unpacked 0.18.3 to the same path, then moved the 0.17.2 hadoop-site.xml, masters and slaves files into the conf folder.
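In other words, roughly the following (a sketch of the steps above; the install path and the backup folder name are assumptions, not my exact commands):

cd /home/hadoop/HadoopInstall
rm -rf hadoop                          # remove the old 0.17.2 files
tar xzf hadoop-0.18.3.tar.gz           # unpack 0.18.3 to the same path
mv hadoop-0.18.3 hadoop
cp conf-backup/hadoop-site.xml conf-backup/masters conf-backup/slaves hadoop/conf/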
But the namenode can't be started. The log says the folder "hadoop/tmp/dir/hadoop-hadoop/dfs/name" does not exist, yet when I use 0.17.2 it works fine even though that folder does not exist there either. Thanks for your help!

The namenode logs:

2009-03-20 19:35:47,631 INFO org.apache.hadoop.dfs.Storage: Storage directory /home/hadoop/HadoopInstall/hadoop/tmp/dir/hadoop-hadoop/dfs/name does not exist.
2009-03-20 19:35:47,645 ERROR org.apache.hadoop.fs.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.dfs.InconsistentFSStateException: Directory /home/hadoop/HadoopInstall/hadoop/tmp/dir/hadoop-hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:211)
        at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:80)
        at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:294)
        at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:273)
        at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:148)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:193)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:179)
        at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:830)
        at org.apache.hadoop.dfs.NameNode.main(NameNode.java:839)
2009-03-20 19:35:47,647 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
2009-03-20 19:35:47,649 ERROR org.apache.hadoop.dfs.NameNode: org.apache.hadoop.dfs.InconsistentFSStateException: Directory /home/hadoop/HadoopInstall/hadoop/tmp/dir/hadoop-hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:211)

The hadoop-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://202.117.16.164:54310</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>202.117.16.164:54311</value>
</property>
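For reference, the storage path in the error suggests hadoop.tmp.dir points somewhere under /home/hadoop/HadoopInstall/hadoop/tmp/dir (in 0.18.x, dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name). A minimal sketch of how that property would look in hadoop-site.xml; the value is an assumption inferred from the log, it is not part of the excerpt above:

<property>
  <name>hadoop.tmp.dir</name>
  <!-- assumed value, inferred from the storage path in the namenode log -->
  <value>/home/hadoop/HadoopInstall/hadoop/tmp/dir/hadoop-${user.name}</value>
</property>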