hive> show databases;
OK
default
mm
mm2
xyz
Time taken: 6.058 seconds
hive> use mm2;
OK
Time taken: 0.039 seconds
hive> show tables;
OK
cidade
concessionaria
familia
modelo
venda
Time taken: 0.354 seconds
hive> select count(*) from familia;
FAILED: Hive Internal Error: java.lang.RuntimeException(java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused)
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:151)
        at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:190)
        at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:247)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:900)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6594)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
        at org.apache.hadoop.ipc.Client.call(Client.java:743)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy4.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
        at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:145)
        ... 15 more
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
        at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
        at org.apache.hadoop.ipc.Client.call(Client.java:720)
        ... 28 more
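Both "Caused by" frames point at the HDFS RPC endpoint localhost:54310, so Hive itself is fine but it cannot reach the NameNode. A minimal sketch for confirming whether anything answers on that port (assuming nc or telnet is installed; the host and port come from core-site.xml further down):

shell> nc -zv localhost 54310                        # or: telnet localhost 54310
shell> ~/Hadoop/hadoop-0.20.2/bin/hadoop fs -ls /    # fails with the same "Connection refused" while the NameNode is down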
=========================================================================================================================

After that, I also ran:

shell $> jps
3630 TaskTracker
3403 JobTracker
3086 DataNode
3678 Jps
3329 SecondaryNameNode

==========================================================================================================================

The JobTracker web interface is running fine at http://localhost:50030/jobtracker.jsp in the browser and shows:

localhost Hadoop Map/Reduce Administration
State: INITIALIZING
Started: Sat Oct 27 17:41:34 IST 2012
Version: 0.20.2, r911707
Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
Identifier: 201210271741

I tried to format the name node with the command below, but it shows an error:

shell>:~/Hadoop/hadoop-0.20.2/bin$ ./hadoop dfs namenode -format
12/10/27 17:45:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
12/10/27 17:45:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
12/10/27 17:45:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
12/10/27 17:45:09 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
12/10/27 17:45:10 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
12/10/27 17:45:11 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
12/10/27 17:45:12 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
12/10/27 17:45:13 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
12/10/27 17:45:14 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
12/10/27 17:45:15 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted.

========================================================================================================================
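Note that the jps listing has no NameNode process, and `hadoop dfs namenode -format` is not the format command: the `dfs` subcommand is the file-system shell, which tries to contact the (absent) NameNode, hence the same connection retries. For reference, a hedged sketch of the usual format-and-restart sequence in Hadoop 0.20.2, assuming the install path shown above; formatting erases any existing HDFS metadata:

shell> cd ~/Hadoop/hadoop-0.20.2
shell> bin/stop-all.sh               # stop the daemons that are still running
shell> bin/hadoop namenode -format   # format works on the local name directory; it does not need a running cluster
shell> bin/start-all.sh              # start the HDFS and MapReduce daemons again
shell> jps                           # NameNode should now appear alongside DataNode, JobTracker, etc.

If the DataNode later refuses to start with an "Incompatible namespaceIDs" error, its data directory was created against the old (pre-format) NameNode and has to be cleared as well.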
Port info:

shell>:~/Hadoop/hadoop-0.20.2/conf$ netstat -tulpn
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address  State   PID/Program name
tcp   0      0      127.0.0.1:3306          0.0.0.0:*        LISTEN  -
tcp   0      0      0.0.0.0:50060           0.0.0.0:*        LISTEN  4328/java
tcp   0      0      0.0.0.0:50030           0.0.0.0:*        LISTEN  4101/java
tcp   0      0      127.0.0.1:45298         0.0.0.0:*        LISTEN  4328/java
tcp   0      0      0.0.0.0:48946           0.0.0.0:*        LISTEN  3784/java
tcp   0      0      0.0.0.0:54771           0.0.0.0:*        LISTEN  4027/java
tcp   0      0      127.0.0.1:53            0.0.0.0:*        LISTEN  -
tcp   0      0      0.0.0.0:22              0.0.0.0:*        LISTEN  -
tcp   0      0      127.0.0.1:631           0.0.0.0:*        LISTEN  -
tcp   0      0      0.0.0.0:51194           0.0.0.0:*        LISTEN  4101/java
tcp   0      0      0.0.0.0:8006            0.0.0.0:*        LISTEN  -
tcp   0      0      127.0.0.1:54311         0.0.0.0:*        LISTEN  4101/java
tcp   0      0      0.0.0.0:8007            0.0.0.0:*        LISTEN  -
tcp6  0      0      :::22                   :::*             LISTEN  -
tcp6  0      0      ::1:631                 :::*             LISTEN  -
udp   0      0      0.0.0.0:52059           0.0.0.0:*                -
udp   0      0      127.0.0.1:53            0.0.0.0:*                -
udp   0      0      0.0.0.0:68              0.0.0.0:*                -
udp   0      0      0.0.0.0:5353            0.0.0.0:*                -
udp6  0      0      :::50206                :::*                     -
udp6  0      0      :::5353                 :::*                     -

=========================================================================================

If I run:

shell> lsof -i tcp:54310
or
shell> netstat | grep 54310

nothing is shown, which means nothing is using port 54310.

====================================================================================================
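Since nothing listens on 54310, the NameNode either never started or exited during startup. A natural next step is to read its log under the Hadoop logs directory; the file names below assume the default 0.20.2 layout (hadoop-<user>-namenode-<hostname>.log):

shell> cd ~/Hadoop/hadoop-0.20.2/logs
shell> ls | grep namenode                    # hadoop-<user>-namenode-<host>.log and .out
shell> tail -n 50 hadoop-*-namenode-*.log    # look for the exception that killed the NameNode
# Common NameNode startup failures in 0.20.x: the name directory was never formatted
# ("storage directory does not exist or is not accessible") or is not writable by the Hadoop user.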
I have also attached my core-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/trendwise/cluster_NameNode_Location3/</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme and authority
    determine the FileSystem implementation. The uri's scheme determines the config
    property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's
    authority is used to determine the host, port, etc. for a filesystem.</description>
  </property>
  <property>
    <name>fs.inmemory.size.mb</name>
    <value>200</value>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>io.sort.mb</name>
    <value>200</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
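One more thing worth checking with this configuration: since no dfs.name.dir is set, the NameNode keeps its metadata under ${hadoop.tmp.dir}/dfs/name, i.e. /home/trendwise/cluster_NameNode_Location3/dfs/name. A hedged sketch (paths taken from the config above) to confirm that the directory exists, is writable by the Hadoop user, and has actually been formatted:

shell> ls -ld /home/trendwise/cluster_NameNode_Location3
shell> ls -l  /home/trendwise/cluster_NameNode_Location3/dfs/name/current
# A formatted 0.20.x name directory contains VERSION, fsimage and edits in current/;
# if the directory is empty or missing, the NameNode refuses to start until it is formatted.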