I checked the tasktracker's hadoop.tmp.dir, and it is empty! I also searched the
whole disk, but didn't find any directory matching "*/mapred/local/tasktracker".
I have tested my cluster with the jars that come with Hadoop, and they all ran
fine. Does that mean the cache local directory of every tasktracker is OK?
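In case it helps to reproduce the check: as far as I know, the task-local
directory is governed by mapred.local.dir, which defaults to
${hadoop.tmp.dir}/mapred/local. Here is a tiny sketch to print what a
tasktracker node actually resolves (run it on that node with the Hadoop conf
directory on the classpath; the class name is mine):

    import org.apache.hadoop.mapred.JobConf;

    // Print the effective local directories on this node.
    // mapred.local.dir defaults to ${hadoop.tmp.dir}/mapred/local.
    public class PrintLocalDirs {
        public static void main(String[] args) {
            JobConf conf = new JobConf(); // loads core-site.xml and mapred-site.xml
            System.out.println("hadoop.tmp.dir   = " + conf.get("hadoop.tmp.dir"));
            System.out.println("mapred.local.dir = " + conf.get("mapred.local.dir"));
        }
    }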

Date: Fri, 7 Jun 2013 15:15:48 +0800
Subject: Re: What is HIVE_PLAN?
From: caofang...@gmail.com
To: user@hive.apache.org

The plan will be serialized to the default HDFS instance and put in the
distributed cache. So please check the distributed cache local directory of
every tasktracker, commonly:
   {hadoop.tmp.dir}/mapred/local/taskTracker
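
If you want to confirm from inside a task what the distributed cache actually
localized, something like this called from a mapper's configure() should list
it (a sketch against the Hadoop 1.x API; the class and method names are mine):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;

    // Call from a running task (e.g. a mapper's configure()) to see which
    // files the distributed cache localized on this node; the output lands
    // in the task's stderr log, visible from the jobtracker web UI.
    public class CacheDebug {
        public static void dumpCacheFiles(Configuration conf) throws IOException {
            Path[] local = DistributedCache.getLocalCacheFiles(conf);
            if (local == null) {
                System.err.println("no localized cache files");
                return;
            }
            for (Path p : local) {
                System.err.println("localized: " + p);
            }
        }
    }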

2013/6/7 Li jianwei <ljw_c...@hotmail.com>

Hi FangKun:
Thanks for your reply!
I ran the "select count(*)" again and checked the JobConf. I found the
properties you mentioned; they were as follows:
hive.exec.plan
  hdfs://192.168.1.112:9100/tmp/hive-cyg_server/hive_2013-06-07_12-56-10_656_195237350266205704/-mr-10003/e1438d71-2497-4834-a89e-8b2e7d78448d
hive.exec.scratchdir
  /tmp/hive-cyg_server
While Hive was running, I browsed the HDFS filesystem: the file specified by
hive.exec.plan was there with permission rwsr-xr-x, but I didn't find any file
with "HIVE_PLAN" in its name under any subdirectory of hive.exec.scratchdir.
I also set the permissions of hive.exec.scratchdir to rwxrwxrwx.

So perhaps the problem is not in HDFS? According to the Java exception, it is
the native Java method java.io.FileInputStream.open that cannot access the
file, and that file is probably on the local filesystem of the tasktracker node.
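
For what it's worth, the trace suggests the task opens the plan by a relative
name with plain java.io, which only ever looks at the task's local working
directory, never HDFS. A minimal repro of just that failing step (my reading
of the trace, not Hive's actual source; the UUID is copied from the error):

    import java.io.FileInputStream;
    import java.io.InputStream;

    // Opening a relative name with java.io resolves against the current
    // working directory on the local filesystem. If the distributed cache
    // did not place/link HIVE_PLAN<uuid> there, this throws the same
    // FileNotFoundException seen in the task logs.
    public class PlanOpenRepro {
        public static void main(String[] args) throws Exception {
            String planName = "HIVE_PLANc632c8e2-257d-4cd4-b833-a09c7d249b2c";
            InputStream in = new FileInputStream(planName);
            in.close();
        }
    }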


Date: Fri, 7 Jun 2013 12:09:24 +0800
Subject: Re: What is HIVE_PLAN?
From: caofang...@gmail.com
To: user@hive.apache.org


It's kept in the JobConf as part of the plan file name.
Check the link below:

http://hdfs-namenode:50030/jobconf.jsp?jobid=job_201306070901_0001

and find hive.exec.plan and hive.exec.scratchdir.

Do you have proper read and write permissions?
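
If you want to check by hand, here is a small sketch that prints the owner and
permission bits of the serialized plan on HDFS (pass the hive.exec.plan value
from jobconf.jsp as the argument; the class name is mine):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Print path, permission bits, owner, and group of the plan file.
    public class CheckPlanPerms {
        public static void main(String[] args) throws Exception {
            Path plan = new Path(args[0]);
            FileSystem fs = plan.getFileSystem(new Configuration());
            FileStatus st = fs.getFileStatus(plan);
            System.out.println(st.getPath() + "  " + st.getPermission()
                + "  " + st.getOwner() + ":" + st.getGroup());
        }
    }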



2013/6/7 Li jianwei <ljw_c...@hotmail.com>

Hi, everyone:
I have set up a Hadoop cluster on THREE Windows 7 machines with Cygwin and ran
several tests with hadoop-test-1.1.2.jar and hadoop-examples-1.1.2.jar, all of
which passed.
Then I tried to run Hive 0.10.0 on my cluster (also in Cygwin). I could create
tables, show them, load data into them, and "select *" from them. But when I
tried "select count(*)" from my table, I got the following exception.
My question is: what is that HIVE_PLANxxxxxx file? How is it created? Where is
it placed?

Would anyone give me some information?
......
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201306070901_0001, Tracking URL = http://hdfs-namenode:50030/jobdetails.jsp?jobid=job_201306070901_0001
Kill Command = C:\hadoop-1.1.2\\bin\hadoop.cmd job  -kill job_201306070901_0001
Hadoop job information for Stage-1: number of mappers: 13; number of reducers: 1
2013-06-07 09:02:19,296 Stage-1 map = 0%,  reduce = 0%
2013-06-07 09:02:51,745 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201306070901_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://hdfs-namenode:50030/jobdetails.jsp?jobid=job_201306070901_0001
Examining task ID: task_201306070901_0001_m_000014 (and more) from job job_201306070901_0001

Task with the most failures(4):
-----
Task ID:
  task_201306070901_0001_m_000006

URL:
  http://hdfs-namenode:50030/taskdetails.jsp?jobid=job_201306070901_0001&tipid=task_201306070901_0001_m_000006
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: java.io.FileNotFoundException: HIVE_PLANc632c8e2-257d-4cd4-b833-a09c7d249b2c (Access is denied)
        at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:226)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:536)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:197)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Unknown Source)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.io.FileNotFoundException: HIVE_PLANc632c8e2-257d-4cd4-b833-a09c7d249b2c (Access is denied)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(Unknown Source)
        at java.io.FileInputStream.<init>(Unknown Source)
        at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:217)
        ... 12 more

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 13  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

--
Best wishes!
Fangkun.Cao

--
Best wishes!
Fangkun.Cao