Hi

It is because of space issues. Issue the 'df -h' command on the TT node that reported this error; the partition used for dfs.data.dir is most likely full.
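For example, a quick check could look like this (a sketch; the mount points below are assumptions, take the real values of dfs.data.dir and mapred.local.dir from hdfs-site.xml and mapred-site.xml on that node):

    # Report usage of the partition backing dfs.data.dir (path assumed):
    df -h /data/1/dfs/dn
    # Since the error comes from the local file system during job
    # initialization, the mapred.local.dir partition (path assumed)
    # is also worth checking:
    df -h /data/1/mapred/local

If either partition shows 100% use, freeing space there (or pointing the property at a larger disk) should clear the error.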
Regards
Bejoy KS

________________________________
From: abhiTowson cal <abhishek.dod...@gmail.com>
To: user@hive.apache.org
Sent: Wednesday, July 25, 2012 9:48 PM
Subject: HIVE ERROR

Hi all,

I have overridden some properties in Hive. I am getting the following error when executing a query. Is this error due to the overridden properties, or is the LOCAL FILE SYSTEM out of space?

Overridden properties:

set io.sort.mb=512;
set io.sort.factor=100;
set mapred.reduce.parallel.copies=40;
set hive.map.aggr=true;
set hive.exec.parallel=true;
set hive.groupby.skewindata=true;
set mapred.job.reuse.jvm.num.tasks=-1;

2012-07-25 11:30:41,426 Stage-59 map = 100%, reduce = 47%
2012-07-25 11:30:42,444 Stage-57 map = 100%, reduce = 28%
Starting Job = job_201206281050_24226, Tracking URL
Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapre
2012-07-25 11:30:43,959 Stage-34 map = 100%, reduce = 100%
Ended Job = job_201206281050_24226 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

Job initialization failed: org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:201)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:140)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)