Speculative execution must be turned on for Hadoop in your setup. With
speculative execution on, the framework starts duplicate attempts of the same
task on different nodes in parallel. The attempt that finishes first
contributes its output; the other one is killed.
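For reference, speculative execution is controlled by the usual Hadoop job
properties. A minimal sketch of the 0.20-era mapred-site.xml entries (the
values shown are illustrative; setting them to false disables the duplicate
attempts):

<!-- if true, extra attempts of slow map tasks may run in parallel -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>true</value>
</property>
<!-- same, for reduce tasks; set to false to disable the duplicate attempts -->
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>true</value>
</property>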
-Shrijeet
On Mon, Jan 24, 2011 at 11:56 AM, Vijay wrote:
> Hi,
>
> When I r
You mentioned that you got the code from trunk, so it is fair to assume you
are not hitting https://issues.apache.org/jira/browse/HIVE-1508.
Still worth checking. Are all the open files hive history files
(they look like hive_job_log*.txt)? Like Viral suggested, you can
check that by monitoring open files.
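If they do turn out to be history files, one thing that makes the monitoring
easier is pinning them to a known directory. A minimal sketch for
hive-site.xml, assuming your build already has the hive.querylog.location
property (the directory is just an example):

<property>
  <!-- directory where the hive_job_log*.txt history files are written -->
  <name>hive.querylog.location</name>
  <value>/tmp/hive-history</value>
</property>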
Hi Hive users and developers,
Is there any reason why the change proposed in
https://issues.apache.org/jira/browse/HADOOP-4097 is absent in the current
version?
When speculative execution is on I see AlreadyBeingCreatedException, and I
was wondering if it is because of HADOOP-4097.
-Shrijeet
I would say your Hadoop configuration file(s) should have been on your
classpath (core-site.xml in this case). You are not supposed to put
Hadoop parameters into Hive conf files.
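To illustrate the split (a minimal sketch; the hostname and paths below are
placeholders, not your values):

<!-- core-site.xml, picked up from the Hadoop classpath -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>

<!-- hive-site.xml, Hive's own settings only -->
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive-${user.name}</value>
</property>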
-Shrijeet
On Fri, Nov 19, 2010 at 4:57 PM, Stuart Smith wrote:
>
> Hello,
>
> Just wanted to let people know I track
One other alternative:
Add the following property to your hive-site.xml

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///home/me/my.jar,file:///home/you/your.jar,file:///home/us/our.jar</value>
</property>
On Thu, Nov 11, 2010 at 8:05 PM, Edward Capriolo wrote:
> On Thu, Nov 11, 2010 at 10:57 PM, bryan xu wrote:
>> Dear all,
>> I
You might want to look at Oozie (http://yahoo.github.com/oozie/). The
trunk version doesn't support Hive actions yet, I think, but Cloudera
packages a version that has Hive support.
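For what it's worth, a Hive action in an Oozie workflow is itself plain
workflow XML. The sketch below is from memory of later Oozie releases, so
treat the schema URI, the element names, and the action/script/parameter
names (daily-stats, daily_stats.q, DAY) as assumptions rather than what the
Cloudera packaging of the time actually shipped:

<action name="daily-stats">
  <hive xmlns="uri:oozie:hive-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <!-- HiveQL script that builds the daily statistics -->
    <script>daily_stats.q</script>
    <param>DAY=${day}</param>
  </hive>
  <ok to="end"/>
  <error to="fail"/>
</action>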
> I need to join data from these 3 tables to generate daily statistics but
> obviously, I do not want to reprocess e
Guru,
The error message is in your tasktracker log. Go to the link Hive points you
to when the job fails in order to find the logs. In your case it was
http://xx.xx.xx.xxx:50030/taskdetails.jsp?jobid=job_201009280549_0050&tipid=task_201009280549_0050_r_00.
This particular link might have expired by now, so r
David,
Your namenode is not configured (it's not mentioned in what you sent). Add
this in core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>