Hi all,
I was running the TPC-H benchmark on Hive recently and found that some queries
went wrong.
The following two cases both involve a MapJoin where the join key is of bigint
type; after disabling auto convert join the error goes away (a config sketch
follows the query fragment below).
Case 1.
Query (TPC-H query4):
create table q4_result as
select
o_orde
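For reference, the workaround itself is just one setting; a minimal sketch (the
full query 4 text is cut off above):

-- disable automatic conversion of the common join into a MapJoin, so the
-- bigint join key goes through the regular reduce-side join instead
set hive.auto.convert.join=false;
-- then run the "create table q4_result as select ..." statement as before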
very much appreciated
> Thanks in advance
>
> Mark Wildig
>
>
> Sent from my iPhone
--
Ted Xu
Alibaba Inc.
Hangzhou, China
t. Please let me
know if you have any idea, thanks!
--
Ted Xu
Alibaba Inc.
Hangzhou, China
terNameToEnum != null) {
> 1081 totalTime += (System.currentTimeMillis() - beginTime);
> 1082 }
> 1083 }
>
> Thanks,
> Shaun
>
>
> On 6 June 2013 19:00, Ted Xu wrote:
>
>> Hi Shaun,
>>
>> This is weird. I'm not sure if there is any other
at 4:38 PM, Shaun Clowes wrote:
> Hi Ted,
>
> It's actually just one partition being created which is what makes it so
> weird.
>
> Thanks,
> Shaun
>
>
> On 6 June 2013 18:36, Ted Xu wrote:
>
>> Hi Shaun,
>>
>> Too many partitions in dynami
4), but it happens every time in EMR.
>
> Thanks,
> Shaun
>
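For reference, these are the dynamic partitioning settings my earlier guess was
about; a minimal sketch using the stock Hive property names (the values shown
are only illustrative):

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions=1000;        -- total across the whole job
set hive.exec.max.dynamic.partitions.pernode=100; -- per mapper/reducer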
--
Regards,
Ted Xu
he fields separator of INSERT OVERWRITE
> LOCAL DIRECTORY, does anyone have experience doing this? Thanks!
>
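A minimal sketch of the two usual approaches; the first form needs a reasonably
recent Hive, older versions only have the concat() workaround (the table and
path names here are placeholders):

-- newer Hive: give the insert its own row format
insert overwrite local directory '/tmp/out'
row format delimited fields terminated by '\t'
select * from my_table;

-- older Hive: build the delimited line by hand
-- (non-string columns may need cast(col as string))
insert overwrite local directory '/tmp/out'
select concat(col1, '\t', col2, '\t', col3) from my_table;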
--
Regards,
Ted Xu
me/hadoop/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/hadoop/.versions/hive-0.8.1/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> 2013-05-25 10:21:57 Processing rows:20 Hashtable size: 19
> Memory usage: 782325272 rate: 0.839
> 2013-05-25 10:22:00 Processing rows: 212267 Hashtable size:
> 212267 Memory usage: 809524488 rate: 0.868
> 2013-05-25 10:22:00 Dump the hashtable into file:
> file:/tmp/hadoop/hive_2013-05-25_22-21-37_408_6569501641432754678/-local-10006/HashTable-Stage-6/MapJoin-bu-31--.hashtable
> Execution failed with exit status: 2
> Obtaining error information
>
>
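If the failure is the local hashtable build running out of its memory budget
(the "rate" column in the log above), these are the usual knobs; a minimal
sketch, values only illustrative:

-- let the local hashtable build use a larger share of the heap
set hive.mapjoin.localtask.max.memory.usage=0.95;

-- or skip the MapJoin conversion entirely and fall back to a reduce-side join
set hive.auto.convert.join=false;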
--
Regards,
Ted Xu
java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
>>> Job Submission failed with exception
>>> 'java.io.FileNotFoundException(/var/mapr/cluster/mapred/jobTracker/staging/mapr406767829/.staging
>>> ())'
>>> Execution failed with exit status: 2
>>> Obtaining error information
>>>
>>> Task failed!
>>> Task ID:
>>> Stage-1
>>>
>>> Logs:
>>>
>>> /tmp/mapr/hive.log
>>> FAILED: Execution Error, return code 2 from
>>> org.apache.hadoop.hive.ql.exec.MapRedTask
>>>
>>> But when I tried :
>>>
>>> > select * from records;
>>>
>>> It works fine. I read that's because there are no reduce tasks in this
>>> query.
>>> Any idea?
>>>
>>> Thanks
>>>
>>> Gautier
>>>
>>
>>
>
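The difference is that a bare select * is answered by a simple fetch and never
submits a MapReduce job, so it never touches the staging directory that is
failing above; anything that does need a job fails the same way. A rough
illustration:

-- served by a local fetch, no MapReduce job is submitted
select * from records;

-- needs a MapReduce job, so it runs into the staging-directory error above
select count(*) from records;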
--
Regards,
Ted Xu
like "hive.groupby.skewindata". If a single
query needs 2 different config for its different jobs, we have to insert
temporary tables first.
2. Its better to introduce some post execute analyze tools like "vaidya"
when job level hook is implemented.
Any ideas?
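To make the temporary-table workaround in point 1 concrete, a rough sketch (the
table names here are made up):

-- job 1: the skewed group-by, with skew handling on
set hive.groupby.skewindata=true;
create table tmp_grouped as
select key, count(*) as cnt from src group by key;

-- job 2: the follow-up join, with skew handling off again
set hive.groupby.skewindata=false;
select t.key, t.cnt, d.name
from tmp_grouped t join dim d on t.key = d.key;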
--
Best Regards,
Ted Xu
uts produced while executing an SQL query..
>
> While running the job I can list them in the tmp/ folder but they are
> getting deleted as soon as the job is over. Is there any way to
> prevent this ?
>
> Thanks
>
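I am not aware of a switch that keeps them after the job finishes, but you can
point the scratch directory somewhere you control and copy the files out while
the job is still running; a sketch, assuming the stock hive.exec.scratchdir
property (the path is only an example):

-- put this session's intermediate output in a known location
set hive.exec.scratchdir=/tmp/hive-debug-scratch;
-- run the query, and while it is still running copy the files out, e.g.
--   hadoop fs -cp /tmp/hive-debug-scratch/<query-subdir> /tmp/saved/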
--
Best Regards,
Ted Xu