Nitin,

The following setting was already in place in Hive:
set hive.exec.mode.local.auto=false;

Surprisingly, when I applied the following setting, it started working:
set hive.auto.convert.join=true;
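
For reference, the session then had both settings in effect. As I understand the documentation, hive.auto.convert.join=true asks Hive to automatically convert a common (reduce-side) join into a map join when the smaller table fits in memory:

set hive.exec.mode.local.auto=false;   -- was already set
set hive.auto.convert.join=true;       -- newly applied; the query then succeeded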

Can you please help me understand what happened?



Regards
Sanjiv Singh
Mob :  +091 9990-447-339

On Tue, Sep 22, 2015 at 11:41 AM, Nitin Pawar <nitinpawar...@gmail.com>
wrote:

> Can you try setting these
> set hive.exec.mode.local.auto=false;
>
>
> On Tue, Sep 22, 2015 at 11:25 AM, @Sanjiv Singh <sanjiv.is...@gmail.com>
> wrote:
>
>>
>>
>> *Hi Folks,*
>>
>>
>> *I am running the Hive query below, and it fails during execution. Please
>> help me resolve it and understand the possible cause of the error.*
>>
>> *Hive Query:*
>>
>> SELECT *
>> FROM store_sales, date_dim, store, household_demographics, customer_address
>> WHERE store_sales.ss_sold_date_sk = date_dim.d_date_sk
>>   AND store_sales.ss_store_sk = store.s_store_sk
>>   AND store_sales.ss_hdemo_sk = household_demographics.hd_demo_sk
>>   AND store_sales.ss_addr_sk = customer_address.ca_address_sk
>>   AND (date_dim.d_dom BETWEEN 1 AND 2)
>>   AND (household_demographics.hd_dep_count = 3
>>        OR household_demographics.hd_vehicle_count = -1)
>>   AND date_dim.d_year IN (1998, 1998 + 1, 1998 + 2)
>>   AND store.s_city IN ('Midway', 'Fairview');
>>
>>
>> *Note:*
>> All tables [store_sales, date_dim, store, household_demographics,
>> customer_address] are in ORC format.
>> Hive version: 1.0.0
>>
>>
>> *Additional notes:*
>> I also checked the Hive EXPLAIN output for the same query. It fails at the
>> last stage, where the intermediate result is joined to customer_address.
>> I also checked for null values in store_sales.ss_addr_sk and
>> customer_address.ca_address_sk; neither column contains nulls.
>> I also changed the Hive log level to DEBUG, but found nothing specific
>> about the error in the log file.
>>
>> I really want to understand why this query is failing, how it can be
>> resolved, and where to look.
>> Any help is highly appreciated.
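>>
>> The null check was roughly of this form, using the join columns above
>> (both counts came back zero):
>>
>>   SELECT COUNT(*) FROM store_sales      WHERE ss_addr_sk IS NULL;
>>   SELECT COUNT(*) FROM customer_address WHERE ca_address_sk IS NULL;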
>>
>>
>> *At Hive console :*
>>
>> Launching Job 4 out of 4
>> Number of reduce tasks not specified. Estimated from input data size: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapreduce.job.reduces=<number>
>> java.lang.NullPointerException
>>     at
>> org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:265)
>>     at
>> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:272)
>>     at
>> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:509)
>>     .......
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>> Job Submission failed with exception
>> 'java.lang.NullPointerException(null)'
>> FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
>> MapReduce Jobs Launched:
>> Stage-Stage-5: Map: 2  Reduce: 1   Cumulative CPU: 4.08 sec   HDFS Read:
>> 746 HDFS Write: 96 SUCCESS
>> Stage-Stage-3: Map: 2  Reduce: 1   Cumulative CPU: 3.32 sec   HDFS Read:
>> 889 HDFS Write: 96 SUCCESS
>> Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 3.21 sec   HDFS Read:
>> 889 HDFS Write: 96 SUCCESS
>>
>>
>>
>>
>> *Hive error (hive.log):*
>>
>> 2015-09-22 10:41:01,304 ERROR [main]: exec.Task
>> (SessionState.java:printError(833)) - Job Submission failed with exception
>> 'java.lang.NullPointerException(null)'
>> java.lang.NullPointerException
>>     at
>> org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:265)
>>     at
>> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:272)
>>     at
>> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:509)
>>     at
>> org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
>>     at
>> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
>>     at
>> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
>>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
>>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
>>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
>>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at
>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
>>     at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
>>     at
>> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:429)
>>     at
>> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
>>     at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>>     at
>> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>>     at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604)
>>     at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364)
>>     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177)
>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:201)
>>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:153)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:364)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:712)
>>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:631)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>>
>>
>>
>> Regards
>> Sanjiv Singh
>> Mob :  +091 9990-447-339
>>
>
>
>
> --
> Nitin Pawar
>
