Thanks Matouk,
I checked what you said, and I finally noticed that running
"/etc/init.d/hive-server2 start" is not the same as running "service
hive-server2 start". In the first case the script inherits the caller's
CWD, and that was causing my issue.
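For the record, the CWD difference is easy to see with a throwaway script (a minimal sketch; the /tmp paths are just for illustration):

```shell
# Write a tiny script that reports its working directory.
cat > /tmp/show-cwd.sh <<'EOF'
#!/bin/sh
pwd
EOF
chmod +x /tmp/show-cwd.sh

# Invoked directly, the script inherits the caller's CWD:
cd /tmp
/tmp/show-cwd.sh    # prints /tmp
```

`service`, by contrast, typically invokes the init script with a scrubbed environment (and often from a different directory), which is why the two invocations can behave differently.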

Now I'm hitting another issue. After submitting the join query, I find
this in the logs:

2014-11-06 11:53:47,165 ERROR [main]: security.UserGroupInformation (UserGroupInformation.java:doAs(1494)) - PriviledgedActionException as:hive/huesec8.dtardon.cediant...@dtardon.cediant.es (auth:KERBEROS) cause:ENOENT: No such file or directory
2014-11-06 11:53:47,176 ERROR [main]: mr.ExecDriver (SessionState.java:printError(545)) - Job Submission failed with exception 'org.apache.hadoop.io.nativeio.NativeIOException(No such file or directory)'
ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:158)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:635)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:596)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:178)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:740)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

I already tried submitting a job to YARN as the hive user, and it ran
successfully.
Any idea?
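In case it helps others hitting the same ENOENT from NativeIO.chmod: the failure happens while the job client prepares its local staging/scratch directories, so a quick sanity check is to confirm those directories exist and are writable by the hive user. A rough sketch (the paths below are illustrative; take the real ones from hive.exec.local.scratchdir in hive-site.xml and hadoop.tmp.dir in core-site.xml):

```shell
# Print permissions for a directory, or flag it as missing.
check_dir() {
  if [ -d "$1" ]; then
    ls -ld "$1"
  else
    echo "missing: $1"    # a missing dir here matches the ENOENT above
  fi
}

# Illustrative candidates; substitute the values from your own configs:
check_dir /tmp/hive
check_dir /tmp/hadoop-hive
```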






2014-11-05 10:28 GMT+01:00 Matouk IFTISSEN <matouk.iftis...@ysance.com>:

> Hello Juan,
> As you can see, the problem comes from permissions. I ran into an error
> like this before and got past it. Check and compare:
>
>    1. whether your Hadoop installation was done as 'root' or another
>    user (and whether that user is the superuser)
>    2. which user runs the Hive script
>    3. the users listed in 'container-executor.cfg', since you are in
>    YARN mode
>
> Hope this helps you ;)
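> For reference on point 3, a container-executor.cfg usually carries
> entries like these (illustrative values only; adapt to your cluster):
>
>     yarn.nodemanager.linux-container-executor.group=yarn
>     banned.users=hdfs,yarn,mapred,bin
>     min.user.id=500
>     allowed.system.users=hive
>
> min.user.id in particular rejects jobs from low-UID system accounts,
> so a 'hive' user created with a system UID can be refused.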
>
> 2014-11-05 9:45 GMT+01:00 Juan Carlos <juc...@gmail.com>:
>
>> I have a secured, HA HDFS cluster, and I have been trying to execute a
>> join operation from the beeline CLI.
>>
>> My issue is that Hive tries to execute the MapReduce job locally
>> instead of on YARN. I set these parameters:
>>
>>     <property>
>>         <name>mapreduce.framework.name</name>
>>         <value>yarn</value>
>>     </property>
>>
>>     <property>
>>         <name>mapred.job.tracker</name>
>>         <value>anything</value>
>>     </property>
>>
>> I'm using Hive 0.13 and Hadoop 2.2.0.
>>
>> In logs I see this:
>> ERROR [pool-2-thread-2]: mr.MapredLocalTask (MapredLocalTask.java:execute(282)) - Exception: Cannot run program "/usr/lib/hadoop/bin/hadoop" (in directory "/root"): error=13, Permission denied
>> 2014-11-05 09:31:33,368 ERROR [pool-2-thread-2]: ql.Driver (SessionState.java:printError(545)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
>> 2014-11-05 09:31:33,368 INFO  [pool-2-thread-2]: log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=Driver.execute start=1415176292766 end=1415176293368 duration=602 from=org.apache.hadoop.hive.ql.Driver>
>> 2014-11-05 09:31:33,368 INFO  [pool-2-thread-2]: log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
>> 2014-11-05 09:31:33,368 INFO  [pool-2-thread-2]: ZooKeeperHiveLockManager (ZooKeeperHiveLockManager.java:releaseLocks(254)) -  about to release lock for default/sample_08
>> 2014-11-05 09:31:33,429 INFO  [pool-2-thread-2]: ZooKeeperHiveLockManager (ZooKeeperHiveLockManager.java:releaseLocks(254)) -  about to release lock for default/sample_07
>> 2014-11-05 09:31:33,514 INFO  [pool-2-thread-2]: ZooKeeperHiveLockManager (ZooKeeperHiveLockManager.java:releaseLocks(254)) -  about to release lock for default
>> 2014-11-05 09:31:33,675 INFO  [pool-2-thread-2]: log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=releaseLocks start=1415176293368 end=1415176293675 duration=307 from=org.apache.hadoop.hive.ql.Driver>
>> 2014-11-05 09:31:33,815 ERROR [pool-2-thread-2]: operation.Operation (SQLOperation.java:run(202)) - Error running hive query:
>> org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
>>     at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:146)
>>     at org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
>>     at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>>     at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:493)
>>     at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:208)
>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>     at java.lang.Thread.run(Thread.java:744)
>>
>>
>> Any idea? Could anyone point me at what else to check?
>>
>> Regards
>>
>
>
>
> --
> ---------------
> Life and Relations are not binary
>
> *Matouk IFTISSEN | Consultant BI & Big Data*
> 24 rue du sentier - 75002 Paris - www.ysance.com <http://www.ysance.com/>
> Fax : +33 1 73 72 97 26
> *Ysance sur* :*Twitter* <http://twitter.com/ysance>* | Facebook
> <https://www.facebook.com/pages/Ysance/131036788697> | Google+
> <https://plus.google.com/u/0/b/115710923959357341736/115710923959357341736/posts>
>  | LinkedIn
> <http://www.linkedin.com/company/ysance> | Newsletter
> <http://www.ysance.com/nous-contacter.html>*
> *Nos autres sites* : *ys4you* <http://wwww.ys4you.com/>* | labdecisionnel
> <http://www.labdecisionnel.com/> | decrypt <http://decrypt.ysance.com/>*
>
