[ https://issues.apache.org/jira/browse/HIVE-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225423#comment-14225423 ]

Brock Noland commented on HIVE-8836:
------------------------------------

A similar warning shows up in the other log:
{noformat}
2014-11-25 14:39:26,714 INFO  ql.Driver (SessionState.java:printInfo(828)) - Query ID = hiveptest_20141125143939_77f4ba53-fc27-4efe-ad3d-85e23ea54748
2014-11-25 14:39:26,714 INFO  ql.Driver (SessionState.java:printInfo(828)) - Total jobs = 2
2014-11-25 14:39:26,714 INFO  ql.Driver (SessionState.java:printInfo(828)) - Launching Job 1 out of 2
2014-11-25 14:39:26,715 INFO  ql.Driver (Driver.java:launchTask(1643)) - Starting task [Stage-3:MAPRED] in serial mode
2014-11-25 14:39:26,715 INFO  exec.Task (SessionState.java:printInfo(828)) - In order to change the average load for a reducer (in bytes):
2014-11-25 14:39:26,715 INFO  exec.Task (SessionState.java:printInfo(828)) -   set hive.exec.reducers.bytes.per.reducer=<number>
2014-11-25 14:39:26,715 INFO  exec.Task (SessionState.java:printInfo(828)) - In order to limit the maximum number of reducers:
2014-11-25 14:39:26,715 INFO  exec.Task (SessionState.java:printInfo(828)) -   set hive.exec.reducers.max=<number>
2014-11-25 14:39:26,715 INFO  exec.Task (SessionState.java:printInfo(828)) - In order to set a constant number of reducers:
2014-11-25 14:39:26,715 INFO  exec.Task (SessionState.java:printInfo(828)) -   set mapreduce.job.reduces=<number>
2014-11-25 14:39:26,715 DEBUG session.SparkSessionManagerImpl (SparkSessionManagerImpl.java:getSession(107)) - Existing session (34e37f91-2cac-4a31-aba7-85b711d8dad3) is reused.
2014-11-25 14:39:26,728 INFO  ql.Context (Context.java:getMRScratchDir(266)) - New scratch dir is file:/home/hiveptest/50.18.64.184-hiveptest-2/apache-svn-spark-source/itests/qtest-spark/target/tmp/scratchdir/hiveptest/e9216630-f66e-4b31-bc30-58078678a976/hive_2014-11-25_14-39-26_638_5903751068675432492-1
2014-11-25 14:39:26,775 INFO  client.SparkClientImpl (SparkClientImpl.java:onReceive(329)) - Received result for e288f226-2429-469a-9c53-07fabda12db3
2014-11-25 14:55:54,771 WARN  remote.ReliableDeliverySupervisor (Slf4jLogger.scala:apply$mcV$sp(71)) - Association with remote system [akka.tcp://8fe8195c-f1be-45e3-a9d1-b94b7caafcd9@10.227.4.181:38320] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
{noformat}
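
For what it's worth, the gating WARN at the end is Akka remoting's standard reaction to a dropped association: the remote address is refused for 5000 ms after the peer disassociates, which usually means the remote driver process exited or the connection was lost. Separately, the reducer-tuning hints that exec.Task prints are ordinary session-level settings; a minimal sketch with illustrative values (the numbers are my own, not taken from this run):
{noformat}
-- Target roughly 256 MB of input per reducer (illustrative value)
set hive.exec.reducers.bytes.per.reducer=268435456;
-- Cap how many reducers Hive may launch (illustrative value)
set hive.exec.reducers.max=100;
-- Or skip the estimate entirely and force a fixed reducer count
set mapreduce.job.reduces=4;
{noformat}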

> Enable automatic tests with remote spark client.[Spark Branch]
> --------------------------------------------------------------
>
>                 Key: HIVE-8836
>                 URL: https://issues.apache.org/jira/browse/HIVE-8836
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Chengxiang Li
>            Assignee: Rui Li
>              Labels: Spark-M3
>             Fix For: spark-branch
>
>         Attachments: HIVE-8836.1-spark.patch, HIVE-8836.2-spark.patch, 
> HIVE-8836.3-spark.patch, HIVE-8836.4-spark.patch
>
>
> In a real production environment, the remote spark client will usually be the one used to submit Spark jobs for Hive. We should enable automated tests with the remote spark client to make sure Hive features work with it.
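
As a sketch of the session settings that route Hive onto the remote spark client path (assuming the standard Hive on Spark properties; as I understand it, any non-local master selects the remote client, and the master URL below is a placeholder, not taken from this run):
{noformat}
set hive.execution.engine=spark;
-- Placeholder host/port; a non-local master exercises the remote client
set spark.master=spark://<master-host>:7077;
{noformat}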



