-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/58085/
-----------------------------------------------------------

(Updated March 30, 2017, 9:01 p.m.)


Review request for hive.


Changes
-------

Address the comments and fix the unit test failures.


Bugs: HIVE-16061
    https://issues.apache.org/jira/browse/hive-16061


Repository: hive-git


Description
-------

HIVE-16061: Some of console output is not printed to the beeline console
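
For context, a minimal sketch of the mechanism involved (illustrative only, assuming a
Log4j2 setup; the class and field names below are not from the actual patch): an
appender that copies every log event into a per-operation writer, so a client polling
the operation log (as beeline does) sees the server-side console output.

    import java.io.IOException;
    import java.io.Serializable;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    import org.apache.logging.log4j.core.Filter;
    import org.apache.logging.log4j.core.Layout;
    import org.apache.logging.log4j.core.LogEvent;
    import org.apache.logging.log4j.core.appender.AbstractAppender;
    import org.apache.logging.log4j.core.layout.PatternLayout;

    // Illustrative sketch, not the HIVE-16061 patch: divert log events to a
    // per-operation writer so they can be streamed back to the client.
    public final class OperationLogDivertAppender extends AbstractAppender {

      private final Writer operationLogWriter; // hypothetical per-operation sink

      public OperationLogDivertAppender(String name, Writer operationLogWriter) {
        // "%-5p : %m%n" renders events in the "INFO  : message" form shown
        // in the beeline output below.
        super(name, (Filter) null,
            PatternLayout.newBuilder().withPattern("%-5p : %m%n").build());
        this.operationLogWriter = operationLogWriter;
      }

      @Override
      public void append(LogEvent event) {
        try {
          // Render the event with the layout and hand it to the operation log.
          Layout<? extends Serializable> layout = getLayout();
          operationLogWriter.write(
              new String(layout.toByteArray(event), StandardCharsets.UTF_8));
          operationLogWriter.flush();
        } catch (IOException e) {
          // Logging must never break query execution, so failures are dropped.
        }
      }
    }

Registering such an appender on the root logger (and calling start() on it) mirrors
everything written through Log4j into the operation log that the server streams back.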


Diffs (updated)
-----

  common/src/java/org/apache/hadoop/hive/common/LogUtils.java 01b2e7c 
  itests/hive-unit/src/test/java/org/apache/hive/service/cli/operation/TestOperationLoggingLayout.java e344e0f 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java d981119 
  ql/src/java/org/apache/hadoop/hive/ql/exec/TaskRunner.java a596e92 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 1945163 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 591ea97 
  ql/src/java/org/apache/hadoop/hive/ql/log/LogDivertAppender.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java 08d0544 
  ql/src/java/org/apache/hadoop/hive/ql/session/OperationLog.java 18216f2 
  ql/src/test/results/clientpositive/beeline/drop_with_concurrency.q.out 993329e 
  ql/src/test/results/clientpositive/beeline/escape_comments.q.out 2a05e53 
  service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java 8f08c2e 
  service/src/java/org/apache/hive/service/cli/operation/LogDivertAppender.java eaf1acb 
  service/src/java/org/apache/hive/service/cli/operation/Operation.java 11a820f 
  service/src/java/org/apache/hive/service/cli/operation/OperationManager.java 3f8f68e 
  service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java f41092e 


Diff: https://reviews.apache.org/r/58085/diff/2/

Changes: https://reviews.apache.org/r/58085/diff/1-2/


Testing
-------

Tested locally. The full operation log now appears in the beeline console:

0: jdbc:hive2://localhost:10000> select t1.key from src t1 join src t2 on t1.key=t2.key limit 10;
INFO  : Compiling command(queryId=axu_20170330125216_0de3bbf7-60f5-476d-b7eb-9861891d2961): select t1.key from src t1 join src t2 on t1.key=t2.key limit 10
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:t1.key, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=axu_20170330125216_0de3bbf7-60f5-476d-b7eb-9861891d2961); Time taken: 0.219 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=axu_20170330125216_0de3bbf7-60f5-476d-b7eb-9861891d2961): select t1.key from src t1 join src t2 on t1.key=t2.key limit 10
WARN  : Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
INFO  : Query ID = axu_20170330125216_0de3bbf7-60f5-476d-b7eb-9861891d2961
INFO  : Total jobs = 1
INFO  : Starting task [Stage-4:MAPREDLOCAL] in serial mode
INFO  : Execution completed successfully
INFO  : MapredLocal task succeeded
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-3:MAPRED] in serial mode
INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
INFO  : Starting Job = job_local1894165710_0002, Tracking URL = http://localhost:8080/
INFO  : Kill Command = /Users/axu/Documents/workspaces/tools/hadoop/hadoop-2.6.0/bin/hadoop job  -kill job_local1894165710_0002
INFO  : Hadoop job information for Stage-3: number of mappers: 0; number of reducers: 0
INFO  : 2017-03-30 12:52:21,788 Stage-3 map = 0%,  reduce = 0%
ERROR : Ended Job = job_local1894165710_0002 with errors
ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
INFO  : MapReduce Jobs Launched: 
INFO  : Stage-Stage-3:  HDFS Read: 0 HDFS Write: 0 FAIL
INFO  : Total MapReduce CPU Time Spent: 0 msec


Thanks,

Aihua Xu
