-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/58085/#review170638
-----------------------------------------------------------
common/src/java/org/apache/hadoop/hive/common/LogUtils.java
Lines 52 (patched)
<https://reviews.apache.org/r/58085/#comment243510>

    Any suggestions? Seems that is what it does. Can't think of anything else.


ql/src/java/org/apache/hadoop/hive/ql/Driver.java
Lines 1376-1380 (original)
<https://reviews.apache.org/r/58085/#comment243512>

    We were using OperationLog incorrectly before. This log should not be written to directly; instead, we should write to it through LOG.info()/LOG.debug(). As you can see, there is already LOG.debug("Waiting to acquire compile lock: " + command); above. With this patch, that message will be written at DEBUG level. The log levels in OperationLog (EXECUTION, VERBOSE) actually control which classes are allowed to output logs. It's very confusing, though.


ql/src/java/org/apache/hadoop/hive/ql/log/LogDivertAppender.java
Lines 209 (patched)
<https://reviews.apache.org/r/58085/#comment243513>

    Not really. HIVE_SERVER2_LOGGING_OPERATION_LOG_LOCATION is used to configure logLocation here (the base location), so it's still used.


service/src/java/org/apache/hive/service/cli/operation/Operation.java
Lines 219-252 (original)
<https://reviews.apache.org/r/58085/#comment243514>

    Right. The file will be created by the routing appender. We just need to read from that file and output to the beeline.


service/src/java/org/apache/hive/service/cli/operation/Operation.java
Line 295 (original), 246 (patched)
<https://reviews.apache.org/r/58085/#comment243516>

    Yeah. I was debating that since it's just one line. I will do that then.


service/src/java/org/apache/hive/service/cli/operation/OperationManager.java
Line 78 (original), 74 (patched)
<https://reviews.apache.org/r/58085/#comment243518>

    You are right. We need to call register always.


- Aihua Xu


On March 30, 2017, 4:54 p.m., Aihua Xu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail.
> To reply, visit:
> https://reviews.apache.org/r/58085/
> -----------------------------------------------------------
> 
> (Updated March 30, 2017, 4:54 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> HIVE-16061: Some of console output is not printed to the beeline console
> 
> 
> Diffs
> -----
> 
>   common/src/java/org/apache/hadoop/hive/common/LogUtils.java 01b2e7c2e0568eebde6af7fe9c1e359d7ec5e7e8 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java d981119d3f6eb8fba66bf7c16aee838280d1c969 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/TaskRunner.java a596e92d8d67e7a96d8164de086c4f2eca0b0403 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 1945163a0e2cfce53ee75c742143367ad23f97ed 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 591ea973f97913cad4cd7e96dd1ffc6a301b4bb1 
>   ql/src/java/org/apache/hadoop/hive/ql/log/LogDivertAppender.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java 08d0544c49e6d2c2bbe473e8dfed4f38c1606ca7 
>   ql/src/java/org/apache/hadoop/hive/ql/session/OperationLog.java 18216f25444a0b7867cbbf0ae0a5046a686b9e64 
>   service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java 8f08c2e2ad0bb73494b2cd23359af5594997ebcf 
>   service/src/java/org/apache/hive/service/cli/operation/LogDivertAppender.java eaf1acbcfeb687eeebf1b3f7eed2099241fc46a2 
>   service/src/java/org/apache/hive/service/cli/operation/Operation.java 11a820fae6bb6a5815e0efb113ec73a399b0b5bd 
>   service/src/java/org/apache/hive/service/cli/operation/OperationManager.java 3f8f68e31fde0c8c578a1052b1f98b7a86a55089 
>   service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java f41092ed15ac8b88de72222e436abd80bdb8639b 
> 
> 
> Diff: https://reviews.apache.org/r/58085/diff/1/
> 
> 
> Testing
> -------
> 
> Test is done locally. You can see the beeline output now.
> 
> 0: jdbc:hive2://localhost:10000> select t1.key from src t1 join src t2 on t1.key=t2.key limit 10;
> INFO : Compiling command(queryId=axu_20170330125216_0de3bbf7-60f5-476d-b7eb-9861891d2961): select t1.key from src t1 join src t2 on t1.key=t2.key limit 10
> INFO : Semantic Analysis Completed
> INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:t1.key, type:string, comment:null)], properties:null)
> INFO : Completed compiling command(queryId=axu_20170330125216_0de3bbf7-60f5-476d-b7eb-9861891d2961); Time taken: 0.219 seconds
> INFO : Concurrency mode is disabled, not creating a lock manager
> INFO : Executing command(queryId=axu_20170330125216_0de3bbf7-60f5-476d-b7eb-9861891d2961): select t1.key from src t1 join src t2 on t1.key=t2.key limit 10
> WARN : Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
> INFO : Query ID = axu_20170330125216_0de3bbf7-60f5-476d-b7eb-9861891d2961
> INFO : Total jobs = 1
> INFO : Starting task [Stage-4:MAPREDLOCAL] in serial mode
> INFO : Execution completed successfully
> INFO : MapredLocal task succeeded
> INFO : Launching Job 1 out of 1
> INFO : Starting task [Stage-3:MAPRED] in serial mode
> INFO : Number of reduce tasks is set to 0 since there's no reduce operator
> INFO : Starting Job = job_local1894165710_0002, Tracking URL = http://localhost:8080/
> INFO : Kill Command = /Users/axu/Documents/workspaces/tools/hadoop/hadoop-2.6.0/bin/hadoop job -kill job_local1894165710_0002
> INFO : Hadoop job information for Stage-3: number of mappers: 0; number of reducers: 0
> INFO : 2017-03-30 12:52:21,788 Stage-3 map = 0%, reduce = 0%
> ERROR : Ended Job = job_local1894165710_0002 with errors
> ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> INFO : MapReduce Jobs Launched: 
> INFO : Stage-Stage-3: HDFS Read: 0 HDFS Write: 0 FAIL
> INFO : Total MapReduce CPU Time Spent: 0 msec
> 
> 
> Thanks,
> 
> Aihua Xu
> 
>
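[Editor's note] The discussion above hinges on one mechanism: a routing appender diverts each operation's log lines into its own file (under the base location configured by HIVE_SERVER2_LOGGING_OPERATION_LOG_LOCATION), and the server then reads that file back to stream it to beeline. Hive's patch implements this with a Log4j2 routing appender; the following stdlib-only Java sketch (the class and method names here are hypothetical, not taken from the patch) illustrates the same divert-then-read-back idea under that assumption.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: route each operation's log lines to a per-query
// file, keyed by query id, the way a routing appender would.
public class OperationLogRouter {
    private final Path baseDir;  // analogous to the operation-log base location
    private final ConcurrentHashMap<String, Path> routes = new ConcurrentHashMap<>();

    public OperationLogRouter(Path baseDir) {
        this.baseDir = baseDir;
    }

    // Divert one log line to the file for the given query id; the file
    // is created lazily on the first write for that id.
    public void append(String queryId, String line) throws IOException {
        Path file = routes.computeIfAbsent(queryId, id -> baseDir.resolve(id + ".log"));
        Files.writeString(file, line + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Read back everything logged for one operation -- the part the
    // server performs when it returns operation logs to the client.
    public List<String> readOperationLog(String queryId) throws IOException {
        Path file = routes.get(queryId);
        return file == null ? List.of() : Files.readAllLines(file);
    }
}
```

Because the divert step writes through a single keyed route, concurrent queries never interleave lines in each other's files, which is what lets the server hand each beeline session only its own output.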
