[ https://issues.apache.org/jira/browse/HIVE-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305645#comment-15305645 ]

Hive QA commented on HIVE-13837:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12806574/HIVE-13837.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 99 failed/errored test(s), 9129 tests executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestJdbcWithMiniHA - did not produce a TEST-*.xml file
TestJdbcWithMiniMr - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_join30.q-vector_decimal_10_0.q-acid_globallimit.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_sortmerge_join_7.q-orc_merge9.q-tez_union_dynamic_partition.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-enforce_order.q-vector_partition_diff_num_cols.q-unionDistinct_1.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-groupby2.q-tez_dynpart_hashjoin_1.q-custom_input_output_format.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-insert_values_non_partitioned.q-schema_evol_orc_nonvec_mapwork_part.q-union5.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-join1.q-mapjoin_decimal.q-vectorized_distinct_gby.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-load_dyn_part2.q-selectDistinctStar.q-vector_decimal_5.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-mapjoin_mapjoin.q-cte_5.q-insert_into1.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-order_null.q-vector_acid3.q-orc_merge10.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-schema_evol_text_nonvec_mapwork_part_all_primitive.q-bucket3.q-tez_fsstat.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-schema_evol_text_nonvec_mapwork_table.q-vector_decimal_trailing.q-subquery_in.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-script_pipe.q-vectorized_parquet.q-vector_decimal_aggregate.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-union2.q-bucket4.q-union8.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-update_orig_table.q-explainuser_4.q-update_after_multiple_inserts.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_coalesce.q-cbo_windowing.q-tez_join.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_distinct_2.q-tez_joins_explain.q-cte_mat_1.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_interval_2.q-constprog_dpp.q-dynamic_partition_pruning.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_varchar_4.q-smb_cache.q-transform_ppr2.q-and-7-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorization_13.q-auto_sortmerge_join_13.q-tez_bmj_schema_evolution.q-and-12-more - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-schemeAuthority2.q-auto_sortmerge_join_16.q-and-1-more - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-bucketizedhiveinputformat.q-orc_mr_pathalias.q-and-1-more - did not produce a TEST-*.xml file
TestNegativeCliDriver-udf_invalid.q-nopart_insert.q-insert_into_with_schema.q-and-734-more - did not produce a TEST-*.xml file
TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file
TestSparkCliDriver-timestamp_lazy.q-union29.q-join23.q-and-12-more - did not produce a TEST-*.xml file
TestSparkCliDriver-union_remove_23.q-transform_ppr2.q-join20.q-and-12-more - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_annotate_stats_join
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join17
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucketsortoptimize_insert_4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_cbo_semijoin
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_cbo_stats
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_cbo_subq_in
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_cbo_subq_not_in
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_cbo_union
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby2_map_skew
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby2_noskew
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby3_map_multi_distinct
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby4_map
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby5_noskew
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby7_map_skew
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby8_map
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_having
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join14
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join22
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join26
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join34
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join35
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join39
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join6
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join8
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join_cond_pushdown_3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join_nullsafe
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join_reorder
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part11
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part12
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part5
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_mapjoin_distinct
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_multi_insert_gby
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_nullgroup
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ptf_streaming
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_router_join_ppr
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_runtime_skewjoin_mapjoin_spark
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample5
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample6
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_skewjoinopt14
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_skewjoinopt16
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sort
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_stats1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_stats14
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_table_access_keys_stats
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_remove_11
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_remove_17
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_remove_20
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_remove_9
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_char_4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorization_11
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorization_3
org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testPreemptionQueueComparator
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.ql.exec.vector.expressions.TestVectorDateExpressions.testMultiThreadedVectorUDFDate
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testLocksInSubquery
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS
{noformat}

Test results: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/437/testReport
Console output: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/437/console
Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-437/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 99 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12806574 - PreCommit-HIVE-MASTER-Build

> current_timestamp() output format is different in some cases
> ------------------------------------------------------------
>
>                 Key: HIVE-13837
>                 URL: https://issues.apache.org/jira/browse/HIVE-13837
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Pengcheng Xiong
>            Assignee: Pengcheng Xiong
>         Attachments: HIVE-13837.01.patch, HIVE-13837.02.patch
>
>
> As [~jdere] reports:
> {code}
> current_timestamp() udf returns result with different format in some cases.
> select current_timestamp() returns result with decimal precision:
> {noformat}
> hive> select current_timestamp();
> OK
> 2016-04-14 18:26:58.875
> Time taken: 0.077 seconds, Fetched: 1 row(s)
> {noformat}
> But the output format is different for select current_timestamp() from all100k union select current_timestamp() from over100k limit 5;
> {noformat}
> hive> select current_timestamp() from all100k union select current_timestamp() from over100k limit 5;
> Query ID = hrt_qa_20160414182956_c4ed48f2-9913-4b3b-8f09-668ebf55b3e3
> Total jobs = 1
> Launching Job 1 out of 1
> Tez session was closed. Reopening...
> Session re-established.
> Status: Running (Executing on YARN cluster with App id application_1460611908643_0624)
> ----------------------------------------------------------------------------------------------
>         VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
> ----------------------------------------------------------------------------------------------
> Map 1 ..........      llap     SUCCEEDED      1          1        0        0       0       0
> Map 4 ..........      llap     SUCCEEDED      1          1        0        0       0       0
> Reducer 3 ......      llap     SUCCEEDED      1          1        0        0       0       0
> ----------------------------------------------------------------------------------------------
> VERTICES: 03/03  [==========================>>] 100%  ELAPSED TIME: 0.92 s
> ----------------------------------------------------------------------------------------------
> OK
> 2016-04-14 18:29:56
> Time taken: 10.558 seconds, Fetched: 1 row(s)
> {noformat}
> explain plan for select current_timestamp();
> {noformat}
> hive> explain extended select current_timestamp();
> OK
> ABSTRACT SYNTAX TREE:
>   
> TOK_QUERY
>    TOK_INSERT
>       TOK_DESTINATION
>          TOK_DIR
>             TOK_TMP_FILE
>       TOK_SELECT
>          TOK_SELEXPR
>             TOK_FUNCTION
>                current_timestamp
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
>     Fetch Operator
>       limit: -1
>       Processor Tree:
>         TableScan
>           alias: _dummy_table
>           Row Limit Per Split: 1
>           GatherStats: false
>           Select Operator
>             expressions: 2016-04-14 18:30:57.206 (type: timestamp)
>             outputColumnNames: _col0
>             ListSink
> Time taken: 0.062 seconds, Fetched: 30 row(s)
> {noformat}
> explain plan for select current_timestamp() from all100k union select current_timestamp() from over100k limit 5;
> {noformat}
> hive> explain extended select current_timestamp() from all100k union select current_timestamp() from over100k limit 5;
> OK
> ABSTRACT SYNTAX TREE:
>   
> TOK_QUERY
>    TOK_FROM
>       TOK_SUBQUERY
>          TOK_QUERY
>             TOK_FROM
>                TOK_SUBQUERY
>                   TOK_UNIONALL
>                      TOK_QUERY
>                         TOK_FROM
>                            TOK_TABREF
>                               TOK_TABNAME
>                                  all100k
>                         TOK_INSERT
>                            TOK_DESTINATION
>                               TOK_DIR
>                                  TOK_TMP_FILE
>                            TOK_SELECT
>                               TOK_SELEXPR
>                                  TOK_FUNCTION
>                                     current_timestamp
>                      TOK_QUERY
>                         TOK_FROM
>                            TOK_TABREF
>                               TOK_TABNAME
>                                  over100k
>                         TOK_INSERT
>                            TOK_DESTINATION
>                               TOK_DIR
>                                  TOK_TMP_FILE
>                            TOK_SELECT
>                               TOK_SELEXPR
>                                  TOK_FUNCTION
>                                     current_timestamp
>                   _u1
>             TOK_INSERT
>                TOK_DESTINATION
>                   TOK_DIR
>                      TOK_TMP_FILE
>                TOK_SELECTDI
>                   TOK_SELEXPR
>                      TOK_ALLCOLREF
>          _u2
>    TOK_INSERT
>       TOK_DESTINATION
>          TOK_DIR
>             TOK_TMP_FILE
>       TOK_SELECT
>          TOK_SELEXPR
>             TOK_ALLCOLREF
>       TOK_LIMIT
>          5
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-1
>     Tez
>       DagId: hrt_qa_20160414183119_ec8e109e-8975-4799-a142-4a2289f85910:7
>       Edges:
>         Map 1 <- Union 2 (CONTAINS)
>         Map 4 <- Union 2 (CONTAINS)
>         Reducer 3 <- Union 2 (SIMPLE_EDGE)
>       DagName: 
>       Vertices:
>         Map 1 
>             Map Operator Tree:
>                 TableScan
>                   alias: all100k
>                   Statistics: Num rows: 100000 Data size: 15801336 Basic stats: COMPLETE Column stats: COMPLETE
>                   GatherStats: false
>                   Select Operator
>                     Statistics: Num rows: 100000 Data size: 4000000 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: 2016-04-14 18:31:19.0 (type: timestamp)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 200000 Data size: 8000000 Basic stats: COMPLETE Column stats: COMPLETE
>                       Group By Operator
>                         keys: _col0 (type: timestamp)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 40 Basic stats: COMPLETE Column stats: COMPLETE
>                         Reduce Output Operator
>                           key expressions: _col0 (type: timestamp)
>                           null sort order: a
>                           sort order: +
>                           Map-reduce partition columns: _col0 (type: timestamp)
>                           Statistics: Num rows: 1 Data size: 40 Basic stats: COMPLETE Column stats: COMPLETE
>                           tag: -1
>                           TopN: 5
>                           TopN Hash Memory Usage: 0.04
>                           auto parallelism: true
>             Execution mode: llap
>             LLAP IO: no inputs
>             Path -> Alias:
>               hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/all100k [all100k]
>             Path -> Partition:
>               hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/all100k
>                 Partition
>                   base file name: all100k
>                   input format: org.apache.hadoop.mapred.TextInputFormat
>                   output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                   properties:
>                     COLUMN_STATS_ACCURATE {"BASIC_STATS":"true","COLUMN_STATS":{"t":"true","si":"true","i":"true","b":"true","f":"true","d":"true","s":"true","dc":"true","bo":"true","v":"true","c":"true","ts":"true"}}
>                     EXTERNAL TRUE
>                     bucket_count -1
>                     columns t,si,i,b,f,d,s,dc,bo,v,c,ts,dt
>                     columns.comments 
>                     columns.types tinyint:smallint:int:bigint:float:double:string:decimal(38,18):boolean:varchar(25):char(25):timestamp:date
>                     field.delim |
>                     file.inputformat org.apache.hadoop.mapred.TextInputFormat
>                     file.outputformat org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                     location hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/all100k
>                     name default.all100k
>                     numFiles 1
>                     numRows 100000
>                     rawDataSize 15801336
>                     serialization.ddl struct all100k { byte t, i16 si, i32 i, i64 b, float f, double d, string s, decimal(38,18) dc, bool bo, varchar(25) v, char(25) c, timestamp ts, date dt}
>                     serialization.format |
>                     serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                     totalSize 15901336
>                     transient_lastDdlTime 1460612683
>                   serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                 
>                     input format: org.apache.hadoop.mapred.TextInputFormat
>                     output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                     properties:
>                       COLUMN_STATS_ACCURATE {"BASIC_STATS":"true","COLUMN_STATS":{"t":"true","si":"true","i":"true","b":"true","f":"true","d":"true","s":"true","dc":"true","bo":"true","v":"true","c":"true","ts":"true"}}
>                       EXTERNAL TRUE
>                       bucket_count -1
>                       columns t,si,i,b,f,d,s,dc,bo,v,c,ts,dt
>                       columns.comments 
>                       columns.types tinyint:smallint:int:bigint:float:double:string:decimal(38,18):boolean:varchar(25):char(25):timestamp:date
>                       field.delim |
>                       file.inputformat org.apache.hadoop.mapred.TextInputFormat
>                       file.outputformat org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                       location hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/all100k
>                       name default.all100k
>                       numFiles 1
>                       numRows 100000
>                       rawDataSize 15801336
>                       serialization.ddl struct all100k { byte t, i16 si, i32 i, i64 b, float f, double d, string s, decimal(38,18) dc, bool bo, varchar(25) v, char(25) c, timestamp ts, date dt}
>                       serialization.format |
>                       serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                       totalSize 15901336
>                       transient_lastDdlTime 1460612683
>                     serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                     name: default.all100k
>                   name: default.all100k
>             Truncated Path -> Alias:
>               hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/all100k [all100k]
>         Map 4 
>             Map Operator Tree:
>                 TableScan
>                   alias: over100k
>                   Statistics: Num rows: 100000 Data size: 6631229 Basic stats: COMPLETE Column stats: COMPLETE
>                   GatherStats: false
>                   Select Operator
>                     Statistics: Num rows: 100000 Data size: 4000000 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: 2016-04-14 18:31:19.0 (type: timestamp)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 200000 Data size: 8000000 Basic stats: COMPLETE Column stats: COMPLETE
>                       Group By Operator
>                         keys: _col0 (type: timestamp)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 40 Basic stats: COMPLETE Column stats: COMPLETE
>                         Reduce Output Operator
>                           key expressions: _col0 (type: timestamp)
>                           null sort order: a
>                           sort order: +
>                           Map-reduce partition columns: _col0 (type: timestamp)
>                           Statistics: Num rows: 1 Data size: 40 Basic stats: COMPLETE Column stats: COMPLETE
>                           tag: -1
>                           TopN: 5
>                           TopN Hash Memory Usage: 0.04
>                           auto parallelism: true
>             Execution mode: llap
>             LLAP IO: no inputs
>             Path -> Alias:
>               hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/over100k [over100k]
>             Path -> Partition:
>               hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/over100k
>                 Partition
>                   base file name: over100k
>                   input format: org.apache.hadoop.mapred.TextInputFormat
>                   output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                   properties:
>                     COLUMN_STATS_ACCURATE {"BASIC_STATS":"true","COLUMN_STATS":{"t":"true","si":"true","i":"true","b":"true","f":"true","d":"true","bo":"true","s":"true","bin":"true"}}
>                     EXTERNAL TRUE
>                     bucket_count -1
>                     columns t,si,i,b,f,d,bo,s,bin
>                     columns.comments 
>                     columns.types tinyint:smallint:int:bigint:float:double:boolean:string:binary
>                     field.delim :
>                     file.inputformat org.apache.hadoop.mapred.TextInputFormat
>                     file.outputformat org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                     location hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/over100k
>                     name default.over100k
>                     numFiles 1
>                     numRows 100000
>                     rawDataSize 6631229
>                     serialization.ddl struct over100k { byte t, i16 si, i32 i, i64 b, float f, double d, bool bo, string s, binary bin}
>                     serialization.format :
>                     serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                     totalSize 6731229
>                     transient_lastDdlTime 1460612798
>                   serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                 
>                     input format: org.apache.hadoop.mapred.TextInputFormat
>                     output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                     properties:
>                       COLUMN_STATS_ACCURATE {"BASIC_STATS":"true","COLUMN_STATS":{"t":"true","si":"true","i":"true","b":"true","f":"true","d":"true","bo":"true","s":"true","bin":"true"}}
>                       EXTERNAL TRUE
>                       bucket_count -1
>                       columns t,si,i,b,f,d,bo,s,bin
>                       columns.comments 
>                       columns.types tinyint:smallint:int:bigint:float:double:boolean:string:binary
>                       field.delim :
>                       file.inputformat org.apache.hadoop.mapred.TextInputFormat
>                       file.outputformat org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                       location hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/over100k
>                       name default.over100k
>                       numFiles 1
>                       numRows 100000
>                       rawDataSize 6631229
>                       serialization.ddl struct over100k { byte t, i16 si, i32 i, i64 b, float f, double d, bool bo, string s, binary bin}
>                       serialization.format :
>                       serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                       totalSize 6731229
>                       transient_lastDdlTime 1460612798
>                     serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                     name: default.over100k
>                   name: default.over100k
>             Truncated Path -> Alias:
>               hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/user/hcat/tests/data/over100k [over100k]
>         Reducer 3 
>             Execution mode: vectorized, llap
>             Needs Tagging: false
>             Reduce Operator Tree:
>               Group By Operator
>                 keys: KEY._col0 (type: timestamp)
>                 mode: mergepartial
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 40 Basic stats: COMPLETE Column stats: COMPLETE
>                 Limit
>                   Number of rows: 5
>                   Statistics: Num rows: 1 Data size: 40 Basic stats: COMPLETE Column stats: COMPLETE
>                   File Output Operator
>                     compressed: false
>                     GlobalTableId: 0
>                     directory: hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/tmp/hive/hrt_qa/ec0773d7-0ac2-45c7-b9cb-568bbed2c49c/hive_2016-04-14_18-31-19_532_3480081382837900888-1/-mr-10001/.hive-staging_hive_2016-04-14_18-31-19_532_3480081382837900888-1/-ext-10002
>                     NumFilesPerFileSink: 1
>                     Statistics: Num rows: 1 Data size: 40 Basic stats: COMPLETE Column stats: COMPLETE
>                     Stats Publishing Key Prefix: hdfs://os-r6-qugztu-hive-1-5.novalocal:8020/tmp/hive/hrt_qa/ec0773d7-0ac2-45c7-b9cb-568bbed2c49c/hive_2016-04-14_18-31-19_532_3480081382837900888-1/-mr-10001/.hive-staging_hive_2016-04-14_18-31-19_532_3480081382837900888-1/-ext-10002/
>                     table:
>                         input format: org.apache.hadoop.mapred.SequenceFileInputFormat
>                         output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
>                         properties:
>                           columns _col0
>                           columns.types timestamp
>                           escape.delim \
>                           hive.serialization.extend.additional.nesting.levels true
>                           serialization.escape.crlf true
>                           serialization.format 1
>                           serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                         serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>                     TotalFiles: 1
>                     GatherStats: false
>                     MultiFileSpray: false
>         Union 2 
>             Vertex: Union 2
>   Stage: Stage-0
>     Fetch Operator
>       limit: 5
>       Processor Tree:
>         ListSink
> Time taken: 0.301 seconds, Fetched: 284 row(s)
> {noformat}
> In past releases, both of these queries returned timestamps in the YYYY-MM-DD HH:MM:SS.fff format.
> {code}
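
For reference, the formatting difference described above is consistent with how java.sql.Timestamp.toString() renders fractional seconds: a value carrying non-zero nanos keeps its fractional digits, while a value with zero nanos still prints a trailing ".0" (matching the folded constant "2016-04-14 18:31:19.0" in the second explain plan). A minimal sketch of that behavior (illustrative only; the class name is made up and this is not code from the patch):
{code}
import java.sql.Timestamp;

public class TimestampToStringDemo {
    public static void main(String[] args) {
        // Millisecond precision survives toString().
        Timestamp withMillis = Timestamp.valueOf("2016-04-14 18:26:58.875");
        System.out.println(withMillis);   // prints 2016-04-14 18:26:58.875

        // Zero nanos still yields a trailing ".0" from toString() ...
        Timestamp zeroNanos = Timestamp.valueOf("2016-04-14 18:31:19");
        System.out.println(zeroNanos);    // prints 2016-04-14 18:31:19.0

        // ... so a result with no fractional part at all, as in the union
        // query's "2016-04-14 18:29:56", implies the value was formatted by
        // a different code path than Timestamp.toString().
    }
}
{code}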



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
