[ https://issues.apache.org/jira/browse/HIVE-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893934#comment-13893934 ]

Venki Korukanti commented on HIVE-3844:
---------------------------------------

Updated the patch to use Pattern/Matcher to identify the timestamp format. I 
considered not using BigDecimal, but without it the output has floating-point 
errors.
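The Pattern/Matcher plus BigDecimal approach mentioned above might look roughly like the sketch below. This is not the actual patch; the class and method names are hypothetical, and it only illustrates why BigDecimal avoids the floating-point error that a double-based conversion would introduce in the fractional seconds.

```java
import java.math.BigDecimal;
import java.sql.Timestamp;
import java.util.regex.Pattern;

public class NumericTimestampParser {
    // Matches integer or decimal numeric strings, e.g. "1356998400" or "1356998400.123".
    private static final Pattern NUMERIC = Pattern.compile("-?\\d+(\\.\\d+)?");

    // Hypothetical helper: interpret a numeric string as seconds since the epoch.
    public static Timestamp parse(String s) {
        if (!NUMERIC.matcher(s).matches()) {
            return null; // not a numeric timestamp
        }
        // BigDecimal keeps the fractional seconds exact; converting through a
        // double here could corrupt the nanosecond component.
        BigDecimal seconds = new BigDecimal(s);
        long wholeSeconds = seconds.longValue();
        int nanos = seconds.subtract(new BigDecimal(wholeSeconds))
                           .movePointRight(9).intValue();
        Timestamp ts = new Timestamp(wholeSeconds * 1000L);
        ts.setNanos(nanos);
        return ts;
    }
}
```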

Regarding "Should we really be allowing numeric formats to be read as 
timestamp?": I have seen multiple customers whose logs carry different 
timestamp formats, and they want to create one table with a timestamp schema 
and read all sources of data.
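The use case above amounts to one TIMESTAMP column accepting both the standard string form and a raw numeric unix timestamp. A minimal dispatcher for that could look like this (a hypothetical sketch, not Hive's actual deserializer logic; it assumes numeric values are whole seconds since the epoch):

```java
import java.sql.Timestamp;
import java.util.regex.Pattern;

public class TimestampReader {
    // Numeric strings are treated as unix timestamps; anything else falls
    // through to the standard JDBC "yyyy-mm-dd hh:mm:ss[.f...]" form.
    private static final Pattern NUMERIC = Pattern.compile("-?\\d+");

    public static Timestamp read(String field) {
        if (NUMERIC.matcher(field).matches()) {
            // Assumption for this sketch: the number is seconds, not millis.
            return new Timestamp(Long.parseLong(field) * 1000L);
        }
        return Timestamp.valueOf(field);
    }
}
```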

> Unix timestamps don't seem to be read correctly from HDFS as Timestamp column
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-3844
>                 URL: https://issues.apache.org/jira/browse/HIVE-3844
>             Project: Hive
>          Issue Type: Bug
>          Components: Serializers/Deserializers
>    Affects Versions: 0.8.0
>            Reporter: Mark Grover
>            Assignee: Venki Korukanti
>         Attachments: HIVE-3844.1.patch.txt
>
>
> Serega Shepak pointed out that something like
> {code}
> select cast(date_occurrence as timestamp) from xvlr_data limit 10
> {code}
> where date_occurrence has BIGINT type (timestamp in milliseconds) works. But 
> it doesn't work if the declared column type is TIMESTAMP. The data in the 
> date_occurrence column is a unix timestamp in millis.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
