[ https://issues.apache.org/jira/browse/HIVE-25292?focusedWorklogId=621358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-621358 ]
ASF GitHub Bot logged work on HIVE-25292:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 11/Jul/21 10:15
Start Date: 11/Jul/21 10:15
Worklog Time Spent: 10m

Work Description: shezhiming opened a new pull request #2467:
URL: https://github.com/apache/hive/pull/2467

…H format by default

### What changes were proposed in this pull request?

Creating an external table whose LOCATION has no URI scheme will resolve the scheme from the database's location, instead of from the metastore's default filesystem.

### Why are the changes needed?

In some deployments there are multiple Hadoop NameNodes, for example with HDFS Federation or Router-Based Federation (RBF). Creating a table without any LOCATION already uses the database location as the base location; this change makes the no-scheme case behave consistently with that.

### Does this PR introduce _any_ user-facing change?

Yes. If a user creates an external table whose LOCATION has no URI scheme, the scheme of the database location is used.

### How was this patch tested?

1. Start the metastore with an HDFS nameservice (e.g. hdfs://cluster), then create a database whose location uses a different nameservice, like:
```
create database myhive location 'hdfs://testing/my/myhive.db';
```
2. Then, in myhive, create an external table without a scheme in its LOCATION, like:
```
CREATE EXTERNAL TABLE `user.test_tbl` (
  id string,
  name string
)
LOCATION '/user/data/test_tbl'
```
3. Show the table location to check that it is: hdfs://testing/user/data/test_tbl

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
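The resolution the test steps describe can be sketched with plain java.net.URI (the class and method names below are hypothetical illustrations, not Hive's actual implementation): per RFC 3986, resolving an absolute path with no scheme against a base URI inherits the base's scheme and authority, which here is the database location's nameservice.

```java
import java.net.URI;

public class LocationResolve {
    // Hypothetical helper mirroring the described behavior: a table LOCATION
    // given as a bare absolute path inherits scheme + authority (the HDFS
    // nameservice) from the database's location URI.
    static URI qualify(String dbLocation, String tablePath) {
        return URI.create(dbLocation).resolve(tablePath);
    }

    public static void main(String[] args) {
        URI resolved = qualify("hdfs://testing/my/myhive.db", "/user/data/test_tbl");
        // Scheme and authority come from the db location, path from the table.
        System.out.println(resolved); // hdfs://testing/user/data/test_tbl
    }
}
```

Hive itself qualifies paths through Hadoop's FileSystem/Path machinery rather than raw URIs, but the scheme-inheritance rule is the same.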
Issue Time Tracking
-------------------

Worklog Id: (was: 621358)
Time Spent: 20m (was: 10m)

> to_unix_timestamp & unix_timestamp should support ENGLISH format by default
> ---------------------------------------------------------------------------
>
> Key: HIVE-25292
> URL: https://issues.apache.org/jira/browse/HIVE-25292
> Project: Hive
> Issue Type: Improvement
> Components: Clients
> Reporter: shezm
> Assignee: shezm
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.2.0
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Hi,
> The to_unix_timestamp function is implemented by GenericUDFToUnixTimeStamp, which uses SimpleDateFormat to parse string-typed times. But the SimpleDateFormat is constructed without a Locale parameter, so the JVM's default locale is used. On machines whose default locale is not English, SQL like the following fails:
>
> {code:java}
> hive> select to_unix_timestamp('16/Mar/2017:12:25:01', 'dd/MMM/yyy:HH:mm:ss');
> OK
> NULL
> hive> select unix_timestamp('16/Mar/2017:12:25:01', 'dd/MMM/yyy:HH:mm:ss');
> OK
> NULL
> {code}
>
> Spark's to_unix_timestamp & unix_timestamp also use SimpleDateFormat, but Spark uses Locale.US by default, which makes local-language month names unusable. For example, in a Chinese-locale environment Hive parses this correctly:
>
> {code:java}
> hive> select to_unix_timestamp('16/三月/2017:12:25:01', 'dd/MMMM/yyy:HH:mm:ss');
> OK
> 1489638301
> Time taken: 0.147 seconds, Fetched: 1 row(s)
> {code}
>
> But Spark returns NULL. Because English-formatted dates are the most common, two SimpleDateFormat instances are needed: the existing one for the default locale, plus a second one initialized with Locale.ENGLISH.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
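The two-format fallback the report proposes can be sketched as follows. This is a minimal illustration, not Hive's actual GenericUDFToUnixTimeStamp code; the class and method names are hypothetical, and the timezone is pinned to UTC so the result is deterministic.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class FallbackParse {
    // Hypothetical helper: try the JVM's default locale first (so local
    // month names like 三月 still work), then retry with Locale.ENGLISH
    // (so Mar/Apr/... parse on non-English JVMs). Returns epoch seconds,
    // or null when neither locale can parse the value (Hive's NULL).
    static Long toUnixTimestamp(String value, String pattern) {
        for (Locale locale : new Locale[] { Locale.getDefault(), Locale.ENGLISH }) {
            SimpleDateFormat fmt = new SimpleDateFormat(pattern, locale);
            fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
            try {
                Date d = fmt.parse(value);
                return d.getTime() / 1000L;
            } catch (ParseException ignored) {
                // fall through and retry with the next locale
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Simulate a non-English JVM: the default-locale parse fails,
        // but the ENGLISH fallback succeeds.
        Locale.setDefault(Locale.SIMPLIFIED_CHINESE);
        // Epoch seconds for 2017-03-16 12:25:01 UTC.
        System.out.println(toUnixTimestamp("16/Mar/2017:12:25:01", "dd/MMM/yyyy:HH:mm:ss"));
    }
}
```

Trying the default locale first preserves the existing local-language behavior the reporter wants to keep, while the ENGLISH retry fixes the NULL results shown above.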