[
https://issues.apache.org/jira/browse/HIVE-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12985557#action_12985557
]
Krishna Kumar commented on HIVE-1918:
-------------------------------------
@Carl:
1. Taken care of in the new patch.
2. Can you post some of the diffs from the tests that fail for you? I had a
problem running the tests on NFS-mounted directories. That turned out to be an
existing bug in the load functionality, which resulted in a
"MetaException: could not delete dir" error while trying to clean up the
effects of the previous test. I have created a separate JIRA, HIVE-1924, for
this and have attached a patch there.
3. I have taken the whitelist approach; the whitelist is now set to
"hdfs,pfile" (see the sketch below).
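
For illustration only, a minimal sketch of how such a whitelist would be used;
the property name hive.exim.uri.scheme.whitelist and the table/path names are
assumptions made for this example, not taken from this thread:

    -- Assumed property name: only the listed URI schemes are accepted as
    -- export locations; other schemes (e.g. file://, s3://) would be rejected.
    set hive.exim.uri.scheme.whitelist=hdfs,pfile;

    -- With the whitelist above, an export to an hdfs:// target is allowed.
    EXPORT TABLE employee TO 'hdfs://namenode:8020/tmp/exports/employee';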
> Add export/import facilities to the hive system
> -----------------------------------------------
>
> Key: HIVE-1918
> URL: https://issues.apache.org/jira/browse/HIVE-1918
> Project: Hive
> Issue Type: New Feature
> Components: Query Processor
> Reporter: Krishna Kumar
> Attachments: HIVE-1918.patch.txt
>
>
> This is an enhancement request to add export/import features to Hive.
> With this language extension, the user can export a table's data - which may
> be located in different HDFS locations in the case of a partitioned table -
> as well as the table's metadata into a specified output location. This
> output location can then be moved over to a different Hadoop/Hive instance
> and imported there.
> This should work independently of the source and target metastore DBMSes
> used; for instance, between Derby and MySQL.
> For partitioned tables, the ability to export/import a subset of the
> partitions must be supported.
> Howl will add more features on top of this: the ability to create/use the
> exported data even in the absence of Hive, using MapReduce or Pig. Please see
> http://wiki.apache.org/pig/Howl/HowlImportExport for these details.
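
To make the intended usage concrete, a hypothetical sketch of the statement
shapes being proposed; the table name, partition spec, and paths are made up
for illustration, and the final syntax is whatever the attached patch defines:

    -- On the source instance: export one partition's data and metadata to an
    -- output location that can then be copied to another cluster.
    EXPORT TABLE page_views PARTITION (dt='2011-01-20')
      TO '/tmp/exports/page_views';

    -- On the target instance: import from the copied location.
    IMPORT TABLE page_views PARTITION (dt='2011-01-20')
      FROM '/tmp/exports/page_views';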