Did you try ALTER TABLE ... SET LOCATION ...? Maybe that would have worked.
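
Something like this might do it (untested sketch; the table name and the new
path are placeholders for whatever is reachable on your current cluster):

  -- Re-point the external table at an HDFS location the metastore can resolve,
  -- then drop it; for an external table the drop only removes the metadata.
  ALTER TABLE my_ext_table SET LOCATION 'hdfs://current-namenode:8020/tmp/placeholder';
  DROP TABLE my_ext_table;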

On Wed, May 17, 2017 at 6:57 PM, Vihang Karajgaonkar <vih...@cloudera.com>
wrote:

> This is interesting and possibly a bug. Did you try changing them to
> managed tables and then dropping or truncating them? How do we reproduce
> this on our setup?
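>
> For the managed-table route, a minimal sketch of what that could look like
> (the table name is a placeholder, and note that in some Hive versions the
> property value is case-sensitive, so keep 'FALSE' in upper case):
>
>   -- Flip the table from external to managed, then drop it.
>   ALTER TABLE my_ext_table SET TBLPROPERTIES ('EXTERNAL'='FALSE');
>   DROP TABLE my_ext_table;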
>
> On Tue, May 16, 2017 at 6:38 PM, Stephen Sprague <sprag...@gmail.com>
> wrote:
>
>> FWIW, I ended up re-creating the EC2 cluster with that same host name just
>> so I could drop those tables from the metastore.
>>
>> Note to self: be careful - be real careful - with "sharing" Hive metastores
>> between different compute paradigms.
>>
>> Regards,
>> Stephen.
>>
>> On Tue, May 16, 2017 at 6:38 AM, Stephen Sprague <sprag...@gmail.com>
>> wrote:
>>
>>> Hey guys,
>>> here's something bizarre: I created about 200 external tables with a
>>> location like 'hdfs://<ec2-host>/path'. That was three months ago, and now
>>> I'm revisiting them and want to drop those tables.
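>>>
>>> Roughly this shape of DDL, with the names and columns made up for
>>> illustration:
>>>
>>>   -- External table whose LOCATION points at a namenode host that no longer exists.
>>>   CREATE EXTERNAL TABLE ext_t (id INT, val STRING)
>>>   LOCATION 'hdfs://<ec2-host>/path';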
>>>
>>> Ha! No can do!
>>>
>>> That <ec2-host> is long gone.
>>>
>>> Upon issuing the DROP TABLE command I get this:
>>>
>>> Error while processing statement: FAILED: Execution Error, return code 1
>>> from org.apache.hadoop.hive.ql.exec.DDLTask.
>>> MetaException(message:java.lang.IllegalArgumentException:
>>> java.net.UnknownHostException: <ec2-host>)
>>>
>>> where <ec2-host> is that old host name.
>>>
>>> So I ask: is there a workaround for this? Given that they are external
>>> tables, I'm surprised it "checks" whether that location exists (or not).
>>>
>>> Thanks,
>>> Stephen
>>>
>>
>>
>
