Hi,
I have a question about how to get the HDFS location for a bunch of partitions.
My current approach is the Hive query `DESCRIBE EXTENDED <table>
PARTITION (<partition_spec>)`.
I'm getting back a JSON response (if I set the return type to JSON) which has the
HDFS location in it.
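For a single partition that looks roughly like this (table and partition names are just examples):

  DESCRIBE EXTENDED web_logs PARTITION (dt='2019-01-01');

The partition's location shows up in the "Detailed Partition Information" at the end of the output (or on the "Location:" line if you use DESCRIBE FORMATTED instead).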
But if I have, let's say, 1000 partitions and every …
Ashutosh Bapat wrote:
> There are multiple ways
> 1. Query the HiveMetaStore directly.
Do you mean via the thrift client? Or directly via native JDBC?
But I think this is not possible in an enterprise env, when I'm not on the same
machine where the hive server is running.
I believe the MySQL or Postgres ser…
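For reference, "querying the metastore directly" can mean either the thrift API or the backing RDBMS. Against the backing MySQL/Postgres database, partition locations live in the SDS table of the standard metastore schema; a rough sketch, with 'default' and 'web_logs' as example names:

  SELECT p.PART_NAME, s.LOCATION
  FROM PARTITIONS p
    JOIN TBLS t ON p.TBL_ID = t.TBL_ID
    JOIN DBS d ON t.DB_ID = d.DB_ID
    JOIN SDS s ON p.SD_ID = s.SD_ID
  WHERE d.NAME = 'default' AND t.TBL_NAME = 'web_logs';

That of course requires network access to the metastore database, which is exactly what is often blocked in an enterprise setup.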
Gopal Vijayaraghavan wrote:
> That was the reason Hive shipped with metatool, though it remains fairly
> obscure outside of the devs.
>
> hive --service metatool -executeJDOQL "select database.name + '.' + tableName
> from org.apache.hadoop.hive.metastore.model.MTable"
>
> You need to join MPa…
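Along the same lines, a JDOQL query can pull partition locations in one shot; a sketch (field names follow the metastore's JDO model and may differ slightly between Hive versions):

  hive --service metatool -executeJDOQL \
    "select partitionName, sd.location from org.apache.hadoop.hive.metastore.model.MPartition"

This should list the name and HDFS location of every partition the metastore knows about, i.e. one round trip instead of one DESCRIBE per partition.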
Vivek Shrivastava wrote:
> If you have access to HCatalog, it also has jdbc connection that would
> allow you to get faster response.
Ah ok, sounds awesome as well! I will check.
Thanks,
Marko
--
Marko Bauhardt
Software Engineer
www.datameer.com
Phone: +49 345 279 5030
Datameer GmbH
Magdeburg
Hi all,
I'm trying to connect to a Hive server with the following config:
* Hive version 2.1.1 / Hadoop 3.0.0 (CDH 6.1)
* SSL
* Kerberos secured (I have the keytab file on my disk)
I'm able to connect with beeline or with the Apache driver via Java code.
In addition to that, I'm playing around with t…
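For context, a connection like the one described usually boils down to something like this (hostname, realm, keytab and truststore paths are placeholders):

  # obtain a Kerberos ticket from the keytab first
  kinit -kt /path/to/user.keytab user@EXAMPLE.COM

  # HiveServer2 JDBC URL with the server's Kerberos principal plus the SSL truststore
  beeline -u "jdbc:hive2://hs2.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=changeit"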