RE: Optimized Hive query

2016-06-13 Thread Markovitz, Dudu
Hi, you don’t need to do anything; the optimizer does it for you. You can see that you get identical execution plans for the nested query and the flattened one. Dudu > create multiset table t (i int); > explain select * from t; +---
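Dudu's suggestion can be verified in any Hive session by comparing plans; a minimal sketch (table and column names are illustrative, not from the original thread):

```sql
-- Throwaway table to compare plans against.
CREATE TABLE d (c INT);

-- Nested form: a subquery in FROM needs an alias in Hive.
EXPLAIN SELECT * FROM (SELECT c FROM d) sub;

-- Flattened form: if the optimizer flattens the subquery,
-- both EXPLAIN outputs show the same execution plan.
EXPLAIN SELECT c FROM d;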

Re: Optimized Hive query

2016-06-13 Thread Aviral Agarwal
Yes, I want to flatten the query. Also, the INSERT code is correct. Thanks, Aviral Agarwal On Tue, Jun 14, 2016 at 3:46 AM, Mich Talebzadeh wrote: > you want to flatten the query I understand. > > create temporary table tmp as select c from d; > > INSERT INTO TABLE a > SELECT c from tmp where >

same hdfs location with different schema exception

2016-06-13 Thread ????/??????
Hi all: I have a question when using Hive. It's described as follows. First, I create two tables: CREATE TABLE `roncen_tmp`( `a` bigint, `b` bigint, `c` string); CREATE EXTERNAL TABLE `ext_roncen`( `aaa` bigint) LOCATION 'hdfs://xxx/user/hive/warehouse/roncen_
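From the subject line, the exception presumably arises because the external table's LOCATION points inside the managed table's warehouse directory, binding two schemas to one HDFS path. A sketch of that setup (the full path was truncated in the original; the one below is an assumption):

```sql
-- Managed table; Hive stores its data under the warehouse directory.
CREATE TABLE `roncen_tmp` (`a` BIGINT, `b` BIGINT, `c` STRING);

-- External table declared over (what is assumed to be) the managed
-- table's directory: same HDFS location, different schema.
CREATE EXTERNAL TABLE `ext_roncen` (`aaa` BIGINT)
LOCATION 'hdfs://xxx/user/hive/warehouse/roncen_tmp';
```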

Issue in Insert Overwrite directory operation

2016-06-13 Thread Udit Mehta
Hi All, I see a weird issue when trying to do an "INSERT OVERWRITE DIRECTORY" operation. The query seems to work when I limit the data set but fails with the following exception when the data set is larger: Failed with exception Unable to move source hdfs://namenode/user/grp_admin/external_test1/out
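The failing statement was not quoted in full; a hypothetical reconstruction based on the error text (the source query is an assumption):

```sql
-- With a small (LIMITed) result set this succeeds; with the full
-- data set the final move of task output into the target directory
-- fails with "Unable to move source ...".
INSERT OVERWRITE DIRECTORY 'hdfs://namenode/user/grp_admin/external_test1/out'
SELECT * FROM source_table;  -- source_table is a placeholder
```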

Re: Optimized Hive query

2016-06-13 Thread Mich Talebzadeh
You want to flatten the query, I understand. create temporary table tmp as select c from d; INSERT INTO TABLE a SELECT c from tmp where condition Is the INSERT code correct? HTH Dr Mich Talebzadeh LinkedIn * https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

Re: column statistics for non-primitive types

2016-06-13 Thread Mich Talebzadeh
Which version of Hive are you using? Dr Mich Talebzadeh LinkedIn * https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw * http://talebzadehmich.wordpress.com On 13 June 2016 at 1

ORC does not support type conversion from INT to STRING.

2016-06-13 Thread Mahender Sarangam
Hi, we are facing an issue while reading data from an ORC table. We created an ORC table and loaded data into it. We then deleted the cluster for some reason. When we recreated the cluster (reusing the Metastore) with the table pointing to the same location and read from the ORC table, we see the error below.
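The error usually indicates that a column's type in the recreated table definition no longer matches the type stored in the ORC files. A hypothetical way to reproduce it (names and types are illustrative):

```sql
-- Write ORC data with an INT column.
CREATE TABLE t_orc (id INT) STORED AS ORC;
INSERT INTO t_orc VALUES (1);

-- Redeclare the column as STRING over the same files. The metadata
-- change succeeds, but reading then fails in older Hive versions
-- because the ORC reader cannot convert the stored INT to STRING.
ALTER TABLE t_orc CHANGE id id STRING;
SELECT * FROM t_orc;
```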

Optimized Hive query

2016-06-13 Thread Aviral Agarwal
Hi, I would like to know if there is a way to convert nested Hive sub-queries into optimized queries. For example: INSERT INTO TABLE a.b SELECT * FROM ( SELECT c FROM d) into INSERT INTO TABLE a.b SELECT c FROM d This is a simple example but the solution should apply if there were deeper nesti

column statistics for non-primitive types

2016-06-13 Thread Michael Häusler
Hi there, when testing column statistics I stumbled upon the following error message: DROP TABLE IF EXISTS foo; CREATE TABLE foo (foo BIGINT, bar ARRAY<...>, foobar STRUCT<...>); ANALYZE TABLE foo COMPUTE STATISTICS FOR COLUMNS; FAILED: UDFArgumentTypeException Only primitive type arguments are accepted
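The complex-type parameters appear to have been stripped from the message in transit; a sketch with assumed element types reproduces the limitation, since column statistics in this Hive version only accept primitive-typed columns:

```sql
DROP TABLE IF EXISTS foo;
-- Type parameters below are assumptions; the originals were lost.
CREATE TABLE foo (foo BIGINT, bar ARRAY<BIGINT>, foobar STRUCT<x:BIGINT>);

-- Fails: statistics can only be computed for primitive-typed columns.
ANALYZE TABLE foo COMPUTE STATISTICS FOR COLUMNS;

-- Workaround sketch: restrict the analysis to the primitive columns.
ANALYZE TABLE foo COMPUTE STATISTICS FOR COLUMNS foo;
```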

Re: LDAPS (Secure LDAP) Hive configuration

2016-06-13 Thread Jose Rozanec
Thank you for the quick response. Will try upgrading to version 2.1.0. Thanks! 2016-06-13 4:34 GMT-03:00 Oleksiy S : > Hello, >> >> We are working on a Hive 2.0.0 cluster, to configure LDAPS >> authentication, but I get some errors preventing a successful >> authentication. >> Does anyone have so

Using Parquet 1.7.0 for Hive 1.0.0

2016-06-13 Thread Mayank Shishir Shete
Hello Team, I am using Spark 1.6.1 to output Parquet files, which uses Parquet 1.7.0. While reading these Parquet files from Hive I get: Failed with exception java.io.IOException: java.lang.NullPointerException. How can I read Parquet 1.7.0 files from Hive 1.0.0? FYI, I have already tried

Re: LDAPS (Secure LDAP) Hive configuration

2016-06-13 Thread Oleksiy S
> > Hello, > > We are working on a Hive 2.0.0 cluster, to configure LDAPS authentication, > but I get some errors preventing a successful authentication. > Does anyone have some insight on how to solve this? > > *The problem* > The errors we get are (first is most frequent): > - sun.security.provid