Re: Hive Cli ORC table read error with limit option

2016-03-07 Thread Biswajit Nayak
Both parameters are set to false by default.
hive> set hive.optimize.index.filter;
hive.optimize.index.filter=false
hive> set hive.orc.splits.include.file.footer;
hive.orc.splits.include.file.footer=false
hive>
>>> I suspect this might be related to having 0 row files in the buc

Re: How to set idle Hive jdbc connection out from java code using hive jdbc

2016-03-07 Thread Takahiko Saito
I don't think hive.server2.idle.session.timeout can be set via the client. It might need to be set on the server side. So before you run your query, you might need to restart HS2 with that property set, but I'm not sure that's what you are looking for. On Sun, Nov 29, 2015 at 11:50 PM, reena upadhyay
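
A minimal sketch of what Takahiko describes: the value can be inspected from a client session, but changing it means setting it in hive-site.xml on the HiveServer2 host and restarting HS2. The 1-hour value below is only illustrative.

    -- inspect the current value from beeline / Hive CLI
    set hive.server2.idle.session.timeout;

    -- server side (hive-site.xml), illustrative value of 1 hour in milliseconds:
    --   <property>
    --     <name>hive.server2.idle.session.timeout</name>
    --     <value>3600000</value>
    --   </property>
    -- then restart HiveServer2 so the new timeout takes effect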

Re: Hive 2 insert error

2016-03-07 Thread Marcin Tustin
I believe updates and deletes have always had this constraint. It's at least hinted at by: https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-ConfigurationValuestoSetforINSERT,UPDATE,DELETE On Mon, Mar 7, 2016 at 7:46 PM, Mich Talebzadeh wrote: > Hi, > > I notice

Re: Hive 2 insert error

2016-03-07 Thread Gopal Vijayaraghavan
> Is this something new in Hive 2 as I don't recall having this issue > before? No. > | CREATE TABLE `sales3`( | > | `prod_id` bigint, | > | STORED AS INPUTFORMAT

Hive 2 insert error

2016-03-07 Thread Mich Talebzadeh
Hi, I noticed this one in Hive2. insert into sales3 select * from smallsales; FAILED: SemanticException [Error 10297]: Attempt to do update or delete on table sales3 that does not use an AcidOutputFormat or is not bucketed Is this something new in Hive 2 as I don't recall having this issue before?
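
For reference, the error message itself names the requirement: the target table must be bucketed and use an ACID output format. A sketch of a table definition that satisfies it (the column list, bucket count, and table name are simplified placeholders, not the actual sales3 schema from the thread; transactional tables also need hive.support.concurrency=true and the DbTxnManager):

    create table sales3_acid (prod_id bigint)
    clustered by (prod_id) into 4 buckets
    stored as orc
    tblproperties ('transactional'='true');

    insert into sales3_acid select prod_id from smallsales;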

Hive alter table concatenate loses data - can parquet help?

2016-03-07 Thread Marcin Tustin
Hi All, Following on from our parquet vs orc discussion, today I observed hive's alter table ... concatenate command remove rows from an ORC formatted table. 1. Has anyone else observed this (fuller description below)? And 2. How do parquet users handle the file fragmentation issue? Desc
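
For readers unfamiliar with the command Marcin is referring to, a sketch of the concatenate invocation (table and partition names are hypothetical):

    -- merges the small ORC files of a partition into fewer, larger files
    alter table events partition (dt='2016-03-07') concatenate;

    -- unpartitioned form
    alter table events concatenate;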

Confused with Unicode character for "Record Separator". Is the hive delimiter an octal representation?

2016-03-07 Thread mahender bigdata
Hi, We had planned on using the Record Separator as the delimiter for our Hive table. When we searched the Unicode character list, we found "Record Separator" uses code "\U001E" as its Unicode character. We used "\U001E" in our Hive table script as the delimiter. Once the query completed, we went to HDFS a
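
On the "is it octal?" part of the question: the delimiter escape in Hive DDL is conventionally written in octal, the same way the default ^A delimiter is written '\001'. U+001E is decimal 30, which is octal 36, so a sketch with hypothetical table and column names would look like this:

    create table demo_rs (c1 string, c2 string)
    row format delimited
    fields terminated by '\036'   -- octal escape for U+001E (Record Separator)
    stored as textfile;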

Re: Hive Cli ORC table read error with limit option

2016-03-07 Thread Gopal Vijayaraghavan
> cvarchar(2) ... > Num Buckets: 7 I suspect this might be related to having 0 row files in the buckets not having any recorded schema. You can also experiment with hive.optimize.index.filter=false, to see if the zero row case is artificially produced via predi
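
A sketch of the experiment Gopal suggests, using a placeholder table name:

    -- turn off ORC predicate/index filtering for this session and re-run the
    -- failing limit query to see whether the zero-row case comes from it
    set hive.optimize.index.filter=false;
    select * from my_orc_table limit 10;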

Re: Field delimiter in hive

2016-03-07 Thread mahender bigdata
Any help on this? On 3/3/2016 2:38 PM, mahender bigdata wrote: Hi, I'm a bit confused about which character should generically be chosen as the delimiter for a hive table. Can anyone suggest the best Unicode character that doesn't appear as part of the data? Here are a couple of options I'm thinking

Re: count(*) not allowed in order by

2016-03-07 Thread Mich Talebzadeh
Hi, You are looking at the top 25 of the result set, so you will have to get the full result set before taking the top 25. Something like this: select rs.prod_id, rs.score from (select prod_id, count(prod_id) AS Score from sales GROUP BY prod_id ORDER BY Score DESC) rs LIMIT 25; HTH Dr Mich Talebzadeh

Re: count(*) not allowed in order by

2016-03-07 Thread Devopam Mittra
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+WindowingAndAnalytics This should help you; try rank/dense_rank as appropriate and adapt it to your needs. Regards Dev On Mar 7, 2016 10:35 PM, "Awhan Patnaik" wrote: > I have to take the first 25 IDs ranked by count(*).
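
A sketch of what Devopam suggests, reusing T and id from the question (dense_rank keeps ties, so more than 25 rows can come back; swap in rank() or row_number() if that matters):

    select id, cnt
    from (
      select id,
             count(*) as cnt,
             dense_rank() over (order by count(*) desc) as rnk
      from T
      group by id
    ) ranked
    where rnk <= 25;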

count(*) not allowed in order by

2016-03-07 Thread Awhan Patnaik
I have to take the first 25 IDs ranked by count(*). But the following is not allowed in Hive: select id from T order by count(*) desc limit 25; which yields a "Not yet supported place for UDAF count" error. The way around it is the following: select id, count(*) as cnt from T group by id order by cnt desc limit 25;