Re: drop table command hang

2015-03-13 Thread Jeff Zhang
I think I found the error. When I start Hive, it throws a MySQL "key too long" exception (only in the log file, not in the client output, which is very unfriendly IMO). 2015-03-14 14:18:49,588 ERROR [main]: DataNucleus.Datastore (Log4JLogger.java:error(115)) - An exception was thrown while adding/validati
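For reference, the DataNucleus "key too long" error with a MySQL-backed metastore is commonly resolved by switching the metastore database to a single-byte charset. A minimal sketch, run in the MySQL client; the database name `metastore` is an assumption and should be replaced with the actual metastore schema name:

```sql
-- Hypothetical fix sketch: indexes on UTF-8 VARCHAR columns can exceed
-- MySQL's key-length limit; latin1 keeps them within bounds.
-- "metastore" is a placeholder for your metastore database name.
ALTER DATABASE metastore CHARACTER SET latin1 COLLATE latin1_bin;
```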

Re: drop table command hang

2015-03-13 Thread Jeff Zhang
It doesn't matter whether I truncate the table; it always hangs there. Very weird. On Wed, Mar 11, 2015 at 3:06 PM, Mich Talebzadeh wrote: > Have you truncated the table before dropping it? I > > > > Truncate table > > Drop table > > > > Mich Talebzadeh > > > > http://talebzadehmich.wordpres
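A hang on DROP TABLE is often a metastore or lock-manager issue rather than a data-size issue, so checking locks before retrying can help narrow it down. A sketch, with `t` as a placeholder table name:

```sql
-- Check whether another session holds a lock on the table.
SHOW LOCKS t;

-- If no locks are reported, retry the suggested sequence:
TRUNCATE TABLE t;
DROP TABLE t;
```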

Re: Which SerDe for Custom Binary Data.

2015-03-13 Thread karthik maddala
Currently we have data in NFS and we have proprietary tools to access the data. We are planning to move the data into HDFS and use HiveQL for accessing the data and running batch jobs. So we are looking for a custom SerDe (assuming the existing SerDes will not be able to read the underlying data) to re

Re: Which SerDe for Custom Binary Data.

2015-03-13 Thread Daniel Haviv
https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide#DeveloperGuide-HowtoWriteYourOwnSerDe Daniel > On 13 March 2015, at 17:56, karthik maddala wrote: > > > > I want to set up a DW based on Hive. However, my data does not come as handy > csv files but as binary files in a propr
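Once a custom SerDe is written (per the developer guide linked above) and packaged in a jar, attaching it to a table looks roughly like this. All class names, paths, and columns below are hypothetical placeholders, not part of any existing library:

```sql
-- Sketch, assuming a hypothetical SerDe and InputFormat for the
-- proprietary binary records:
ADD JAR /path/to/my-binary-serde.jar;

CREATE TABLE raw_events (id BIGINT, payload STRING)
ROW FORMAT SERDE 'com.example.hive.MyBinarySerDe'
STORED AS
  INPUTFORMAT  'com.example.hive.MyBinaryInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
```

The InputFormat splits the binary files into records; the SerDe's job is only to deserialize each record into the declared columns.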

RE: Which SerDe for Custom Binary Data.

2015-03-13 Thread Mich Talebzadeh
Hive, as I use it, is particularly useful for getting data out of relational tables and, more importantly, querying that data using HiveQL (a variation of Transact-SQL). If your data is in binary format, and assuming that you manage to store it in HDFS, how are you intending to access the data? At

Fwd: Which SerDe for Custom Binary Data.

2015-03-13 Thread karthik maddala
I want to set up a DW based on Hive. However, my data does not come as handy CSV files but as binary files in a proprietary format. The binary file consists of data serialized using the C language. Could you please suggest which input format to use and how to write a custom SerDe for the abov

Which SerDe for Custom Binary Data.

2015-03-13 Thread karthik maddala
I want to set up a DW based on Hive. However, my data does not come as handy CSV files but as binary files in a proprietary format. The binary file consists of data serialized using the C language. Could you please suggest which input format to use and how to write a custom SerDe for the abov

Re: Hive on Spark

2015-03-13 Thread Xuefu Zhang
You need to copy the spark-assembly.jar to your hive/lib. Also, you can check hive.log for more messages. On Fri, Mar 13, 2015 at 4:51 AM, Amith sha wrote: > Hi all, > > > Recently I have configured Spark 1.2.0 and my environment is hadoop > 2.6.0, hive 1.1.0. Here I have tried Hive on Spark w
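Beyond placing the assembly jar on Hive's classpath, a few session settings are needed before Hive will submit work to Spark. A minimal sketch; the master URL and memory value are illustrative assumptions, not values from this thread:

```sql
-- Hive-side settings for running queries on Spark, assuming
-- spark-assembly.jar is already in hive/lib:
SET hive.execution.engine=spark;
SET spark.master=yarn-cluster;   -- or a spark:// URL for standalone mode
SET spark.executor.memory=1g;    -- illustrative value; tune per cluster
```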

Re: Bucket pruning

2015-03-13 Thread cobby
Hi, thanks for the detailed response. I will experiment with your suggested ORC bloom filter solution. It seems to me the obvious, most straightforward solution is to add support for hash partitioning, so I can do something like: create table T() partitioned by (x into num_partitions,..). Upon
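The bloom-filter approach mentioned above is declared per table: ORC keeps a bloom filter per column, letting readers skip stripes that cannot contain the looked-up value, which approximates the pruning that hash partitioning would give. A sketch, with column names and counts as placeholders (the bloom-filter table properties require a Hive/ORC version that supports them):

```sql
-- Bucketing on x gives hash distribution; the bloom filter lets ORC
-- readers skip stripes during point lookups on x.
CREATE TABLE T (x BIGINT, payload STRING)
CLUSTERED BY (x) INTO 32 BUCKETS
STORED AS ORC
TBLPROPERTIES ('orc.bloom.filter.columns' = 'x',
               'orc.bloom.filter.fpp'     = '0.05');
```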

Hive on Spark

2015-03-13 Thread Amith sha
Hi all, Recently I have configured Spark 1.2.0 and my environment is hadoop 2.6.0, hive 1.1.0. Here I have tried Hive on Spark; while executing INSERT INTO I am getting the following error. Query ID = hadoop2_20150313162828_8764adad-a8e4-49da-9ef5-35e4ebd6bc63 Total jobs = 1 Launching Job 1 out o

Re: insert table error

2015-03-13 Thread Raunak Jhawar
What version of Hive are you using? INSERT INTO ... VALUES is supported only from Hive 0.14 onwards. https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingvaluesintotablesfromSQL -- Thanks, Raunak Jhawar m: 09820890034 On Fri, Mar 13, 2015 at 4:45 P

Re: insert table error

2015-03-13 Thread Daniel Haviv
What is the error you get? Daniel > On 13 March 2015, at 13:13, zhangjp wrote: > > This case fails: > CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2)) > CLUSTERED BY (age) INTO 2 BUCKETS STORED AS ORC; > INSERT INTO TABLE students > VALUES ('fred flintstone', 35, 1.28), ('barne

insert table error

2015-03-13 Thread zhangjp
This case fails: CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2)) CLUSTERED BY (age) INTO 2 BUCKETS STORED AS ORC; INSERT INTO TABLE students VALUES ('fred flintstone', 35, 1.28), ('barney rubble', 32, 2.32);
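If the Hive version predates 0.14 (where INSERT ... VALUES was added), a common workaround is to insert through a SELECT over a one-row dummy table. A sketch; the table name `dual` is just a convention, not built into Hive:

```sql
-- One-row helper table (populate it with a single row first):
CREATE TABLE dual (dummy STRING);

-- Bucketed tables may also need bucketing enforced at insert time:
SET hive.enforce.bucketing=true;

INSERT INTO TABLE students
SELECT 'fred flintstone', 35, 1.28 FROM dual;
```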

Applying UDFs on load.

2015-03-13 Thread karthik ramachandran
Hi all, I have been loading my data into Hive as Strings, and then using the SELECT INTO statement to apply UDFs to transform my data. I was just wondering if there is a better way to do this - perhaps a way to apply UDFs on a LOAD DATA call? Or something involving the new temporary tables featur
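The usual pattern here is that LOAD DATA is a pure file move and cannot apply UDFs, so the raw strings land in a staging table and a single INSERT ... SELECT does the transformation. A sketch; table names, paths, and the comma-split layout are hypothetical:

```sql
-- External staging table over the raw files (no data copy):
CREATE EXTERNAL TABLE staging (line STRING)
LOCATION '/data/raw/';

CREATE TABLE clean (ts STRING, val INT);

-- Apply the transformations (built-in or custom UDFs) in one pass:
INSERT OVERWRITE TABLE clean
SELECT from_unixtime(cast(split(line, ',')[0] AS BIGINT)),
       cast(split(line, ',')[1] AS INT)
FROM staging;
```

A temporary table can play the same staging role if the raw copy should not outlive the session.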

errors when "select * from table limit 10" through jdbc client

2015-03-13 Thread zhangjp
When I query with "select col1 from table limit 10" it's OK, but when I replace col1 with * it throws errors. Caused by: org.apache.thrift.TApplicationException: Internal error processing FetchResults at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)

Re: when start hive could not generate log file

2015-03-13 Thread zhangjp
I don't find any log. I updated the log dir in hive-log4j.properties ,but it also didn't work.‍ -- Original -- From: "Jianfeng (Jeff) Zhang";; Date: Fri, Mar 13, 2015 08:50 AM To: "user@hive.apache.org"; Subject: Re: when start hive could not generate log