Re: Copying all Hive tables from Prod to UAT

2016-05-25 Thread mahender bigdata
transfer including metadata. Regards, Suresh. On Wednesday, May 25, 2016, mahender bigdata wrote: Any document on it? On 4/8/2016 6:28 PM, Will Du wrote: did you try the EXPORT and IMPORT statements in HQL?

Re: Copying all Hive tables from Prod to UAT

2016-05-25 Thread mahender bigdata
We are using HDP. Is there any feature in Ambari? On 5/25/2016 6:50 AM, Suresh Kumar Sethuramaswamy wrote: Hi, if you are using CDH, via CM, Backup -> Replications, you could do inter-cluster Hive data transfer including metadata. Regards, Suresh. On Wednesday, May 25, 2016, mahender bigd

Re: Insert query with selective columns in Hive

2016-05-25 Thread mahender bigdata
Ping. On 5/24/2016 12:57 PM, mahender bigdata wrote: Hi, is there a way in Hive to insert specific columns rather than an insert query that has to cover all columns? For example, I have a table with 10 columns, and in my insert statement I would like to insert only 3 columns, like the statement below: insert into tbl1
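
A minimal sketch of the two usual approaches (tbl1 and its columns are the hypothetical ones from the question; whether an explicit column list is accepted depends on the Hive build):

  -- builds that accept a column list fill the omitted columns with NULL:
  INSERT INTO tbl1 (col1, col2, col10) VALUES (1, 2, 3);

  -- otherwise, supply a value for every column and pad the rest with NULL yourself:
  INSERT INTO TABLE tbl1 VALUES (1, 2, NULL, NULL, NULL, NULL, NULL, NULL, NULL, 3);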

Re: Any way in hive to have functionality like SQL Server collation on Case sensitivity

2016-05-25 Thread mahender bigdata
Ping. On 5/24/2016 1:15 PM, mahender bigdata wrote: Hi, we would like a feature in Hive where string comparison ignores case while joining on string columns. This feature would reduce the code that calls the Upper or Lower function on join columns. If it is

Re: Copying all Hive tables from Prod to UAT

2016-05-25 Thread mahender bigdata
Any document on it? On 4/8/2016 6:28 PM, Will Du wrote: did you try the EXPORT and IMPORT statements in HQL? On Apr 8, 2016, at 6:24 PM, Ashok Kumar wrote: Hi, does anyone have suggestions on how to create and copy Hive and Spark tables from Production to UAT? One way wo
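
For reference, the EXPORT/IMPORT route Will Du mentions works roughly like this; the database/table names and HDFS paths are placeholders, and the exported directory still has to be copied between clusters (e.g. with distcp):

  -- on the production cluster: writes the table data plus its metadata
  EXPORT TABLE prod_db.my_table TO '/tmp/hive_export/my_table';

  -- after copying that directory to the UAT cluster:
  IMPORT TABLE uat_db.my_table FROM '/tmp/hive_export/my_table';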

Any way in hive to have functionality like SQL Server collation on Case sensitivity

2016-05-24 Thread mahender bigdata
Hi, we would like a feature in Hive where string comparison ignores case while joining on string columns. This feature would reduce the code that calls the Upper or Lower function on join columns. If it is already there, please let me know the settings to enable th
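
As far as I know there is no collation setting in Hive for this, so the usual workaround is still to normalize the keys inside the join condition; a minimal sketch with hypothetical tables and columns:

  SELECT a.*, b.*
  FROM table_a a
  JOIN table_b b
    ON lower(a.join_col) = lower(b.join_col);   -- normalize case on both sides of the join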

Insert query with selective columns in Hive

2016-05-24 Thread mahender bigdata
Hi, is there a way in Hive to insert specific columns rather than an insert query that has to cover all columns? For example, I have a table with 10 columns, and in my insert statement I would like to insert only 3 columns, like below: insert into tbl1 (col1,col2,col10) values (1,2,3); insert into tbl1 (col1,col2

Re: Query Failing while querying on ORC Format

2016-05-17 Thread mahender bigdata
Talebzadeh, LinkedIn https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw, http://talebzadehmich.wordpress.com On 17 May 2016 at 21:20, mahender bigdata wrote: Hi Jorn,

Re: Query Failing while querying on ORC Format

2016-05-17 Thread mahender bigdata
_table to new_table INSERT INTO new_table SELECT *, LinkedIn https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw, http://talebzadehmich.wordpress.com On 16 May 2016 at 23:53, mahender bigdata

Re: Query Failing while querying on ORC Format

2016-05-17 Thread mahender bigdata
://talebzadehmich.wordpress.com On 16 May 2016 at 23:53, mahender bigdata wrote: I'm on Hive 1.2 On 5/16/2016 12:02 PM, Matthew McCline wrote: What version o

Re: Query Failing while querying on ORC Format

2016-05-16 Thread mahender bigdata
I'm on Hive 1.2 On 5/16/2016 12:02 PM, Matthew McCline wrote: What version of Hive are you on? From: Mahender Sarangam Sent: Saturday, May 14, 2016 3:29 PM To: user@hive.apache.org Subject: Query Failing whi

Re: Query Failing while querying on ORC Format

2016-05-15 Thread mahender bigdata
://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw http://talebzadehmich.wordpress.com On 15 May 2016 at 20:38, mahender bigdata wrote: Hi Mich, Is there any link missin

Re: Query Failing while querying on ORC Format

2016-05-15 Thread mahender bigdata
As a temporary workaround I'm disabling vectorization on the ORC table; then it works. On 5/15/2016 3:38 PM, mahender bigdata wrote: here is the error message https://issues.apache.org/jira/browse/HIVE-10598 Error: java.lang.RuntimeException: Error creating a bat
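
The workaround described above corresponds to these session-level settings; scoping them to the failing query (rather than cluster-wide) keeps the performance cost contained:

  set hive.vectorized.execution.enabled=false;          -- turn vectorized execution off
  set hive.vectorized.execution.reduce.enabled=false;   -- and the reduce side as well
  -- run the failing query against the ORC table here, then switch back:
  set hive.vectorized.execution.enabled=true;
  set hive.vectorized.execution.reduce.enabled=true;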

Re: Query Failing while querying on ORC Format

2016-05-15 Thread mahender bigdata
$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:112) ... 13 more here is another URL: https://issues.apache.org/jira/browse/HIVE-10598 On 5/15/2016 12:38 PM, mahender bigdata wrote: Hi Mich, Is there any link missing? We have already added the column. Somehow the old partition data with the new

Re: Query Failing while querying on ORC Format

2016-05-15 Thread mahender bigdata
more On 5/15/2016 12:38 PM, mahender bigdata wrote: Hi Mich, Is there any link missing? We have already added the column. Somehow the old partition data with the new column is now failing to retrieve. /mahens On 5/14/2016 4:15 PM, Mich Talebzadeh wrote: that might help

Re: Query Failing while querying on ORC Format

2016-05-15 Thread mahender bigdata
Hi Mich, Is there any link missing? We have already added the column. Somehow the old partition data with the new column is now failing to retrieve. /mahens On 5/14/2016 4:15 PM, Mich Talebzadeh wrote: that might help

Re: Any difference between LOWER and LCASE

2016-05-11 Thread mahender bigdata
isterGenericUDF("lcase", GenericUDFLower.class); Please verify that you've run the exact same queries. If you still see an issue, please share the relevant DDL (table/tables definition) and a small subset of data so I would be able to reproduce it. Thanks Dudu -----Original Messa

Hive cte Alias problem

2016-05-10 Thread mahender bigdata
Hi, I see a peculiar difference while querying with a CTE where I'm aliasing one column of a table to another column name in the same table. Instead of the values of the source column, Hive returns NULLs, i.e. for the column 8 values. with cte_temp as ( select a.COLUMN1, a.Column2, a.Column2 as Column8, ID fr

Any difference between LOWER and LCASE

2016-05-10 Thread mahender bigdata
Hi Team, is there any difference between the LOWER and LCASE functions in Hive? For one of our queries, when we use LOWER in the WHERE condition it fails to match a record; when we changed it to LCASE, it started matching. I was surprised to see a difference between LOWER and LCASE. Does anyone know why t

Re: Unsupported SubQuery Expression '1': Only SubQuery expressions that are top level conjuncts are allowed

2016-05-10 Thread mahender bigdata
p by… > 1” combined with “b2.col1 is null” implements the functionality of the “not exists” from the original query. The rest of the query stays quite the same. Dudu From: mahender bigdata Sent: Wednesday, May 04, 2016 7:39 PM To: user@hive.apac

Re: Unsupported SubQuery Expression '1': Only SubQuery expressions that are top level conjuncts are allowed

2016-05-04 Thread mahender bigdata
b2.col1 is null; 10 1 NULL 10 1 20 2 NULL 30 2 40 4 40 NULL NULL 60 7 NULL 70 7 80 8 NULL NULL NULL From: mahender bigdata Sent

Re: Unsupported SubQuery Expression '1': Only SubQuery expressions that are top level conjuncts are allowed

2016-05-03 Thread mahender bigdata
Dudu select A.Col1, A.Col2, B.Col3 From Table1 A LEFT OUTER JOIN Table2 B ON A.Col3 = B.Col3 AND NOT EXISTS (SELECT 1 FROM Table2 B WHERE B.Col1 = A.Col1 GROUP BY A.Col1 HAVING COUNT(*) > 1) AND (CASE WHEN ISNULL(A.Col2,'\;') = '\;' THEN 'NOT-NULL' ELSE

Unsupported SubQuery Expression '1': Only SubQuery expressions that are top level conjuncts are allowed

2016-05-02 Thread mahender bigdata
Hi, is there a way to implement NOT EXISTS in Hive? I'm using Hive 1.2 and getting the error below: "Unsupported SubQuery Expression '1': Only SubQuery expressions that are top level conjuncts are allowed" Query: select A.Col1, A.Col2, B.Col3 From Table1 A LEFT OUTER JOIN Table2 B
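
A hedged sketch of the rewrite pattern discussed later in the thread: instead of placing NOT EXISTS inside the join's ON clause (which Hive 1.2 only allows as a top-level conjunct of a WHERE clause), left-join against the aggregated subquery and test for NULL. Table and column names follow the hypothetical Table1/Table2 from the question:

  SELECT a.Col1, a.Col2, b.Col3
  FROM Table1 a
  LEFT OUTER JOIN Table2 b
    ON a.Col3 = b.Col3
  LEFT OUTER JOIN (SELECT Col1
                   FROM Table2
                   GROUP BY Col1
                   HAVING COUNT(*) > 1) b2
    ON a.Col1 = b2.Col1
  WHERE b2.Col1 IS NULL;   -- keeps only the rows for which the NOT EXISTS condition holds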

solution structure followed in regular hive projects

2016-04-29 Thread mahender bigdata
Hi, we are building a Hive project and would like to know whether there is a standard project hierarchy for the scripts maintained in a repository. Currently we see a huge list of HQL files and it is becoming unmanageable. Is there a solution structure or project structure followed in regular Hive projects? Thanks

Re: get version number of hive-contrib in HQL

2016-04-28 Thread mahender bigdata
Ping. On 4/27/2016 3:30 PM, mahender bigdata wrote: Hi, for generating row sequencing we are using the hive-contrib library in our Hive script. We are using "ADD JAR /apps/dist/hive-1.2.1.2.3.3.1-5/lib/hive-contrib-1.2.1.2.3.3.1-5.jar"; and creating a temporary function: CREATE TEMPORAR

get version number of hive-contrib in HQL

2016-04-27 Thread mahender bigdata
Hi, for generating row sequencing we are using the hive-contrib library in our Hive script. We are using "ADD JAR /apps/dist/hive-1.2.1.2.3.3.1-5/lib/hive-contrib-1.2.1.2.3.3.1-5.jar"; and creating a temporary function: CREATE TEMPORARY FUNCTION rwSequenceid AS 'org.apache.hadoop.hive.contrib.udf.UDF
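
One low-tech way to see which hive-contrib build a session is actually using is to list the resources registered with it; the jar path, version included, is printed back. A sketch reusing the path from the question, and assuming the UDF in use is the standard row-sequence one shipped in hive-contrib:

  ADD JAR /apps/dist/hive-1.2.1.2.3.3.1-5/lib/hive-contrib-1.2.1.2.3.3.1-5.jar;
  LIST JARS;   -- prints the jars added to this session; the file name carries the version
  CREATE TEMPORARY FUNCTION rwSequenceid
    AS 'org.apache.hadoop.hive.contrib.udf.UDFRowSequence';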

Re: Is there a way to resolve Fair Scheduler Problem

2016-04-26 Thread mahender bigdata
load on the given queue, those resources will be used by the other queue. Hope this helps. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_yarn_resource_mgt/content/preemption.html Regards, Khaja Hussain On Mon, Apr 25, 2016 at 7:16 PM, mahender bigdata

Is there a way to resolve Fair Scheduler Problem

2016-04-25 Thread mahender bigdata
Hi Team, is there a way to resolve the Fair Scheduler problem? Currently I see that if an application requires more resources, it fully consumes the available resources, leaving other submitted applications in pending or accepted state. Do I need to modify set yarn.scheduler.maximum-allocation-mb=512

Re: Best way of Unpivoting of Hive table data. Any Analytic function for unpivoting

2016-04-05 Thread mahender bigdata
” eliminates duplicated rows, therefore works much harder. Dudu From: mahender bigdata Sent: Tuesday, April 05, 2016 11:50 PM To: user@hive.apache.org Subject: Re: Best way of Unpivoting of Hive table data. Any Analytic function for unpivoting Hi Andrew

Re: Best way of Unpivoting of Hive table data. Any Analytic function for unpivoting

2016-04-05 Thread mahender bigdata
owever, the scalability of this approach will have limits. -Original Message- From: mahender bigdata Sent: Monday, March 28, 2016 5:47 PM To: user@hive.apache.org Subject: Best way of Unpivoting of Hive table data. Any Analytic

Re: Reopen https://issues.apache.org/jira/browse/YARN-2624

2016-03-29 Thread mahender bigdata
Ping. On 3/26/2016 7:23 AM, mahender bigdata wrote: Can we reopen this JIRA? It looks like this issue can be reproduced even though the Apache site says it is resolved. -Mahender On 3/25/2016 2:06 AM, Mahender Sarangam wrote: any update on this? > Subject: Re: Hadoop 2.6 version ht

make best use of VCore in Hive

2016-03-28 Thread mahender bigdata
Hi, currently we are joining 2-3 big tables plus a couple of left joins. We are running on a 40-node cluster. During query execution we can see that all the memory is utilized completely (100%), which is perfect, but the number of VCores used is less than 50%. Is there a way to increase usage o

Best way of Unpivoting of Hive table data. Any Analytic function for unpivoting

2016-03-28 Thread mahender bigdata
Hi, has anyone implemented unpivoting of Hive external table data? We would like to convert columns into multiple rows. We have an external table which holds almost 2 GB of data. Is there a good and quick way of converting columns into rows? Are there any analytic functions available in Hive to do unpivoting?
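
Hive has no UNPIVOT keyword, but the usual pattern is to build a map of the columns and explode it; a minimal sketch with a hypothetical wide table (all map values must share one type, so cast to string if they differ):

  SELECT t.id, kv.col_name, kv.col_value
  FROM wide_table t
  LATERAL VIEW explode(map('colA', t.colA,
                           'colB', t.colB,
                           'colC', t.colC)) kv AS col_name, col_value;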

Re: Reopen https://issues.apache.org/jira/browse/YARN-2624

2016-03-26 Thread mahender bigdata
he.org > From: mahender.bigd...@outlook.com > Date: Thu, 24 Mar 2016 12:20:57 -0700 > Is there any other way to set the NM node cache directory? I'm using a Windows cluster with the Hortonworks HDP system. > /mahender > On 3/24/2016 11:27 AM, mahender bigdata wrote: >

Re: Hadoop 2.6 version https://issues.apache.org/jira/browse/YARN-2624

2016-03-24 Thread mahender bigdata
Is there any other way to set the NM node cache directory? I'm using a Windows cluster with the Hortonworks HDP system. /mahender On 3/24/2016 11:27 AM, mahender bigdata wrote: Hi, does anyone have a workaround for this bug? It looks like this problem still persists in Hadoop 2.6. Templeton jo

Hadoop 2.6 version https://issues.apache.org/jira/browse/YARN-2624

2016-03-24 Thread mahender bigdata
Hi, does anyone have a workaround for this bug? It looks like this problem still persists in Hadoop 2.6. The Templeton job fails as soon as it is submitted. Please let us know as early as possible. Application application_1458842675930_0002 failed 2 times due to AM Container for appattemp

Confused with Unicode character for "Record Separator". Is the Hive delimiter an octal representation?

2016-03-07 Thread mahender bigdata
Hi, we planned to use the Record Separator as the delimiter for our Hive table. When we searched the Unicode character list, we found that "Record Separator" uses the code "U+001E". When we used "\U001E" as the delimiter in our Hive table script and the query completed, we went to HDFS a
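
Hive's DDL escape for a single-byte delimiter is indeed octal (the familiar default '\001' is Ctrl-A), so the ASCII Record Separator, hex 1E, would be written as octal 036. A sketch with a hypothetical table:

  CREATE TABLE rs_delimited (col1 STRING, col2 STRING)
  ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\036'   -- 0x1E, the ASCII Record Separator, in octal
  STORED AS TEXTFILE;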

Re: Field delimiter in hive

2016-03-07 Thread mahender bigdata
Any help on this? On 3/3/2016 2:38 PM, mahender bigdata wrote: Hi, I'm a bit confused about which character should generically be used as the delimiter for a Hive table. Can anyone suggest the best Unicode character that doesn't appear as part of the data? Here are the couple of o

Field delimiter in hive

2016-03-03 Thread mahender bigdata
Hi, I'm a bit confused about which character should generically be used as the delimiter for a Hive table. Can anyone suggest the best Unicode character that doesn't appear as part of the data? Here are the couple of options I'm thinking of for the field delimiter. Please let me know which is the best one u

having problem while querying out select statement in TEZ

2016-03-01 Thread mahender bigdata
Hi, we have created an ORC partitioned, bucketed table in Hive with ~ as the delimiter. Whenever I fire a select statement on the ORC partitioned/bucketed table, I keep getting the error org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.LongColumnVec

How to Query running in background in tez

2016-02-28 Thread mahender bigdata
Hi, I have 2 questions regarding Hive queries. 1. Is there a way to know which Hive query is running in the background by application ID? I would also like to know the location of the log while the Hive query runs in Tez mode. 2. If I have a 20-node cluster and I submit a query, the query takes ent

Re: Any way to avoid creating subdirectories by "Insert with union"

2016-02-26 Thread mahender bigdata
Thanks Gopal, will look into it. On 2/24/2016 4:26 PM, Gopal Vijayaraghavan wrote: SET mapred.input.dir.recursive=TRUE; ... Can we set the above setting as TBLPROPERTIES or Hive table properties? Not directly, those are MapReduce properties - they are not settable via Hive tables. That said, you c

Null pointer error with UNION ALL of Sub Queries

2016-02-24 Thread mahender bigdata
Hi, we are using Hive 1.2. I get a NullPointerException whenever I use UNION ALL with the Tez execution engine. I see there is a JIRA raised for this, https://issues.apache.org/jira/browse/HIVE-7765; the reason for the exception is that one of the tables has zero entries. Is this iss

Re: Any way to avoid creating subdirectories by "Insert with union"

2016-02-24 Thread mahender bigdata
Thanks Gopal. This is an architectural change from Hive 0.13 to Hive 1.2. We are migrating our Hive queries from 0.13 to 1.2. Previously they ran perfectly against 0.13, but the same query in 1.2 is failing due to the union/union-all performance improvement, because of the creation of subdirectories. W

Any way to avoid creating subdirectories by "Insert with union"

2016-02-23 Thread mahender bigdata
Hi, the insert with union below will create subdirectories while executing in Tez. set hive.execution.engine=tez; insert overwrite table t3 select * from t1 limit 1 union select * from t2 limit 2; Is there any way to avoid creating subdirectories while running in Tez? Or is this by d
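
For downstream reads of a table written this way, the recursive-input settings Gopal refers to earlier in this list are session-level settings, not table properties; a sketch:

  set hive.mapred.supports.subdirectories=true;
  set mapred.input.dir.recursive=true;
  -- with both set, SELECTs will descend into the subdirectories
  -- that the union insert created under the table/partition directory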

A newline in column data ruins Hive

2016-02-23 Thread mahender bigdata
Hi, we are facing an issue while loading/reading data from a file that has line-delimiter characters like \n as part of the column data. When we try to query the Hive table, data with \n gets split into multiple rows. Is there a way to tell Hive to skip an escape character like \n (row delimiter o
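
Text input splits records on newlines before the SerDe ever sees the data, so there is no table-level switch that keeps an embedded \n inside one row. A hedged sketch of the usual workaround (names hypothetical): when Hive itself writes the data, land it in a container format such as ORC; when the file arrives as raw text, the newline has to be stripped or encoded upstream before the load.

  -- data produced by Hive/Spark: store it in ORC, where row boundaries are not
  -- newline-based, so \n inside a STRING column is preserved
  CREATE TABLE notes_orc (id INT, note STRING) STORED AS ORC;
  INSERT INTO TABLE notes_orc SELECT id, note FROM staging_notes;
  -- raw text feeds are different: remove or encode the \n before the file
  -- lands in a TEXTFILE table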

Re: How to set default value for a certain field

2016-02-23 Thread mahender bigdata
Thanks Zack, but that modifies the actual data which holds the null value; this might cause a data issue. Correct me if I'm wrong. On 2/23/2016 11:03 AM, Riesland, Zack wrote: "null defined as" is what we use. From: mahender bigdata Sent: Tu

Re: How to set default value for a certain field

2016-02-23 Thread mahender bigdata
Any idea on the requirement below? On 2/19/2016 2:47 PM, mahender bigdata wrote: Hi, is there an ideal solution in Hive to specify default values at the schema level? Currently we are using the COALESCE operator to convert null values to a default value, but this requires reading the entire table. But it

How to set default value for a certain field

2016-02-19 Thread mahender bigdata
Hi, is there an ideal solution in Hive to specify default values at the schema level? Currently we are using the COALESCE operator to convert null values to a default value, but this requires reading the entire table. It would be nice if someone has a different approach of setting default values for
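
Hive (at least through the 1.x line) has no DEFAULT clause in CREATE TABLE, so the two usual stand-ins are the "NULL DEFINED AS" storage option mentioned in the replies and a thin view that bakes the COALESCE in once; a sketch with hypothetical names:

  -- option 1: control the token that is (de)serialized as NULL in text files
  CREATE TABLE t_raw (id INT, status STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  NULL DEFINED AS '';                  -- empty fields are read back as NULL

  -- option 2: leave the stored data alone and expose defaults through a view
  CREATE VIEW t_with_defaults AS
  SELECT id, COALESCE(status, 'UNKNOWN') AS status FROM t_raw;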

Re: TBLPROPERTIES K/V Comprehensive List

2016-02-19 Thread mahender bigdata
+1, any information available? On 2/10/2016 1:26 AM, Mathan Rajendran wrote: Hi, is there any place where I can see a list of key/value pairs used in Hive while creating a table? I went through the code and found that the javadoc for hive_metastoreConstants.java has a few constants listed, but no

Re: What is the real meaning of negative value in Vertex.

2016-02-19 Thread mahender bigdata
. : (+,-)/ Thanks Prasanth On Feb 18, 2016, at 5:08 PM, Kevin Vasko wrote: Typically when I have seen this the jobs were failing. Is yours completing successfully? -Kevin On Feb 18, 2016, at 4:58 PM, mahender bigdata wro

Re: What is the real meaning of negative value in Vertex.

2016-02-19 Thread mahender bigdata
Thanks Gopal. On 2/18/2016 3:24 PM, Gopal Vijayaraghavan wrote: Hi, If you use the newer in.place.progress UI, it will look much better as we have legends [1] which also shows killed tasks (due to pre-emption or to prevent DAG dead-locks). Map 1: 0(+77,-185)/122 Map 2: 1/1 0(+77, -185)/122

What is the real meaning of negative value in Vertex.

2016-02-18 Thread mahender bigdata
Hi, can anyone shed some light on the Tez results below? What is the real meaning of the negative value in a vertex? Please explain or share a link. Map 1: 0(+77,-1)/122 Map 2: 1/1 Map 1: 0(+77,-2)/122 Map 2: 1/1 Map 1: 0(+77,-3)/122 Map 2: 1/1 …

Re: Storing boolean value in Hive table

2016-02-18 Thread mahender bigdata
like: select case _boolvar_ when true then 1 when false then 0 end from … Alan. On Feb 18, 2016, at 04:18, mahender bigdata wrote: Hi, how can we store a Boolean value as 1 or 0 instead of storing the true or false string? We can make use of the CAST function to convert boolean into 1 or 0. Is ther

Storing boolean value in Hive table

2016-02-18 Thread mahender bigdata
Hi, how can we store a Boolean value as 1 or 0 instead of storing the true or false string? We can make use of the CAST function to convert boolean into 1 or 0. Is there any built-in setting in Hive which enables storing Hive Boolean column values as 0 or 1 instead of true and false?
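
As the reply above suggests, the conversion is done in the query rather than by a table setting; a small sketch (bool_col and t are hypothetical):

  -- explicit mapping, works on any Hive version:
  SELECT CASE WHEN bool_col THEN 1 ELSE 0 END AS bool_as_int FROM t;
  -- or declare the column TINYINT/INT in the first place and store 0/1 directly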

Re: "PermGen space" error

2016-02-10 Thread mahender bigdata
at 2:32 PM, mahender bigdata wrote: Any update on this error? Has anyone faced this issue? On 2/7/2016 1:53 PM, mahender bigdata wrote: Hi Team, we are continuously getting the "PermGen space" error. We have increase

Re: "PermGen space" error

2016-02-08 Thread mahender bigdata
Any update on this error? Has anyone faced this issue? On 2/7/2016 1:53 PM, mahender bigdata wrote: Hi Team, we are continuously getting the "PermGen space" error. We have also increased the mapper and its heap size, but no luck. We are using Hive 1.2. When I search on Google, it has

"PermGen space" error

2016-02-07 Thread mahender bigdata
Hi Team, we are continuously getting the "PermGen space" error. We have also increased the mapper and its heap size, but no luck. We are using Hive 1.2. When I searched on Google, it said that the reserved (permanent generation) memory has been exceeded. Our cluster is 4 nodes, each with 4 CPU cores and 7 GB RAM. File or inte

Fastest way to get the row count

2016-01-12 Thread mahender bigdata
Hi Team, is there any built-in command in Hive to get the row count of the previous operation, just like @@ROWCOUNT in SQL Server? For example: I have a table with almost 2 GB of data per partition. We run a select query on the table with a condition that filters the records, and we will be dumping those filtered
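
Hive has no @@ROWCOUNT-style variable, but once statistics exist the count can be read from the metastore instead of re-scanning the partition; a sketch, assuming a hypothetical sales table partitioned by dt and that gathering stats is acceptable:

  -- gather (or refresh) basic stats for the partition that was just written:
  ANALYZE TABLE sales PARTITION (dt='2016-01-12') COMPUTE STATISTICS;
  -- numRows then appears here without a full scan of the data:
  DESCRIBE FORMATTED sales PARTITION (dt='2016-01-12');
  -- with hive.stats.autogather=true (the default), an INSERT OVERWRITE
  -- records the same row counts as a side effect of the write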

Benefit of the following settings

2016-01-08 Thread mahender bigdata
Hi, I have a doubt about the following settings, for which I could not find a clear explanation: SET hive.optimize.index.filter=false; set hive.mapjoin.hybridgrace.hashtable=false; set hive.optimize.null.scan=false; Is there any downside to enabling hive.optimize.index.filter al

Re: Null Representation in Hive tables

2015-12-27 Thread mahender bigdata
Can anyone update on this? On 12/23/2015 9:37 AM, mahender bigdata wrote: Our files are not text files, they are csv and dat. Any possibility to include 2 serialization.null.format values in the table properties? On 12/23/2015 9:16 AM, Edward Capriolo wrote: In text formats the null is accepted as \N. On

Re: Null Representation in Hive tables

2015-12-23 Thread mahender bigdata
Our files are not text files, they are csv and dat. Any possibility to include 2 serialization.null.format values in the table properties? On 12/23/2015 9:16 AM, Edward Capriolo wrote: In text formats the null is accepted as \N. On Wed, Dec 23, 2015 at 12:00 PM, mahender bigd

Null Representation in Hive tables

2015-12-23 Thread mahender bigdata
Hi, is there any possibility of specifying both "serialization.null.format"="" and "serialization.null.format"="\000" as table properties? Currently we are creating an external table where there is a chance of having data with an empty string or \000. As a workaround, we have created 2 extern
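
TBLPROPERTIES honors only one null token per table, so a sketch of the usual compromise: pick one representation at the storage layer and fold the other in with a view (table, columns and location are hypothetical):

  CREATE EXTERNAL TABLE raw_t (c1 STRING, c2 STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/data/raw_t'
  TBLPROPERTIES ('serialization.null.format' = '');   -- empty string read back as NULL

  CREATE VIEW raw_t_clean AS                           -- fold the second token in at read time
  SELECT CASE WHEN c1 = '\000' THEN NULL ELSE c1 END AS c1,
         CASE WHEN c2 = '\000' THEN NULL ELSE c2 END AS c2
  FROM raw_t;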

How can I make use of PIG UDF in Hive

2015-12-21 Thread mahender bigdata
Hi, sorry for asking a very noob question: how can I create a Pig UDF and make use of it in Hive? Will it be similar to calling a Hive UDF? Any pointers to Pig UDF creation?

Re: Serde for all encoding standards.

2015-12-18 Thread mahender bigdata
... row format delimited fields terminated by '\t' ... TBLPROPERTIES("serialization.encoding"='GBK'); HTH (and let me know if it works) Gabriel Balan On 12/10/2015 5:53 PM, mahender bigdata wrote: Hi, I need help reading a Unicode file. I have created

Connection between TempletonJob and Worker Nodes remains in FIN_WAIT_2 state for long time

2015-12-11 Thread mahender bigdata
Hi, we have submitted too many jobs to WebHCat (Templeton); the reason is that our HQL has multiple Hive statements, and each Hive statement is submitted as a job, causing too many jobs. After some time all the submitted jobs were in pending state; later, after waiting for 2 hours, all the pending jobs complet

Serde for all encoding standards.

2015-12-10 Thread mahender bigdata
Hi, I need help reading a Unicode file. I have created an external table on top of my file: CREATE EXTERNAL TABLE IF NOT EXISTS table1 (`CC` string, `SRT` string, `P C` string, `Year` string, `Month` string, `Address` string) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' W
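
A sketch of the property Gabriel points at in the reply above, applied to a simplified table; the encoding value, delimiter and location are placeholders to be matched to the actual file:

  CREATE EXTERNAL TABLE unicode_t (id STRING, address STRING)
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
  WITH SERDEPROPERTIES ('field.delim' = '\t')
  STORED AS TEXTFILE
  LOCATION '/data/unicode_t'
  TBLPROPERTIES ('serialization.encoding' = 'UTF-16');  -- match the file's real encoding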

Create hive table with same schema without any data

2015-12-10 Thread mahender bigdata
Hi, is there any alternative way of creating a Hive table with the same schema as another table? We are currently doing it with create table t1 as select * from t2 where 1=2 or create table t1 like t2; Is there any other way of creating a table with the same schema as an existing table?

Re: Hive Support for Unicode languages

2015-12-09 Thread mahender bigdata
t; > On 04 Dec 2015, at 01:25, mahender bigdata wrote: > Hi Team, does Hive support Unicode encodings like UTF-8, UTF-16 and UTF-32? I would like to see different languages supported in a Hive table. Is there any serde which can show Japanese or Chin

Re: Storing the Hive Query Results into Variable

2015-12-07 Thread mahender bigdata
/display/Hive/LanguageManual+VariableSubstitution 2. primer: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli Hope this helps you to move forward. Regards, Dev On Fri, Dec 4, 2015 at 5:48 AM, mahender bigdata wrote: Hi, Is there

Hive Support for Unicode languages

2015-12-03 Thread mahender bigdata
Hi Team, does Hive support Unicode encodings like UTF-8, UTF-16 and UTF-32? I would like to see different languages supported in a Hive table. Is there any serde which can show Japanese or Chinese characters correctly rather than showing symbols on the Hive console? -Mahender

Storing the Hive Query Results into Variable

2015-12-03 Thread mahender bigdata
Hi, is there an option available to store Hive results into a variable, like select @i = count(*) from HiveTable, or to store table results into a variable and make use of them at a later stage of the query? I tried using an HQL CTE, but the scope of a CTE is limited to the next select only. Is there a way to intermediate
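
Hive's variable substitution (the links in the reply above) only substitutes values handed to the session; it cannot capture a query result inside HQL itself, so the usual pattern is to compute the value in one CLI call and feed it back in as a hivevar. A sketch with hypothetical names; the shell steps are shown as comments:

  -- step 1 (shell): cnt=$(hive -S -e "SELECT count(*) FROM HiveTable")
  -- step 2 (shell): hive --hivevar row_cnt="${cnt}" -f next_step.hql
  -- step 3 (inside next_step.hql): reference the captured value
  SELECT * FROM audit_table WHERE expected_rows = ${hivevar:row_cnt};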

How to register HCatalog Library as part of pig script file

2015-12-02 Thread mahender bigdata
Hi, we would like to make use of an HCatalog table in our Pig script file. Currently we are launching the Pig command with the -useHCatalog option to register/load the HCatalog library. Is there a way in a Pig script to register the HCatalog jar files and execute directly at the Pig command prompt with