I'm using Hadoop 2.5.1 and Sqoop 1.4.6.
I am using sqoop import to import a table from a MySQL database for use with
Hadoop. It shows the following error:
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.hadoop.fs.FSOutputSummer
How do I handle the Oracle RAW data type in a Sqoop import?
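Sqoop has no native mapping for Oracle RAW, so a common workaround is to cast the column to a portable type on import. A minimal sketch, assuming a hypothetical table CREDITS with a RAW column TOKEN (--map-column-java is a standard Sqoop 1.4.x option; all names here are placeholders):

sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott --password tiger \
  --table CREDITS \
  --map-column-java TOKEN=String
# TOKEN=String tells Sqoop to surface the RAW bytes as a Java String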
> useradd -G hdfs root
>
> On Wed, Oct 5, 2016 at 2:07 PM, Raj hadoop wrote:
I'm getting it when I'm trying to start Hive:
hdpmaster001:~ # hive
WARNING: Use "yarn jar" to launch YARN applications.
How can I execute the same?
Thanks,
Raj.
On Wed, Oct 5, 2016 at 1:56 PM, Raj hadoop wrote:
Hi All,
Could someone help to solve this issue?
Logging initialized using configuration in
file:/etc/hive/2.4.2.0-258/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=root, access=WRITE, in
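The error says the OS user root has no writable home directory in HDFS. A minimal sketch of the usual fix, run as the HDFS superuser (/user/root is the conventional location, assumed here):

# create an HDFS home for root and hand ownership over
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:hdfs /user/root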
Thanks everyone.
We are raising a case with Hortonworks.
On Wed, Aug 3, 2016 at 6:44 PM, Raj hadoop wrote:
Dear All,
In need of your help.
We have a Hortonworks 4-node cluster, and the problem is Hive is allowing
only one user at a time.
If a second user needs to log in, Hive is not working.
Could someone please help me with this?
Thanks,
Rajesh
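Hive admitting only one user at a time is the classic symptom of a metastore on embedded Derby, which takes an exclusive lock. A hedged sketch of pointing hive-site.xml at a shared MySQL metastore instead (host, database, and credentials are placeholders; the property names are the standard ones):

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/hivemeta?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>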
Talebzadeh LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
Sorry for the typo in your name - Mich.
On Monday, April 4, 2016 12:01 PM, Raj Hadoop wrote:
Thanks Mike. If Hive 2.0 is stable I would definitely go for it. But let me
troubleshoot the 1.1.1 issues I am facing now.
Here is my hive-site.xml. Can you please let me know if I am missing
Hi,
I have downloaded Apache Hive 1.1.1 and am trying to set up the Hive environment
in my Hadoop cluster.
On one of the nodes I installed Hive, and when I set all the variables and
environment I get the following error. Please advise.
[hadoop@z1 bin]$ hive
2016-04-04 10:12:45,686 WARN [main] conf
We are facing the below error when storing a dataset using HCatStorer. Can
someone please help us?
STORE F INTO 'default.CONTENT_SVC_USED' using
org.apache.hive.hcatalog.pig.HCatStorer();
ERROR hive.log - Got exception: java.net.URISyntaxException Malformed
escape pair at index 9: thrift://%H
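The 'Malformed escape pair' in thrift://%H points at an unresolved %HOSTGROUP-style template left in hive.metastore.uris, which HCatalog cannot parse as a URI. A hedged sketch of overriding it explicitly when launching Pig (host and port are placeholders; 9083 is the conventional metastore port):

pig -Dhive.metastore.uris=thrift://metastore-host:9083 -useHCatalog my_script.pig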
I am able to see the data in the table for all the columns when I issue the
following -
SELECT * FROM t1 WHERE dt1='2013-11-20'
But I am unable to see the column data when I issue the following -
SELECT cust_num FROM t1 WHERE dt1='2013-11-20'
The above shows null values.
How should I de
There are better ways of doing this, but this one's quick and dirty :)
Best Regards,
Nishant Kelkar
On Wed, Sep 10, 2014 at 12:48 PM, Raj Hadoop wrote:
sort_array returns in ascending order, so the first element cannot be the
large
For example, for "2-oct-2013" it will be 2013-10-02.
Best Regards,
Nishant Kelkar
On Wed, Sep 10, 2014 at 11:48 AM, Raj Hadoop wrote:
The
SORT_ARRAY(COLLECT_SET(date))[0] AS latest_date
is returning the lowest date. I need the largest date.
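Since sort_array only sorts ascending, one hedged workaround is to take the last element instead of the first. A sketch assuming a placeholder table t with key cust_num and a date column dt stored in a lexically sortable form (d-MMM-yyyy strings would need converting first, e.g. with unix_timestamp(dt, 'd-MMM-yyyy')):

SELECT cust_num,
       SORT_ARRAY(COLLECT_SET(dt))[SIZE(COLLECT_SET(dt)) - 1] AS latest_date
FROM t
GROUP BY cust_num;
-- when only the single value is needed, MAX(dt) is simpler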
On Wed, 9/10/14, Raj Hadoop wrote:
Subject: Re: Remove duplicate records in Hive
To: user@hive.apache.org
Date: Wednesday, September 10
Best Regards,
Nishant Kelkar
On Wed, Sep 10, 2014 at 10:04 AM, Raj Hadoop wrote:
Hi,
I have a requirement in Hive to remove duplicate records (they differ only by
one column, i.e. a date column) and keep the latest date record.
Sample :
Hive Table :
d2 is a higher
cno,sqno,date
100 1 1-oct-2013
101 2 1-oct-2013
100 1 2-oct-2013
102 2 2-oct-2013
Output needed:
100 1 2-o
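A hedged sketch of the usual pattern for keeping only the latest-dated row per key: join the table to a per-key MAX of the date. This assumes a placeholder table name txns, and that dt compares correctly as a date ('1-oct-2013' style strings would first need converting, e.g. via unix_timestamp(dt, 'd-MMM-yyyy')):

SELECT t.cno, t.sqno, t.dt
FROM txns t
JOIN (
  SELECT cno, sqno, MAX(dt) AS max_dt
  FROM txns
  GROUP BY cno, sqno
) m
ON t.cno = m.cno AND t.sqno = m.sqno AND t.dt = m.max_dt;
-- the subquery finds each key's latest date; the join keeps only that row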
Can I update (delete and insert, kind of) just one row, keeping the remaining
rows intact, in a Hive table using Hive INSERT OVERWRITE? There is no partition in
the Hive table.
INSERT OVERWRITE TABLE tablename SELECT col1,col2,col3 from tabx where
col2='abc';
Does the above work ? Please advise.
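INSERT OVERWRITE replaces the whole unpartitioned table, so it keeps only the rows the SELECT returns. A hedged sketch of 'deleting' one row this way (names from the message; the predicate is illustrative):

INSERT OVERWRITE TABLE tablename
SELECT col1, col2, col3
FROM tablename
WHERE col2 <> 'abc';
-- every row except the removed one survives; a replacement row can be
-- appended afterwards with a separate INSERT, or folded in via UNION ALL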
give some clue.
Thanks,
Szehon
On Thu, Mar 20, 2014 at 12:29 PM, Raj Hadoop wrote:
I am struggling on this one. Can anyone throw some pointers on how to
troubleshoot this issue, please?
On Thursday, March 20, 2014 3:09 PM, Raj Hadoop wrote:
Hello everyone,
The Hive Thrift service was started successfully.
netstat -nl | grep 1
tcp 0 0 0.0.0.0:1 0.0.0.0:*
LISTEN
I am able to read tables from Hive through Tableau. When executing queries
through Tableau I am getting the followi
Query in HIVE
I tried a merge kind of operation in Hive, to retain the existing records and
append the new records instead of dropping the table and populating it
again.
If anyone can help with any approach other than this, or with the way to
perform a merge operation, it will be great he
ing/files'
> You should not have to do anything
>
> Thanks
>
> Warm Regards
> Sanjay
>
> linkedin: http://www.linkedin.com/in/subramaniansanjay
>
> From: Raj hadoop
> Reply-To: "user@hive.apache.org"
> Date: Wednesday, March
Hi,
Help required to merge data in Hive.
Ex:
Today's file -
Empno ename
1 abc
2 def
3 ghi
Tomorrow's file -
Empno ename
5 abcd
6 defg
7 ghij
Reg: should not drop the hive
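A hedged sketch of the usual append-style merge: land each day's file in a staging table, then INSERT INTO (not OVERWRITE) the target so existing rows survive; table names and the path are placeholders:

LOAD DATA LOCAL INPATH '/data/tomorrow.txt' OVERWRITE INTO TABLE emp_staging;

INSERT INTO TABLE emp
SELECT empno, ename FROM emp_staging;
-- INSERT INTO appends; the existing rows in emp are untouched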
stored. Just keep changing until you get it right :)
On Tue, Mar 4, 2014 at 5:23 PM, Raj Hadoop wrote:
All,
I loaded data from an Oracle query through Sqoop to an HDFS file. These are
bzip-compressed files partitioned by one column, date.
I created a Hive table to point to the above location.
After loading a lot of data, I realized the data type of one of the columns was
wrongly given.
When I changed
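For an external table over files already in place, the column type can be corrected in the metastore without touching the data. A hedged sketch (names are placeholders):

ALTER TABLE my_ext_table CHANGE COLUMN amount amount BIGINT;
-- only table metadata changes; the existing files are re-interpreted on read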
All,
I have a 3-node Hadoop cluster (CDH 4.4), and every few days, or whenever I load
some data through Sqoop or query through Hive, I sometimes get the following
error -
Call From <> to <> failed on connection exception:
java.net.ConnectException: Connection refused
This has become so freque
Thanks for the detailed explanation Yong. It helps.
Regards,
Raj
On Tuesday, February 25, 2014 9:18 PM, java8964 wrote:
Yes, it is good that the file sizes are close to even, but that is not very
important, unless some files are very small (compared to the block size).
The reasons are:
Your fil
Hi,
I am loading data to HDFS files through Sqoop and creating a Hive table to
point to these files.
The mapper files from Sqoop are generated like the below:
part-m-0
part-m-1
part-m-2
My question is -
1) For Hive query performance, how important or significant is
Thanks. Will try it.
On Tuesday, February 25, 2014 8:23 PM, Kuldeep Dhole
wrote:
Probably you should use tr_date='2014-01-01'
Considering tr_date partition is there
On Tuesday, February 25, 2014, Raj Hadoop wrote:
I am trying to create a Hive partition like 'tr_date=2014-01-01'.
FAILED: ParseException line 1:58 mismatched input '-' expecting ) near '2014'
in add partition statement
hive_ret_val: 64
Errors while executing Hive for bksd table for 2014-01-01
Are hyphens not allowed in the partition directo
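This ParseException usually means the partition value was left unquoted, so the hyphens parse as minus signs. A hedged sketch of the fix (table and column as in the thread):

ALTER TABLE bksd ADD PARTITION (tr_date='2014-01-01');
-- unquoted, tr_date=2014-01-01 is read as arithmetic and fails to parse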
All,
One of the primary key columns in a relational table is alphanumeric, 6
characters - varchar(6).
The first three characters have this pattern -
1st one - 1 to 9
2nd one - 1 to 9 or a-z
3rd one - 1 to 9 or a-z
Is this a good idea for performing queries (can be any queries based
Hi,
My requirement is a typical data warehouse and ETL requirement. I need to
accomplish
1) Daily insert of transaction records into a Hive table or an HDFS file. This
table or file is not a big table (approximately 10 records per day). I don't
want to partition the table / file.
I am reading a
All,
Is there any way from the command prompt I can find which hive version I am
using and Hadoop version too?
Thanks in advance.
Regards,
Raj
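A hedged sketch of the usual version checks from the shell (hadoop version is long-standing; hive --version exists in reasonably recent releases, while very old ones only print the version in the startup banner or in the hive-*.jar file names):

hadoop version
hive --version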
Hi,
How can I just find out the physical location of a partitioned table in Hive.
Show partitions
gives me just the partition column info.
I want the location of the hdfs directory / files where the table is created.
Please advise.
Thanks,
Raj
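DESCRIBE with the FORMATTED (or EXTENDED) keyword prints the HDFS location of a table, and of an individual partition. A sketch with placeholder names:

DESCRIBE FORMATTED mytable;
DESCRIBE FORMATTED mytable PARTITION (dt='2013-11-20');
-- look for the Location: field in the output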
I am trying to create a Hive sequence file from another table by running the
following -
Your query has the following error(s):
OK
FAILED: ParseException line 5:0 cannot recognize input near 'STORED' 'STORED'
'AS' in constant; click the Error Log tab above for details
CREATE TABLE temp_xyz as
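In CREATE TABLE ... AS SELECT, the storage clause must come before AS, which is the usual cause of this ParseException. A hedged sketch (the source query is a placeholder):

CREATE TABLE temp_xyz
STORED AS SEQUENCEFILE
AS
SELECT * FROM source_table;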
I want to do a simple test like this - but it is not working -
select ComplexUDFExample(List("a", "b", "c"), "b") from table1 limit 10;
FAILED: SemanticException [Error 10011]: Line 1:25 Invalid function 'List'
On Tuesday, February 4, 201
How do I test a Hive GenericUDF which accepts two parameters, List<T> and T?
List -> Can it be the output of a collect_set? Please advise.
I have a generic UDF which takes List<T> and T. I want to test how it works
through Hive.
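Hive has no List constructor; the literal syntax is the built-in array() function, and collect_set() likewise produces an array. A sketch against the query from the thread:

select ComplexUDFExample(array('a', 'b', 'c'), 'b') from table1 limit 10;
-- or feed it an aggregated set:
select ComplexUDFExample(collect_set(col1), 'b') from table1 group by col2;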
On Monday, January 20, 2014 5:19 PM, Raj Hadoop wrote:
Hi,
I have the following requirement from a Hive table below.
CustNum | ActivityDates | Rates
100 | 10-Aug-13,12-Aug-13,20-Aug-13 | 10,15,20
The data above says that
From 10 Aug to 11 Aug the rate is 10.
From 12 Aug to 19 Aug the rate is 15.
From 20 Aug till date the rate is 20.
Note : The order is m
On Thu, Jan 30, 2014 at 3:19 PM, Raj hadoop wrote:
Hi,
Can someone help me with how to delete duplicate records in a Hive table?
I know that delete and update are not supported by Hive, but still,
if someone knows an alternative it would help me.
Thanks,
Raj.
The following is an example of a GenericUDF. I wanted to test this through a
Hive query - basically to pass parameters something like "select
ComplexUDFExample('a','b','c') from employees limit 10".
---
Ok. I just figured it out. I have to set the classpath with export. It's working now.
On Friday, January 17, 2014 3:37 PM, Raj Hadoop wrote:
Hi,
I am trying to compile a basic Hive UDF Java file. I am using all the jar files
in my classpath, but I am not able to compile it and am getting the following
error. I am using CDH4. Can anyone advise, please?
$ javac HelloWorld.java
HelloWorld.java:3: package org.apache.hadoop.hive.ql.exec does
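The error means the Hive jars are not actually on javac's classpath. A hedged sketch, assuming HIVE_HOME and HADOOP_HOME point at the installs (jar locations vary by distribution):

# build a classpath from every jar in the Hive and Hadoop lib directories
export CLASSPATH=$(echo $HIVE_HOME/lib/*.jar $HADOOP_HOME/*.jar | tr ' ' ':')
javac HelloWorld.java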
e-your-own-tweets-part-two-loading-hive-sql-queries/
>> https://github.com/kevinweil/elephant-bird
>> On Mon, Jan 6, 2014 at 9:36 AM, Raj Hadoop wrote:
Hi,
I am trying to load data that is in JSON format into a Hive table. Can anyone
suggest what method I need to follow?
Thanks,
Raj
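Two common routes, sketched with placeholder names. Either keep each JSON document as a single string column and extract fields with the built-in get_json_object, or use a JSON SerDe so columns map directly (the SerDe class below ships with HCatalog in later distributions; availability depends on your version, and column names must match the JSON keys):

CREATE TABLE raw_json (line STRING);
SELECT get_json_object(line, '$.user.name') FROM raw_json;

CREATE TABLE tweets (id BIGINT, msg STRING)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe';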
like you're essentially doing a pivot function. Your best bet is to
write a custom UDAF or look at the windowing functions available in recent
releases.
Matt
On Dec 28, 2013 12:57 PM, "Raj Hadoop" wrote:
Dear All Hive Group Members,
I have the following requirement.
Input:
Ticket#|Date of booking|Price
100|20-Oct-13|54
100|21-Oct-13|56
100|22-Oct-13|54
100|23-Oct-13|55
100|27-Oct-13|60
100|30-Oct-13|47
101|10-Sep-13|12
101|13-Sep-13|14
101|20-Oct-13|6
Expected Output:
Ticket#|Initial|Delta1
Hi,
I have a large set of text files. I have created a Hive table pointing to each
of these text files. I am looking to compress the files to save storage.
1) How should I compress the files to use LZO compression?
2) How do I know whether the LZO compression utility (command?) is installed on
the H
Thanks Brad
On Monday, December 2, 2013 5:09 PM, Brad Ruderman wrote:
Check out size():
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
Thanks,
Brad
On Mon, Dec 2, 2013 at 5:05 PM, Raj Hadoop wrote:
Hi,
How do I find the number of elements in an array in a Hive table?
Thanks,
Raj
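The built-in size() UDF answers this; a one-line sketch with placeholder names:

SELECT size(my_array_col) FROM my_table;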
Hi,
1) My requirement is to load a file (a tar.gz file which has multiple
tab-separated-values files; one file is the main file, which has huge data -
about 10 GB per day) to an externally partitioned Hive table.
2) What I am doing is I have automated the process by extracting
Hi,
I have web log files (text format). I want to load these files into a Hive
table in compressed format. How do I do it?
Should I compress the text file (using any Linux utilities) and then create the
Hive table?
Can anyone provide me the Hive syntax for loading the compressed file?
Thank
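For plain-text tables Hive reads gzip files transparently, so a hedged sketch is exactly the two-step flow the question guesses at (paths and names are placeholders):

gzip weblog_2013-07-05.txt

CREATE TABLE weblogs (line STRING);
LOAD DATA LOCAL INPATH '/tmp/weblog_2013-07-05.txt.gz' INTO TABLE weblogs;
-- note: gzip files are not splittable; bzip2 compresses slower but splits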
, id) as path_xxx from your_table
where id <1000
......
Cdt.
2013/11/4 Raj Hadoop
How can I use the concat function? I did not get it. Can you please elaborate.
> My requirement is to create a HDFS directory like
> (cust_id>1000 and cust_id<2000)
You can use the concat function or a case to do this, like:
Concat ('/data1/customer/', id)
Where id <1000
Etc..
Hope this helps you ;)
On 3 Nov 2013 at 23:51, "Raj Hadoop" wrote:
All,
I want to create partitions like the below and create a Hive external table.
How can I do that?
/data1/customer/id<1000
/data1/customer/id>1000 and id < 2000
/data1/customer/id >2000
Is this possible (< and > symbols in folders?)
My requirement is to partition the hive table based o
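Path characters aside, Hive expects partition directories in key=value form, so a hedged sketch is to encode each range as a partition value and register the directories explicitly (all names are placeholders):

CREATE EXTERNAL TABLE customer (cust_id BIGINT, name STRING)
PARTITIONED BY (id_range STRING);

ALTER TABLE customer ADD PARTITION (id_range='0000-0999')
  LOCATION '/data1/customer/id_range=0000-0999';
ALTER TABLE customer ADD PARTITION (id_range='1000-1999')
  LOCATION '/data1/customer/id_range=1000-1999';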
will affect the load and query time.
4. Think about compression as well beforehand, as that will govern the data
split, and the performance of your queries as well.
Regards,
Manish
Original message
From: Raj Hadoop
Date: 11/03/2013 7:
Hi,
I am sending this to the three dist-lists of Hadoop, Hive and Sqoop, as this
question is closely related to all three areas.
I have this requirement.
I have a big table in Oracle (about 60 million rows - primary key Customer Id).
I want to bring this to HDFS and then create a Hive exter
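A hedged sketch of the usual pipeline: a parallel Sqoop import split on the primary key into a chosen HDFS directory, then an external Hive table over it (connection details, names, and mapper count are placeholders):

sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott --password tiger \
  --table CUSTOMERS \
  --split-by CUSTOMER_ID \
  --num-mappers 8 \
  --target-dir /data/customers

CREATE EXTERNAL TABLE customers (customer_id BIGINT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/customers';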
Tim
On Thu, Oct 31, 2013 at 4:34 PM, Raj Hadoop wrote:
cture of the data. In general in Hive (depending on your cluster size) you
need to balance the number of files with their size; a smaller number of files
is typically preferred, but partitions will help when date-restricting.
Thx,
Brad
On Thu, Oct 31, 2013 at 3:34 PM, Raj Hadoop wrote:
Hi,
I am planning for a Hive external partition table based on a date.
Which one of the below yields better performance, or do both have the same
performance?
1) Partition based on one folder per day
LIKE date INT
2) Partition based on one folder per year / month / day (so it has three
folders)
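The two layouts as DDL sketches (column names are placeholders; both work, the single column means fewer directory levels while the three-level form also allows pruning by year or month alone):

CREATE EXTERNAL TABLE events_a (msg STRING)
PARTITIONED BY (dt INT);
-- e.g. one folder per day: dt=20131031

CREATE EXTERNAL TABLE events_b (msg STRING)
PARTITIONED BY (year INT, month INT, day INT);
-- e.g. year=2013/month=10/day=31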
Thanks. It worked for me now when I use it as an empty string.
From: Krishnan K
To: "user@hive.apache.org" ; Raj Hadoop
Sent: Thursday, October 17, 2013 11:11 AM
Subject: Re: Hive Query Questions - is null in WHERE
For string columns, nu
All,
When a query is executed like the below,
select field1 from table1 where field1 is null;
I am getting results which have empty values or nulls in field1. How does
IS NULL work in Hive queries?
Thanks,
Raj
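In text-backed tables an empty field is usually the empty string '' rather than NULL (unless serialization.null.format says otherwise), which is why both kinds of rows can appear. A sketch separating the cases:

select field1 from table1 where field1 is null;                -- true NULLs only
select field1 from table1 where field1 = '';                   -- empty strings only
select field1 from table1 where field1 is null or field1 = '';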
Yes, I have it.
Thanks,
Raj
From: Sonal Goyal
To: "user@hive.apache.org" ; Raj Hadoop
Sent: Monday, October 7, 2013 1:38 AM
Subject: Re: How to load /t /n file to Hive
Do you have the option to escape your tabs and newlines in your base fil
"user@hive.apache.org" ; Raj Hadoop
Sent: Friday, September 20, 2013 4:43 PM
Subject: Re: How to load /t /n file to Hive
Hi
One way that we used to solve that problem is to transform the data when you
are creating/loading it; for example, we've applied UrlEncode to each field on
create
Thanks,
From: Nitin Pawar
To: "user@hive.apache.org" ; Raj Hadoop
Sent: Friday, September 20, 2013 3:15 PM
Subject: Re: How to load /t /n file to Hive
If your data contains newline chars, it's better you write a custom map reduce
job and convert the dat
Please note that there is an escape character in the fields where the \t and \n
are present.
From: Raj Hadoop
To: Hive
Sent: Friday, September 20, 2013 3:04 PM
Subject: How to load /t /n file to Hive
Hi,
I have a file which is delimited by a tab. Also, there are some fields in the
file which have a tab \t character and a newline \n character in them.
Is there any way to load this file using the Hive load command? Or do I have to
use a custom (Java) MapReduce input format?
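Since the embedded tabs and newlines are escaped in the data, one hedged sketch is to declare the escape character in the DDL so a plain LOAD DATA works (this assumes backslash is the escape character; names are placeholders):

CREATE TABLE weblog (c1 STRING, c2 STRING, c3 STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  ESCAPED BY '\\'
LINES TERMINATED BY '\n';

LOAD DATA LOCAL INPATH '/tmp/weblog.tsv' INTO TABLE weblog;
-- caveat: a literal newline inside a field still breaks line-based readers;
-- if rows truly span lines, a custom InputFormat is the safer route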
Hi,
The Hive Thrift service is not running continuously. I have to execute the
command (hive --service hiveserver &) very frequently. Can anyone help me with
this?
Raj
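A hedged sketch for keeping the server alive after logout and capturing its output (the log path is a placeholder):

# nohup detaches the process from the terminal so it survives the session
nohup hive --service hiveserver > /var/log/hiveserver.out 2>&1 &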
dec;
SET mapreduce.output.fileoutputformat.compress=true;
Thanks
Sanjay
From: Raj Hadoop
Reply-To: "user@hive.apache.org" , Raj Hadoop
Date: Thursday, July 25, 2013 5:00 AM
To: Hive
Subject: Help in debugging Hive Query
All,
I am trying to determine visits per customer from an Omniture weblog file using
Hive.
Table: omniture_web_data
Columns: visid_high,visid_low,evar23,visit_page_num
Sample Data:
visid_high,visid_low,evar23,visit_page_num
999,888,1003,10
999,888,1003,14
999,888,1003,6
999,777,1003,12
999,777,
All,
Can anyone give me tips on how to convert the following Oracle SQL to a Hive
query?
SELECT a.c100, a.c300, b.c400
FROM t1 a JOIN t2 b ON a.c200 = b.c200
WHERE a.c100 IN (SELECT DISTINCT a.c100
FROM t1 a JOIN t2 b ON a.c200 = b.c200
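Hive of that era does not accept IN (subquery); the stock rewrite is LEFT SEMI JOIN, which reproduces the IN semantics (only left-side columns may be selected). A hedged sketch of the query above:

SELECT a.c100, a.c300, b.c400
FROM t1 a
JOIN t2 b ON a.c200 = b.c200
LEFT SEMI JOIN (
  SELECT DISTINCT a2.c100 AS c100
  FROM t1 a2 JOIN t2 b2 ON a2.c200 = b2.c200
) f ON a.c100 = f.c100;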
Hi,
The log file that I am trying to load through Hive has some special characters.
The field is shown below, and the special characters ¿¿ are also shown.
Shockwave Flash
in;Motive ManagementPlug-in;Google Update;Java(TM)Platform SE 7U21;McAfee
SiteAdvisor;McAfee Virtual Technician;W
? Any tips please.
From: Sanjay Subramanian
To: "user@hive.apache.org" ; Raj Hadoop
Sent: Saturday, July 6, 2013 4:32 AM
Subject: Re: Loading a flat file + one additional field to a Hive table
How about this?
Assume you have a log file called oompaloompa.log
TIMESTAM
Hi,
Can anyone please suggest the best way to do the following in Hive?
Load 'today's date stamp' + << ALL FIELDS C1,C2,C3,C4 IN A FILE F1 >> to a Hive
table T1 (D1,C1,C2,C3,C4)
Can the following command be modified in some way to achieve the above?
hive > load data local inpath '/
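LOAD DATA only moves files, so it cannot add a column on the way in. A hedged sketch of the common workaround: load the raw file into a staging table, then INSERT with the date stamp computed in the SELECT (names are placeholders; unix_timestamp/from_unixtime are long-standing built-ins):

LOAD DATA LOCAL INPATH '/tmp/f1.txt' OVERWRITE INTO TABLE t1_staging;

INSERT INTO TABLE t1
SELECT from_unixtime(unix_timestamp(), 'yyyy-MM-dd') AS d1,
       c1, c2, c3, c4
FROM t1_staging;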
Adding to that -
Multiple files can be concatenated from the directory, like
Example: cat 0-0 00-1 0-2 > final
From: Raj Hadoop
To: "user@hive.apache.org" ; "matouk.iftis...@ysance.com"
Sent: Friday, July 5, 2013 12:17
hive > set hive.io.output.fileformat=CSVTextFile;
hive > insert overwrite local directory '/usr/home/hadoop/da1/' select * from
customers
*** customers is a Hive table
From: Edward Capriolo
To: "user@hive.apache.org"
Sent: Friday, July 5, 2013 12:10 AM
Hi,
When I installed Hive earlier on my machine, I used an Oracle Hive metastore
script. Please find the script attached. Hive worked fine for me on this box
with no issues.
I am trying to install Hive on another machine with a different Oracle
metastore. I executed the metastore script but I am having is
n the same box you ran hive?
On Mon, Jul 1, 2013 at 4:01 PM, Raj Hadoop wrote:
Hi,
My requirement is to load data from a (one column) Hive view to a CSV file.
After loading it, I don't see any file generated.
I used the following commands to load data to a file from the view v_june1:
hive > set hive.io.output.fileformat=CSVTextFile;
hive > insert overwrite local directory '/u
Hi,
I have the Hive metastore created in an Oracle database.
But when I execute my Hive queries, I see the following directory and file
created:
TempStatsStore (directory)
derby.log
What are these? Can anyone suggest why a Derby log is created even though my
javax.jdo.option.ConnectionURL is pointi
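A likely explanation (hedged): these come from Hive's statistics collection, which by default uses its own embedded Derby database (hive.stats.dbclass=jdbc:derby), independent of the metastore connection. If the auto-gathered stats are not needed, a sketch of turning them off per session (the same property can go in hive-site.xml):

SET hive.stats.autogather=false;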
Hi,
I am trying to run the following to load an Oracle table into a Hive table using
Sqoop:
sqoop import --connect jdbc:oracle:thin:@//inferri.dm.com:1521/DBRM25 --table
DS12.CREDITS --username UPX1 --password piiwer --hive-import
Note: DS12 is a schema and UPX1 is the user through which the sche
user@hive.apache.org; Raj Hadoop
Sent: Friday, May 24, 2013 6:32 PM
Subject: Re: Apache Flume Properties File
so you spammed three big lists there, eh? with a general question for somebody
to serve up a solution on a silver platter for you -- all before you even read
any documentation on the subject m
Hi,
I just installed Apache Flume 1.3.1 and am trying to run a small example to
test. Can anyone suggest how I can do this? I am going through the documentation
right now.
Raj
Hi,
I just finished setting up Apache Sqoop 1.4.3. I am trying to test a basic sqoop
import on Oracle.
sqoop import --connect jdbc:oracle:thin:@//intelli.dmn.com:1521/DBT --table
usr1.testonetwo --username usr123 --password passwd123
I am getting the error as
13/05/22 17:18:16 INFO manager
Hi,
My Hive job logs are being written to the /tmp/hadoop directory. I want to
change this to a different location, i.e. a subdirectory somewhere under the
'hadoop' user home directory.
How do I change it?
Thanks,
Raj
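A hedged pointer: the per-session query logs that default to /tmp/<user> are controlled by a hive-site.xml property (the path below is a placeholder; the service logs are configured separately via hive.log.dir in hive-log4j.properties):

<property>
  <name>hive.querylog.location</name>
  <value>/home/hadoop/hive/querylogs</value>
</property>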
I am setting up a metastore on Oracle for Hive. I executed the script
hive-schema-0.9.0-sql file successfully, too.
When I ran this:
hive > show tables;
I am getting the following error:
ORA-01950: no privileges on tablespace
What kind of Oracle privileges are required (quota-wise) for Hive?
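ORA-01950 means the metastore user has no quota on its default tablespace. A hedged sketch of typical grants, run as a DBA (user and tablespace names are placeholders):

ALTER USER hiveuser QUOTA UNLIMITED ON users;
-- plus the basic object privileges the schema scripts need:
GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE TO hiveuser;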
Sanjay -
This is the first location I tried. But Apache Hive 0.9.0 doesn't have an
Oracle folder. It only had mysql and derby.
Thanks,
Raj
From: Sanjay Subramanian
To: "u...@hadoop.apache.org" ; Raj Hadoop
; Hive
Sent: Tuesday, May 21, 20
I got it. This is the link.
http://svn.apache.org/viewvc/hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.9.0.oracle.sql?revision=1329416&view=co&pathrev=1329416
____
From: Raj Hadoop
To: Hive ; User
Sent: Tuesday, May 21, 2013 3:08 PM
Subject:
I am trying to get the Oracle scripts for the Hive metastore.
http://mail-archives.apache.org/mod_mbox/hive-commits/201204.mbox/%3c20120423201303.9742b2388...@eris.apache.org%3E
The scripts in the above link have a + at the beginning of each line. How am I
supposed to execute scripts like this thro
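Those leading + signs are just commit-diff markers from the mailing-list rendering. A hedged one-liner to strip them before running the script (file names are placeholders):

sed 's/^+//' hive-schema-0.9.0.oracle.sql.raw > hive-schema-0.9.0.oracle.sql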
Thanks Sanjay
From: Sanjay Subramanian
To: bharath vissapragada ;
"user@hive.apache.org" ; Raj Hadoop
Cc: User
Sent: Tuesday, May 21, 2013 2:27 PM
Subject: Re: hive.metastore.warehouse.dir - Should it point to a physical
directory
Hi
So that means I need to create an HDFS directory (not an OS physical directory)
under Hadoop that needs to be used in the Hive config file for this property.
Right?
From: Dean Wampler
To: Raj Hadoop
Cc: Sanjay Subramanian ;
"user@hive.apache.org"
Yes, that's what I meant - local physical directory. Thanks.
From: bharath vissapragada
To: user@hive.apache.org; Raj Hadoop
Cc: User
Sent: Tuesday, May 21, 2013 1:59 PM
Subject: Re: hive.metastore.warehouse.dir - Should it point to a physical
directory
Hi
"user@hive.apache.org" ; Raj Hadoop ; Dean Wampler
Cc: User
Sent: Tuesday, May 21, 2013 1:53 PM
Subject: Re: hive.metastore.warehouse.dir - Should it point to a physical
directory
Notes below
From: Raj Hadoop
Reply-To: "user@hive.apache.org" , Raj Hadoop
Date: Tuesday, May
Ok. I got it. My questions -
1) Should a local physical directory be created before using this property?
2) Should an HDFS file directory be created from Hadoop before using this
property?
From: Dean Wampler
To: user@hive.apache.org; Raj Hadoop
Cc: User
Can someone help me on this? I am stuck installing and configuring Hive with
Oracle. Your timely help is really appreciated.
From: Raj Hadoop
To: Hive ; User
Sent: Tuesday, May 21, 2013 1:08 PM
Subject: hive.metastore.warehouse.dir - Should it point to a
Hi,
I am configuring Hive. I have a question on the property
hive.metastore.warehouse.dir.
Should this point to a physical directory? I am guessing it is a logical
directory under Hadoop fs.default.name. Please advise whether I need to create
any directory for the variable hive.metastore.wa
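It is an HDFS path, not a local one; the conventional setup from the standard Hive getting-started docs is to pre-create the warehouse directory and make it group-writable:

hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w /user/hive/warehouse
# hive.metastore.warehouse.dir then points at /user/hive/warehouse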