Hi
For moving data from HDFS/Hive to any RDBMS, Sqoop is the right tool to go
for. It is built specifically for parallel data transfers back and forth
between an RDBMS and HDFS.
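As a rough sketch, a Sqoop export from an HDFS directory into a MySQL table
could look like this (the connect string, credentials, table name, and paths
are placeholders, not anything from this thread):

# placeholder connection details, table, and paths
sqoop export \
  --connect jdbc:mysql://dbhost/mydb \
  --username dbuser -P \
  --table mytable \
  --export-dir /user/hive/warehouse/mytable \
  --input-fields-terminated-by '\001'

The --input-fields-terminated-by value should match the delimiter Hive used
when writing the data.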
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: wzc
Date: Sat, 5 May 2012 02:18:39
Hi Nicole,
Thanks for your response.
I will try your method and use '\001' as the separator (since the query
results contain tabs).
2012/5/5 Gesli, Nicole:
What I'd do is write the query output into a local directory like this:
INSERT OVERWRITE LOCAL DIRECTORY '/mydir'
SELECT …
The output columns will be delimited with ^A (\001). If you need a
tab-delimited format, you can replace them like this:
cat /mydir/* | tr "\001" "\t" >> /mynewdir/myfil
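If the destination is MySQL, as in the original question, one possible loading
step is MySQL's LOAD DATA LOCAL INFILE; the file path, table name, and columns
below are hypothetical, chosen to match the tr example above:

-- hypothetical file and table names
LOAD DATA LOCAL INFILE '/mynewdir/myfile'
INTO TABLE my_mysql_table
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';

LOAD DATA can also read the ^A-delimited file directly with FIELDS TERMINATED
BY X'01', which would skip the tr step.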
Hi all:
I am new to Hive, and I am trying to run a query through the Hive CLI and
load the result into MySQL.
I tried redirecting the CLI output to a tmp file and loading the tmp file
into a MySQL table. The problem is that some columns of our query result
may contain special chars, such as tab (\t) and newline (\n).
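(For context, the redirect approach I mean is just something like the
following, where the query and output path are placeholders:

# placeholder query and tmp file
hive -e 'SELECT field1, field2 FROM some_table' > /tmp/result.txt

so any tab or newline embedded in a column value breaks the row and field
boundaries of the flat file.)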
I have this issue for all Hive queries.
In fact, I've tried only two types of queries: ones with UNION ALL, and
simple queries like SELECT field1, field2 FROM SOME_TABLE WHERE
field2=SOME_CONST;
Thanks.
On Fri, May 4, 2012 at 5:58 PM, Bejoy KS wrote:
Hi Alexander
Do you have the single-node execution issue only for this particular query
that involves UNION ALL, or is it the same across all Hive queries?
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: Alexander Goryunov
Date: Fri, 4 May 2012 17:23:
On 4 May 2012, at 14:10, Bhavesh Shah wrote:
> Hello all,
> I have Elastic Mapreduce instance. While executing hive job flow I needed
> Subnet ID to access the VPC.
> Is there any way to add/create the Amazon Elastic Mapreduce Instance in that
> VPC?
If you're using the Ruby client
(http://a
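Assuming the elastic-mapreduce Ruby command-line client, my understanding is
that it accepts a --subnet option for launching a job flow inside a VPC, so a
rough sketch would be (the subnet ID is a placeholder):

# placeholder subnet ID; requires a client version with VPC support
elastic-mapreduce --create --alive --subnet subnet-xxxxxxxx

Hive steps can then be added to that job flow as usual.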
Hi Bejoy KS,
Thanks for your answer.
from job.xml:
mapred.job.tracker = full.namenode.hostname:8021
On Fri, May 4, 2012 at 5:07 PM, Bejoy Ks wrote:
> Hi Alexander
> Since the tasks are executing only on the local node, it looks like the
> Hive MapReduce jobs are running locally. What is the value of
> mapred.job.tracker in your job.xml or mapred-site.xml?
Hello all,
I have an Elastic MapReduce instance. While executing a Hive job flow I
need the Subnet ID to access the VPC.
Is there any way to add/create the Amazon Elastic MapReduce instance in
that VPC?
--
Regards,
Bhavesh Shah
Hi Alexander
Since the tasks are executing only on the local node, it looks like the Hive
MapReduce jobs are running locally. What is the value of mapred.job.tracker
in your job.xml or mapred-site.xml?
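For reference, that property is normally set in mapred-site.xml; a minimal
sketch, with a placeholder hostname and the common JobTracker port:

<!-- placeholder host; 8021 is a common JobTracker port -->
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker.example.com:8021</value>
</property>

If the value is left as the default "local", jobs run in-process on a single
node instead of on the cluster.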
Regards
Bejoy KS
From: Alexander Goryunov
To: user@hive