Great! Thx
From: Ravi . [mailto:iphul...@gmail.com]
Sent: Sunday, June 05, 2011 01:40 AM
To: user@hive.apache.org ; Sun, Michael
Cc: sangee@gmail.com
Subject: Re: configuring Hive server and Hadoop server in separate machines
Hey,
Hive tables are basically a metadata overlay on top of folders in HDFS
containing the table data. So I'd guess Flume's hdfs-sink suffices.
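To illustrate that idea, here is a sketch of an external table layered over a directory the Flume HDFS sink could write into. The path, table name, and columns are made up for illustration; adjust them to your sink configuration.

```shell
# Write the DDL to a file; run it later with a Hive client (hive -f).
# The LOCATION is a hypothetical Flume hdfs-sink target directory.
cat > /tmp/flume_events.hql <<'EOF'
CREATE EXTERNAL TABLE IF NOT EXISTS flume_events (
  event_ts STRING,
  body     STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/flume/events';
EOF
# On a node with a configured Hive client:
#   hive -f /tmp/flume_events.hql
```

Because the table is EXTERNAL, Hive just reads whatever files the sink drops into that directory; no separate load step is needed.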
Please correct me if I am wrong.
Thanks
On Sun, Jun 5, 2011 at 1:52 AM, Prashanth R wrote:
> Hi,
>
> Just throwing this out to get some good ideas.
No, but you do need to get the Hadoop and Hive binary files onto the client.
Just copy the Hadoop home and Hive home directory contents to the client and
export them as HADOOP_HOME and HIVE_HOME.
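A minimal sketch of that setup, assuming the two homes were copied to /opt on the client (the hostname and paths are placeholders):

```shell
# Assumed: Hadoop and Hive homes already copied over from the cluster, e.g.
#   scp -r hadoopmaster:/opt/hadoop /opt/hadoop
#   scp -r hadoopmaster:/opt/hive   /opt/hive

# Point the client at the copied directories.
export HADOOP_HOME=/opt/hadoop
export HIVE_HOME=/opt/hive
export PATH="$HIVE_HOME/bin:$HADOOP_HOME/bin:$PATH"
```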
On Sat, Jun 4, 2011 at 10:23 PM, Sun, Michael wrote:
> Do you need to copy all the Hive jars into the Hadoop lib?
>
Do you need to copy all the Hive jars into the Hadoop lib?
From: Ravi . [mailto:iphul...@gmail.com]
Sent: Sunday, June 05, 2011 12:56 AM
To: user@hive.apache.org ; sangeetha s
Subject: Re: configuring Hive server and Hadoop server in separate machines
It's possible to set up a separate node as a Hive client, and it's very common
practice.
It's not required to set up the Hive client on any of the Hadoop master or
slave nodes (NameNode, JobTracker, DataNode).
You can set up the Hive client on a separate node that can connect to the
NameNode and JobTracker.
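For example, the client node only needs config files pointing at the cluster. A minimal sketch using the Hadoop 0.20-era property names current when this thread was written (hostnames and ports are placeholders):

```shell
mkdir -p /tmp/hive-client-conf

# core-site.xml: which NameNode the client talks to (placeholder host/port).
cat > /tmp/hive-client-conf/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
EOF

# mapred-site.xml: which JobTracker Hive submits jobs to (placeholder host/port).
cat > /tmp/hive-client-conf/mapred-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker-host:8021</value>
  </property>
</configuration>
EOF
```

These files would go under the copied Hadoop home's conf directory on the client node.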
Isn't this just the HDFS sink?
On Sat, Jun 4, 2011 at 1:22 PM, Prashanth R wrote:
> Hi,
>
> Just throwing this out to get some good ideas. Is anyone aware of any sink
> for flume that would write / load data directly to the hive tables? If not,
> one solution that I could think of is dump the
Hi,
Just throwing this out to get some good ideas. Is anyone aware of any sink
for Flume that would write / load data directly into Hive tables? If not,
one solution I could think of is to dump the data to HDFS or S3 and have a
periodic MapReduce job load it into Hive.
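The periodic-load idea above could be sketched like this; the table name, staging path, and schedule are all hypothetical:

```shell
# A small script that moves files staged in HDFS (by Flume, or pulled down
# from S3) into a Hive table. LOAD DATA INPATH moves the files, it does not
# copy them.
cat > /tmp/load_events.sh <<'EOF'
#!/bin/sh
DAY=$(date +%Y-%m-%d)
hive -e "LOAD DATA INPATH '/flume/staging/$DAY' INTO TABLE events;"
EOF
chmod +x /tmp/load_events.sh

# Schedule it, e.g. hourly from cron:
#   0 * * * * /tmp/load_events.sh
```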
--
- Prash
Hi all,
I have set up a Hadoop cluster with 4 nodes running on CentOS 5.4 machines.
Right now I have configured Hive on the master node (the Hadoop server node) and
I am able to run Hive queries via a JDBC client (web application)
successfully.
Is it required to configure the Hive server on the same machine as the Hadoop
server?