Hi Farah, If the data are on another server, you still need to move them one
way or another.  A bare-bones way to do this is to use the `hadoop fs -put
...` command, after which you can create an external or managed table in
Hive.  If the data are in a relational DB you can use Sqoop.  You can also
look into Flume (for ingesting streaming data) or Oozie (a workflow
scheduler) to manage the whole process.
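For example, here is a rough sketch (the HDFS staging path, the ssh user, and
the two columns are made up for illustration; substitute your real schema and
delimiter).  Since `hadoop fs -put` can read from stdin when the source is
`-`, you can stream the file straight from the remote server into HDFS
without first staging the 30 GB file on the Hadoop node's local disk:

  # create a staging directory in HDFS (path is hypothetical)
  hadoop fs -mkdir /user/hive/staging/lu_customer

  # stream the remote file over ssh directly into HDFS;
  # "-" tells -put to read from stdin
  ssh nzdata@10.11.12.13 'cat /home/nzdata/CLOUD/SCRIPT/LU_CUSTOMER.txt' \
    | hadoop fs -put - /user/hive/staging/lu_customer/LU_CUSTOMER.txt

  -- then, in Hive, point an external table at that directory
  CREATE EXTERNAL TABLE LU_CUSTOMER (
    CUSTOMER_ID   INT,     -- hypothetical columns; use the real schema
    CUSTOMER_NAME STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
  LOCATION '/user/hive/staging/lu_customer';

Because the table is external and its location already contains the file, no
separate LOAD DATA step is needed.  If the data lived in a relational DB
instead, the Sqoop analogue would be roughly `sqoop import --connect
jdbc:mysql://10.11.12.13/mydb --table LU_CUSTOMER --hive-import` (connection
string made up).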

--
Alex K
<http://www.cloudera.com/company/press-center/hadoop-world-nyc/>

On Thu, Mar 1, 2012 at 8:20 AM, Omer, Farah <fo...@microstrategy.com> wrote:

>  Hello,
>
> Could anybody tell me how I can load data into a Hive table when the flat
> file exists on another server and not locally on the Hadoop node?
>
> For example, I am trying to load the table LU_CUSTOMER, and the flat file
> for this table exists on some other RH Linux server: 10.11.12.13. The
> LU_CUSTOMER flat file is about 30 GB in size, so moving it locally to the
> Hadoop node would take a long time. I am trying to avoid copying it onto
> the Hadoop node first.
> So I wonder if there is a way to load the table directly from the other
> server.
>
> The syntax that I know currently is: LOAD DATA LOCAL INPATH
> '/home/nzdata/CLOUD/SCRIPT/LU_CUSTOMER.txt' OVERWRITE INTO TABLE
> LU_CUSTOMER;
>
> But if I want to load from the other server directly, the path won’t be
> local.
>
> Any suggestions? Is that possible?
>
> Thanks.
>
> Farah Omer
>
> Senior DB Engineer, MicroStrategy, Inc.
> T: 703 2702230
> E: fo...@microstrategy.com
> http://www.microstrategy.com
>
