Hi Shaik

AFAIK it is not possible in Hadoop; the HDFS storage concept is different from 
RAID. In HDFS your file is broken down into fixed-size blocks, and each of 
these blocks is stored on one or more DataNodes in your cluster according to 
the replication factor. Block placement is decided by the NameNode, so you 
cannot dictate that a 30GB slice goes to one node and a 40GB slice to another.
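
If you want to verify how HDFS has actually spread a file's blocks across 
your DataNodes, you can run "hadoop fsck /path/to/file -files -blocks 
-locations", or query it from Java. Below is a minimal sketch using the 
standard org.apache.hadoop.fs API; the path /user/shaik/data.txt is only a 
placeholder for your own file:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockLocations {
        public static void main(String[] args) throws Exception {
            // Picks up fs.defaultFS etc. from the cluster config
            // (core-site.xml / hdfs-site.xml) on the classpath
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Placeholder path -- replace with your own file
            Path file = new Path("/user/shaik/data.txt");
            FileStatus status = fs.getFileStatus(file);

            // One BlockLocation per block of the file; each entry
            // lists the DataNodes holding a replica of that block
            BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation b : blocks) {
                System.out.printf("offset=%d length=%d hosts=%s%n",
                    b.getOffset(), b.getLength(),
                    String.join(",", b.getHosts()));
            }
            fs.close();
        }
    }

Note this API only reports placement; the write-time knobs you do have are 
the block size and the replication factor, not where the blocks land.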


Regards
Bejoy KS

Sent from handheld, please excuse typos.

-----Original Message-----
From: shaik ahamed <shaik5...@gmail.com>
Date: Mon, 13 Aug 2012 15:42:38 
To: <user@hive.apache.org>
Reply-To: user@hive.apache.org
Subject: loading data in HDFS similar to raid concept(i.e i have 100GB data
 file load as 30GB in one node, 40 GB in other node and 30GB in other node

Hi Users,


                         Is it possible in HDFS to load a 100GB file as
30GB, 30GB & 40GB pieces on different nodes (similar to the RAID concept)?
If so, please let me know how to achieve it.


Thanks in advance


Regards,
shaik.
