Thanks. I have another doubt. I just want to run the examples and see how they work. I am trying to copy a file from the local file system to HDFS using the command
bin/hadoop fs -put conf input

It gives the following error:

09/03/29 05:50:54 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.NoRouteToHostException: No route to host
09/03/29 05:50:54 INFO hdfs.DFSClient: Abandoning block blk_-5733385806393158149_1053

I have only one datanode in my cluster, and my replication factor is also 1 (as configured in hadoop-site.xml). Can you please provide a solution for this?

Thanks in advance,
SreeDeepya

sree deepya wrote:
>
> Hi sir/madam,
>
> I am SreeDeepya, doing an MTech at IIIT. I am working on a project named
> "cost-effective and scalable storage server". The main goal of the project
> is to be able to store images on a server, with data of up to petabytes.
> For that we are using HDFS. I am new to Hadoop and am just learning about
> it. Can you please clarify some of the doubts I have?
>
> At present we have configured one datanode and one namenode. The
> jobtracker is running on the namenode and the tasktracker on the datanode.
> The namenode also acts as the client: we are writing programs on the
> namenode to store or retrieve images. My doubts are:
>
> 1. Can we put the client and the namenode on two separate systems?
>
> 2. Can we access the images on the datanode of the Hadoop cluster from a
> machine on which HDFS is not installed?
>
> 3. At present we may not have data up to petabytes; it will be in
> gigabytes. Is Hadoop still efficient at storing megabytes and gigabytes
> of data?
>
> Thanking you,
>
> Yours sincerely,
> SreeDeepya

--
View this message in context: http://www.nabble.com/hdfs-doubt-tp22764502p22765332.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
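For reference, the single-replica setup described in the message above would be configured roughly like this in conf/hadoop-site.xml. This is a minimal sketch: `dfs.replication` and `fs.default.name` are standard Hadoop properties of that era, but the host name and port here are placeholders, not taken from the original post.

```xml
<!-- conf/hadoop-site.xml: sketch of a single-datanode cluster
     with a replication factor of 1, as described above -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- placeholder address; substitute your namenode's host and port -->
    <value>hdfs://namenode-host:9000</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <!-- one datanode in the cluster, so one replica per block -->
    <value>1</value>
  </property>
</configuration>
```

With only one datanode, any block write must succeed on that node; a NoRouteToHostException during `createBlockOutputStream` therefore typically points to the client being unable to reach the datanode's data-transfer port at all (e.g. a firewall or wrong host address), rather than to the replication setting.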
