Correction: it is fs.default.name, not fs.dfs.name.
Thanks,
Usman
Thanks Steve,
We just conducted a quick test by taking a node with the same version of Hadoop as the namenode and datanodes, and we changed fs.dfs.name on this node to point to the master on port 9000. We did a put/get and it worked. All our machines (potential clients we can use) are on the same LAN. This will give us the ability to put a multitude of files into HDFS quickly.
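For reference, the client-side change described above amounts to a small fragment in the node's Hadoop configuration file. This is a hypothetical sketch, not the poster's actual file: the hostname "master" and port 9000 come from the thread, and fs.default.name is the property name given in the correction at the top.

```xml
<!-- Hypothetical client-side configuration fragment: point this node
     at the existing namenode ("master", port 9000, per the thread). -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000/</value>
  </property>
</configuration>
```

With this in place, the node needs no namenode or datanode daemons of its own; the hadoop CLI simply acts as a remote client.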
Usman Waheed wrote:
Hi All,

Is it possible to make a node just a Hadoop client, so that it can put/get files into HDFS but not act as a namenode or datanode? I already have a master node and 3 datanodes, but I need to execute puts/gets into Hadoop in parallel using more machines than just the master.


Anything on the LAN can be a client of the filesystem; you just need appropriate Hadoop configuration files to talk to the namenode and job tracker. I don't know how well the (custom) IPC works over long distances, and you have to keep the versions in sync for everything to work reliably.
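Once several such client nodes are configured, each can push files in parallel. A minimal sketch of one way to do this from a single client node, using xargs for parallelism (the directory paths, the function name, and the degree of parallelism here are illustrative assumptions, not from the thread):

```shell
#!/bin/sh
# Hypothetical sketch (not the posters' exact commands): upload every file
# in a local directory to HDFS in parallel from a client-only node.
# Assumes the hadoop CLI on this node is configured to reach the namenode.
parallel_put() {
    src="$1"    # local directory of files to upload (assumed path)
    dest="$2"   # HDFS destination directory (assumed path)
    # HDFS_PUT can be overridden (e.g. HDFS_PUT=echo) for a dry run.
    ls "$src" | xargs -P 4 -I {} ${HDFS_PUT:-hadoop fs -put} "$src/{}" "$dest"
}

# Example usage on a configured client:
#   parallel_put /data/incoming /user/usman/in
```

Running this same script on each client machine against different source directories would spread the load across the LAN, which is what the original question was after.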
