-Original Message-
From: Sasha Dolgy [mailto:sdo...@gmail.com]
Sent: Monday, May 18, 2009 9:50 AM
To: core-user@hadoop.apache.org
Subject: Re: proper method for writing files to hdfs
Ok, on the same page with that.
Going back to the original question. In our scenario we are trying to
stream data into HDFS, and despite the posts and hints I've been
reading, it's still a tough nut to crack. This is why I thought
(and thankfully I wasn't right) that we were going about it the wrong way.
checkpoint the
namenode's data so you can recover from a namenode failure that has
corrupted data.
Bill
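For anyone wiring this up: in the configs of that era the secondary
namenode's checkpoint behaviour is driven by a couple of properties in
hadoop-site.xml. A sketch, assuming Hadoop 0.18/0.19-style property names
(the directory and interval values below are examples, not recommendations):

```xml
<!-- hadoop-site.xml: secondary namenode checkpoint settings -->
<property>
  <name>fs.checkpoint.dir</name>
  <value>/var/hadoop/secondary</value> <!-- example local path for checkpoint images -->
</property>
<property>
  <name>fs.checkpoint.period</name>
  <value>3600</value> <!-- seconds between checkpoints -->
</property>
```

The point being that the secondary namenode is a checkpointing helper, not
a hot standby, so there is nothing for clients to "fail over" to.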
-Original Message-
From: Sasha Dolgy [mailto:sdo...@gmail.com]
Sent: Monday, May 18, 2009 9:34 AM
To: core-user@hadoop.apache.org
Subject: Re: proper method for writing files to hdfs
Hi Bill,
Thanks for that. If the NameNode is unavailable, how do we find the
secondary name node? Is there a way to deal with this in the code or
should a load balancer of some type sit above each and only direct
traffic to the namenode if it's listening?
-sd
On Mon, May 18, 2009 at 2:09 PM, Bill wrote:
Sasha,
Connecting to the namenode is the proper way to establish the hdfs
connection. Afterwards the Hadoop client handler that is called by your
code will go directly to the datanodes. There is no reason for you to
communicate directly with a datanode, nor is there a way for you to even know
where the blocks are stored.
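To make the write path concrete, here is a minimal sketch using the standard
FileSystem API of that era: the client is pointed only at the namenode, and
the library streams bytes to the datanodes on its own. The namenode URI and
file path are made-up examples; this assumes a reachable cluster and the
Hadoop jars on the classpath.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Point the client at the namenode only; the client library
        // talks to the datanodes itself. (Example host/port.)
        conf.set("fs.default.name", "hdfs://namenode.example.com:9000");

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/stream-example.txt");

        // create() returns a stream; bytes are pipelined to the
        // appropriate datanodes behind the scenes.
        FSDataOutputStream out = fs.create(file);
        try {
            out.writeBytes("event data goes here\n");
        } finally {
            out.close();
        }
        fs.close();
    }
}
```

Note that in this Hadoop generation a file's contents only become durable
once the stream is closed, which matters for the streaming use case above.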