Thanks a lot for your reply, Daniel, very helpful.

About (1): I will consider this approach, thanks. Also, besides running
multiple clusters, are there any other options for doing this? Thanks.
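
Just to make sure I follow the two-cluster setup: a client could simply open a
handle to each cluster, something like the rough sketch below (the hostnames
nn1 and nn2 are placeholders, not real values from my setup):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    // Rough sketch only: two independent HDFS clusters, one handle each.
    Configuration conf = new Configuration();
    FileSystem fs1 = FileSystem.get(URI.create("hdfs://nn1:8020"), conf); // cluster 1
    FileSystem fs2 = FileSystem.get(URI.create("hdfs://nn2:8020"), conf); // cluster 2
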
About (2), if I understand correctly, HDFS uses the Quorum Journal Manager
(QJM) for HA, and the client still communicates only with the active
Namenode, not both nodes. Am I understanding this right? Thanks.
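
Also, to check my understanding of the random-selection idea: I am picturing
something like the rough sketch below, where the list of Namenode URIs would
come from the client's configuration (the method name is just a placeholder):

    import java.net.URI;
    import java.util.List;
    import java.util.Random;

    // Sketch: a random number mod the number of NNs, used as a list index.
    URI pickRandomNamenode(List<URI> namenodes, Random random) {
        return namenodes.get(random.nextInt(namenodes.size()));
    }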

On Fri, Jul 22, 2016 at 1:27 PM, Daniel Templeton <dan...@cloudera.com>
wrote:

> On 7/22/16 8:45 AM, Kun Ren wrote:
>
>> (1).  How to create/start multiple namenodes?
>>
>
> Just pretend like you have two separate HDFS clusters and set them up that
> way.  (That is actually what you have.)
>
>> (2). Once I have multiple Namenodes running, what do you think is the
>> best/simplest way to change the HDFS client code so that clients send
>> their requests to a random Namenode?
>>
>
> The client code is already built to try multiple NNs to handle HA. You can
> look there for inspiration.  If you want random, grab a random number and
> mod it by the number of NNs, then use that as an index into the list of NNs.
>
>> (3). I need to support communication between the Namenodes. My current
>> plan is to create one more protocol that supports communication between
>> the Namenodes, something like ClientProtocol and ClientDatanodeProtocol.
>> Do you think it is easy to do so? Or do you have other suggestions for
>> supporting communication between the Namenodes?
>>
>
> You will indeed need to define a new protocol.  Not the easiest thing in
> the world, but there are plenty of docs on protobuf.  Good luck!
>
> Daniel
>
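
Regarding (3), below is a very rough sketch of the kind of Namenode-to-Namenode
protocol interface I have in mind, modeled loosely on ClientProtocol. The
interface and method names are placeholders, nothing that exists in HDFS today,
and a real version would also need protobuf message definitions and translator
classes like the existing protocols have:

    import java.io.IOException;

    // Hypothetical protocol between Namenodes; names are placeholders.
    public interface NamenodeCoordinationProtocol {
        // Protocol version number, as other Hadoop protocols declare.
        long versionID = 1L;

        // Forward a client request for a path owned by a peer Namenode.
        void forwardRequest(String path, byte[] serializedRequest) throws IOException;
    }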
