Re: Multiple namenodes

2016-07-22 Thread Kun Ren
Thanks a lot for your suggestions. On Fri, Jul 22, 2016 at 3:58 PM, Daniel Templeton wrote: > On 7/22/16 12:23 PM, Kun Ren wrote: > >> Thanks a lot for your reply, Daniel, very helpful. >> >> About (1): I will consider this way, thanks. Also besides multiple >>

Re: Multiple namenodes

2016-07-22 Thread Kun Ren
communicates with the active namenode, not both nodes, am I understanding right? Thanks. On Fri, Jul 22, 2016 at 1:27 PM, Daniel Templeton wrote: > On 7/22/16 8:45 AM, Kun Ren wrote: > >> (1). How to create/start multiple namenodes? >> > > Just pretend like you have two separa

Multiple namenodes

2016-07-22 Thread Kun Ren
Hi Genius, I am currently involved in a project that will create/start multiple namenodes (it is different from Federation in that we want to partition the metadata not only by directory, and may support other partition schemes, and we want to support distributed operations that cross multiple na

Start client side daemon

2016-07-22 Thread Kun Ren
Hi Genius, I understand that we use the command to start the namenode and datanode. But I don't know how HDFS starts the client side and creates the client-side object (like DistributedFileSystem), and the client-side RPC server? Could you please point out how HDFS starts the client-side daemon? If the clien
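The short answer from the thread context is that the client is not a separate daemon at all: a DistributedFileSystem-like object is created lazily inside the client JVM the first time the filesystem is requested, and then cached for reuse. The sketch below models that create-on-first-use-then-cache pattern in plain Java; `FakeFileSystem` and `LazyClientCache` are illustrative names, not Hadoop classes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch (not Hadoop's real code): the client-side object is
// created lazily, in the client process, on the first get() for a URI,
// and cached so later calls reuse the same instance.
public class LazyClientCache {
    static class FakeFileSystem {
        final String uri;
        FakeFileSystem(String uri) { this.uri = uri; }
    }

    private static final Map<String, FakeFileSystem> CACHE = new ConcurrentHashMap<>();

    // Mirrors the pattern of FileSystem.get(): create on first use, then reuse.
    public static FakeFileSystem get(String uri) {
        return CACHE.computeIfAbsent(uri, FakeFileSystem::new);
    }

    public static void main(String[] args) {
        FakeFileSystem a = get("hdfs://localhost:9000");
        FakeFileSystem b = get("hdfs://localhost:9000");
        System.out.println(a == b);  // prints "true": same cached instance
    }
}
```

So there is nothing to "start" on the client side before running a command; the shell command itself constructs the client objects when it runs.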

Re: Cp command is not atomic

2016-05-25 Thread Kun Ren
t's sufficient for your use case that the final rename step is > atomic. > > --Chris Nauroth > > > > > On 5/25/16, 8:21 AM, "Kun Ren" wrote: > > >Hi Genius, > > > >If I understand correctly, the shell command "cp" for the HDFS is not

HDFS Federation-- cross namenodes operations

2016-05-25 Thread Kun Ren
Hi Genius, Does HDFS Federation support cross-namenode operations? For example: ./bin/hdfs dfs -cp input1/a.xml input2/b.xml Suppose that input1 belongs to namenode 1 and input2 belongs to namenode 2: does Federation support this operation? And if not, why? Thanks.
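One way such paths are usually made to coexist in one namespace is a client-side ViewFS mount table, which maps directories to different namenodes; a cross-namenode cp then still runs as an ordinary client-side copy (read from one namenode, write through the other), not as a single namenode operation. A core-site.xml sketch, where the cluster name `clusterX` and the host/port values are illustrative:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://clusterX</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./input1</name>
  <value>hdfs://namenode1:8020/input1</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./input2</name>
  <value>hdfs://namenode2:8020/input2</value>
</property>
```

With this mount table, `/input1` and `/input2` resolve to different namenodes, and the client routes each half of the cp accordingly.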

Cp command is not atomic

2016-05-25 Thread Kun Ren
Hi Genius, If I understand correctly, the shell command "cp" for HDFS is not atomic, is that correct? For example: ./bin/hdfs dfs -cp input/a.xml input/b.xml This command actually does 3 things: 1. reads input/a.xml; 2. creates a new file input/b.xml; 3. writes the content of a.xml to b.xml;
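The reply in this thread points at the standard mitigation: do the non-atomic read/create/write against a temporary file, and expose the destination only through a final rename, which is atomic. A local-filesystem sketch of that scheme using `java.nio` (paths and names are illustrative, and this is not HDFS code, but HDFS rename gives the same guarantee at the namenode):

```java
import java.io.IOException;
import java.nio.file.*;

// Sketch of copy-then-atomic-rename: steps 1-3 from the mail (read, create,
// write) happen on a temp file, so readers never observe a half-written
// destination; only the final rename makes it visible, atomically.
public class AtomicCopy {
    public static void copyAtomically(Path src, Path dst) throws IOException {
        Path tmp = dst.resolveSibling(dst.getFileName() + ".tmp");
        // Non-atomic part: read src, create tmp, write the bytes.
        Files.copy(src, tmp, StandardCopyOption.REPLACE_EXISTING);
        // Atomic part: a single rename exposes the finished file.
        Files.move(tmp, dst, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Path a = dir.resolve("a.xml");
        Files.writeString(a, "<conf/>");
        copyAtomically(a, dir.resolve("b.xml"));
        System.out.println(Files.readString(dir.resolve("b.xml")));  // prints "<conf/>"
    }
}
```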

Re: cp and mv

2016-05-23 Thread Kun Ren
the shell, and > org.apache.hadoop.fs.shell package is a good place to start. Specifically, > have a look at CommandWithDestination.java. > > Ciao, > > L > > On May 20, 2016, at 12:05 PM, Kun Ren wrote: > > Hi Genius, > > Currently I debugged the cp and mv operations, for

cp and mv

2016-05-20 Thread Kun Ren
Hi Genius, Currently I debugged the cp and mv operations, for example: (1) ./bin/hdfs dfs -cp input/a.xml input/b.xml (2) ./bin/hdfs dfs -mv input/a.xml input/b.xml My understanding is that the cp operation will create a new file b.xml and copy the content of a.xml to b.xml; for the mv opera
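The key difference the thread is circling is that cp duplicates data while mv is a rename, i.e. a metadata-only change (in HDFS, a namenode operation: no blocks are rewritten on the datanodes). A local-filesystem analogue of that contrast, with illustrative paths:

```java
import java.io.IOException;
import java.nio.file.*;

// cp vs mv on a local filesystem: copy duplicates the bytes and leaves the
// source in place; move is a rename, after which the source path is gone.
public class CpVsMv {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Path a = dir.resolve("a.xml");
        Files.writeString(a, "data");

        Files.copy(a, dir.resolve("b.xml"));   // cp: a.xml still exists
        System.out.println(Files.exists(a));   // prints "true"

        Files.move(a, dir.resolve("c.xml"));   // mv: a.xml is gone
        System.out.println(Files.exists(a));   // prints "false"
    }
}
```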

Re: Compile proto

2016-05-10 Thread Kun Ren
Yes, this fixed the problem. Thanks a lot for your reply. On Tue, May 10, 2016 at 2:13 PM, Colin McCabe wrote: > Hi Kun Ren, > > You have to add your new proto file to the relevant pom.xml file. > > best, > Colin > > On Fri, May 6, 2016, at 13:04, Kun Ren wrote: > >

Compile proto

2016-05-06 Thread Kun Ren
Hi Genius, I added a new proto into the HADOOP_DIR/hadoop-common-project/hadoop-common/src/main/proto; however, every time when I run the following Maven commands: mvn install -DskipTests mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true It only compiles a
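Per Colin McCabe's reply, the new .proto file has to be registered in the module's pom.xml, because the protoc invocation there lists its sources explicitly. A sketch of the relevant fragment (the surrounding plugin configuration is abbreviated, and `MyNewProtocol.proto` is a hypothetical file name standing in for the newly added proto):

```xml
<!-- In hadoop-common/pom.xml, inside the hadoop-maven-plugins "protoc"
     execution: each .proto file is listed explicitly, so a new file must
     be added here or it will never be compiled. -->
<includes>
  <include>Security.proto</include>
  <!-- ... existing entries ... -->
  <include>MyNewProtocol.proto</include>  <!-- the newly added proto -->
</includes>
```

After adding the entry, rerunning `mvn install -DskipTests` should generate the Java sources for the new proto.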

Get the methodName and parameters from the Call object in server.java

2016-05-05 Thread Kun Ren
Hi Genius, I want to intercept the requests in the processRpcRequest() method in the listener component in Server.java; for example, if I want to intercept the "mkdirs" and "append" requests, I just try to get the method name and parameters before this line: callQueue.put(call); Currently
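A generic sketch of the interception idea (this is not Hadoop's Server.java: in the real server the method name lives in the protobuf request header, and `Call` below is a stub with made-up fields):

```java
import java.util.*;
import java.util.concurrent.*;

// Stub of the hook point described in the mail: inspect a call's method name
// and parameters just before the call is queued with callQueue.put(call).
public class CallInterceptor {
    public static class Call {
        public final String methodName;
        public final Object[] params;
        public Call(String methodName, Object... params) {
            this.methodName = methodName;
            this.params = params;
        }
    }

    private final BlockingQueue<Call> callQueue = new LinkedBlockingQueue<>();
    private final Set<String> watched = Set.of("mkdirs", "append");
    public final List<String> intercepted = new ArrayList<>();

    public void processRpcRequest(Call call) {
        if (watched.contains(call.methodName)) {   // the hook: record name/arity
            intercepted.add(call.methodName + "/" + call.params.length);
        }
        try {
            callQueue.put(call);                   // normal path continues
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        CallInterceptor s = new CallInterceptor();
        s.processRpcRequest(new Call("mkdirs", "/tmp/x"));
        s.processRpcRequest(new Call("getListing", "/"));
        System.out.println(s.intercepted);  // prints "[mkdirs/1]"
    }
}
```

The structure mirrors the mail's proposal: the filter runs inline, before the call reaches the handler queue, so unwatched requests pay only a set lookup.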

Get the methodName and parameters from the Call object in server.java

2016-05-02 Thread Kun Ren
Hi Genius, I want to intercept the requests in the processRpcRequest() method in the listener component in Server.java; for example, if I want to intercept the "mkdirs" and "append" requests, I just try to get the method name and parameters before this line: callQueue.put(call); Currently

Re: handlerCount

2016-04-28 Thread Kun Ren
elease-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml#L602-L606 > > https://github.com/apache/hadoop/blob/rel/release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java#L473-L474 > > --Ch

handlerCount

2016-04-28 Thread Kun Ren
Hi Genius, I have a quick question: I remember I saw that the default value for handlerCount is 10 (the number of Handler threads), but I cannot find where it is defined in the source code. Could you please point me to where I can find it in the 2.7.2 codebase? Thanks a lot.
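As the linked reply shows, the value comes from the `dfs.namenode.handler.count` property (default 10, defined in DFSConfigKeys and documented in hdfs-default.xml). It can be overridden in hdfs-site.xml; the value 20 below is just an example:

```xml
<!-- hdfs-site.xml: override the namenode handler thread count
     (dfs.namenode.handler.count, default 10). -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>20</value>
</property>
```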

Re: HDFS Federation

2016-04-28 Thread Kun Ren
> will work as is. > 73,Kihwal > > > From: Kun Ren > To: hdfs-dev@hadoop.apache.org > Sent: Wednesday, April 27, 2016 7:29 PM > Subject: HDFS Federation > > Hi Genius, > > I have two questions about the HDFS Federation: > (1) Since there are multiple

HDFS Federation

2016-04-27 Thread Kun Ren
confirm that Hadoop 2.7.2 supports HDFS Federation, but by default there is only 1 namenode, is this correct? Meanwhile, do you think it is possible to configure HDFS Federation in pseudo-distributed mode on one node? Thanks so much in advance. Best, Kun Ren
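Running two federated namenodes on a single machine is mostly a matter of giving each nameservice its own ports and storage directories. An hdfs-site.xml sketch, where the nameservice ids (`ns1`, `ns2`) and port numbers are illustrative:

```xml
<!-- hdfs-site.xml sketch: two federated namenodes on one machine,
     distinguished by RPC port. Each also needs its own name directory
     and HTTP address (omitted here for brevity). -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>localhost:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>localhost:8021</value>
</property>
```

Datanodes configured with this file register with both nameservices, which is what makes the pseudo-distributed federation setup workable on one node.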

Change log level

2016-04-19 Thread Kun Ren
Hi All, I compiled the source code and used Eclipse to remotely debug it. I want to see the debug information from the log, so I changed the log level for some classes; for example, I changed FsShell's log level to DEBUG (changed it from http://localhost:50070/logLevel), then I add the fo
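One caveat worth noting: a level set through the /logLevel servlet only lasts until the daemon restarts. To make the change persistent, the same level can be set in the log4j.properties on the classpath; a one-line sketch (the logger name follows the usual log4j convention of matching the class's package and name):

```properties
# log4j.properties: persistent DEBUG level for FsShell, instead of the
# transient setting made through the /logLevel servlet.
log4j.logger.org.apache.hadoop.fs.FsShell=DEBUG
```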