Thanks a lot for your suggestions.
On Fri, Jul 22, 2016 at 3:58 PM, Daniel Templeton wrote:
> On 7/22/16 12:23 PM, Kun Ren wrote:
>
>> Thanks a lot for your reply, Daniel, very helpful.
>>
>> About (1): I will consider this approach, thanks. Also, besides multiple
>>
communicates with the
active namenode, not both nodes, am I understanding correctly? Thanks.
On Fri, Jul 22, 2016 at 1:27 PM, Daniel Templeton wrote:
> On 7/22/16 8:45 AM, Kun Ren wrote:
>
>> (1). How to create/start multiple namenodes?
>>
>
> Just pretend like you have two separate
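A minimal sketch of what two namenodes look like in configuration terms,
assuming a federated setup; the nameservice IDs and ports below are
illustrative, not prescribed:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class TwoNamenodesConf {
  public static void main(String[] args) {
    // One namespace per namenode; both can live on the same host as long
    // as the RPC ports differ.
    Configuration conf = new HdfsConfiguration();
    conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1,ns2");
    conf.set("dfs.namenode.rpc-address.ns1", "localhost:8020");
    conf.set("dfs.namenode.rpc-address.ns2", "localhost:8021");
    System.out.println(conf.get(DFSConfigKeys.DFS_NAMESERVICES));
  }
}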
Hi Genius,
I am currently involved in a project that will create/start multiple
namenodes. (It is different from Federation in that we want to partition the
metadata not only by directory, may support other partitioning schemes,
and want to support distributed operations that cross multiple
namenodes.)
Hi Genius,
I understand that we use commands to start the namenode and datanode, but I
don't know how HDFS starts the client side and creates the client-side
objects (like DistributedFileSystem) and the client-side RPC server. Could
you please point out how HDFS starts the client-side daemon?
If the client
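For what it's worth, a minimal sketch of how the client-side object comes
into being; there is no client daemon to start, and the URI below is
illustrative:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ClientSideDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // FileSystem.get() resolves the "hdfs" scheme and instantiates
    // DistributedFileSystem, which wraps a DFSClient. The DFSClient holds
    // an RPC proxy to the namenode; the client runs no RPC server.
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:8020"), conf);
    System.out.println(fs.getClass().getName()); // DistributedFileSystem
    System.out.println(fs.exists(new Path("/")));
  }
}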
> …it's sufficient for your use case that the final rename step is
> atomic.
>
> --Chris Nauroth
>
>
>
>
> On 5/25/16, 8:21 AM, "Kun Ren" wrote:
>
> >Hi Genius,
> >
> >If I understand correctly, the shell command "cp" for the HDFS is not
> >atomic, is that correct?
Hi Genius,
Does HDFS Federation support cross-namenode operations?
For example:
./bin/hdfs dfs -cp input1/a.xml input2/b.xml
Suppose that input1 belongs to namenode 1 and input2 belongs to namenode 2;
does Federation support this operation? If not, why not?
Thanks.
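For what it's worth, since cp is a client-side copy, the same effect can be
sketched in Java against two namespaces explicitly (hostnames and paths
below are illustrative): the client reads from one namespace and writes to
the other, with no namenode-to-namenode coordination involved.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CrossNamenodeCopy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Two namespaces, two FileSystem instances.
    FileSystem srcFs = FileSystem.get(URI.create("hdfs://nn1:8020"), conf);
    FileSystem dstFs = FileSystem.get(URI.create("hdfs://nn2:8020"), conf);
    // Client-side copy: read bytes from one namespace, write into the other.
    FileUtil.copy(srcFs, new Path("/input1/a.xml"),
                  dstFs, new Path("/input2/b.xml"),
                  false /* do not delete source */, conf);
  }
}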
Hi Genius,
If I understand correctly, the shell command "cp" for the HDFS is not
atomic, is that correct?
For example:
./bin/hdfs dfs -cp input/a.xml input/b.xml
This command actually does 3 things: 1. read input/a.xml; 2. create a new
file input/b.xml; 3. write the content of a.xml to b.xml.
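Those three steps, written out against the FileSystem API as a minimal
sketch (paths as in the example above), make the non-atomicity visible:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class NonAtomicCp {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // 1. Open the source for reading.
    FSDataInputStream in = fs.open(new Path("input/a.xml"));
    // 2. Create the destination; b.xml is visible from this point on,
    //    possibly empty or half-written, hence no atomicity.
    FSDataOutputStream out = fs.create(new Path("input/b.xml"));
    // 3. Stream the bytes across; a crash here leaves a partial file.
    IOUtils.copyBytes(in, out, 4096, true /* close streams when done */);
  }
}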
…the shell, and
> org.apache.hadoop.fs.shell package is a good place to start. Specifically,
> have a look at CommandWithDestination.java.
>
> Ciao,
>
> L
>
> On May 20, 2016, at 12:05 PM, Kun Ren wrote:
>
> Hi Genius,
>
> Currently I debugged the cp and mv operations, for
Hi Genius,
Currently I debugged the cp and mv operations, for example:
(1) ./bin/hdfs dfs -cp input/a.xml input/b.xml
(2)./bin/hdfs dfs -mv input/a.xml input/b.xml
My understanding is that the cp operation will create a new file b.xml
and copy the content of a.xml into it; for the mv operation
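For contrast, a minimal sketch of what mv amounts to at the FileSystem API
level: a single rename() call, i.e. one metadata operation on the namenode
with no blocks copied, which is why rename is atomic while cp is not.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MvDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // mv maps to one rename() call: only namenode metadata changes.
    boolean ok = fs.rename(new Path("input/a.xml"), new Path("input/b.xml"));
    System.out.println("renamed: " + ok);
  }
}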
Yes, this fixed the problem. Thanks a lot for your reply.
On Tue, May 10, 2016 at 2:13 PM, Colin McCabe wrote:
> Hi Kun Ren,
>
> You have to add your new proto file to the relevant pom.xml file.
>
> best,
> Colin
>
> On Fri, May 6, 2016, at 13:04, Kun Ren wrote:
> >
Hi Genius,
I added a new proto file into
HADOOP_DIR/hadoop-common-project/hadoop-common/src/main/proto; however, every
time I run the following Maven commands:
mvn install -DskipTests
mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
It only compiles a
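For anyone hitting the same wall: in the 2.x source tree the .proto files
compiled for hadoop-common are listed explicitly in the compile-protoc
execution of the hadoop-maven-plugins plugin in hadoop-common's pom.xml, so
a new file has to be added there; roughly like this, where YourNew.proto is
a placeholder name:

<source>
  <directory>${basedir}/src/main/proto</directory>
  <includes>
    <!-- ...existing entries... -->
    <include>YourNew.proto</include>
  </includes>
</source>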
Hi Genius,
I want to intercept requests in the processRpcRequest() method of the
listener component in Server.java. For example, to intercept the
"mkdirs" and "append" requests, I try to get the method name and
parameters before this line:
callQueue.put(call);
Currently
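A sketch of the shape such a hook could take; extractMethodName() is
hypothetical (the method name travels inside the ProtobufRpcEngine request
header, and Server.java's fields are package-private, so a real version
would have to unwrap that header itself):

// Hypothetical hook, called just before callQueue.put(call) in
// Server.processRpcRequest().
private void maybeIntercept(Call call) {
  // extractMethodName() is a placeholder: it would unwrap the protobuf
  // RequestHeaderProto from the call's rpcRequest and read its methodName.
  String method = extractMethodName(call);
  if ("mkdirs".equals(method) || "append".equals(method)) {
    LOG.debug("intercepted " + method + ": " + call);
  }
}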
> https://github.com/apache/hadoop/blob/rel/release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml#L602-L606
>
> https://github.com/apache/hadoop/blob/rel/release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java#L473-L474
>
>
> --Chris Nauroth
Hi Genius,
I have a quick question:
I remember seeing that the default value for handlerCount is 10 (the number
of Handler threads), but I cannot find where it is defined in the source
code. Could you please point me to where I can find it in the 2.7.2
codebase? Thanks a lot.
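For reference, this is what the two links above resolve to: the key is
dfs.namenode.handler.count, defined in DFSConfigKeys with a default of 10.
A small sketch of reading it back:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class HandlerCountDemo {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // DFS_NAMENODE_HANDLER_COUNT_KEY     = "dfs.namenode.handler.count"
    // DFS_NAMENODE_HANDLER_COUNT_DEFAULT = 10
    int handlers = conf.getInt(DFSConfigKeys.DFS_NAMENODE_HANDLER_COUNT_KEY,
        DFSConfigKeys.DFS_NAMENODE_HANDLER_COUNT_DEFAULT);
    System.out.println("namenode handler threads: " + handlers);
  }
}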
> …will work as is.
> 73, Kihwal
>
>
> From: Kun Ren
> To: hdfs-dev@hadoop.apache.org
> Sent: Wednesday, April 27, 2016 7:29 PM
> Subject: HDFS Federation
>
> Hi Genius,
>
> I have two questions about the HDFS Federation:
> (1) Since there are multiple
…confirm that Hadoop 2.7.2 supports HDFS Federation, but
by default there is only one namenode, is this correct? Meanwhile, do you
think it is possible to configure HDFS Federation in pseudo-distributed
mode on one node?
Thanks so much in advance.
Best,
Kun Ren
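One single-node route, borrowed from Hadoop's own tests rather than a
production setup, is MiniDFSCluster (from the hadoop-hdfs test artifact)
with a federated topology:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;

public class FederatedMiniCluster {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Two nameservices, one JVM, one machine: federation in miniature.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleFederatedTopology(2))
        .numDataNodes(1)
        .build();
    cluster.waitActive();
    System.out.println(cluster.getFileSystem(0).getUri());
    System.out.println(cluster.getFileSystem(1).getUri());
    cluster.shutdown();
  }
}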
Hi All,
I compiled the source code and used Eclipse to remotely debug it. I
want to see the debug information in the log, so I changed the log level
for some classes; for example, I changed FsShell's log level to
DEBUG (from http://localhost:50070/logLevel), then I added the
fo
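One caveat worth noting here: the /logLevel servlet on :50070 changes
logger levels inside the NameNode's JVM, while FsShell runs in a separate
client JVM, so client-side DEBUG output has to be enabled in the client
process instead, e.g. via HADOOP_ROOT_LOGGER=DEBUG,console or, as a minimal
sketch, directly through log4j:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class ClientDebugLogging {
  public static void main(String[] args) {
    // Raise one class's level inside *this* JVM; the :50070 /logLevel
    // servlet only affects the daemon JVM it is served from.
    Logger log = Logger.getLogger("org.apache.hadoop.fs.FsShell");
    log.setLevel(Level.DEBUG);
    System.out.println(log.getLevel());
  }
}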