Retaking this thread,
and sorry to insist:
following the wiki steps to run HDFS leads to a
"Hadoop common not found" error.
On Sat, Jun 12, 2010 at 12:49 AM, Alberich de megres
wrote:
> Oks,
> Thanks a lot!!!
>
> I'm going to try it now.
>
> On Fri, Jun 11, 2010 at 2:46 AM, Jitend
Hello again!
A quick question:
- How do you debug HDFS?
I'm using Eclipse, but which class do I need to run?
Thanks!
>
> On 6/10/10 2:01 PM, "Alberich de megres" wrote:
>
> Thanks!
>
> Can I compile just the source at the repo and use it just as-is?
> I mean, without having any Hadoop source code (except the HDFS code at
> the web I told you), or without the need to integrate it
command line "bin/hadoop dfs".
>
>
> On 6/10/10 1:16 PM, "Alberich de megres" wrote:
>
> Thanks for the quick reply,
>
> But I'm talking about just HDFS... is it possible to test it separately?
> with source code available at:
> http://github.com/apach
wrote:
> This link should help.
> http://wiki.apache.org/hadoop/QuickStart
>
>
> On 6/10/10 12:20 PM, "Alberich de megres" wrote:
Hello!
I'm new to HDFS; I just downloaded the source code and compiled it.
Now I want to execute it on 2 machines, but I don't know how to start the servers.
Is there any web page/doc, or can someone shed some light on how to start?
Thanks!!
Alberich
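(For anyone landing on this thread later: a minimal two-machine setup on the 0.20 line usually just needs both boxes pointed at the same namenode. A sketch, assuming the master host is named "master" -- substitute your own hostname:)

```xml
<!-- conf/core-site.xml on BOTH machines; "master" is a placeholder hostname -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml; replication 1 since there is only one datanode -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

With that in place, run "bin/hadoop namenode -format" once on the master, start the namenode there with "bin/hadoop-daemon.sh start namenode", and start a datanode on the second machine with "bin/hadoop-daemon.sh start datanode".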
ther way to accomplish what you want to do before attempting a client
>> reimplementation in C right now.. if you only need to talk to the namenode
>> and not the datanodes it might be a little easier but still, lots of work
>> that will probably be obsolete after another release or
Client" which has inner classes like
> Call and Connection, which wrap the actual Java IO. This all lives in
> the org.apache.hadoop.ipc package.
>
> Be sure to use a good IDE like IJ or Eclipse to browse the code; it
> makes following all this stuff much easier.
>
>
>
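The Call/Connection bookkeeping mentioned above can be sketched in plain JDK code. This is a toy model, not the real org.apache.hadoop.ipc.Client: the socket IO is replaced by a thread that echoes the request back, but the id-per-Call and wait/notify pattern is the same idea.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the ipc.Client pattern: each request becomes a Call with a
// unique id; a background "connection" completes it and wakes the caller.
public class IpcClientSketch {
    static class Call {
        final int id;
        final String request;
        private String response;
        private boolean done;

        Call(int id, String request) { this.id = id; this.request = request; }

        synchronized String waitForResult() throws InterruptedException {
            while (!done) wait();          // caller blocks until the reply lands
            return response;
        }

        synchronized void complete(String resp) {
            response = resp;
            done = true;
            notifyAll();                   // wake the waiting caller
        }
    }

    private final AtomicInteger counter = new AtomicInteger();
    private final Map<Integer, Call> pendingCalls = new ConcurrentHashMap<>();

    // In the real Client this writes the call to a socket and a reader thread
    // matches the response back to the Call by id; here a thread fakes the trip.
    public String call(String request) throws InterruptedException {
        Call call = new Call(counter.incrementAndGet(), request);
        pendingCalls.put(call.id, call);
        new Thread(() -> {
            Call pending = pendingCalls.remove(call.id);  // "receiveResponse"
            pending.complete("ack:" + pending.request);
        }).start();
        return call.waitForResult();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(new IpcClientSketch().call("blockReport"));
    }
}
```

Again, just an illustration of the blocking-call shape; the real class handles timeouts, connection reuse, and serialization on top of this.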
silly question, but I am really lost at this point.
Thanks for the patience.
On Fri, Apr 2, 2010 at 2:11 AM, Alberich de megres
wrote:
> Hi Jay!
>
> Thanks for the answer, but I'm asking how it actually gets sent.
> blockReport is declared in the DatanodeProtocol interface; this.namenode has no
> reference to an actual namenode, it's a wrapper for a network
> protocol created by that RPC.waitForProxy call -- so when it calls
> namenode.blockReport, it's sending that information over RPC to the namenode
> instance over the network.
>
> On Thu, Apr 1, 2010 at 5:50 AM
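The explanation above (this.namenode being a wrapper, not a real namenode) can be made concrete with JDK dynamic proxies, the same machinery RPC.waitForProxy builds on. A self-contained sketch -- ToyProtocol is a made-up stand-in for DatanodeProtocol, and the "wire" is just a string log, so treat this as the idea, not the real code:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// The "namenode" object here is not a real namenode: it is a dynamic proxy
// that intercepts every method call and turns it into a message, which is
// the same shape as the proxy RPC.waitForProxy hands to the DataNode.
public class RpcProxySketch {
    interface ToyProtocol {                // stand-in for DatanodeProtocol
        String blockReport(String datanodeId, int blockCount);
    }

    static ToyProtocol waitForProxy(StringBuilder wire) {
        InvocationHandler handler = (proxy, method, args) -> {
            // The real code serializes method name + args into a Call and
            // writes it to the namenode's socket; here we just log it.
            wire.append(method.getName()).append('(')
                .append(args[0]).append(',').append(args[1]).append(')');
            return "processed";            // stand-in for the deserialized reply
        };
        return (ToyProtocol) Proxy.newProxyInstance(
                ToyProtocol.class.getClassLoader(),
                new Class<?>[]{ToyProtocol.class},
                handler);
    }

    public static void main(String[] args) {
        StringBuilder wire = new StringBuilder();
        ToyProtocol namenode = waitForProxy(wire);   // no network, no namenode
        String reply = namenode.blockReport("dn-1", 42);
        System.out.println("sent: " + wire + " -> got: " + reply);
    }
}
```

So when DataNode.java calls namenode.blockReport(...), it is really invoking a handler like this one, which ships the call over the socket instead of logging it.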
Hi everyone!
Sailing through the HDFS source code that comes with Hadoop 0.20.2, I
could not understand how HDFS sends the block report to the NameNode.
As I can see, in
src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java we
create the this.namenode interface with an RPC.waitForProxy call (which I
coul