Re: create error

2010-07-05 Thread Aaron Kimball
Is there a reason you're using that particular interface? That's very
low-level.

See http://wiki.apache.org/hadoop/HadoopDfsReadWriteExample for the proper
API to use.

- Aaron
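[Editor's note: the wiki page Aaron points to uses the FileSystem API rather than DFSClient. A minimal sketch of that approach is below; the paths and file names are illustrative, not from the thread.]

```java
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: copy a local file into HDFS via the high-level FileSystem API
// instead of DFSClient. Requires a running cluster and the Hadoop jars.
public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();     // reads core-site.xml etc.
        FileSystem fs = FileSystem.get(conf);         // fs.defaultFS decides HDFS vs local
        FSDataOutputStream out = fs.create(new Path("/user/vidur/out.dat"));
        InputStream in = new FileInputStream("local.dat");
        byte[] buf = new byte[65536];
        int n;
        while ((n = in.read(buf)) != -1) {            // read() returns actual bytes read
            out.write(buf, 0, n);                     // write only what was read
        }
        in.close();
        out.close();
        fs.close();
    }
}
```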

On Sat, Jul 3, 2010 at 1:36 AM, Vidur Goyal wrote:

> Hi,
>
> I am trying to create a file in HDFS by calling create on an instance of
> DFSClient. This is the part of the code I am using:
>
> byte[] buf = new byte[65536];
> int len;
> while ((len = dis.available()) != 0) {
>     if (len < buf.length) {
>         break;
>     } else {
>         dis.read(buf, 0, buf.length);
>         ds.write(buf, 0, buf.length);
>     }
> }
>
> dis is the DataInputStream for the local file system from which I am copying
> the file, and ds is the DataOutputStream to HDFS.
>
> I get these errors:
>
> 2010-07-03 13:45:07,480 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(127.0.0.1:50010,
> storageID=DS-455297472-127.0.0.1-50010-1278144155322, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.EOFException: while trying to read 65557 bytes
>at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:265)
>at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:309)
>at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:373)
>at
>
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:525)
>at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
>at
>
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
>at java.lang.Thread.run(Thread.java:636)
>
>
> When I run the loop a number of times that writes a multiple of the block
> size, the operation runs just fine. As soon as I change the buffer array
> size to one that does not divide the block size, it starts giving errors.
> I am in the middle of a project, so any help will be appreciated.
>
> thanks
> vidur
>
> --
> This message has been scanned for viruses and
> dangerous content by MailScanner, and is
> believed to be clean.
>
>
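[Editor's note: independent of HDFS, the quoted loop has a bug: available() is not an end-of-stream test, and read() may return fewer bytes than requested while the code always writes buf.length bytes. A stream-agnostic sketch of a correct copy loop, which also works with FSDataInputStream/FSDataOutputStream:]

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// A copy loop that works for any InputStream/OutputStream pair.
public class StreamCopy {
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[65536];
        long total = 0;
        int n;
        // read() returns the number of bytes actually read (possibly fewer
        // than buf.length), or -1 at end of stream. available() only reports
        // bytes readable without blocking and must not be used as an EOF test.
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n); // write only the bytes that were read
            total += n;
        }
        return total;
    }
}
```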


Debug HDFS

2010-07-05 Thread Alberich de megres
Hello again!

 A quick question:
- How do you debug HDFS?

I'm using Eclipse, but which class do I need to run?
Thanks!


Re: Debug HDFS

2010-07-05 Thread Jeff Zhang
You can debug with MiniDFSCluster, which is used for the HDFS unit tests.
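[Editor's note: a minimal sketch of the approach Jeff describes, assuming the HDFS test sources (which contain MiniDFSCluster) are on the classpath; the 0.20-era constructor is used here, and the directory name is illustrative. Run it under the Eclipse debugger with breakpoints set in the NameNode/DataNode code.]

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Sketch: start an in-process mini HDFS cluster so NameNode and DataNode
// code run inside the same JVM as the debugger.
public class DebugHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 0.20-era constructor: (conf, numDataNodes, format, racks)
        MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
        try {
            FileSystem fs = cluster.getFileSystem();
            fs.mkdirs(new Path("/debug-me")); // any FS call now hits your breakpoints
        } finally {
            cluster.shutdown();
        }
    }
}
```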



On Mon, Jul 5, 2010 at 8:53 AM, Alberich de megres wrote:

> Hello again!
>
>  A quick question:
> - How you debug hdfs?
>
> I'm using eclipse, but which class do i need to run?
> thanks!
>



-- 
Best Regards

Jeff Zhang


Re: Newbie point to start

2010-07-05 Thread Alberich de megres
Retaking this thread, and sorry to insist:

following the wiki steps to run HDFS leads to:
Hadoop common not found



On Sat, Jun 12, 2010 at 12:49 AM, Alberich de megres
 wrote:
> Oks,
> Thanks a lot!!!
>
> I'm going to try it now.
>
> On Fri, Jun 11, 2010 at 2:46 AM, Jitendra Nath Pandey
>  wrote:
>> You can checkout hadoop-20 branch from 
>> http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20/ ,
>>  and build and run following the steps on the wiki.
>>
>>
>>
>> On 6/10/10 2:01 PM, "Alberich de megres"  wrote:
>>
>> Thanks!
>>
>> Can I compile just the source in the repo and use it as is?
>> I mean, without having any other Hadoop source code (except the HDFS code
>> at the URL I mentioned), and without integrating it with compiled Hadoop
>> code, just as if it were a separate, standalone project?
>>
>>
>>
>> On Thu, Jun 10, 2010 at 10:50 PM, Jitendra Nath Pandey
>>  wrote:
>>> You can test hdfs without setting up map-reduce cluster if that's what you 
>>> mean.
>>>
>>> Instead of bin/start-all.sh , use bin/start-dfs.sh and you can skip 
>>> configurations related to mapreduce.
>>>
>>> To test it, use DFS command line  "bin/hadoop dfs".
>>>
>>>
>>> On 6/10/10 1:16 PM, "Alberich de megres"  wrote:
>>>
>>> Thanks for the quick reply,
>>>
>>> But I'm talking about just HDFS. Is it possible to test it separately?
>>> with source code available at:
>>> http://github.com/apache/hadoop-hdfs
>>>
>>> I compiled it, and now i want to test it. (aside from hadoop)
>>>
>>>
>>> On Thu, Jun 10, 2010 at 9:37 PM, Jitendra Nath Pandey
>>>  wrote:
 This link should help.
    http://wiki.apache.org/hadoop/QuickStart


 On 6/10/10 12:20 PM, "Alberich de megres"  wrote:

 Hello!

 I'm new to HDFS; I just downloaded the source code and compiled it.

 Now I want to execute it on 2 machines, but I don't know how to start the
 servers.

 Is there any web page or doc, or can someone shed light on how to start?

 Thanks!!
 Alberich


>>>
>>>
>>
>>
>
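[Editor's note: the DFS-only workflow quoted above amounts to the commands below, run from the top of a 0.20 build tree; illustrative, and the format step is a one-time, destructive initialization.]

```shell
bin/hadoop namenode -format   # one-time: initialize the name directory
bin/start-dfs.sh              # start NameNode + DataNodes, no MapReduce daemons
bin/hadoop dfs -ls /          # smoke test via the DFS command line
bin/hadoop dfs -put README.txt /readme.txt
```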


Re: Newbie point to start

2010-07-05 Thread Alberich de megres
sorry, I pasted the wrong message:

bin/start-dfs.sh: line 50: /bin/hadoop-daemon.sh: file dont' exists

I was asking whether it is possible to run HDFS without the Hadoop common
files, only with the files in the svn repo (or this git mirror):
http://github.com/apache/hadoop-hdfs.git

thanks!!

On Mon, Jul 5, 2010 at 7:33 PM, Alberich de megres
 wrote:
> Retaking this thread, and sorry to insist:
>
> following the wiki steps to run HDFS leads to:
> Hadoop common not found
>
>
>
> On Sat, Jun 12, 2010 at 12:49 AM, Alberich de megres
>  wrote:
>> Oks,
>> Thanks a lot!!!
>>
>> I'm going to try it now.
>>
>> On Fri, Jun 11, 2010 at 2:46 AM, Jitendra Nath Pandey
>>  wrote:
>>> You can checkout hadoop-20 branch from 
>>> http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20/ ,
>>>  and build and run following the steps on the wiki.
>>>
>>>
>>>
>>> On 6/10/10 2:01 PM, "Alberich de megres"  wrote:
>>>
>>> Thanks!
>>>
>>> Can I compile just the source in the repo and use it as is?
>>> I mean, without having any other Hadoop source code (except the HDFS code
>>> at the URL I mentioned), and without integrating it with compiled Hadoop
>>> code, just as if it were a separate, standalone project?
>>>
>>>
>>>
>>> On Thu, Jun 10, 2010 at 10:50 PM, Jitendra Nath Pandey
>>>  wrote:
 You can test hdfs without setting up map-reduce cluster if that's what you 
 mean.

 Instead of bin/start-all.sh , use bin/start-dfs.sh and you can skip 
 configurations related to mapreduce.

 To test it, use DFS command line  "bin/hadoop dfs".


 On 6/10/10 1:16 PM, "Alberich de megres"  wrote:

 Thanks for the quick reply,

 But I'm talking about just HDFS. Is it possible to test it separately?
 with source code available at:
 http://github.com/apache/hadoop-hdfs

 I compiled it, and now i want to test it. (aside from hadoop)


 On Thu, Jun 10, 2010 at 9:37 PM, Jitendra Nath Pandey
  wrote:
> This link should help.
>    http://wiki.apache.org/hadoop/QuickStart
>
>
> On 6/10/10 12:20 PM, "Alberich de megres"  wrote:
>
> Hello!
>
> I'm new to HDFS; I just downloaded the source code and compiled it.
>
> Now I want to execute it on 2 machines, but I don't know how to start the
> servers.
>
> Is there any web page or doc, or can someone shed light on how to start?
>
> Thanks!!
> Alberich
>
>


>>>
>>>
>>
>