Sorry, it was my fault.

Instead of starting my job as "bin/hadoop jar job.jar", I ran it as
"bin/hadoop -cp job.jar".

I thought it would be the same.
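For anyone hitting the same symptom, a minimal sketch of the two invocations from this thread (job.jar is a placeholder for your job archive; the exact behavior of the wrong form may vary by Hadoop version):

```shell
# Correct: the `jar` subcommand submits the archive through Hadoop's
# job runner, which reads the cluster configuration and hands the job
# to the JobTracker -- so tasks run in parallel and output goes to DFS.
bin/hadoop jar job.jar

# What was run instead: without the `jar` subcommand the job is never
# submitted to the JobTracker; the classes execute as a plain local
# program, so tasks run sequentially and output lands on the local
# filesystem -- exactly the symptoms described below.
# bin/hadoop -cp job.jar
```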

Thanks anyway

Vasyl

2009/6/2 Aaron Kimball <[email protected]>:
> Can you post the contents of your hadoop-site.xml file here?
> - Aaron
>
> On Sat, May 30, 2009 at 2:44 AM, Vasyl Keretsman <[email protected]> wrote:
>
>> Hi all,
>>
>> I am just getting started with hadoop 0.20 and trying to run a job in
>> pseudo-distributed mode.
>>
>> I configured hadoop according to the tutorial, but it seems it does
>> not work as expected.
>>
>> My map/reduce tasks are running sequentially, and the output is
>> stored on the local filesystem instead of in DFS.
>> Job tracker does not see the running job at all.
>> I have checked the logs but don't see any errors either. I have also
>> copied some files manually to DFS to make sure it works.
>>
>> The only difference between the tutorial and my configuration is that I
>> had to change the ports for the job tracker and namenode, as 9000 and
>> 9001 are already used by other apps on my workstation.
>>
>> Any hints?
>>
>> Thanks
>>
>> Regards,
>>
>> Vasyl
>>
>
