I wonder how you are able to run the job without a JT (JobTracker). You must
have this in your mapred-site.xml file:
        <property>
          <name>mapred.job.tracker</name>
          <value>localhost:9001</value>
        </property>

Also add "hadoop.tmp.dir" in core-site.xml, and "dfs.name.dir" &
"dfs.data.dir" in hdfs-site.xml.
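For instance, something along these lines (the paths below are just examples;
point them at directories that exist on your machine and are writable by the
user running Hadoop):

```xml
<!-- core-site.xml : base directory for Hadoop's temporary files (example path) -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<!-- hdfs-site.xml : where NameNode metadata and DataNode blocks live (example paths) -->
<property>
  <name>dfs.name.dir</name>
  <value>/app/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/app/hadoop/dfs/data</value>
</property>
```

Remember to format the NameNode (hadoop namenode -format) after pointing
dfs.name.dir at a new directory, otherwise the NameNode won't start.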

Regards,
    Mohammad Tariq



On Wed, Dec 12, 2012 at 6:46 PM, imen Megdiche <imen.megdi...@gmail.com> wrote:

> For mapred-site.xml :
>
> <configuration>
>
> <property>
> <name>mapred.map.tasks</name>
> <value>6</value>
> </property>
>
> </configuration>
>
> for core-site.xml :
> <configuration>
>
> <!-- <property>
> <name>fs.default.name</name>
> <value>hdfs://localhost:9100</value>
> </property> -->
>
> </configuration>
>
>  on hdfs-site.xml  nothing
>
>
>
>
>
> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>
>> Can I have a look at your config files?
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>>
>> On Wed, Dec 12, 2012 at 6:31 PM, imen Megdiche
>> <imen.megdi...@gmail.com> wrote:
>>
>>> I run start-all.sh and all the daemons start without problems. But the
>>> log of the TaskTracker looks like this:
>>>
>>>
>>> 2012-12-12 13:53:45,495 INFO org.apache.hadoop.mapred.TaskTracker:
>>> STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting TaskTracker
>>> STARTUP_MSG:   host = megdiche-OptiPlex-GX280/127.0.1.1
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 1.0.4
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>> ************************************************************/
>>> 2012-12-12 13:53:47,009 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>> hadoop-metrics2.properties
>>> 2012-12-12 13:53:47,331 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>> MetricsSystem,sub=Stats registered.
>>> 2012-12-12 13:53:47,336 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>> period at 10 second(s).
>>> 2012-12-12 13:53:47,336 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics
>>> system started
>>> 2012-12-12 13:53:48,165 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>> registered.
>>> 2012-12-12 13:53:48,192 WARN
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
>>> exists!
>>> 2012-12-12 13:53:48,513 ERROR org.apache.hadoop.mapred.TaskTracker: Can
>>> not start task tracker because java.lang.IllegalArgumentException: Does not
>>> contain a valid host:port authority: local
>>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
>>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:128)
>>>     at
>>> org.apache.hadoop.mapred.JobTracker.getAddress(JobTracker.java:2560)
>>>     at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1426)
>>>     at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3742)
>>>
>>> 2012-12-12 13:53:48,519 INFO org.apache.hadoop.mapred.TaskTracker:
>>> SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down TaskTracker at megdiche-OptiPlex-GX280/
>>> 127.0.1.1
>>> ************************************************************/
>>>
>>>
>>>
>>>
>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>
>>>> I would check if all the daemons are running properly or not, before
>>>> anything else. If some problem is found, next place to track is the log of
>>>> each daemon.
>>>>
>>>> The correct command to check the status of a job from the command line
>>>> is: hadoop job -status jobID
>>>> (Mind the space after "job", and drop the word "command" from the statement.)
>>>>
>>>> HTH
>>>>
>>>> Regards,
>>>>     Mohammad Tariq
>>>>
>>>>
>>>>
>>>> On Wed, Dec 12, 2012 at 6:14 PM, imen Megdiche <imen.megdi...@gmail.com
>>>> > wrote:
>>>>
>>>>> My goal is to analyze the response time of MapReduce depending on the
>>>>> size of the input files. I need to change the number of map and/or
>>>>> reduce tasks and record the execution time. So it turns out that
>>>>> nothing works locally on my PC:
>>>>> neither "hadoop job -status job_local_0001" (which returns "no job
>>>>> found")
>>>>> nor localhost:50030.
>>>>> I will be very grateful if you can help me better understand this
>>>>> problem.
>>>>>
>>>>>
>>>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>>>
>>>>>> Are you working locally? What exactly is the issue?
>>>>>>
>>>>>> Regards,
>>>>>>     Mohammad Tariq
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Dec 12, 2012 at 6:00 PM, imen Megdiche <
>>>>>> imen.megdi...@gmail.com> wrote:
>>>>>>
>>>>>>> no
>>>>>>>
>>>>>>>
>>>>>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>>>>>
>>>>>>>> Any luck with "localhost:50030"??
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>     Mohammad Tariq
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Dec 12, 2012 at 5:53 PM, imen Megdiche <
>>>>>>>> imen.megdi...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> i run the job through the command line
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>>>>>>>
>>>>>>>>>> You have to replace "JobTrackerHost" in "JobTrackerHost:50030"
>>>>>>>>>> with the actual name of the machine where JobTracker is running.
>>>>>>>>>> For example, If you are working on a local cluster, you have to use
>>>>>>>>>> "localhost:50030".
>>>>>>>>>>
>>>>>>>>>> Are you running your job through the command line or some IDE?
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>>     Mohammad Tariq
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Dec 12, 2012 at 5:42 PM, imen Megdiche <
>>>>>>>>>> imen.megdi...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Excuse me, the data size is 98 MB.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> 2012/12/12 imen Megdiche <imen.megdi...@gmail.com>
>>>>>>>>>>>
>>>>>>>>>>>> The size of the data is 49 MB and the number of maps is 4.
>>>>>>>>>>>> The web UI JobTrackerHost:50030 does not work. What should I do
>>>>>>>>>>>> to make it appear? I work on Ubuntu.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Imen,
>>>>>>>>>>>>>
>>>>>>>>>>>>>      You can visit the MR web UI at "JobTrackerHost:50030" and
>>>>>>>>>>>>> see all the useful information like the number of mappers, the
>>>>>>>>>>>>> number of reducers, the time taken for the execution, etc.
>>>>>>>>>>>>>
>>>>>>>>>>>>> One quick question for you, what is the size of your data and
>>>>>>>>>>>>> what is the no of maps which you are getting right now?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>     Mohammad Tariq
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Dec 12, 2012 at 5:11 PM, imen Megdiche <
>>>>>>>>>>>>> imen.megdi...@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thank you Mohammad, but the number of map tasks is still the
>>>>>>>>>>>>>> same during execution. Do you know how to capture the time
>>>>>>>>>>>>>> spent on execution?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi Imen,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>     You can add the "mapred.map.tasks" property in your
>>>>>>>>>>>>>>> mapred-site.xml file.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But it is just a hint for the InputFormat. The number of maps
>>>>>>>>>>>>>>> is actually determined by the number of InputSplits created
>>>>>>>>>>>>>>> by the InputFormat.
>>>>>>>>>>>>>>>
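>>>>>>>>>>>>>>> As a rough sketch of that relationship (this mirrors the old
>>>>>>>>>>>>>>> FileInputFormat split logic in simplified form; the 98 MB
>>>>>>>>>>>>>>> input and the 64 MB default block size are this thread's
>>>>>>>>>>>>>>> numbers, not something verified against your cluster):

```java
// Simplified sketch (assumption: one plain, splittable input file, no
// compression) of how classic FileInputFormat turns the mapred.map.tasks
// hint into an actual split count.
public class SplitMath {
    // splitSize = max(minSize, min(goalSize, blockSize))
    static long splitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    // Count splits the way FileInputFormat does: keep cutting full splits
    // while more than 1.1 split-sizes remain, then emit one final split.
    static int countSplits(long fileSize, int requestedMaps, long blockSize) {
        long goal = fileSize / Math.max(1, requestedMaps); // goal size per map
        long size = splitSize(goal, 1, blockSize);
        int splits = 0;
        long remaining = fileSize;
        while (((double) remaining) / size > 1.1) {
            splits++;
            remaining -= size;
        }
        if (remaining > 0) splits++;
        return splits;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // 98 MB input, 64 MB block size, mapred.map.tasks = 6
        System.out.println(countSplits(98 * mb, 6, 64 * mb)); // prints 6
        // With mapred.map.tasks = 1 the block size caps the split size,
        // so a 98 MB file still gets 2 maps.
        System.out.println(countSplits(98 * mb, 1, 64 * mb)); // prints 2
    }
}
```

>>>>>>>>>>>>>>> So raising the hint can add maps (smaller splits), but you
>>>>>>>>>>>>>>> can never get fewer maps than the number of blocks.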
>>>>>>>>>>>>>>> HTH
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>     Mohammad Tariq
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Dec 12, 2012 at 4:11 PM, imen Megdiche <
>>>>>>>>>>>>>>> imen.megdi...@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I am trying to force the number of maps for the MapReduce
>>>>>>>>>>>>>>>> job with this code:
>>>>>>>>>>>>>>>>   public static void main(String[] args) throws Exception {
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>       JobConf conf = new JobConf(WordCount.class);
>>>>>>>>>>>>>>>>       conf.set("mapred.job.tracker", "local");
>>>>>>>>>>>>>>>>       conf.set("fs.default.name", "local");
>>>>>>>>>>>>>>>>       conf.setJobName("wordcount");
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>       conf.setOutputKeyClass(Text.class);
>>>>>>>>>>>>>>>>       conf.setOutputValueClass(IntWritable.class);
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>       conf.setNumMapTasks(6);
>>>>>>>>>>>>>>>>       conf.setMapperClass(Map.class);
>>>>>>>>>>>>>>>>       conf.setCombinerClass(Reduce.class);
>>>>>>>>>>>>>>>>       conf.setReducerClass(Reduce.class);
>>>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>>>>   }
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But it doesn't work.
>>>>>>>>>>>>>>>> What can I do to modify the number of map and reduce tasks?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thank you
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
