My goal is to analyze the response time of MapReduce depending on the size of the input files. I need to change the number of map and/or reduce tasks and recover the execution time. As it turns out, nothing works locally on my PC: neither the command hadoop job -status job_local_0001 (which returns "no job found") nor localhost:50030. I would be very grateful if you could help me better understand these problems.
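[A note on the two symptoms above: with mapred.job.tracker set to "local", the job runs inside Hadoop's LocalJobRunner in a single JVM. No JobTracker process is started, so the web UI on port 50030 has nothing to serve, and hadoop job -status cannot see job_local_0001 because the job was never submitted to a tracker. In that mode, the simplest way to recover the execution time is to time the blocking JobClient.runJob() call in the driver itself. Below is a minimal sketch against the classic org.apache.hadoop.mapred API; the class name TimedDriver is made up for illustration, and the configuration details are placeholders to be filled in from the real WordCount driver.

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class TimedDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(TimedDriver.class);
            conf.setJobName("wordcount");
            // ... set mapper, reducer, and input/output paths as in the original driver ...

            long start = System.currentTimeMillis();
            JobClient.runJob(conf);  // blocks until the job completes
            long elapsedMs = System.currentTimeMillis() - start;

            System.out.println("Execution time: " + elapsedMs + " ms");
        }
    }

Alternatively, launching the job from the shell with time hadoop jar wordcount.jar ... reports the wall-clock time without touching the code at all.]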
2012/12/12 Mohammad Tariq <donta...@gmail.com>

> Are you working locally? What exactly is the issue?
>
> Regards,
> Mohammad Tariq
>
>
> On Wed, Dec 12, 2012 at 6:00 PM, imen Megdiche <imen.megdi...@gmail.com> wrote:
>
>> no
>>
>>
>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>
>>> Any luck with "localhost:50030"?
>>>
>>> Regards,
>>> Mohammad Tariq
>>>
>>>
>>> On Wed, Dec 12, 2012 at 5:53 PM, imen Megdiche <imen.megdi...@gmail.com> wrote:
>>>
>>>> I run the job through the command line.
>>>>
>>>>
>>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>>
>>>>> You have to replace "JobTrackerHost" in "JobTrackerHost:50030" with
>>>>> the actual name of the machine where the JobTracker is running. For
>>>>> example, if you are working on a local cluster, you have to use
>>>>> "localhost:50030".
>>>>>
>>>>> Are you running your job through the command line or some IDE?
>>>>>
>>>>> Regards,
>>>>> Mohammad Tariq
>>>>>
>>>>>
>>>>> On Wed, Dec 12, 2012 at 5:42 PM, imen Megdiche <imen.megdi...@gmail.com> wrote:
>>>>>
>>>>>> Excuse me, the data size is 98 MB.
>>>>>>
>>>>>>
>>>>>> 2012/12/12 imen Megdiche <imen.megdi...@gmail.com>
>>>>>>
>>>>>>> The size of the data is 49 MB and the number of maps is 4.
>>>>>>> The web UI at JobTrackerHost:50030 does not work. What should I do
>>>>>>> to make it appear? I work on Ubuntu.
>>>>>>>
>>>>>>>
>>>>>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>>>>>
>>>>>>>> Hi Imen,
>>>>>>>>
>>>>>>>> You can visit the MR web UI at "JobTrackerHost:50030" and see all
>>>>>>>> the useful information like the no. of mappers, the no. of
>>>>>>>> reducers, the time taken for the execution, etc.
>>>>>>>>
>>>>>>>> One quick question for you: what is the size of your data, and
>>>>>>>> what is the no. of maps which you are getting right now?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Mohammad Tariq
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Dec 12, 2012 at 5:11 PM, imen Megdiche <imen.megdi...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Thank you Mohammad, but the number of map tasks is still the
>>>>>>>>> same in the execution. Do you know how to capture the time spent
>>>>>>>>> on the execution?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2012/12/12 Mohammad Tariq <donta...@gmail.com>
>>>>>>>>>
>>>>>>>>>> Hi Imen,
>>>>>>>>>>
>>>>>>>>>> You can add the "mapred.map.tasks" property in your
>>>>>>>>>> mapred-site.xml file.
>>>>>>>>>>
>>>>>>>>>> But it is just a hint for the InputFormat. The no. of maps is
>>>>>>>>>> actually determined by the no. of InputSplits created by the
>>>>>>>>>> InputFormat.
>>>>>>>>>>
>>>>>>>>>> HTH
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Mohammad Tariq
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Dec 12, 2012 at 4:11 PM, imen Megdiche <imen.megdi...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> I try to force the number of maps for the MapReduce job with
>>>>>>>>>>> this code:
>>>>>>>>>>>
>>>>>>>>>>> public static void main(String[] args) throws Exception {
>>>>>>>>>>>
>>>>>>>>>>>     JobConf conf = new JobConf(WordCount.class);
>>>>>>>>>>>     conf.set("mapred.job.tracker", "local");
>>>>>>>>>>>     conf.set("fs.default.name", "local");
>>>>>>>>>>>     conf.setJobName("wordcount");
>>>>>>>>>>>
>>>>>>>>>>>     conf.setOutputKeyClass(Text.class);
>>>>>>>>>>>     conf.setOutputValueClass(IntWritable.class);
>>>>>>>>>>>
>>>>>>>>>>>     conf.setNumMapTasks(6);
>>>>>>>>>>>     conf.setMapperClass(Map.class);
>>>>>>>>>>>     conf.setCombinerClass(Reduce.class);
>>>>>>>>>>>     conf.setReducerClass(Reduce.class);
>>>>>>>>>>>     ...
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> But it doesn't work.
>>>>>>>>>>> What can I do to modify the number of map and reduce tasks?
>>>>>>>>>>>
>>>>>>>>>>> Thank you
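[To make the thread's conclusion concrete: with the old-API FileInputFormat, setNumMapTasks() is only a goal that the split computation tries to honour (the split size is derived from the total input size, the block size, and mapred.min.split.size), whereas setNumReduceTasks() sets the reduce count exactly. The sketch below is a hedged variant of the driver from the original post showing both; Map and Reduce are the mapper/reducer classes from that post, the input/output paths are illustrative, and the value 6 is arbitrary.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCount {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            conf.setMapperClass(Map.class);      // mapper/reducer from the original post
            conf.setCombinerClass(Reduce.class);
            conf.setReducerClass(Reduce.class);

            // A hint only: FileInputFormat aims for splits of roughly
            // totalSize / 6, capped at the block size and floored at
            // mapred.min.split.size.
            conf.setNumMapTasks(6);
            conf.set("mapred.min.split.size", "1");

            // Exact: the framework runs exactly 6 reduce tasks.
            conf.setNumReduceTasks(6);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
        }
    }

With the 98 MB input mentioned above and a goal of 6 maps, each split comes out at roughly 16 MB, so the map count should follow the hint as long as the input is splittable; a gzip-compressed file, for instance, always yields a single map per file regardless of these settings.]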