... you the result.
>
> Regards,
> Mohammad Tariq
>
>
>
> On Thu, Dec 13, 2012 at 7:51 PM, imen Megdiche wrote:
>
>> I don't understand what you mean by "Same holds good for Hive or Pig".
>> Do you mean I should rather compare data warehouses with Hive or Pig?
I don't understand what you mean by "Same holds good for Hive or Pig".
Do you mean I should rather compare data warehouses with Hive or Pig?
Great, you are helping me so much, Mohammad.
2012/12/13 Mohammad Tariq
> If you are going to do some OLTP kinda thing, I would not suggest Hadoop.
> Same holds good for Hive or Pig.
>> ... the jobqueue/cluster capacity, CPU time will increase
>> On Dec 13, 2012 4:02 PM, "imen Megdiche" wrote:
>>
>>> Hello,
>>>
>>> I am trying to increase the number of map and reduce tasks for a job and
>>> even for the same data size, I
> On Wed, Dec 12, 2012 at 7:45 PM, imen Megdiche wrote:
>
>> Could you please comment on the configuration of Hadoop on a cluster?
>>
>> Thanks
>>
>>
>> 2012/12/12 Mohammad Tariq
>>
>>> You are always welcome. If you still need any help, you can
> ... along with a few small (but necessary) explanations.
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed, Dec 12, 2012 at 7:31 PM, imen Megdiche wrote:
>
>> Thank you very much, you're awesome.
>>
>> Fixed
>>
>>
>> 2012/12/12 Mohammad Tariq
>
Thank you very much, you're awesome.
Fixed
2012/12/12 Mohammad Tariq
> Uncomment the property in core-site.xml. That is a must. After doing this
> you have to restart the daemons.
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed, Dec 12, 2012 at 7:08 PM, imen Megdiche wrote:
For mapred-site.xml:
mapred.map.tasks = 6
For core-site.xml: ...
For hdfs-site.xml: nothing
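For reference, a property like mapred.map.tasks goes into mapred-site.xml as a `<property>` block. This is a minimal sketch of what such a file presumably looks like with the value 6 mentioned above (not the poster's actual file):

```xml
<?xml version="1.0"?>
<!-- mapred-site.xml: sketch of the setting described above.
     Note: mapred.map.tasks is only a hint to the InputFormat,
     as explained later in this thread. -->
<configuration>
  <property>
    <name>mapred.map.tasks</name>
    <value>6</value>
  </property>
</configuration>
```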
2012/12/12 Mohammad Tariq
> Can I have a look at your config files?
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed, Dec 12, 2012 at 6:31 PM, imen Megdiche wrote:
> ... each daemon.
>
> The correct command to check the status of a job from command line is :
> hadoop job -status jobID.
> (Mind the 'space' after job and remove 'command' from the statement)
>
> HTH
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed
... returns "no job found",
and localhost:50030 does not work either.
I would be very grateful if you could help me better understand these problems.
2012/12/12 Mohammad Tariq
> Are you working locally?What exactly is the issue?
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed, Dec 12, 2012 at 6:00 PM, imen
no
2012/12/12 Mohammad Tariq
> Any luck with "localhost:50030"??
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed, Dec 12, 2012 at 5:53 PM, imen Megdiche wrote:
>
>> I run the job through the command line
>>
>>
>> 2012/12/12 M
> ... use "localhost:50030".
>
> Are you running your job through the command line or some IDE?
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed, Dec 12, 2012 at 5:42 PM, imen Megdiche wrote:
>
>> Excuse me, the data size is 98 MB.
>>
>>
Excuse me, the data size is 98 MB.
2012/12/12 imen Megdiche
> The data size is 49 MB and the number of maps is 4.
> The web UI JobTrackerHost:50030 does not work. What should I do to make
> it appear? I work on Ubuntu.
>
>
> 2012/12/12 Mohammad Tariq
>
>> Hi Imen,
>>
>>
> ... information like the no. of mappers, no. of reducers, time taken for the
> execution, etc.
>
> One quick question for you, what is the size of your data and what is the
> no of maps which you are getting right now?
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed, Dec 12, 2012 a
> ... it is just a hint for the InputFormat. The no. of maps is actually
> determined by the no. of InputSplits created by the InputFormat.
>
> HTH
>
> Regards,
> Mohammad Tariq
>
>
>
> On Wed, Dec 12, 2012 at 4:11 PM, imen Megdiche wrote:
>
>> Hi,
>>
>
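As a rough illustration of the InputSplit point above: with FileInputFormat, the number of maps is approximately the file size divided by the split size, which by default is the HDFS block size (64 MB in Hadoop 1.x). A minimal sketch, assuming the 98 MB file mentioned in this thread and the default block size (check dfs.block.size on your own cluster):

```shell
# Rough estimate of the number of map tasks FileInputFormat will create:
# ceil(file_size / split_size). Sizes in MB; 64 MB is assumed here as the
# Hadoop 1.x default block size.
FILE_MB=98
SPLIT_MB=64
SPLITS=$(( (FILE_MB + SPLIT_MB - 1) / SPLIT_MB ))
echo "$SPLITS"   # 2
```

This matches the behavior discussed above: setting mapred.map.tasks higher than the split count does not increase the number of maps.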
> ... use the 'hadoop' command instead:
> $HADOOP_HOME/bin/hadoop job -status job_xxx
>
>
>
>
> --
> Best Regards,
> longmans
>
> At 2012-12-12 17:56:45,"imen Megdiche" wrote:
>
> I think that my job id is in this line :
>
> 12/12/12 10:43:00 I
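The job ID can indeed be pulled out of such an INFO line. A small sketch; the log line and job ID below are fabricated examples in the Hadoop 1.x format, not values from this thread:

```shell
# Extract a Hadoop 1.x job ID (job_<timestamp>_<seq>) from a JobClient
# console line. The line below is a made-up example.
LOG_LINE='12/12/12 10:43:00 INFO mapred.JobClient: Running job: job_201212121040_0001'
JOB_ID=$(echo "$LOG_LINE" | grep -o 'job_[0-9]*_[0-9]*')
echo "$JOB_ID"   # job_201212121040_0001
# Then, on the cluster:
#   $HADOOP_HOME/bin/hadoop job -status "$JOB_ID"
```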
> ... your jobid and use this command:
> $HADOOP_HOME/bin/hadoop job -status job_xxx
>
>
>
>
> --
> Best Regards,
> longmans
>
> At 2012-12-12 17:23:39,"imen Megdiche" wrote:
>
> Hi,
>
> I want to know from the output of the execution of the example of
> ma
I just configured hive.metastore.warehouse.dir with the new path and it works.
Fixed.
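For readers hitting the same error: hive.metastore.warehouse.dir is set in hive-site.xml. A minimal sketch of the fix described above; the path shown is a hypothetical example, not the poster's actual directory:

```xml
<?xml version="1.0"?>
<!-- hive-site.xml: point the metastore warehouse at an existing,
     writable directory. /user/hive/warehouse is the usual default;
     the value below is a hypothetical replacement. -->
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/home/hiveuser/warehouse</value>
  </property>
</configuration>
```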
2012/11/28 Nitin Pawar
> Can you try providing a LOCATION 'path_to_file' and creating the table again?
>
>
> On Wed, Nov 28, 2012 at 2:09 PM, imen Megdiche wrote:
>
>> Hel
Hello,
Is it possible to write the map and merge outputs to external files in order to
see them? Otherwise, how can I see the intermediate results?
Thank you
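This question got no reply in the thread, but for reference: in Hadoop 1.x the intermediate map outputs live under mapred.local.dir and are deleted when the job completes. A sketch, assuming Hadoop 1.x, using the keep.task.files.pattern property (set via JobConf.setKeepTaskFilesPattern) to preserve them for inspection:

```xml
<!-- mapred-site.xml (or per-job configuration): keep intermediate task
     files under mapred.local.dir so they can be inspected after the run.
     The value is a regex matched against task attempt IDs; ".*" keeps
     everything (use with care, it consumes disk space). -->
<property>
  <name>keep.task.files.pattern</name>
  <value>.*</value>
</property>
```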
Hello,
I got this error when trying to create a test table with Hive:
FAILED: Error in metadata: MetaException (message: Got exception:
java.io.FileNotFoundException: File file:/user/hive/warehouse/test does
not exist.)
I changed the default warehouse directory, hive.metastore.warehouse.dir ...
Hello,
I want to understand the principle of HiveQL queries, i.e. how Hive translates
these queries into MapReduce jobs. Is there any piece of source code that can
explain that?
Thank you very much for your responses
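No direct answer appears in the thread, but Hive itself can show the MapReduce plan it compiles a query into, via the EXPLAIN statement. An illustrative example; "test" is a hypothetical table name:

```sql
-- Prints the stage plan (map/reduce stages and operators) that Hive
-- generates for this query; EXPLAIN EXTENDED adds more detail.
EXPLAIN SELECT COUNT(*) FROM test;
```

The query compiler itself lives in the ql module of the Hive source tree (package org.apache.hadoop.hive.ql).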
> ... via HDP. You do not need
> Cygwin. You can download this from Microsoft:
> http://hortonworks.com/partners/microsoft/
>
> We're working to get this open source code back into mainline Apache now.
>
>
> On Tue, Nov 13, 2012 at 5:57 AM, imen Megdiche wrote:
>
>> He
Hello,
I cannot find a solution to run Hive under Cygwin.
Although Hadoop works very well, the hive command seems to hang indefinitely.
Thank you in advance for your answers