OK, but if I were looking for a solution for warehousing big data, Hive would
actually be the better fit. I know that Facebook uses Hive.
2012/12/13 Mohammad Tariq
I said that because, under the hood, each query (Hive or Pig) gets converted
into a MapReduce job first, which then gives you the result.
Regards,
Mohammad Tariq
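
A quick way to see that compilation for yourself is Hive's EXPLAIN statement,
whose output lists the MapReduce stages a query is planned as. Below is a
minimal Java sketch using the Hive JDBC driver; the HiveServer2 URL and the
logs table are illustrative assumptions, not something from this thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveExplain {
    public static void main(String[] args) throws Exception {
        // Assumes a HiveServer2 instance on localhost:10000 (illustrative).
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = con.createStatement();
             // EXPLAIN returns the query plan; with Hive running on
             // MapReduce you will see one or more "Map Reduce" stages.
             ResultSet rs = stmt.executeQuery(
                 "EXPLAIN SELECT page, COUNT(*) FROM logs GROUP BY page")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}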
On Thu, Dec 13, 2012 at 7:51 PM, imen Megdiche wrote:
I don't understand what you mean by "Same holds good for Hive or Pig". Do you
mean I should rather compare data warehouses with Hive or Pig?
Great, you have helped me so much, Mohammad.
2012/12/13 Mohammad Tariq
> If you are going to do some OLTP kinda thing, I would not suggest Hadoop.
> Same holds good for Hive or Pig.
You are welcome.
First things first: we can never compare Hadoop with traditional warehouse
systems or DBMSs. They are meant for different purposes.
One small example: if you have 1 GB of data, there is nothing that could match
an RDBMS. You'll get the results instantly, as you have specif
Thank you for your explanations. I work in pseudo-distributed mode, not on a
cluster. Do your recommendations also apply in this mode, and what can I do to
make the execution time increase as a function of the number of map/reduce
tasks, if that is possible?
I don't understand in general ho
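
For the timing question, a minimal sketch of the experiment: run the same job
several times, varying only the reducer count, and record the wall-clock time
of each run. This works the same way in pseudo-distributed mode (all tasks
just run on one machine). The input/output paths, the reducer counts, and the
identity map/reduce setup are assumptions for illustration, and the sketch
uses the newer (2.x) MapReduce API.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TimingSweep {
    public static void main(String[] args) throws Exception {
        for (int reducers : new int[] {1, 2, 4, 8}) {
            Configuration conf = new Configuration();
            Path out = new Path("/user/imen/out-" + reducers);
            FileSystem.get(conf).delete(out, true); // drop stale output
            Job job = Job.getInstance(conf, "sweep-" + reducers);
            job.setJarByClass(TimingSweep.class);
            job.setNumReduceTasks(reducers); // the knob being measured
            // Default (identity) mapper/reducer over TextInputFormat:
            // keys are byte offsets, values are the input lines.
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path("/user/imen/input"));
            FileOutputFormat.setOutputPath(job, out);
            long start = System.currentTimeMillis();
            job.waitForCompletion(false);
            System.out.println(reducers + " reducers: "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }
}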
Hello Imen,
If you have a huge number of tasks, then the overhead of managing map
and reduce task creation begins to dominate the total job execution time.
Also, more tasks means you need more free CPU slots. If the slots are not
free, then the data block of interest will be moved to some other node.
If the number of mappers or reducers your job launches is more than the
job queue/cluster capacity, CPU time will increase.
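
One more knob, since the reducer count is the only one you set directly: the
map count follows from the number of input splits, so you influence it by
capping the split size. A hedged sketch; the 32 MB figure and the paths are
arbitrary examples, and on the old 1.x releases the equivalent setting was
the mapred.max.split.size property.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SplitSizeDemo {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-size-demo");
        job.setJarByClass(SplitSizeDemo.class);
        // A 32 MB cap makes a 1 GB input yield ~32 map tasks instead of
        // one task per 64/128 MB HDFS block; each still needs a free slot,
        // so more tasks than slots just means more execution waves.
        FileInputFormat.setMaxInputSplitSize(job, 32L * 1024 * 1024);
        job.setNumReduceTasks(2);                  // reducers: set directly
        job.setOutputKeyClass(LongWritable.class); // identity map/reduce
        job.setOutputValueClass(Text.class);       // over text input
        FileInputFormat.addInputPath(job, new Path("/user/imen/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/imen/out-splits"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}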
On Dec 13, 2012 4:02 PM, "imen Megdiche" wrote:
> Hello,
>
> I am trying to increase the number of map and reduce tasks for a job and
> even for the same data size, I noticed th