As for the number 22: I guess your table has multiple files, probably 2.
Hive divides the desired number of map tasks evenly among the files of the
table, and the number of map tasks for a file may be rounded up because the
file size can't be divided exactly by the goal split size.
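The rounding effect described above can be sketched as follows. This is a simplified, hypothetical model of the per-file split computation (not Hive's exact HiveInputFormat logic), just to show how asking for 20 maps over 2 files can yield 22:

```python
import math

def estimated_splits(file_sizes, desired_maps):
    # Simplified model: the framework aims for a goal split size of
    # total bytes / desired maps, then each file needs
    # ceil(file_size / goal) splits. The per-file ceil is what can push
    # the total above the requested number of map tasks.
    total = sum(file_sizes)
    goal = total // desired_maps  # target bytes per split
    return sum(math.ceil(size / goal) for size in file_sizes)

# Two files whose sizes don't divide evenly by the goal size:
print(estimated_splits([601, 605], 20))  # -> 22, not 20
# Sizes that divide exactly give the requested count:
print(estimated_splits([600, 600], 20))  # -> 20
```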
-Original Message-
Hi Rui,
I combined your suggestion with the answer on
SO (http://stackoverflow.com/questions/20816726/fail-to-increase-hive-mapper-tasks),
and it works:
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
set mapred.map.tasks = 20;
select count(*) from dw_stage.st_dw_marketing_to
Hi, you can try set mapred.map.tasks = 19.
It seems that Hive is using the old Hadoop MapReduce API, so
mapred.max.split.size won't work.
-Original Message-
From: Ji Zhang [mailto:zhangj...@gmail.com]
Sent: Thursday, January 02, 2014 3:56 PM
To: user@hive.apache.org
Subject: Fail to I
Guys,
I am using Storm to read a data stream from our socket server, entry by
entry, and then write the entries to files: one entry per file. At some point, I
need to import the data into my Hive table. There are several approaches I
could think of:
1. directly write to the Hive HDFS file whenever I get the ent
Hello Hive Champs,
I have a case statement where I need to check the date passed through a
parameter:
if the date is the 1st of the month, keep it as it is;
else
set the parameter date to the 1st of the month.
Later operations are then performed on that date in Hive queries.
I have
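The date rule described above (keep the 1st of the month, otherwise snap back to the 1st) can be sketched like this. The function name is hypothetical; this is the logic the CASE statement would implement, not the poster's actual query:

```python
from datetime import date

def normalize_to_month_start(d: date) -> date:
    # If the date is already the 1st of the month, keep it;
    # otherwise reset it to the 1st of the same month.
    return d if d.day == 1 else d.replace(day=1)

print(normalize_to_month_start(date(2014, 1, 15)))  # -> 2014-01-01
print(normalize_to_month_start(date(2014, 2, 1)))   # -> 2014-02-01
```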