Huang, the number of records is huge, and we do not know your table
definition or your cluster capacity.

There are multiple reasons a query can be slow.
Can you share details on the following?
1) What is your table definition?
2) What is the cluster capacity?
3) When you launched the query, did the cluster have enough capacity to
start your job?
4) How many mappers were launched, and how many are executing in
parallel?

Others may have further questions as well.
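One thing worth checking alongside the questions above: the query filters on `edate` but still scans the entire table. A common remedy (a sketch only, assuming you can reload the data and that `edate` maps cleanly to a calendar day; the `dt` column and the date value below are hypothetical) is to partition the table by date so Hive prunes partitions instead of scanning all 120M+ rows:

```sql
-- Hypothetical partitioned layout; column names follow the original query.
CREATE TABLE hb_cookie_history_part (
  cookie STRING, url STRING, ip STRING, source STRING,
  vsid STRING, token STRING, residence STRING, edate STRING
)
PARTITIONED BY (dt STRING);

-- A predicate on the partition column lets Hive skip whole partitions,
-- so only the relevant day's files are read:
SELECT cookie, url, ip, source, vsid, token, residence, edate
FROM hb_cookie_history_part
WHERE dt = '2013-06-16'
  AND edate >= '1371398400500' AND edate <= '1371400200500';
```

With pruning in place, the number of mappers launched is driven by the size of the matching partitions rather than the full table.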


On Fri, Jul 19, 2013 at 7:38 AM, ch huang <justlo...@gmail.com> wrote:

> the table has more than 120,000,000 records
>
>
> On Fri, Jul 19, 2013 at 9:34 AM, Stephen Boesch <java...@gmail.com> wrote:
>
>> one mapper.  how big is the table?
>>
>>
>> 2013/7/18 ch huang <justlo...@gmail.com>
>>
>>> I have waited a long time with no result; why is Hive so slow?
>>>
>>> hive> select cookie,url,ip,source,vsid,token,residence,edate from
>>> hb_cookie_history where edate>='1371398400500' and edate<='1371400200500';
>>> Total MapReduce jobs = 1
>>> Launching Job 1 out of 1
>>> Number of reduce tasks is set to 0 since there's no reduce operator
>>> Starting Job = job_1374138311742_0007, Tracking URL =
>>> http://CH22:8088/proxy/application_1374138311742_0007/
>>> Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill
>>> job_1374138311742_0007
>>>
>>
>>
>


-- 
Nitin Pawar
