The best way to write a UDF is to write a few test cases around it with some
expected datasets, so you catch errors during development itself rather than
in a Hive session.
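As a rough sketch of that approach: the join-and-lookup logic of a UDF like the one below can be exercised in plain Java, without a Hive session. `evaluateLogic` here is a hypothetical stand-in for the UDF's `evaluate` (the Hive `Text` wrapper is omitted), and the guarded parse is one assumption about how bad input could be handled.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class UDFLogicTest {

    // Hypothetical stand-in for MyUDF.evaluate, minus the Hive Text type.
    static String evaluateLogic(Map<Long, Set<Long>> mapping, String input) {
        if (input == null) {
            return "";
        }
        Long id;
        try {
            // Long.valueOf throws NumberFormatException on non-numeric input,
            // so guard it instead of letting it kill the task.
            id = Long.valueOf(input);
        } catch (NumberFormatException e) {
            return "";
        }
        Set<Long> ids = mapping.get(id);
        if (ids == null) {
            return "";
        }
        // Join the ids with commas (same effect as StringUtils.join).
        StringBuilder sb = new StringBuilder();
        for (Long v : ids) {
            if (sb.length() > 0) {
                sb.append(',');
            }
            sb.append(v);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<Long, Set<Long>> mapping = new HashMap<>();
        mapping.put(1L, new LinkedHashSet<>(Arrays.asList(10L, 20L)));

        // Expected datasets: known id, unknown id, null, and bad input.
        assert evaluateLogic(mapping, "1").equals("10,20");
        assert evaluateLogic(mapping, "2").equals("");
        assert evaluateLogic(mapping, null).equals("");
        assert evaluateLogic(mapping, "abc").equals("");
    }
}
```

Running checks like these on the bare logic surfaces null handling and parse failures before the code ever reaches a MapReduce task.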


On Tue, Jun 26, 2012 at 8:06 PM, Jan Dolinár <dolik....@gmail.com> wrote:

> Hi,
>
> Check the hadoop logs of the failed task. My best guess is that there is
> an uncaught exception thrown somewhere in your code. The logs will tell
> where and what caused the problem.
>
> Best regards,
> Jan
>
>
> On Tue, Jun 26, 2012 at 4:20 PM, Yue Guan <pipeha...@gmail.com> wrote:
>
>> Hi, hive users
>>
>> I have the following udf:
>>
>> package com.name.hadoop.hive.udf;
>>
>> import java.util.Map;
>> import java.util.Set;
>>
>> import org.apache.commons.lang.StringUtils;
>>
>> import org.apache.hadoop.hive.ql.exec.UDF;
>> import org.apache.hadoop.io.Text;
>>
>>
>> public class MyUDF extends UDF {
>>
>>    private Map<Long, Set<Long>> aMapping;
>>    private final Text result = new Text();
>>
>>    public MyUDF() throws Exception {
>>        aMapping = someModuleSamePackage.getMapping();
>>    }
>>
>>    public Text evaluate(final Text o) throws Exception {
>>        result.clear();
>>
>>        if (o != null) {
>>            Long id = new Long(o.toString());
>>            Set<Long> ids = aMapping.get(id);
>>            if (ids != null) {
>>                    String resultString = StringUtils.join(ids, ",");
>>                    result.set(resultString);
>>            }
>>        }
>>
>>        return result;
>>    }
>> }
>>
>> However, I always get FAILED: Execution Error, return code 2 from
>> org.apache.hadoop.hive.ql.exec.MapRedTask. Can anyone suggest a way to
>> work around this? Thank you in advance.
>>
>
>


-- 
Nitin Pawar
