Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error:
java.lang.Double cannot be cast to
org.apache.hadoop.hive.serde2.io.DoubleWritable]

I am getting the error above while running a Hive UDF on Spark, but the UDF
works perfectly fine in Hive. This is the get method that does the conversion:


public Object get(Object name) {
    int pos = getPos((String) name);
    if (pos < 0) return null;

    Object obj = list.get(pos);
    if (obj == null) return null;

    // Resolve the Hive type name of this field from its ObjectInspector.
    String f = "string";
    ObjectInspector ins = ((StructField) colnames.get(pos)).getFieldObjectInspector();
    if (ins != null) f = ins.getTypeName();

    // Unwrap the Hadoop Writable according to the field's type name.
    switch (f) {
        case "double": return ((DoubleWritable) obj).get();
        case "bigint": return ((LongWritable) obj).get();
        case "string": return ((Text) obj).toString();
        default:       return obj;
    }
}
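
The stack trace suggests that on the Spark path the field value arrives as a
plain java.lang.Double rather than a DoubleWritable, so the unconditional
casts fail. A minimal defensive sketch (the helper name toJavaValue is mine,
not from the original code) that tolerates both representations:

import org.apache.hadoop.hive.serde2.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

// Sketch only: convert one field value whose Hive type name is `f`,
// accepting either the Writable wrapper (as Hive passes it) or the
// plain Java object (as Spark appears to, per the ClassCastException).
static Object toJavaValue(String f, Object obj) {
    if (obj == null) return null;
    switch (f) {
        case "double":
            return (obj instanceof DoubleWritable)
                    ? ((DoubleWritable) obj).get()  // Hive: Writable-wrapped
                    : obj;                          // Spark: already java.lang.Double
        case "bigint":
            return (obj instanceof LongWritable)
                    ? ((LongWritable) obj).get()
                    : obj;                          // already java.lang.Long
        case "string":
            return (obj instanceof Text) ? obj.toString() : obj;
        default:
            return obj;
    }
}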



On Tue, Jan 24, 2017 at 9:19 PM, Takeshi Yamamuro <linguin....@gmail.com>
wrote:

> Hi,
>
> Could you show us the whole code to reproduce that?
>
> // maropu
>
> On Wed, Jan 25, 2017 at 12:02 AM, Deepak Sharma <deepakmc...@gmail.com>
> wrote:
>
>> Can you try writing the UDF directly in Spark and registering it with Spark
>> SQL or the Hive context? Or do you want to reuse the existing Hive UDF jar
>> in Spark? For the latter, see the sketch below.
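>>
>> A sketch of registering an existing Hive UDF jar through HiveContext (the
>> jar path, class name, and function name here are placeholders):
>>
>> // Register a prebuilt Hive UDF jar with Spark's HiveContext (Java).
>> hiveContext.sql("ADD JAR /path/to/my-udf.jar");
>> hiveContext.sql("CREATE TEMPORARY FUNCTION my_func AS 'com.example.MyUDF'");
>> hiveContext.sql("SELECT my_func(col) FROM some_table");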
>>
>> Thanks
>> Deepak
>>
>> On Jan 24, 2017 5:29 PM, "Sirisha Cheruvu" <siri8...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> I am trying to keep the code below in a get method, calling that get
>>> method from another Hive UDF, and running the Hive UDF through
>>> HiveContext.sql().
>>>
>>>
>>> switch (f) {
>>>     case "double": return ((DoubleWritable) obj).get();
>>>     case "bigint": return ((LongWritable) obj).get();
>>>     case "string": return ((Text) obj).toString();
>>>     default:       return obj;
>>> }
>>>
>>> Surprisingly, only the LongWritable and Text conversions throw an error;
>>> the DoubleWritable one works. So I tried changing the code to:
>>>
>>> switch (f) {
>>>     case "double": return ((DoubleWritable) obj).get();
>>>     case "bigint": return ((DoubleWritable) obj).get();
>>>     case "string": return ((Text) obj).toString();
>>>     default:       return obj;
>>> }
>>>
>>> Still it throws an error, now saying java.lang.Long cannot be cast to
>>> org.apache.hadoop.hive.serde2.io.DoubleWritable.
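>>>
>>> Would it help to avoid casting by type name altogether and instead ask the
>>> ObjectInspector for the Java value? A rough sketch (assuming the field
>>> inspector matches how Spark actually hands over the value):
>>>
>>> import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
>>>
>>> // Let the inspector unwrap the value instead of casting by type name;
>>> // getPrimitiveJavaObject() returns the plain Java value for whichever
>>> // representation the inspector was built for.
>>> Object value = (ins instanceof PrimitiveObjectInspector)
>>>         ? ((PrimitiveObjectInspector) ins).getPrimitiveJavaObject(obj)
>>>         : obj;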
>>>
>>>
>>>
>>> It works fine in Hive but throws this error in spark-sql.
>>>
>>> I am importing the following packages:
>>> import java.util.*;
>>> import org.apache.hadoop.hive.serde2.objectinspector.*;
>>> import org.apache.hadoop.io.LongWritable;
>>> import org.apache.hadoop.io.Text;
>>> import org.apache.hadoop.hive.serde2.io.DoubleWritable;
>>>
>>> Please let me know why this causes an issue in Spark when it runs
>>> perfectly fine in Hive.
>>>
>>
>
>
> --
> ---
> Takeshi Yamamuro
>
