Hi,

I am using Hive 0.12 with Hadoop 2.2 and trying to insert data into a new
ORC table with an INSERT ... SELECT statement from a text-file-based table,
but I am running into the following error (I have trimmed some of the data
shown in the error):

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":"1932685422","ad_id":"7325801318", .... , "account_id":"6875965212"}
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:544)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:157)
        ... 8 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 26
        at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerWriterV2.preparePatchedBlob(RunLengthIntegerWriterV2.java:593)
        at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerWriterV2.determineEncoding(RunLengthIntegerWriterV2.java:541)
        at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerWriterV2.write(RunLengthIntegerWriterV2.java:797)
        at org.apache.hadoop.hive.ql.io.orc.WriterImpl$IntegerTreeWriter.write(WriterImpl.java:744)
...

The error occurs when the ad_id column in the destination table is of type
BIGINT. When I change the column type to STRING, the insert works fine.
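
For reference, here is a simplified sketch of what I am running (table and
column names are trimmed/made up for this email; the real tables have many
more columns):

    -- destination table (sketch; the real DDL has more columns)
    CREATE TABLE ads_orc (
      id         BIGINT,
      ad_id      BIGINT,   -- declaring this as STRING makes the insert succeed
      account_id BIGINT
    ) STORED AS ORC;

    -- populate from the existing text-file-based table
    INSERT OVERWRITE TABLE ads_orc
    SELECT id, ad_id, account_id
    FROM ads_text;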

From what I can see, a value like 7325801318 is nowhere near big enough to
cause any overflow in a BIGINT (the maximum is 2^63 - 1 =
9223372036854775807).

Is this a known bug, or do I need to do something in particular for this to
work?

Thanks,
Juan.
