[ https://issues.apache.org/jira/browse/HIVE-19959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
t oo updated HIVE-19959:
------------------------
    Priority: Blocker  (was: Major)

> 'Hive on Spark' error -
> org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered
> unregistered class ID: 109
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-19959
>                 URL: https://issues.apache.org/jira/browse/HIVE-19959
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 2.3.2, 2.3.3
>         Environment: hive 2.3.3, spark 2.0.0 in standalone mode, scratch dir on S3, hive table on S3, hadoop 2.8.3 installed, no HDFS setup
>            Reporter: t oo
>            Priority: Blocker
>
> Connecting to beeline and running SELECT * works, but running SELECT COUNT(*) fails with the error below:
> 18/05/01 07:41:37 INFO Utilities: Open file to read in plan: s3a://redacted/tmp/31f5ffb5-f318-45f1-b07d-1fac0b406c89/hive_2018-05-01_07-41-09_102_7250900080631620338-2/-mr-10004/bbb93046-5d8f-4b6e-888e-c86bfeb57e3f/map.xml
> 18/05/01 07:41:37 INFO PerfLogger: <PERFLOG method=deserializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
> 18/05/01 07:41:37 INFO Utilities: Deserializing MapWork via kryo
> 18/05/01 07:41:37 ERROR Utilities: Failed to load plan: s3a://redacted/tmp/31f5ffb5-f318-45f1-b07d-1fac0b406c89/hive_2018-05-01_07-41-09_102_7250900080631620338-2/-mr-10004/bbb93046-5d8f-4b6e-888e-c86bfeb57e3f/map.xml: org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 109
> Serialization trace:
> properties (org.apache.hadoop.hive.ql.plan.PartitionDesc)
> aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
> org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 109
> Serialization trace:
> properties (org.apache.hadoop.hive.ql.plan.PartitionDesc)
> aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
> 	at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:119)
> 	at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:610)
> 	at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:599)
> 	at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
> 	at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
> 	at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:134)
> 	at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
> 	at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:648)
> 	at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:605)
> 	at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
> 	at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:626)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.deserializeObjectByKryo(Utilities.java:1082)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.deserializePlan(Utilities.java:973)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.deserializePlan(Utilities.java:987)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:423)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:302)
> 	at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
> 	at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
> 	at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
> 	at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:715)
> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:246)
> 	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:209)
> 	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:102)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:85)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> env: hive 2.3.3, spark 2.0.0 in standalone mode, scratch dir on S3, hive table on S3, hadoop 2.8.3 installed, no HDFS setup

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)