[ https://issues.apache.org/jira/browse/HUDI-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212020#comment-17212020 ]
Bharat Dighe commented on HUDI-102:
-----------------------------------
I am able to reproduce this.
scala> spark.sql("select * from users_mor_rt");
res10: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 9 more fields]

scala> spark.sql("select * from users_mor_rt").show();
20/10/11 19:38:01 WARN hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
20/10/11 19:38:01 ERROR executor.Executor: Exception in task 0.0 in stage 106.0 (TID 102)
java.lang.UnsupportedOperationException: Cannot inspect org.apache.hadoop.io.Text
        at org.apache.hadoop.hive.ql.io.parquet.serde.ArrayWritableObjectInspector.getStructFieldData(ArrayWritableObjectInspector.java:152)
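The stack trace points at a type mismatch inside the Parquet SerDe: ArrayWritableObjectInspector.getStructFieldData expects each struct field to arrive wrapped as an ArrayWritable, but for deep-nested schemas a leaf value is handed over as a raw Text, which the inspector refuses. A toy Scala model of that failure shape (these stand-in classes are hypothetical, for illustration only, and are not Hudi or Hive source):

```scala
// Stand-ins for the Hadoop Writable hierarchy (hypothetical, illustration only).
sealed trait Writable
final case class TextW(value: String) extends Writable            // plays the role of org.apache.hadoop.io.Text
final case class ArrayW(fields: Array[Writable]) extends Writable // plays the role of ArrayWritable

// Mimics the shape of ArrayWritableObjectInspector.getStructFieldData:
// only an ArrayWritable-like value can be inspected as a struct; any other
// Writable (e.g. a bare Text where a nested struct was expected) is rejected.
def getStructFieldData(data: Writable, fieldIndex: Int): Writable = data match {
  case ArrayW(fields) => fields(fieldIndex)
  case other =>
    throw new UnsupportedOperationException(s"Cannot inspect ${other.getClass.getName}")
}

// A correctly nested row inspects fine:
val nestedRow = ArrayW(Array(TextW("alice"), ArrayW(Array(TextW("inner")))))
val name = getStructFieldData(nestedRow, 0)

// Passing a leaf Text where a struct is expected reproduces the error shape
// seen above for the real-time (_rt) view of the deep-nested table.
```

This is only a sketch of the mechanism implied by the trace, not a claim about where in the Hudi/Hive read path the wrong wrapper type is introduced.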
> Beeline/Hive Client - select * on real-time views fails with schema related errors for tables with deep-nested schema #439
> --------------------------------------------------------------------------------------------------------------------------
>
> Key: HUDI-102
> URL: https://issues.apache.org/jira/browse/HUDI-102
> Project: Apache Hudi
> Issue Type: Bug
> Components: Hive Integration
> Reporter: Vinoth Chandar
> Priority: Major
> Labels: help-wanted
>
> https://github.com/apache/incubator-hudi/issues/439
--
This message was sent by Atlassian Jira
(v8.3.4#803005)