It turned out to be a bug in my code. In the select clause, the list of
fields was misaligned with the schema of the target table. As a consequence,
the map data ended up aligned against a column of a different type and
couldn't be cast to it.
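For reference, a minimal sketch of the kind of misalignment involved (table
and column names are made up for illustration):

  CREATE TABLE target (
    id   BIGINT,
    tags MAP<STRING, STRING>
  ) PARTITIONED BY (dt STRING);

  -- Buggy: the select list is out of order, so the map column `tags`
  -- lines up against the BIGINT column `id` and the cast fails.
  INSERT OVERWRITE TABLE target PARTITION (dt='2014-09-26')
  SELECT tags, id FROM source;

  -- Fixed: fields listed in the same order as the target schema.
  INSERT OVERWRITE TABLE target PARTITION (dt='2014-09-26')
  SELECT id, tags FROM source;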
Thanks anyway.
On 9/26/14, 8:08 PM, "Cheng Lian" wrote:
Would you mind providing the DDL of this partitioned table together
with the query you tried? The stacktrace suggests that the query was
trying to cast a map into something else, which is not supported in
Spark SQL. And I doubt whether Hive supports casting a complex type to
some other type.
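For illustration, a cast of this shape is what I mean (table and column
names are hypothetical):

  -- Assuming `m` is a MAP<STRING, STRING> column; casting a complex
  -- type like this to an atomic type is rejected by Spark SQL.
  SELECT CAST(m AS STRING) FROM t;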
It might be a problem with inserting into a partitioned table. It worked
fine when the target table was unpartitioned.
Can you confirm this?
Thanks,
Du
On 9/26/14, 4:48 PM, "Du Li" wrote:
Hi,
I was loading data into a partitioned table on Spark 1.1.0
beeline-thriftserver. The table has complex data types such as map and
array. The query is like "insert overwrite table a partition (…)
select …" and the select clause worked if run separately. However, when
running the insert query, it failed with a stacktrace suggesting a cast
from the map type to some other type.
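For context, a hedged sketch of the kind of DDL and query in play; the
real table definition isn't shown above, so the names and element types
are guesses:

  CREATE TABLE a (
    id    BIGINT,
    props MAP<STRING, STRING>,
    items ARRAY<MAP<STRING, STRING>>
  ) PARTITIONED BY (dt STRING);

  -- Runs fine on its own:
  SELECT id, props, items FROM staging;

  -- Fails when used to feed the partitioned insert:
  INSERT OVERWRITE TABLE a PARTITION (dt='2014-09-26')
  SELECT id, props, items FROM staging;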