Replying to myself, as I found my issue: I hadn't updated the schema of my
partitions, only the table schema. The error went away once I updated the
partitions as well, and all data, both old and newly landed, was queryable.
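
For anyone hitting the same thing, something along these lines should bring
the partition metadata back in sync with the table (my_db, my_table and the
dt partition column are placeholders, adjust to your own names):

-- Change the struct on the table and cascade the change to all existing
-- partition metadata (CASCADE is available since Hive 1.1).
ALTER TABLE my_db.my_table
  CHANGE COLUMN myColumn myColumn struct<c1:string, c2:string, new_column:array<string>, c3:string>
  CASCADE;

-- Or change a single partition explicitly.
ALTER TABLE my_db.my_table PARTITION (dt='2018-07-26')
  CHANGE COLUMN myColumn myColumn struct<c1:string, c2:string, new_column:array<string>, c3:string>;

-- Check that a partition picked up the new column type.
DESCRIBE my_db.my_table PARTITION (dt='2018-07-26');

CASCADE saves touching every partition by hand; the per-partition form is
useful if you only need to fix a subset of partitions.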



On Thu 26 Jul 2018 at 11:22, Patrick Duin <patd...@gmail.com> wrote:

>
> I'm encountering errors in Hive 2.3.2 when reading sets of Parquet files,
> where the schema has evolved.
>
> The error I'm seeing is:
> Failed with exception java.io.IOException:java.lang.RuntimeException: Hive
> internal error: conversion of string to array<string> not supported yet.
>
> My schema has a top-level column of struct type that has changed from:
>
> myColumn struct<c1:string, c2:string, c3:string>
>
> To
>
> myColumn struct<c1:string, c2:string, new_column:array<string>, c3:string>
>
> I've updated my table with the new column type using the DDL below, but I
> then see the aforementioned error when selecting the data.
>
> I've tried to force column lookup by name rather than by index using the
> setting:
>
> parquet.column.index.access=false
>
> But I see the same error. Is this kind of schema evolution (nested column
> insertion) supported? What are my options for resolving this issue?
>
> Many thanks,
>
> Patrick.
>
