Thank you, Russell. I wasn't able to make progress in either direction. Is
it possible to use the unshaded iceberg-spark3 jar and do
*spark.read().format("iceberg").load(table)*? Without the shaded
iceberg-spark3-runtime, I got "Failed to find data source: iceberg".
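
For reference, this is roughly the read path I am trying; a minimal sketch,
with the app name and table location as placeholders rather than my actual
setup:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class IcebergReadSketch {
  public static void main(String[] args) {
    // Local session just for the sketch; in the real app the session
    // already exists.
    SparkSession spark = SparkSession.builder()
        .appName("iceberg-read-sketch")
        .master("local[*]")
        .getOrCreate();

    // This is the call that fails with "Failed to find data source: iceberg"
    // when only the unshaded iceberg-spark3 jar is on the classpath.
    Dataset<Row> df = spark.read()
        .format("iceberg")
        .load("/path/to/warehouse/db/table"); // table location (placeholder)

    df.show();
    spark.stop();
  }
}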

On Mon, Aug 2, 2021 at 12:37 PM Russell Spitzer <russell.spit...@gmail.com>
wrote:

> I think you have two options:
>
> Use the unshaded iceberg jar and reshade your resultant app so that it is
> shaded identically to iceberg-spark3-runtime, or
>
> Compile against the shaded iceberg-spark3-runtime jar and the shaded
> parquet libs.
>
> On Mon, Aug 2, 2021 at 2:28 PM Huadong Liu <huadong...@gmail.com> wrote:
>
>> Hi,
>>
>> I have a Java app that writes Iceberg files with the core API. As a
>> result, it uses the unshaded parquet package. I am now extending the app to
>> read the table with Spark. Unfortunately, the iceberg-spark3-runtime uses
>> the shaded parquet package, and I am getting:
>>
>> *java.lang.ClassCastException: org.apache.parquet.schema.MessageType
>> cannot be cast to
>> org.apache.iceberg.shaded.org.apache.parquet.schema.MessageType *
>>
>> Any idea how to work around this without splitting the app? This
>> <https://lists.apache.org/thread.html/r4999ad55590b7483af9e23da2762bde34659148f74ad4fa9d88f7235%40%3Cdev.iceberg.apache.org%3E>
>> is a related thread from the past. Thanks.
>>
>> --
>> Huadong
>>
>>
>>
