Thanks, Josh! I guess it makes sense that without the base class you can't
load the Parquet class.
We'll have to watch out for Hadoop/Flink issues. I think we hit one as well
where not having Hadoop's Configuration on the Flink classpath could prevent
modules from loading correctly.
Ryan
On Sun, Sep 26, 2021
Hi
I found the reason why this exception 'java.lang.NoClassDefFoundError:
org/apache/iceberg/shaded/org/apache/parquet/hadoop/ParquetInputFormat' was
raised. Actually, it was because of the absence of class
'org/apache/hadoop/mapreduce/lib/input/FileInputFormat'. After I put
the hadoop-mapreduce-
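
For anyone who hits the same error, here is a small diagnostic sketch (my own
illustration, not code from this thread): Parquet's ParquetInputFormat extends
Hadoop's mapreduce FileInputFormat, so if that base class is missing the JVM
fails with NoClassDefFoundError while loading the (shaded) subclass, before any
of its code runs.

    // Diagnostic sketch: probe for the Hadoop base class that the shaded
    // ParquetInputFormat extends. If it is absent, loading the subclass fails
    // with NoClassDefFoundError.
    public class HadoopClasspathCheck {
        public static void main(String[] args) {
            try {
                Class.forName("org.apache.hadoop.mapreduce.lib.input.FileInputFormat");
                System.out.println("Hadoop MapReduce classes are on the classpath.");
            } catch (ClassNotFoundException e) {
                // Typically fixed by adding the Hadoop MapReduce client jars
                // (hadoop-mapreduce-client-core and friends) to the Flink
                // classpath, e.g. via flink/lib or HADOOP_CLASSPATH.
                System.err.println("FileInputFormat is missing; Iceberg's shaded "
                        + "ParquetInputFormat cannot be loaded.");
            }
        }
    }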
Hi openinx
I don't quite follow you. What do you mean by 'Looks like the line 112 in
HadoopReadOptions is not the first line accessing the variables in
ParquetInputFormat.'?
The Parquet file I want to read was written by the Iceberg table without
anything explicitly specified; no file format and no Parquet version was
Hi Joshua
Can you check which Parquet version you are using? Looks like the
line 112 in HadoopReadOptions is not the first line accessing the variables
in ParquetInputFormat.
[image: image.png]
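
As an aside, one quick way to see which Parquet build is actually picked up at
runtime is to print the version constant from parquet-common. This is just a
sketch; the class name below assumes the unshaded jar, while the Iceberg
runtime jar relocates it under the org.apache.iceberg.shaded prefix shown in
the stack trace.

    // Sketch: print the Parquet version and the jar it was loaded from.
    // org.apache.parquet.Version comes from parquet-common; in the Iceberg
    // runtime jar the relocated copy is
    // org.apache.iceberg.shaded.org.apache.parquet.Version.
    public class ParquetVersionCheck {
        public static void main(String[] args) {
            System.out.println("Parquet version: " + org.apache.parquet.Version.FULL_VERSION);
            System.out.println("Loaded from: " + org.apache.parquet.Version.class
                    .getProtectionDomain().getCodeSource().getLocation());
        }
    }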
On Wed, Sep 22, 2021 at 11:07 PM Joshua Fan wrote:
Hi
I am glad to use Iceberg as a table source in Flink SQL; the Flink version is
1.13.2, and the Iceberg version is 0.12.0.
After changing the Flink version from 1.12 to 1.13 and changing some code
in FlinkCatalogFactory, the project can be built successfully.
First, I tried to write data into Iceberg by f
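
For context, a minimal Flink 1.13 / Iceberg 0.12 setup along these lines looks
roughly like the sketch below. The catalog name, warehouse path, database, and
table are placeholders of my own, not the ones from this job.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    // Rough sketch of writing to and reading from an Iceberg table with
    // Flink SQL. All identifiers and the warehouse location are placeholders.
    public class IcebergFlinkSqlSketch {
        public static void main(String[] args) throws Exception {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inBatchMode().build());

            tEnv.executeSql(
                    "CREATE CATALOG hadoop_catalog WITH ("
                            + " 'type'='iceberg',"
                            + " 'catalog-type'='hadoop',"
                            + " 'warehouse'='hdfs://namenode:8020/warehouse/iceberg')");
            tEnv.executeSql("USE CATALOG hadoop_catalog");
            tEnv.executeSql("CREATE DATABASE IF NOT EXISTS db");
            tEnv.executeSql("CREATE TABLE IF NOT EXISTS db.t (id BIGINT, data STRING)");

            // Write a row into the Iceberg table ...
            tEnv.executeSql("INSERT INTO db.t VALUES (1, 'a')").await();

            // ... and read it back as a table source; this is the step where the
            // ParquetInputFormat class loading problem described above shows up
            // if the Hadoop MapReduce jars are not on the classpath.
            tEnv.executeSql("SELECT * FROM db.t").print();
        }
    }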