RD,
Trying to figure out if there are regressions expected between the reader and
the data. Bypassing metadata is easy for us because the data is in a separate
directory. The ETL pipeline can point the reader config to the correct
location.
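For context, what we have in mind is roughly the sketch below (the table path
is hypothetical; the actual location would come from the reader config). It
just points the plain Parquet reader at the table's data directory and skips
the Iceberg metadata entirely:

import org.apache.spark.sql.SparkSession

object ReadIcebergDataDirectly {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("vanilla-spark-read")
      .getOrCreate()

    // Skip the Iceberg metadata/ directory and read the table's data/
    // directory directly with the built-in Parquet source.
    // (Path below is a placeholder supplied by the ETL reader config.)
    val df = spark.read.parquet("hdfs://warehouse/db/table/data")

    df.show(10)
    spark.stop()
  }
}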
On Wed, May 15, 2019 at 5:14 PM RD wrote:
Is backporting the relevant datasource patches to Spark 2.3 a non-starter? If
that were doable, I believe it would be much simpler than bypassing Iceberg
metadata to read the files directly.
-R
On Wed, May 15, 2019 at 3:02 PM Gautam wrote:
Just wanted to add that, from what I have tested so far, I see this working
fine with Vanilla Spark reading Iceberg data.
On Wed, May 15, 2019 at 2:59 PM Gautam wrote:
Hello There,
I am currently doing some testing of Vanilla Spark readers' ability to read
Iceberg-generated data. This is from both an Iceberg/Parquet reader
interoperability and a Spark version backward-compatibility standpoint
(e.g. Spark distributions running v2.3.x which do