[…]closer to supporting bucketing and partitioning in v2 and then defaulting to v2. (Just my understanding – curious if I'm thinking about this correctly.) Anyway, thank you for the pointer.

From: Dongjoon Hyun
Date: Friday, 15 September 2023 at 05:36
To: Will Raschkowski
Cc: dev@spark.apache.org
Subject: Re: Plans for built-in v2 data sources in Spark 4
Hi, Will.
According to the following JIRA, as of now there is no plan or ongoing discussion to switch the default:
https://issues.apache.org/jira/browse/SPARK-44111 (Prepare Apache Spark
4.0.0)
Thanks,
Dongjoon.
On Wed, Sep 13, 2023 at 9:02 AM Will Raschkowski wrote:
Hey everyone,

I was wondering what the plans are for Spark's built-in v2 file data sources in Spark 4.

Concretely, is the plan for Spark 4 to continue defaulting to the built-in v1 data sources? And if yes, what are the blockers for defaulting to v2? I see, just as an example, that writing Hive-p[…]
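For context on the v1/v2 default being discussed: the choice between the built-in v1 and v2 file sources is controlled by Spark's internal `spark.sql.sources.useV1SourceList` configuration, which names the formats that still fall back to the v1 implementation. A minimal, illustrative spark-defaults.conf fragment (a sketch assuming Spark 3.x; note this is an internal setting, not a supported public knob):

```properties
# Sketch of spark-defaults.conf, assuming Spark 3.x.
# Formats listed here keep using the v1 file source code paths;
# removing a format (e.g. parquet, orc) routes it through the v2 implementation instead.
# The Spark 3.x default is: avro,csv,json,kafka,orc,parquet,text
spark.sql.sources.useV1SourceList  avro,csv,json,kafka,text
```

Per the JIRA Dongjoon points to, there is no plan to change this default for 4.0, so the v2 file sources remain opt-in via this list.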