Hi Spark dev folks,

First of all, kudos on this new Data Source V2; the API looks simple and it
makes it easy to develop a new data source and use it.

With my current work, I am trying to implement a new Data Source V2 writer
with Spark 2.3, and I was wondering how I will get the info about partition
by c...
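For reference, the 2.3-era write path looks roughly like the sketch below: a
source class mixes in WriteSupport, Spark asks it for a DataSourceWriter on
the driver, and each task gets a DataWriter from a serialized
DataWriterFactory. The class names (MySourceV2, MyWriter, and so on) are
placeholders, and the exact method signatures shifted between 2.3 and 2.4,
so treat this as an outline and check the org.apache.spark.sql.sources.v2
packages of your Spark version.

  import java.util.Optional

  import org.apache.spark.sql.{Row, SaveMode}
  import org.apache.spark.sql.sources.v2.{DataSourceOptions, DataSourceV2, WriteSupport}
  import org.apache.spark.sql.sources.v2.writer._
  import org.apache.spark.sql.types.StructType

  // Driver-side entry point: Spark calls createWriter() when a DataFrame is
  // written with df.write.format(...) against this source.
  class MySourceV2 extends DataSourceV2 with WriteSupport {
    override def createWriter(
        jobId: String,
        schema: StructType,
        mode: SaveMode,
        options: DataSourceOptions): Optional[DataSourceWriter] =
      Optional.of(new MyWriter(schema))
  }

  // Driver-side coordinator: hands out the factory and sees every task's commit message.
  class MyWriter(schema: StructType) extends DataSourceWriter {
    override def createWriterFactory(): DataWriterFactory[Row] = new MyWriterFactory(schema)
    override def commit(messages: Array[WriterCommitMessage]): Unit = { /* finalize the job */ }
    override def abort(messages: Array[WriterCommitMessage]): Unit = { /* roll back */ }
  }

  // Serialized and shipped to executors; one DataWriter per partition attempt.
  class MyWriterFactory(schema: StructType) extends DataWriterFactory[Row] {
    override def createDataWriter(partitionId: Int, attemptNumber: Int): DataWriter[Row] =
      new MyDataWriter(partitionId)
  }

  case class MyCommitMessage(partitionId: Int, rowCount: Long) extends WriterCommitMessage

  // Executor-side writer: receives the rows of one partition.
  class MyDataWriter(partitionId: Int) extends DataWriter[Row] {
    private var rowCount = 0L
    override def write(record: Row): Unit = { rowCount += 1 /* push the row to the sink here */ }
    override def commit(): WriterCommitMessage = MyCommitMessage(partitionId, rowCount)
    override def abort(): Unit = { /* discard anything buffered */ }
  }

Note that in this shape the writer only receives the job id, schema, save
mode, and options, so the DataFrame's partitioning is not obviously exposed
at this layer, which seems to be the gap the question is about.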
Hi Aakash
On Tue, Dec 17, 2019 at 12:42 PM aakash aakash wrote:
> Hi Spark dev folks,
>
> First of all, kudos on this new Data Source V2; the API looks simple and it
> makes it easy to develop a new data source and use it.
>
> With my current work, I am trying to implement a new Data Source V2 writer
>
Thanks Andrew!
It seems there is a drastic change in 3.0; I am going through it (a rough
sketch of the new write path follows the quoted message below).
-Aakash
On Tue, Dec 17, 2019 at 11:01 AM Andrew Melo wrote:
> Hi Aakash
>
> On Tue, Dec 17, 2019 at 12:42 PM aakash aakash wrote:
>
>> Hi Spark dev folks,
>>
>> First of all, kudos on this new Data Source V2; the API loo...
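For comparison, here is a rough sketch of the reworked write path in the 3.0
connector API (org.apache.spark.sql.connector.*), which replaces the 2.x
sources.v2 interfaces. The names MyProvider, MyTable, and so on are
placeholders, and the details were still moving during the 3.0 previews, so
this is only an outline of the interfaces involved, not the exact final API.

  import java.util

  import org.apache.spark.sql.catalyst.InternalRow
  import org.apache.spark.sql.connector.catalog.{SupportsWrite, Table, TableCapability, TableProvider}
  import org.apache.spark.sql.connector.expressions.Transform
  import org.apache.spark.sql.connector.write._
  import org.apache.spark.sql.types.StructType
  import org.apache.spark.sql.util.CaseInsensitiveStringMap

  // 3.0 entry point: the provider returns a Table, and writes go through a WriteBuilder.
  class MyProvider extends TableProvider {
    override def inferSchema(options: CaseInsensitiveStringMap): StructType =
      new StructType() // placeholder; a real source would discover or declare its schema
    override def getTable(
        schema: StructType,
        partitioning: Array[Transform], // user-requested partitioning arrives here
        properties: util.Map[String, String]): Table = new MyTable(schema)
  }

  class MyTable(tableSchema: StructType) extends Table with SupportsWrite {
    override def name(): String = "my_table"
    override def schema(): StructType = tableSchema
    override def capabilities(): util.Set[TableCapability] =
      util.EnumSet.of(TableCapability.BATCH_WRITE)
    override def newWriteBuilder(info: LogicalWriteInfo): WriteBuilder = new WriteBuilder {
      override def buildForBatch(): BatchWrite = new MyBatchWrite
    }
  }

  // Driver-side job coordinator, the rough analogue of 2.x's DataSourceWriter.
  class MyBatchWrite extends BatchWrite {
    override def createBatchWriterFactory(info: PhysicalWriteInfo): DataWriterFactory =
      new MyWriterFactory
    override def commit(messages: Array[WriterCommitMessage]): Unit = { /* finalize the job */ }
    override def abort(messages: Array[WriterCommitMessage]): Unit = { /* roll back */ }
  }

  // Shipped to executors; note InternalRow instead of Row in this API.
  class MyWriterFactory extends DataWriterFactory {
    override def createWriter(partitionId: Int, taskId: Long): DataWriter[InternalRow] =
      new MyDataWriter
  }

  class MyDataWriter extends DataWriter[InternalRow] {
    override def write(record: InternalRow): Unit = { /* push the row to the sink here */ }
    override def commit(): WriterCommitMessage = new WriterCommitMessage {}
    override def abort(): Unit = { /* discard anything buffered */ }
    override def close(): Unit = { /* release resources */ }
  }

One difference worth noting for the earlier question is that getTable is
handed an Array[Transform] describing the requested partitioning, instead of
the writer having no view of partitioning at all.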
Same result as last time. It all looks good and tests pass for me on
Ubuntu with all profiles enabled (Hadoop 3.2 + Hive 2.3), building
from source.
'pyspark-3.0.0.dev2.tar.gz' appears to be the desired Python artifact name, yes.
+1
On Tue, Dec 17, 2019 at 12:36 AM Yuming Wang wrote:
>
> Please v...