No, it is not at all dead! There just isn't any kind of expectation or
commitment that the 3.0.0 release will be held up in any way if DSv2 is not
ready to go when the rest of 3.0.0 is. There is nothing new preventing
continued work on DSv2 or its eventual inclusion in a release.
On Sun, Mar 3, 2019, jg wrote:
Hi, I am kind of new at the whole Apache process (not specifically Spark). Does
that mean that DataSourceV2 is dead or stays experimental? Thanks for
clarifying for a newbie.
jg
> On Mar 3, 2019, at 11:21, Ryan Blue wrote:
>
> This vote fails with the following counts:
>
> 3 +1 votes:
This vote fails with the following counts:
3 +1 votes:
- Matt Cheah
- Ryan Blue
- Sean Owen (binding)
1 -0 vote:
- Jose Torres
2 -1 votes:
- Mark Hamstra (binding)
- Mridul Muralidharan (binding)
Thanks for the discussion, everyone. It sounds to me that the main
objection is …
…a functional DSv2 implementation being a blocker for the release of Spark
3.0, then I'm -1. A major release is just not about adding new features.
Rather, it is about making changes to the existing public API. As such, I'm
opposed to any new feature or any API addition being considered a blocker of
the 3.0.0 release.

On Thu, Feb 28, 2019 at 9:09 AM Matt Cheah wrote:
+1 (non-binding)
Are identifiers and namespaces going to be rolled under one of those six points?
From: Ryan Blue
Reply-To: "rb...@netflix.com"
Date: Thursday, February 28, 2019 at 8:39 AM
To: Spark Dev List
Subject: [VOTE] Functional DataSourceV2 in Spark 3.0
I’d like to call a vote for committing to getting DataSourceV2 in a
functional state for Spark 3.0.
For more context, please see the discussion thread, but here is a quick
summary about what this commitment means:
- We think that a “functional DSv2” is an achievable goal for the Spark
3.0 release
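
For context on the "identifiers and namespaces" question raised earlier in the
thread: below is a minimal Scala sketch of how a multi-part table identifier
(a namespace made of name parts plus a table name) is represented in the DSv2
catalog API that later shipped in Spark 3.x. It is illustrative only and not
text from the thread; the object name IdentifierSketch and the example
namespace and table names are made up.

// Illustrative sketch, assuming Spark 3.x's
// org.apache.spark.sql.connector.catalog API is on the classpath;
// not taken from this thread.
import org.apache.spark.sql.connector.catalog.Identifier

object IdentifierSketch {
  def main(args: Array[String]): Unit = {
    // A reference like db.sales.events splits into a namespace of name parts
    // and a table name: namespace = ["db", "sales"], name = "events".
    val ident = Identifier.of(Array("db", "sales"), "events")

    println(ident.namespace().mkString("."))  // prints: db.sales
    println(ident.name())                     // prints: events
  }
}

At the time of this thread the catalog API had not yet landed, so the exact
package and interface names were still under discussion.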