Once again, I’d have to agree with Sean.
Let’s table the meaning of SPIP for another time, say. I think a few of us are
trying to understand what “accelerator resource aware” means. As far as I
know, no one is discussing the API here. But in the Google doc, on JIRA, on
email, and off list, I have se
I think treating SPIPs as this high-level takes away much of the point
of VOTEing on them. I'm not sure that's even what Reynold is
suggesting elsewhere; we're nowhere near discussing APIs here, just
what 'accelerator aware' even generally means. If the scope isn't
specified, what are we trying to
Hi Felix,
Just to clarify, we are voting on the SPIP, not the companion scoping doc.
What is proposed, and what we are voting on, is to make Spark
accelerator-aware. The companion scoping doc and the design sketch are there to
help demonstrate what features could be implemented based on the use
cases
ok, thanks!
Bests,
Takeshi
On Sun, Mar 3, 2019 at 10:22 AM Xiao Li wrote:
> Thank you, Shane!
>
> Xiao
>
> shane knapp wrote on Sat, Mar 2, 2019, at 4:28 PM:
>
>> adding new k8s functionality?
>>
>> something need upgrading in jenkins?
>>
>> are logs not being archived?
>>
>> odd build failure (and i mean *od
No, it is not at all dead! There just isn't any kind of expectation or
commitment that the 3.0.0 release will be held up in any way if DSv2 is not
ready to go when the rest of 3.0.0 is. There is nothing new preventing
continued work on DSv2 or its eventual inclusion in a release.
On Sun, Mar 3, 20
Hi, I am kind of new to the whole Apache process (not specifically Spark). Does
that mean that DataSourceV2 is dead or stays experimental? Thanks for
clarifying for a newbie.
jg
> On Mar 3, 2019, at 11:21, Ryan Blue wrote:
>
> This vote fails with the following counts:
>
> 3 +1 votes:
Great points Sean.
Here’s what I’d like to suggest to move forward.
Split the SPIP.
If we want to propose upfront homogeneous allocation (aka spark.task.gpus),
it should be an SPIP on its own. And, for instance, I really agree with Sean (as
I did in the discuss thread) that we can’t simply non-go
This vote fails with the following counts:
3 +1 votes:
- Matt Cheah
- Ryan Blue
- Sean Owen (binding)
1 -0 vote:
- Jose Torres
2 -1 votes:
- Mark Hamstra (binding)
- Mridul Muralidharan (binding)
Thanks for the discussion, everyone. It sounds to me that the main
objection i
I'm for this in general, at least a +0. I do think this has to have a
story for what to do with the existing Mesos GPU support, which sounds
entirely like the spark.task.gpus config here. Maybe it's just a
synonym? That kind of thing.
Requesting different types of GPUs might be a bridge too far, b
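To make the overlap Sean raises concrete, here is a rough sketch of how a homogeneous per-task GPU request might sit next to the existing Mesos setting. Note that spark.task.gpus is only the name floated in this thread, not a released configuration; spark.mesos.gpus.max is the Mesos-side option being compared against.

```
# spark-defaults.conf -- illustrative sketch only.
# spark.task.gpus is hypothetical (proposed in this thread, not shipped);
# spark.mesos.gpus.max is the existing Mesos GPU setting referenced above.

spark.task.gpus       2     # each task would request 2 GPUs (homogeneous)
spark.mesos.gpus.max  8     # existing cap on GPUs acquired from Mesos
```

If the semantics really are the same, the "synonym" idea in the thread would amount to one of these keys aliasing the other rather than two independent knobs.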