This might be off-topic: regarding SPARK-24211 (flaky tests in
StreamingJoinSuite), I might volunteer to take a look, but if the tests are
not flaky on branch 2.4 and EOL for branch 2.3 is coming soon (within a few
months), I wonder whether we still want to tackle it at all.
On Thu, Feb 7, 2019 at 2:21 PM, Sean Owen wrote:
+1 from me. I built and tested the source release on the same env, and
this time I'm not seeing failures. Good; no idea what happened.
I updated Fix Version on JIRAs that were marked as 2.3.4 but went in
before the RC2 tag.
I'm kinda concerned that this test keeps failing in branch 2.3:
org.apache.sp
The PRB (pull request builder) executes the following scripts:

./dev/run-tests-jenkins
./build/sbt unsafe/test

SBT QA tests:

./dev/run-tests

Maven QA tests:

# A random Zinc (Scala incremental compiler) port keeps concurrent builds from colliding.
ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
MVN="build/mvn --force -DzincPort=$ZINC_PORT"
$MVN \
-DskipTests \
-P"hadoop-2
It's mostly in this folder: https://github.com/apache/spark/tree/master/dev
Xin
On Wed, Feb 6, 2019 at 3:55 PM Tom Graves wrote:
> I'm curious if we have it documented anywhere or if there is a good place
> to look, what exact commands Spark runs in the pull request builds and the
> QA builds?
I'm curious if we have it documented anywhere or if there is a good place to
look, what exact commands Spark runs in the pull request builds and the QA
builds?
Thanks,
Tom
+1 from me as well.
On Wed, Feb 6, 2019 at 4:58 PM Yanbo Liang wrote:
> +1 for the proposal
>
>
>
> On Thu, Jan 31, 2019 at 12:46 PM Mingjie Tang wrote:
>
>> +1, this is a very very important feature.
>>
>> Mingjie
>>
>> On Thu, Jan 31, 2019 at 12:42 AM Xiao Li wrote:
>>
>>> Chan
+1 for the proposal
On Thu, Jan 31, 2019 at 12:46 PM Mingjie Tang wrote:
> +1, this is a very very important feature.
>
> Mingjie
>
> On Thu, Jan 31, 2019 at 12:42 AM Xiao Li wrote:
>
>> Change my vote from +1 to ++1
>>
>> On Wed, Jan 30, 2019 at 6:20 AM, Xiangrui Meng wrote:
>>
>>> Correction: +0 vote does
Thanks Ryan
On Tue, Feb 5, 2019 at 10:28 PM Ryan Blue wrote:
> Shubham,
>
> DataSourceV2 passes Spark's internal representation to your source and
> expects Spark's internal representation back from the source. That's why
> you consume and produce InternalRow: "internal" indicates that Spark
> d
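To make that concrete, here is a minimal sketch of a partition reader
against the Spark 2.4-era DataSourceV2 reader API
(org.apache.spark.sql.sources.v2.reader). The class name and the data it
serves are hypothetical; the point is only that get() hands Spark an
InternalRow, with strings encoded as UTF8String rather than
java.lang.String:

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.sources.v2.reader.InputPartitionReader
import org.apache.spark.unsafe.types.UTF8String

// Hypothetical reader over an in-memory (id, name) dataset, shown only to
// illustrate that DataSourceV2 consumes and produces Spark's internal rows.
class ExampleRowReader(rows: Iterator[(Int, String)])
    extends InputPartitionReader[InternalRow] {

  private var current: (Int, String) = _

  override def next(): Boolean = {
    val hasNext = rows.hasNext
    if (hasNext) current = rows.next()
    hasNext
  }

  // Strings go back to Spark as UTF8String, the internal encoding.
  override def get(): InternalRow =
    InternalRow.fromSeq(Seq(current._1, UTF8String.fromString(current._2)))

  override def close(): Unit = ()
}

The rows produced have to line up with the schema the source declares;
returning external types (e.g. a plain String) typically fails later at
runtime with a ClassCastException, which is why the "internal" contract
matters.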