Thanks for the check, Felix!
Yeah, I'll wait for the new test report.
But it has never taken this long to run the tests in branch-2.3 before...
a network issue, perhaps?
On Wed, Jan 23, 2019 at 2:19 AM Felix Cheung
wrote:
> I’ve tried a couple of times. The latest test run took 12 hr+
>
> 1 aborted suite:
> 0
Recently, I came across this bug:
https://issues.apache.org/jira/browse/SPARK-26706.
It seems appropriate to include a fix in 2.3.3, doesn't it?
Thanks,
Anton
On Wed, Jan 23, 2019 at 1:08 PM, Takeshi Yamamuro wrote:
> Thanks for the check, Felix!
>
> Yea, I'll wait for the new test report.
> But, it never
It's not clear to me from that description whether it's a correctness bug, and if
it's not a regression, then no, it does not need to go into 2.3.3. If it's a
real bug, it can certainly be merged to 2.3.x.
On Wed, Jan 23, 2019 at 7:54 AM Anton Okolnychyi
wrote:
>
> Recently, I came across this bug:
> https://issues.ap
Hi,
I want to write custom window functions in Spark that Catalyst can also optimize.
Can you provide some hints on where to start?
I am also posting to the dev list, as I believe this is a rather exotic topic.
Best,
Georg
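One possible starting point is a minimal sketch against Spark's internal DeclarativeAggregate API, which expresses the whole aggregation as Catalyst expressions so the optimizer and whole-stage codegen can see through it. This is not a stable public API; everything below assumes Spark 2.x internals, and the class MyCount and all names in it are hypothetical:

import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.expressions.aggregate.DeclarativeAggregate
import org.apache.spark.sql.types.{DataType, LongType}

// Hypothetical example: a "count of non-null values" aggregate written
// purely with Catalyst expressions, so Catalyst can optimize and codegen it.
case class MyCount(child: Expression) extends DeclarativeAggregate {
  override def children: Seq[Expression] = child :: Nil
  override def nullable: Boolean = false
  override def dataType: DataType = LongType

  // The aggregation buffer: a single long counter.
  private lazy val count = AttributeReference("count", LongType, nullable = false)()
  override lazy val aggBufferAttributes: Seq[AttributeReference] = count :: Nil

  // Buffer initialization, per-row update, and partial-aggregate merge,
  // all expressed as Catalyst expressions rather than opaque JVM code.
  override lazy val initialValues: Seq[Expression] = Seq(Literal(0L))
  override lazy val updateExpressions: Seq[Expression] =
    Seq(If(IsNull(child), count, Add(count, Literal(1L))))
  override lazy val mergeExpressions: Seq[Expression] =
    Seq(Add(count.left, count.right))
  override lazy val evaluateExpression: Expression = count
}

If you need frame-aware behavior (lag/lead-style semantics), the internal traits backing window expressions in org.apache.spark.sql.catalyst.expressions are the next place to look, but those are likewise internal and can change between releases.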
It is a correctness bug. I have updated the description with an example. It
has been there for a while, so I am not sure about the priority.
On Wed, Jan 23, 2019 at 2:48 PM, Sean Owen wrote:
> I'm not clear if it's a correctness bug from that description, and if
> it's not a regression, no it does not nee
Hi Herman,
Thanks a lot. So far, most of the documentation I have found is about UDAFs.
Could you point me to anything (besides just reading Spark's source code)
that explains how to work with custom AggregateFunctions?
Best,
Georg
On Wed, Jan 23, 2019 at 4:02 PM, Herman van Hovell <
her...@d
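For completeness, a hedged usage sketch showing how such a custom aggregate could be exposed as a Column and evaluated over a window. It assumes the hypothetical MyCount class from the earlier sketch is on the classpath; the myCount wrapper is illustrative, not an existing Spark API:

import org.apache.spark.sql.{Column, SparkSession}
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.col

object MyCountExample {
  // MyCount is the hypothetical DeclarativeAggregate from the earlier sketch.
  // Wrap it so it can be used from the DataFrame API.
  def myCount(c: Column): Column =
    new Column(MyCount(c.expr).toAggregateExpression())

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq(("a", Some(1)), ("a", None), ("b", Some(3))).toDF("k", "v")

    // Evaluate the custom aggregate as a window function, per key.
    val w = Window.partitionBy("k")
    df.withColumn("non_null_v", myCount(col("v")).over(w)).show()

    spark.stop()
  }
}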
Got it. Thank you for sharing that, Reynold.
So, you mean they will use `Apache Spark 3.0.0` on the old clusters with
Hive 0.x, right?
If that is actually the case, there is no problem with keeping them.
Bests,
Dongjoon.
On Tue, Jan 22, 2019 at 11:49 PM Xiao Li wrote:
> Based on my experience in development o
It is not even an old “cluster”. It is a central metastore shared by
multiple clusters.
On Wed, Jan 23, 2019 at 10:04 AM Dongjoon Hyun
wrote:
> Got it. Thank you for sharing that, Reynold.
>
> So, you mean they will use `Apache Spark 3.0.0` on the old clusters with
> Hive 0.x, right?
>
> If that
-1
I agree with Anton that this bug can potentially corrupt data
silently. Since he is ready to submit a PR, I suggest we wait and
include the fix. Thanks!
Sincerely,
DB Tsai
--
Web: https://www.dbtsai.com
PGP Key ID: 0x5CED8B896A6BDFA0
-1 too.
I just upgraded https://issues.apache.org/jira/browse/SPARK-26682 to
blocker. It's a small fix, and we should get it into 2.3.3.
On Thu, Jan 17, 2019 at 6:49 PM Takeshi Yamamuro wrote:
>
> Please vote on releasing the following candidate as Apache Spark version
> 2.3.3.
>
> The vote is op
-1
https://issues.apache.org/jira/browse/SPARK-26709 is another blocker ticket
about incorrect results.
On Wed, Jan 23, 2019 at 12:01 PM, Marcelo Vanzin wrote:
> -1 too.
>
> I just upgraded https://issues.apache.org/jira/browse/SPARK-26682 to
> blocker. It's a small fix and we should make it in 2.3.3.
Thanks, all.
I'll start a new vote for RC2 after the two issues above are resolved.
Best,
Takeshi
On Thu, Jan 24, 2019 at 7:59 AM Xiao Li wrote:
> -1
>
> https://issues.apache.org/jira/browse/SPARK-26709 is another blocker
> ticket that returns incorrect results.
>
>
> Marcelo Vanzin wrote on Wed, Jan 23, 2019