Thank you so much!  Any update on getting the RC1 up for vote?

Jason.

________________________________
From: 郑瑞峰 <ruife...@foxmail.com>
Sent: Wednesday, 5 August 2020 12:54 PM
To: Jason Moore <jason.mo...@quantium.com.au.INVALID>; Spark dev list 
<dev@spark.apache.org>
Subject: Re: [DISCUSS] Apache Spark 3.0.1 Release

Hi all,
I am going to prepare the release of 3.0.1 RC1, with the help of Wenchen.


------------------ Original Message ------------------
From: "Jason Moore" <jason.mo...@quantium.com.au.INVALID>;
Sent: Thursday, 30 July 2020, 10:35 AM
To: "dev" <dev@spark.apache.org>;
Subject: Re: [DISCUSS] Apache Spark 3.0.1 Release


Hi all,



Discussion around 3.0.1 seems to have trickled away.  What was blocking the 
release process from kicking off?  I can see some unresolved bugs raised against 
3.0.0, but conversely there were quite a few critical correctness fixes waiting 
to be released.



Cheers,

Jason.



From: Takeshi Yamamuro <linguin....@gmail.com>
Date: Wednesday, 15 July 2020 at 9:00 am
To: Shivaram Venkataraman <shiva...@eecs.berkeley.edu>
Cc: "dev@spark.apache.org" <dev@spark.apache.org>
Subject: Re: [DISCUSS] Apache Spark 3.0.1 Release



> Just wanted to check if there are any blockers that we are still waiting for 
> to start the new release process.

I don't see any on-going blocker in my area.

Thanks for the notification.



Bests,

Takeshi



On Wed, Jul 15, 2020 at 4:03 AM Dongjoon Hyun 
<dongjoon.h...@gmail.com> wrote:

Hi, Yi.



Could you explain why you think that is a blocker? For the given example from 
the JIRA description,



spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))
Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")
checkAnswer(sql("SELECT key(a) AS k FROM t GROUP BY key(a)"), Row(1) :: Nil)



Apache Spark 3.0.0 seems to work like the following.



scala> spark.version

res0: String = 3.0.0



scala> spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))

res1: org.apache.spark.sql.expressions.UserDefinedFunction = SparkUserDefinedFunction($Lambda$1958/948653928@5d6bed7b,IntegerType,List(Some(class[value[0]: map<string,string>])),None,false,true)



scala> Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")



scala> sql("SELECT key(a) AS k FROM t GROUP BY key(a)").collect

res3: Array[org.apache.spark.sql.Row] = Array([1])



Could you provide a reproducible example?



Bests,

Dongjoon.





On Tue, Jul 14, 2020 at 10:04 AM Yi Wu 
<yi...@databricks.com> wrote:

This is probably a blocker: https://issues.apache.org/jira/browse/SPARK-32307



On Tue, Jul 14, 2020 at 11:13 PM Sean Owen 
<sro...@gmail.com> wrote:

https://issues.apache.org/jira/browse/SPARK-32234 ?

On Tue, Jul 14, 2020 at 9:57 AM Shivaram Venkataraman
<shiva...@eecs.berkeley.edu> wrote:
>
> Hi all
>
> Just wanted to check if there are any blockers that we are still waiting for 
> to start the new release process.
>
> Thanks
> Shivaram
>




---
Takeshi Yamamuro
