Wenchen, in Transport, users provide the input parameter signatures and the
output parameter signature as part of the API. Compile-time checks are done
by parsing these type signatures and matching them against the type tree
received at compile time. This also helps with inferring the concrete output type.
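To make the idea concrete, here is a minimal, self-contained sketch of that matching step. This is not Transport's actual implementation — the class and method names are illustrative — but it shows how a declared signature with generic parameters (e.g. `map(K,V)`) can be checked against a concrete type tree (`map(string,long)`) and the resulting bindings used to infer the concrete output type:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; not Transport's real code. Signatures are
// strings like "map(K,V)"; single upper-case letters are generic
// parameters bound against the concrete type tree.
public class SignatureMatcher {

    // Match a declared signature against a concrete type, collecting
    // generic-parameter bindings. Returns false on any mismatch.
    // (Flat type arguments only; a real parser would handle nesting.)
    static boolean match(String declared, String actual, Map<String, String> bindings) {
        if (declared.length() == 1 && Character.isUpperCase(declared.charAt(0))) {
            String bound = bindings.putIfAbsent(declared, actual);
            return bound == null || bound.equals(actual);
        }
        int dOpen = declared.indexOf('('), aOpen = actual.indexOf('(');
        if (dOpen < 0 || aOpen < 0) {
            return declared.equals(actual);  // leaf types must match exactly
        }
        if (!declared.substring(0, dOpen).equals(actual.substring(0, aOpen))) {
            return false;  // different type constructors, e.g. array vs map
        }
        String[] dArgs = declared.substring(dOpen + 1, declared.length() - 1).split(",");
        String[] aArgs = actual.substring(aOpen + 1, actual.length() - 1).split(",");
        if (dArgs.length != aArgs.length) return false;
        for (int i = 0; i < dArgs.length; i++) {
            if (!match(dArgs[i], aArgs[i], bindings)) return false;
        }
        return true;
    }

    // Substitute the collected bindings into the declared output signature
    // to infer the concrete output type.
    static String infer(String outputSignature, Map<String, String> bindings) {
        String result = outputSignature;
        for (Map.Entry<String, String> e : bindings.entrySet()) {
            result = result.replace(e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> bindings = new HashMap<>();
        boolean ok = match("map(K,V)", "map(string,long)", bindings);
        System.out.println(ok);                          // true
        System.out.println(infer("array(V)", bindings)); // array(long)
    }
}
```

With the input signature matched, K and V are known, so the output signature `array(V)` resolves to the concrete type `array(long)` without the UDF author writing any extra code.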
Starting with my +1 (binding).
On Mon, Feb 22, 2021 at 3:56 PM, Hyukjin Kwon wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 3.1.1.
>
> The vote is open until February 24th 11PM PST and passes if a majority +1
> PMC votes are cast, with a minimum of 3 +1 votes.
>
> [
Please vote on releasing the following candidate as Apache Spark version
3.1.1.
The vote is open until February 24th 11PM PST and passes if a majority +1
PMC votes are cast, with a minimum of 3 +1 votes.
[ ] +1 Release this package as Apache Spark 3.1.1
[ ] -1 Do not release this package because
I think I have made it clear that it's simpler for UDF developers to deal
with the input parameters directly, instead of getting them from a row,
where you need to provide the index and type (e.g. row.getLong(0)). It's
also consistent with the existing Spark Scala/Java UDF APIs, so that Spark
users
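The contrast between the two calling conventions can be sketched in plain Java (no Spark dependency; the `Row` interface below is a simplified, hypothetical stand-in for Spark's Row):

```java
import java.util.List;

// Illustrative comparison of the two UDF styles discussed above.
public class UdfStyles {

    // Minimal stand-in for a row of values; hypothetical, not Spark's Row.
    interface Row {
        long getLong(int i);
    }

    // Style 1: the UDF receives typed parameters directly, in the spirit of
    // Spark's Scala/Java UDF APIs -- no index bookkeeping, no casts.
    static long addDirect(long a, long b) {
        return a + b;
    }

    // Style 2: the UDF receives a row and must know the index and type of
    // every field it reads (e.g. row.getLong(0)), which is easier to get wrong.
    static long addFromRow(Row row) {
        return row.getLong(0) + row.getLong(1);
    }

    public static void main(String[] args) {
        List<Long> values = List.of(3L, 4L);
        Row row = i -> values.get(i);
        System.out.println(addDirect(3L, 4L)); // 7
        System.out.println(addFromRow(row));   // 7
    }
}
```

Both produce the same result, but in the row-based style a wrong index or a `getLong` on a string column only fails at runtime, whereas the direct-parameter style is checked by the compiler.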
Ok, thanks.
On Sun, Feb 21, 2021 at 12:33 AM, Sean Owen wrote:
> Do you just mean you want to adjust the code style rules? Yes you can do
> that in IJ, just a matter of finding the indent rule to adjust.
> The Spark style is pretty normal stuff, though not 100% consistent. I
> prefer the first style in this case.