[ 
https://issues.apache.org/jira/browse/FLINK-10955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16693968#comment-16693968
 ] 

ASF GitHub Bot commented on FLINK-10955:
----------------------------------------

hequn8128 commented on a change in pull request #7150: [FLINK-10955] Extend 
release notes for Apache Flink 1.7.0
URL: https://github.com/apache/flink/pull/7150#discussion_r235222632
 
 

 ##########
 File path: docs/release-notes/flink-1.7.md
 ##########
 @@ -22,6 +22,89 @@ under the License.
 
 These release notes discuss important aspects, such as configuration, 
behavior, or dependencies, that changed between Flink 1.6 and Flink 1.7. Please 
read these notes carefully if you are planning to upgrade your Flink version to 
1.7.
 
+### Scala 2.12 support
+
+When using Scala `2.12` you might have to add explicit type annotations in 
places where they were not required when using Scala `2.11`.
+This is an excerpt from the `TransitiveClosureNaive.scala` example in the 
Flink code base that shows the changes that could be required.
+
+Previous code:
+```
+val terminate = prevPaths
+ .coGroup(nextPaths)
+ .where(0).equalTo(0) {
+   (prev, next, out: Collector[(Long, Long)]) => {
+     val prevPaths = prev.toSet
+     for (n <- next)
+       if (!prevPaths.contains(n)) out.collect(n)
+   }
+}
+```
+
+With Scala `2.12` you have to change it to:
+```
+val terminate = prevPaths
+ .coGroup(nextPaths)
+ .where(0).equalTo(0) {
+   (prev: Iterator[(Long, Long)], next: Iterator[(Long, Long)], out: Collector[(Long, Long)]) => {
+       val prevPaths = prev.toSet
+       for (n <- next)
+         if (!prevPaths.contains(n)) out.collect(n)
+     }
+}
+```
+
+The reason for this is that Scala `2.12` changes how lambdas are implemented.
+They now use the SAM (single abstract method) interface support introduced in Java 8.
+This makes some method calls ambiguous because both Scala-style lambdas and SAMs are now candidates for methods where it was previously clear which method would be invoked.
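
What follows is a minimal sketch of the mechanism, not part of the Flink API: `Combiner` and `SamConversionExample` are hypothetical names, with `Combiner` standing in for Java-style functional interfaces such as `CoGroupFunction`. It shows that under Scala `2.12` a plain lambda is accepted wherever a single-abstract-method (SAM) type is expected, which is why both the function-typed and the SAM-typed overloads of a method become candidates.

```
// Minimal sketch, assuming nothing from the Flink API: `Combiner` is a
// hypothetical SAM trait standing in for Java functional interfaces such as
// Flink's CoGroupFunction.
trait Combiner[T] {
  def combine(left: Iterator[T], right: Iterator[T]): T
}

object SamConversionExample {
  def main(args: Array[String]): Unit = {
    // Scala 2.12 compiles lambdas using the Java 8 mechanism and accepts them
    // wherever a SAM type is expected; Scala 2.11 would require an explicit
    // anonymous class here.
    val combiner: Combiner[Long] = (left, right) => left.sum + right.sum

    // Because a lambda now satisfies both Scala function types and SAM types,
    // overloaded APIs that accept either (such as the coGroup apply used in
    // the snippets above) may no longer be able to infer the lambda's
    // parameter types, hence the explicit annotations in the second snippet.
    println(combiner.combine(Iterator(1L, 2L), Iterator(3L)))  // prints 6
  }
}
```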
+
+### Removal of the legacy mode
+
+Flink no longer supports the legacy mode.
+If you depend on this, then please use Flink `1.6.x`.
+
+### Savepoints being used for recovery
+
+Savepoints are now used while recovering.
+Previously when using exactly-once sink one could get into problems with 
duplicate output data when a failure occured after a savepoint was taken but 
before the next checkpoint occured.
 
 Review comment:
   Nice release notes! Replace occured with occurred here? There are two 
occurrences in this sentence.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Extend release notes for Flink 1.7
> ----------------------------------
>
>                 Key: FLINK-10955
>                 URL: https://issues.apache.org/jira/browse/FLINK-10955
>             Project: Flink
>          Issue Type: Bug
>          Components: Documentation
>    Affects Versions: 1.7.0
>            Reporter: Till Rohrmann
>            Assignee: Till Rohrmann
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.7.1
>
>
> We should extend the release notes for Flink 1.7 to include the release notes 
> of all fixed issues with fix version {{1.7.0}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
