zentol commented on a change in pull request #7150: [FLINK-10955] Extend release notes for Apache Flink 1.7.0
URL: https://github.com/apache/flink/pull/7150#discussion_r235135547
 
 

 ##########
 File path: docs/release-notes/flink-1.7.md
 ##########
 @@ -22,6 +22,89 @@ under the License.
 
 These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.6 and Flink 1.7. Please read these notes carefully if you are planning to upgrade your Flink version to 1.7.
 
+### Scala 2.12 support
+
+When using Scala `2.12` you might have to add explicit type annotations in places where they were not required when using Scala `2.11`.
+This is an excerpt from the `TransitiveClosureNaive.scala` example in the Flink code base that shows the changes that could be required.
+
+Previous code:
+```
+val terminate = prevPaths
+ .coGroup(nextPaths)
+ .where(0).equalTo(0) {
+   (prev, next, out: Collector[(Long, Long)]) => {
+     val prevPaths = prev.toSet
+     for (n <- next)
+       if (!prevPaths.contains(n)) out.collect(n)
+   }
+}
+```
+
+With Scala `2.12` you have to change it to:
+```
+val terminate = prevPaths
+ .coGroup(nextPaths)
+ .where(0).equalTo(0) {
+   (prev: Iterator[(Long, Long)], next: Iterator[(Long, Long)], out: Collector[(Long, Long)]) => {
+       val prevPaths = prev.toSet
+       for (n <- next)
+         if (!prevPaths.contains(n)) out.collect(n)
+     }
+}
+```
+
+The reason for this is that Scala `2.12` changes how lambdas are implemented.
+They now use the SAM (single abstract method) interface support introduced in Java 8.
+This makes some method calls ambiguous because both Scala-style lambdas and SAMs are now candidates for methods where it was previously clear which method would be invoked.
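+
+The following is a minimal sketch (hypothetical types, not Flink's API) of how such an ambiguity arises; `Sink` stands in for Flink's `Collector`:
+```
+// A SAM interface, comparable to Flink's Java-style function interfaces.
+trait Sink[O] { def collect(o: O): Unit }
+trait CoGroupFn[L, R, O] {
+  def coGroup(left: java.lang.Iterable[L], right: java.lang.Iterable[R], out: Sink[O]): Unit
+}
+
+object Ops {
+  // Two overloads: a Scala function type and a SAM interface.
+  def apply[O](fun: (Iterator[Int], Iterator[Int], Sink[O]) => Unit): Unit = ()
+  def apply[O](fun: CoGroupFn[Int, Int, O]): Unit = ()
+
+  // Under Scala 2.11 only the first overload accepted a lambda, so the
+  // parameter types could be inferred. Under 2.12 the lambda is also a
+  // candidate for the SAM overload, so the types must be written out:
+  //apply((l, r, out: Sink[Int]) => ())  // may fail to compile with 2.12
+  apply((l: Iterator[Int], r: Iterator[Int], out: Sink[Int]) => ())  // OK
+}
+```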
+
+### Removal of the legacy mode
+
+Flink no longer supports the legacy mode.
+If you depend on this, then please use Flink `1.6.x`.
+
+### Savepoints being used for recovery
+
+Savepoints are now used while recovering.
+Previously, when using an exactly-once sink, one could get duplicate output data when a failure occurred after a savepoint was taken but before the next checkpoint.
+As a result, savepoints are no longer exclusively under the control of the user.
+A savepoint must not be moved or deleted as long as no newer checkpoint or savepoint has been taken.
+
+### MetricQueryService runs in separate thread pool
+
+The metric query service now runs in its own `ActorSystem`.
+Consequently, it needs to open an additional port for the query services to communicate with each other.
+This port can be configured via `metrics.internal.query-service.port` and is set to `0` by default.
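+
+For example, to restrict the service to a fixed port range in `flink-conf.yaml` (the range below is only an illustration; the default `0` lets Flink pick a random free port):
+```
+metrics.internal.query-service.port: 50100-50200
+```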
+
+### Granularity of latency metrics
+
+The default granularity for latency metrics has been changed to `operator`.
+To restore the previous behavior, users have to explicitly set `metrics.latency.granularity: subtask` in `flink-conf.yaml`.
+
+### Latency marker activation
+
+Latency metrics are now disabled by default, which affects all jobs that do not explicitly set the `latencyTrackingInterval` via `ExecutionConfig#setLatencyTrackingInterval`.
+To restore the previous default behavior, users have to configure `metrics.latency.interval` in `flink-conf.yaml`.
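+
+Alternatively, latency tracking can be re-enabled for a single job programmatically. A minimal sketch, using `2000` ms as an example interval:
+```
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
+
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+// Emit latency markers for this job; the interval is in milliseconds.
+env.getConfig.setLatencyTrackingInterval(2000L)
+```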
+
+### Relocation of Hadoop's Netty dependency
+
+We now also relocate Hadoop's Netty dependency from `io.netty` to `org.apache.flink.hadoop.shaded.io.netty`.
+You can now bundle your own version of Netty into your job but may no longer assume that `io.netty` is present in the `flink-shaded-hadoop2-uber-*.jar` file.
+
+### Local recovery fixed
+
+With the improvements to Flink's scheduling, recoveries with local recovery enabled can no longer require more slots than the job did before the failure.
+Consequently, we encourage our users to use the local recovery feature, which can be enabled via `state.backend.local-recovery: true` in `flink-conf.yaml`.
+
+### Support for multi slot TaskManagers
+
+Flink now properly supports `TaskManagers` with multiple slots.
+Consequently, `TaskManagers` can now be started with an arbitrary number of slots and it is no longer recommended to start them with a single slot.
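+
+The number of slots per `TaskManager` is configured in `flink-conf.yaml`; the value below is only an example:
+```
+taskmanager.numberOfTaskSlots: 4
+```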
+
+### StandaloneJobClusterEntrypoint generates JobGraph with fixed JobID
+
+The `StandaloneJobClusterEntrypoint` now starts all jobs with a fixed `JobID`.
 
 Review comment:
  This is a rather technical explanation; mention instead which script/command is affected by this.
