cuspymd commented on a change in pull request #4116:
URL: https://github.com/apache/zeppelin/pull/4116#discussion_r633402731
##########
File path:
zeppelin-plugins/launcher/flink/src/main/java/org/apache/zeppelin/interpreter/launcher/FlinkInterpreterLauncher.java
##########
@@ -45,25 +48,44 @@ public FlinkInterpreterLauncher(ZeppelinConfiguration zConf, RecoveryStorage rec
throws IOException {
Map<String, String> envs = super.buildEnvFromProperties(context);
- String flinkHome = updateEnvsForFlinkHome(envs, context);
-
+ String flinkHome = getFlinkHome(context);
if (!envs.containsKey("FLINK_CONF_DIR")) {
envs.put("FLINK_CONF_DIR", flinkHome + "/conf");
}
envs.put("FLINK_LIB_DIR", flinkHome + "/lib");
envs.put("FLINK_PLUGINS_DIR", flinkHome + "/plugins");
- // yarn application mode specific logic
- if ("yarn-application".equalsIgnoreCase(
- context.getProperties().getProperty("flink.execution.mode"))) {
- updateEnvsForYarnApplicationMode(envs, context);
+ String mode = context.getProperties().getProperty("flink.execution.mode");
+ String a = FLINK_EXECUTION_MODES.stream().collect(Collectors.joining(", "));
Review comment:
The variable `a` seems to be unused.
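A hedged sketch of the fix this comment implies: drop the unused `a` and build the message only where it is needed. The class name, the exception type, and the mode values stubbed into `FLINK_EXECUTION_MODES` below are illustrative, not taken from the actual launcher; `String.join` is used as a simpler equivalent of streaming with `Collectors.joining`.

```java
import java.util.Arrays;
import java.util.List;

public class ExecutionModeCheckSketch {
    // Illustrative stand-in for FLINK_EXECUTION_MODES; the real values are
    // defined in FlinkInterpreterLauncher and may differ.
    static final List<String> FLINK_EXECUTION_MODES = Arrays.asList(
        "local", "remote", "yarn", "yarn-application", "kubernetes-application");

    // With the unused `a` removed, the joined mode list is computed only
    // inside the error path, and String.join reads more directly than
    // FLINK_EXECUTION_MODES.stream().collect(Collectors.joining(", ")).
    static void checkExecutionMode(String mode) {
        if (!FLINK_EXECUTION_MODES.contains(mode)) {
            throw new IllegalArgumentException("Not a valid flink.execution.mode: " + mode
                + ", valid modes are: " + String.join(", ", FLINK_EXECUTION_MODES));
        }
    }

    public static void main(String[] args) {
        checkExecutionMode("yarn-application"); // valid mode: no exception
        try {
            checkExecutionMode("bogus");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```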
##########
File path:
zeppelin-plugins/launcher/flink/src/main/java/org/apache/zeppelin/interpreter/launcher/FlinkInterpreterLauncher.java
##########
@@ -45,25 +48,44 @@ public FlinkInterpreterLauncher(ZeppelinConfiguration zConf, RecoveryStorage rec
throws IOException {
Map<String, String> envs = super.buildEnvFromProperties(context);
- String flinkHome = updateEnvsForFlinkHome(envs, context);
-
+ String flinkHome = getFlinkHome(context);
if (!envs.containsKey("FLINK_CONF_DIR")) {
envs.put("FLINK_CONF_DIR", flinkHome + "/conf");
}
envs.put("FLINK_LIB_DIR", flinkHome + "/lib");
envs.put("FLINK_PLUGINS_DIR", flinkHome + "/plugins");
- // yarn application mode specific logic
- if ("yarn-application".equalsIgnoreCase(
- context.getProperties().getProperty("flink.execution.mode"))) {
- updateEnvsForYarnApplicationMode(envs, context);
+ String mode = context.getProperties().getProperty("flink.execution.mode");
+ String a = FLINK_EXECUTION_MODES.stream().collect(Collectors.joining(", "));
+ if (!FLINK_EXECUTION_MODES.contains(mode)) {
+ throw new IOException("Not valid flink.execution.mode: " +
+ mode + ", valid modes ares: " +
+ FLINK_EXECUTION_MODES.stream().collect(Collectors.joining(", ")));
+ }
+ if (isApplicationMode(mode)) {
+ updateEnvsForApplicationMode(mode, envs, context);
}
return envs;
}
- private String updateEnvsForFlinkHome(Map<String, String> envs,
- InterpreterLaunchContext context) throws IOException {
+ private void verifyExecutionMode(String mode) {
+
+ }
+
+ private boolean isApplicationMode(String mode) {
+ return "yarn-application".equals(mode) || "kubernetes-application".equals(mode);
Review comment:
It would be better to use the functions declared below, `isYarnApplicationMode()` and `isK8sApplicationMode()`.
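A minimal sketch of the suggested refactor, assuming the two helpers exist under the names the comment gives; the mode-string literals are taken from the diff above, while the wrapper class is purely illustrative. Composing `isApplicationMode` from the helpers keeps each literal in exactly one place:

```java
public class ApplicationModeSketch {
    // Helpers as named in the review comment; the string constants come
    // from the diff under review.
    static boolean isYarnApplicationMode(String mode) {
        return "yarn-application".equals(mode);
    }

    static boolean isK8sApplicationMode(String mode) {
        return "kubernetes-application".equals(mode);
    }

    // isApplicationMode now delegates to the mode-specific helpers instead
    // of repeating the literals inline.
    static boolean isApplicationMode(String mode) {
        return isYarnApplicationMode(mode) || isK8sApplicationMode(mode);
    }

    public static void main(String[] args) {
        System.out.println(isApplicationMode("yarn-application"));
        System.out.println(isApplicationMode("kubernetes-application"));
        System.out.println(isApplicationMode("local"));
    }
}
```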
##########
File path:
zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreterServer.java
##########
@@ -142,6 +141,7 @@
private ScheduledExecutorService resultCleanService =
Executors.newSingleThreadScheduledExecutor();
private boolean isTest;
+ private boolean isFlinkK8sApplicationMode = false;
Review comment:
`RemoteInterpreterServer` is a high-level object that treats all
`Interpreter`s identically. It seems a little odd that this object needs to
know the execution mode of one particular interpreter.
##########
File path:
flink/interpreter/src/main/scala/org/apache/zeppelin/flink/FlinkScalaInterpreter.scala
##########
@@ -312,9 +323,12 @@ class FlinkScalaInterpreter(val properties: Properties) {
// remote mode
if (mode == ExecutionMode.YARN_APPLICATION) {
val yarnAppId = System.getenv("_APP_ID");
- LOGGER.info("Use FlinkCluster in yarn application mode, appId: {}", yarnAppId)
+ LOGGER.info("Use FlinkCluster in yarn-application mode, appId: {}", yarnAppId)
this.jmWebUrl = "http://localhost:" +
HadoopUtils.getFlinkRestPort(yarnAppId)
this.displayedJMWebUrl =
HadoopUtils.getYarnAppTrackingUrl(yarnAppId)
+ } else if (mode == ExecutionMode.KUBERNETES_APPLICATION) {
+ LOGGER.info("Use FlinkCluster in kubernetes-application mode")
+ this.jmWebUrl = "http://localhost:8083"
} else {
Review comment:
The block expression starting from line 286 is too long, making the
function less readable. It would be nice to extract it into a separate
function.
##########
File path:
zeppelin-plugins/launcher/flink/src/main/java/org/apache/zeppelin/interpreter/launcher/FlinkInterpreterLauncher.java
##########
@@ -45,25 +48,44 @@ public FlinkInterpreterLauncher(ZeppelinConfiguration zConf, RecoveryStorage rec
throws IOException {
Map<String, String> envs = super.buildEnvFromProperties(context);
- String flinkHome = updateEnvsForFlinkHome(envs, context);
-
+ String flinkHome = getFlinkHome(context);
if (!envs.containsKey("FLINK_CONF_DIR")) {
envs.put("FLINK_CONF_DIR", flinkHome + "/conf");
}
envs.put("FLINK_LIB_DIR", flinkHome + "/lib");
envs.put("FLINK_PLUGINS_DIR", flinkHome + "/plugins");
- // yarn application mode specific logic
- if ("yarn-application".equalsIgnoreCase(
- context.getProperties().getProperty("flink.execution.mode"))) {
- updateEnvsForYarnApplicationMode(envs, context);
+ String mode = context.getProperties().getProperty("flink.execution.mode");
+ String a = FLINK_EXECUTION_MODES.stream().collect(Collectors.joining(", "));
+ if (!FLINK_EXECUTION_MODES.contains(mode)) {
+ throw new IOException("Not valid flink.execution.mode: " +
+ mode + ", valid modes ares: " +
+ FLINK_EXECUTION_MODES.stream().collect(Collectors.joining(", ")));
+ }
+ if (isApplicationMode(mode)) {
+ updateEnvsForApplicationMode(mode, envs, context);
}
return envs;
}
- private String updateEnvsForFlinkHome(Map<String, String> envs,
- InterpreterLaunchContext context) throws IOException {
+ private void verifyExecutionMode(String mode) {
+
+ }
Review comment:
It seems to be unused.
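One way to resolve this comment, sketched under the assumption that the empty stub was intended to hold the validation currently inlined in `buildEnvFromProperties`: move the check into `verifyExecutionMode` so the caller collapses to a single call (the alternative is simply deleting the stub). The class name and the values in `FLINK_EXECUTION_MODES` below are illustrative stand-ins, not the launcher's actual constants.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class VerifyExecutionModeSketch {
    // Illustrative stand-in for the launcher's FLINK_EXECUTION_MODES constant.
    static final List<String> FLINK_EXECUTION_MODES = Arrays.asList(
        "local", "remote", "yarn", "yarn-application", "kubernetes-application");

    // The previously empty stub now owns the validation, mirroring the
    // IOException the inline check in the diff throws.
    static void verifyExecutionMode(String mode) throws IOException {
        if (!FLINK_EXECUTION_MODES.contains(mode)) {
            throw new IOException("Not a valid flink.execution.mode: " + mode
                + ", valid modes are: " + String.join(", ", FLINK_EXECUTION_MODES));
        }
    }

    public static void main(String[] args) throws IOException {
        // In buildEnvFromProperties the inline check would become just:
        //   verifyExecutionMode(mode);
        verifyExecutionMode("kubernetes-application"); // valid: no exception
        try {
            verifyExecutionMode("standalone-oops");
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```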
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]