xintongsong commented on a change in pull request #11323: [FLINK-16439][k8s] Make KubernetesResourceManager starts workers using WorkerResourceSpec requested by SlotManager
URL: https://github.com/apache/flink/pull/11323#discussion_r410006159
 
 

 ##########
 File path: flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManager.java
 ##########
 @@ -237,57 +230,73 @@ private void recoverWorkerNodesFromPreviousAttempts() throws ResourceManagerExce
 			++currentMaxAttemptId);
 	}
 
-	private void requestKubernetesPod() {
-		numPendingPodRequests++;
+	private void requestKubernetesPod(WorkerResourceSpec workerResourceSpec) {
+		final KubernetesTaskManagerParameters parameters =
+			createKubernetesTaskManagerParameters(workerResourceSpec);
+
+		final KubernetesPod taskManagerPod =
+			KubernetesTaskManagerFactory.createTaskManagerComponent(parameters);
+		kubeClient.createTaskManagerPod(taskManagerPod);
+
+		podWorkerResources.put(parameters.getPodName(), workerResourceSpec);
+		final int pendingWorkerNum = notifyNewWorkerRequested(workerResourceSpec);
 
 		log.info("Requesting new TaskManager pod with <{},{}>. Number pending requests {}.",
-			defaultMemoryMB,
-			defaultCpus,
-			numPendingPodRequests);
+			parameters.getTaskManagerMemoryMB(),
+			parameters.getTaskManagerCPU(),
+			pendingWorkerNum);
+		log.info("TaskManager {} will be started with {}.", parameters.getPodName(), workerResourceSpec);
+	}
+
+	private KubernetesTaskManagerParameters createKubernetesTaskManagerParameters(WorkerResourceSpec workerResourceSpec) {
+		final TaskExecutorProcessSpec taskExecutorProcessSpec =
+			TaskExecutorProcessUtils.processSpecFromWorkerResourceSpec(flinkConfig, workerResourceSpec);
 
 		final String podName = String.format(
 			TASK_MANAGER_POD_FORMAT,
 			clusterId,
 			currentMaxAttemptId,
 			++currentMaxPodId);
 
+		final ContaineredTaskManagerParameters taskManagerParameters =
+			ContaineredTaskManagerParameters.create(flinkConfig, taskExecutorProcessSpec);
+
 		final String dynamicProperties =
 			BootstrapTools.getDynamicPropertiesAsString(flinkClientConfig, flinkConfig);
 
-		final KubernetesTaskManagerParameters kubernetesTaskManagerParameters = new KubernetesTaskManagerParameters(
+		return new KubernetesTaskManagerParameters(
 			flinkConfig,
 			podName,
 			dynamicProperties,
 			taskManagerParameters);
-
-		final KubernetesPod taskManagerPod =
-			KubernetesTaskManagerFactory.createTaskManagerComponent(kubernetesTaskManagerParameters);
-
-		log.info("TaskManager {} will be started with {}.", podName, taskExecutorProcessSpec);
-		kubeClient.createTaskManagerPod(taskManagerPod);
 	}
 
 	/**
 	 * Request new pod if pending pods cannot satisfy pending slot requests.
 	 */
-	private void requestKubernetesPodIfRequired() {
-		final int requiredTaskManagers = getNumberRequiredTaskManagers();
+	private void requestKubernetesPodIfRequired(WorkerResourceSpec workerResourceSpec) {
+		final int pendingWorkerNum = getNumPendingWorkersFor(workerResourceSpec);
+		int requiredTaskManagers = getRequiredResources().get(workerResourceSpec);
 
-		while (requiredTaskManagers > numPendingPodRequests) {
-			requestKubernetesPod();
+		while (requiredTaskManagers-- > pendingWorkerNum) {
+			requestKubernetesPod(workerResourceSpec);
 		}
 	}
 
 	private void removePodIfTerminated(KubernetesPod pod) {
 		if (pod.isTerminated()) {
 			kubeClient.stopPod(pod.getName());
 
 Review comment:
   It seems there are two issues here.
   1. `numPendingPodRequests` / `PendingWorkerCounter` can go out of sync if a pod is stopped in `onError` before `onAdded` has been called (see the counter sketch after this list).
   2. Whether we should remove the pod at all when `onError` is called.
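   For reference, a minimal sketch of the kind of per-spec bookkeeping involved; the names here are illustrative only, not the actual `PendingWorkerCounter`:

```java
// Illustrative sketch of a per-spec pending counter; not the real class.
// It shows the bookkeeping that drifts if a decrease is skipped on the
// onError path: the count stays too high forever.
import java.util.HashMap;
import java.util.Map;

final class PendingCounterSketch<K> {
	private final Map<K, Integer> counts = new HashMap<>();

	/** Increases the pending count for the given spec; returns the new count. */
	int increaseAndGet(K spec) {
		return counts.merge(spec, 1, Integer::sum);
	}

	/** Decreases the pending count, removing the entry at zero; returns the new count. */
	int decreaseAndGet(K spec) {
		final Integer updated = counts.compute(
			spec, (k, v) -> (v == null || v <= 1) ? null : v - 1);
		return updated == null ? 0 : updated;
	}

	/** Current pending count for the given spec, 0 if absent. */
	int getNum(K spec) {
		return counts.getOrDefault(spec, 0);
	}
}
```

   If a stopped pod was counted via `increaseAndGet` but never sees a matching `decreaseAndGet`, `getNum` stays too high and `requestKubernetesPodIfRequired` will request fewer replacement pods than actually needed.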
   
   I would suggest the following.
   - Try to resolve (1) in this PR, keeping the current behavior that the pod is removed when `onError` is called. We can check whether the pod was requested in the current attempt by looking it up in `podWorkerResources`, and whether `onAdded` has already been called by looking it up in `workerNodes`. If the pod was requested in the current attempt and `onAdded` has not been called yet, then we should decrease the pending count (see the sketch after this list).
   - Leave (2) to be discussed and resolved in FLINK-17177. We may or may not need to remove the pod when `onError` is called. Moreover, if we remove the pod and decrease the pending count, should we request another new pod in its place? To answer these questions, I think we need a better understanding of the circumstances under which an `ERROR` is received.
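   For (1), a rough sketch of the check I have in mind; the helper name `notifyNewWorkerAllocationFailed` and the exact shape are assumptions, not final code:

```java
// Rough sketch only. Assumes podWorkerResources maps pod names requested in
// the current attempt to their WorkerResourceSpec, and workerNodes is keyed
// by the pod's ResourceID once onAdded has been called.
private void removePodIfTerminated(KubernetesPod pod) {
	if (pod.isTerminated()) {
		final WorkerResourceSpec workerResourceSpec = podWorkerResources.remove(pod.getName());
		final boolean requestedInCurrentAttempt = workerResourceSpec != null;
		final boolean onAddedCalled = workerNodes.containsKey(new ResourceID(pod.getName()));

		if (requestedInCurrentAttempt && !onAddedCalled) {
			// The pod is still counted as pending, so stopping it here must
			// also decrease the pending count to keep the counter in sync.
			notifyNewWorkerAllocationFailed(workerResourceSpec); // assumed helper
		}

		kubeClient.stopPod(pod.getName());
	}
}
```

   Whether we should then call `requestKubernetesPodIfRequired` again to replace the failed pod is exactly the open question in (2).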
   
   WDYT? @wangyang0918 @zhengcanbin @tillrohrmann 
