caozhen1937 commented on a change in pull request #12296:
URL: https://github.com/apache/flink/pull/12296#discussion_r429616913



##########
File path: docs/ops/deployment/native_kubernetes.zh.md
##########
@@ -193,66 +191,66 @@ $ ./bin/flink run-application -p 8 -t kubernetes-application \
   local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
 
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+Note: Only "local" is supported as the schema for application mode. It is assumed that the jar is located in the image, not in the Flink client.
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+Note: All the jars under the "$FLINK_HOME/usrlib" directory of the image will be added to the user classpath.
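
A minimal sketch of how such an image might be built (the base image tag, jar name, and image tag below are placeholders, not taken from the docs under review):

{% highlight bash %}
# Hypothetical Dockerfile: copy the job jar into /opt/flink/usrlib so it ends up
# on the user classpath and can be referenced as local:///opt/flink/usrlib/my-flink-job.jar
$ cat > Dockerfile <<'EOF'
FROM flink:latest
RUN mkdir -p /opt/flink/usrlib
COPY my-flink-job.jar /opt/flink/usrlib/my-flink-job.jar
EOF
$ docker build -t my-flink-job-image .
{% endhighlight %}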
 
-### Stop Flink Application
+### Stop Flink Application
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+When an application is stopped, all Flink cluster resources are automatically destroyed.
+As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, when they complete.
 
 {% highlight bash %}
 $ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
 {% endhighlight %}
 
-## Kubernetes concepts
+## Kubernetes Concepts
 
-### Namespaces
+### Namespaces
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quotas).
+It is similar to the queue concept in a Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
+When starting a Flink cluster, the namespace can be specified with the `-Dkubernetes.namespace=default` argument.
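
A brief usage sketch combining the namespace option with the application-mode command shown above (the namespace name and cluster id are placeholders):

{% highlight bash %}
$ ./bin/flink run-application -p 8 -t kubernetes-application \
    -Dkubernetes.namespace=flink-test \
    -Dkubernetes.cluster-id=my-application-cluster \
    local:///opt/flink/usrlib/my-flink-job.jar
{% endhighlight %}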
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[Resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provide constraints that limit the aggregate resource consumption per namespace.
+They can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
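
As an illustrative sketch (the quota name, limits, and namespace below are placeholders, not part of the diff), such a quota could be created with:

{% highlight bash %}
# Cap CPU, memory, and pod count for the namespace used by the Flink cluster
$ kubectl create quota flink-quota \
    --hard=cpu=10,memory=32Gi,pods=20 \
    --namespace=flink-test
{% endhighlight %}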
 
-### RBAC
+### Role-Based Access Control
 
-Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by Flink JobManager to access the Kubernetes API server within the Kubernetes cluster.
+Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
+Users can configure the RBAC roles and service accounts that the Flink JobManager uses to access the Kubernetes API server within the Kubernetes cluster.
 
-Every namespace has a default service account, however, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
-Users may need to update the permission of `default` service account or specify another service account that has the right role bound.
+Every namespace has a default service account, but the `default` service account may not have permission to create or delete pods within the Kubernetes cluster.
+Users may need to update the permissions of the `default` service account or specify another service account that has the right role bound.
 
 {% highlight bash %}
 $ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit --serviceaccount=default:default
 {% endhighlight %}
 
-If you do not want to use `default` service account, use the following command to create a new `flink` service account and set the role binding.
-Then use the config option `-Dkubernetes.jobmanager.service-account=flink` to make the JobManager pod using the `flink` service account to create and delete TaskManager pods.
+If you do not want to use the `default` service account, use the following commands to create a new `flink` service account and set the role binding.
+Then use the config option `-Dkubernetes.jobmanager.service-account=flink` to make the JobManager pod use the `flink` service account to create and delete TaskManager pods.
 
 {% highlight bash %}
 $ kubectl create serviceaccount flink
 $ kubectl create clusterrolebinding flink-role-binding-flink --clusterrole=edit --serviceaccount=default:flink
 {% endhighlight %}
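
A short usage sketch of the config option mentioned above (the cluster id is a placeholder, not taken from the docs under review):

{% highlight bash %}
$ ./bin/flink run-application -t kubernetes-application \
    -Dkubernetes.jobmanager.service-account=flink \
    -Dkubernetes.cluster-id=my-application-cluster \
    local:///opt/flink/usrlib/my-flink-job.jar
{% endhighlight %}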
 
-Please reference the official Kubernetes documentation on [RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for more information.
+For more information, please refer to the official Kubernetes documentation on [RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).
 
-## Background / Internals
+## Background / Internals
 
-This section briefly explains how Flink and Kubernetes interact.
+This section briefly explains how Flink and Kubernetes interact.
 
 <img src="{{ site.baseurl }}/fig/FlinkOnK8s.svg" class="img-responsive">
 
-When creating a Flink Kubernetes session cluster, the Flink client will first connect to the Kubernetes ApiServer to submit the cluster description, including ConfigMap spec, Job Manager Service spec, Job Manager Deployment spec and Owner Reference.
-Kubernetes will then create the Flink master deployment, during which time the Kubelet will pull the image, prepare and mount the volume, and then execute the start command.
-After the master pod has launched, the Dispatcher and KubernetesResourceManager are available and the cluster is ready to accept one or more jobs.
+When creating a Flink Kubernetes session cluster, the Flink client will first connect to the Kubernetes ApiServer to submit the cluster description, including the ConfigMap spec, Job Manager Service spec, Job Manager Deployment spec and Owner Reference.
+Kubernetes will then create the deployment for the Flink master, during which time the Kubelet will pull the image, prepare and mount the volume, and then execute the start command.
+After the master pod has launched, the Dispatcher and KubernetesResourceManager are available and the cluster is ready to accept one or more jobs.
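
As an illustrative check (not part of the docs under review), the resources created during this process can be listed with standard kubectl commands, filtering by the cluster id placeholder already used above:

{% highlight bash %}
$ kubectl get deployments,services,configmaps | grep <ClusterID>
$ kubectl get pods | grep <ClusterID>
{% endhighlight %}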

Review comment:
       Agreed




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

