Hi Chenyu,

First of all, there are two different ways of deploying Flink on Kubernetes.
- Standalone Kubernetes [1], which uses yaml files to deploy a Flink
Standalone cluster on Kubernetes.
- Native Kubernetes [2], in which the Flink ResourceManager interacts with
the Kubernetes API server and allocates resources dynamically.
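
Roughly, the two modes are driven very differently (illustrative commands only; the yaml file names follow the standalone deployment docs [1], and `my-session` is a placeholder cluster id):

```sh
# Standalone Kubernetes: you create every resource yourself from yaml files
kubectl create -f flink-configuration-configmap.yaml
kubectl create -f jobmanager-service.yaml
kubectl create -f jobmanager-session-deployment.yaml
kubectl create -f taskmanager-session-deployment.yaml

# Native Kubernetes: Flink itself talks to the API server and creates the pods
./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-session
```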

From what you've described, it seems to me you are using the standalone
Kubernetes deployment. The code you found is for the native Kubernetes
deployment, and should have no effect in your case.

Here is an example of how to mount flink-conf.yaml and log4j-console.properties
in a session cluster [3]. Please be aware that in the standalone Kubernetes
deployment, Flink looks for log4j-console.properties instead of
log4j.properties. By default, this writes the logs to stdout, so that
they can be viewed with the `kubectl logs` command.
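
In your case, the fix should roughly amount to renaming the ConfigMap key and mounting it under that name. A minimal sketch (the ConfigMap and volume names are taken from your manifests; the logger settings are illustrative, not a recommended configuration):

```yaml
# ConfigMap: the key must be log4j-console.properties for standalone deployments
apiVersion: v1
kind: ConfigMap
metadata:
  name: session-cluster-test-flink-config
  namespace: test123
data:
  log4j-console.properties: |-
    # Log to a console appender so `kubectl logs` can pick the output up
    rootLogger.level = INFO
    rootLogger.appenderRef.console.ref = ConsoleAppender
    appender.console.name = ConsoleAppender
    appender.console.type = CONSOLE
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
---
# In the pod spec, mount the key under the same file name
volumes:
- name: flink-config-volume
  configMap:
    name: session-cluster-test-flink-config
    items:
    - key: flink-conf.yaml
      path: flink-conf.yaml
    - key: log4j-console.properties
      path: log4j-console.properties
```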

Thank you~

Xintong Song


[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/standalone/kubernetes/
[2]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/
[3]
https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/resource-providers/standalone/kubernetes.html#session-cluster-resource-definitions

On Sun, Jun 20, 2021 at 10:58 AM Chenyu Zheng <chenyu.zh...@hulu.com> wrote:

> Hi contributors!
>
> I’m trying to set up Flink v1.12.2 in Kubernetes session mode, but I found
> that I cannot mount the log4j.properties in a ConfigMap into the jobmanager
> container. Is this expected behavior? Could you share some ways to
> mount log4j.properties into my container?
>
> My yaml:
>
>
>
> apiVersion: v1
> data:
>   flink-conf.yaml: |-
>     taskmanager.numberOfTaskSlots: 1
>     blob.server.port: 6124
>     kubernetes.rest-service.exposed.type: ClusterIP
>     kubernetes.jobmanager.cpu: 1.00
>     high-availability.storageDir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/ha-backup/
>     queryable-state.proxy.ports: 6125
>     kubernetes.service-account: stream-app
>     high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
>     jobmanager.memory.process.size: 1024m
>     taskmanager.memory.process.size: 1024m
>     kubernetes.taskmanager.annotations: cluster-autoscaler.kubernetes.io/safe-to-evict:false
>     kubernetes.namespace: test123
>     restart-strategy: fixed-delay
>     restart-strategy.fixed-delay.attempts: 5
>     kubernetes.taskmanager.cpu: 1.00
>     state.backend: filesystem
>     parallelism.default: 4
>     kubernetes.container.image: cubox.prod.hulu.com/proxy/flink:1.12.2-scala_2.11-java8-stdout7
>     kubernetes.taskmanager.labels: capos_id:session-cluster-test,stream-component:jobmanager
>     state.checkpoints.dir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/checkpoints/
>     kubernetes.cluster-id: session-cluster-test
>     kubernetes.jobmanager.annotations: cluster-autoscaler.kubernetes.io/safe-to-evict:false
>     state.savepoints.dir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/savepoints/
>     restart-strategy.fixed-delay.delay: 15s
>     taskmanager.rpc.port: 6122
>     jobmanager.rpc.address: session-cluster-test-flink-jobmanager
>     kubernetes.jobmanager.labels: capos_id:session-cluster-test,stream-component:jobmanager
>     jobmanager.rpc.port: 6123
>   log4j.properties: |-
>     logger.kafka.name = org.apache.kafka
>     logger.hadoop.level = INFO
>     appender.rolling.type = RollingFile
>     appender.rolling.filePattern = ${sys:log.file}.%i
>     appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
>     logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
>     rootLogger = INFO, rolling
>     logger.akka.name = akka
>     appender.rolling.strategy.type = DefaultRolloverStrategy
>     logger.akka.level = INFO
>     appender.rolling.append = false
>     logger.hadoop.name = org.apache.hadoop
>     appender.rolling.fileName = ${sys:log.file}
>     appender.rolling.policies.type = Policies
>     rootLogger.appenderRef.rolling.ref = RollingFileAppender
>     logger.kafka.level = INFO
>     appender.rolling.name = RollingFileAppender
>     appender.rolling.layout.type = PatternLayout
>     appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
>     appender.rolling.policies.size.size = 100MB
>     appender.rolling.strategy.max = 10
>     logger.netty.level = OFF
>     logger.zookeeper.name = org.apache.zookeeper
>     logger.zookeeper.level = INFO
> kind: ConfigMap
> metadata:
>   labels:
>     app: session-cluster-test
>     capos_id: session-cluster-test
>   name: session-cluster-test-flink-config
>   namespace: test123
>
>
>
> ---
>
> apiVersion: batch/v1
> kind: Job
> metadata:
>   labels:
>     capos_id: session-cluster-test
>   name: session-cluster-test-flink-startup
>   namespace: test123
> spec:
>   backoffLimit: 6
>   completions: 1
>   parallelism: 1
>   template:
>     metadata:
>       annotations:
>         caposv2.prod.hulu.com/streamAppSavepointId: "0"
>         cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
>       creationTimestamp: null
>       labels:
>         capos_id: session-cluster-test
>         stream-component: start-up
>     spec:
>       containers:
>       - command:
>         - ./bin/kubernetes-session.sh
>         - -Dkubernetes.cluster-id=session-cluster-test
>         image: cubox.prod.hulu.com/proxy/flink:1.12.2-scala_2.11-java8-stdout7
>         imagePullPolicy: IfNotPresent
>         name: flink-startup
>         resources: {}
>         securityContext:
>           runAsUser: 9999
>         terminationMessagePath: /dev/termination-log
>         terminationMessagePolicy: File
>         volumeMounts:
>         - mountPath: /opt/flink/conf
>           name: flink-config-volume
>       dnsPolicy: ClusterFirst
>       restartPolicy: Never
>       schedulerName: default-scheduler
>       securityContext: {}
>       serviceAccount: stream-app
>       serviceAccountName: stream-app
>       terminationGracePeriodSeconds: 30
>       volumes:
>       - configMap:
>           defaultMode: 420
>           items:
>           - key: flink-conf.yaml
>             path: flink-conf.yaml
>           - key: log4j.properties
>             path: log4j.properties
>           name: session-cluster-test-flink-config
>         name: flink-config-volume
>   ttlSecondsAfterFinished: 86400
>
>
>
> I cannot see log4j.properties in the jobmanager container's volume mount.
>
> volumes:
>   - configMap:
>       defaultMode: 420
>       items:
>       - key: flink-conf.yaml
>         path: flink-conf.yaml
>       name: flink-config-session-cluster-test
>     name: flink-config-volume
>
>
>
> And there is no log config file in the jobmanager container.
>
> root@session-cluster-test-689b595f8f-dg4h6:/opt/flink# ls -l $FLINK_HOME/conf/
> total 0
> lrwxrwxrwx 1 root root 22 Jun 19 09:23 flink-conf.yaml -> ..data/flink-conf.yaml
>
>
>
> After diving into the Flink source code, I found the root cause could
> be here:
>
>
> https://github.com/apache/flink/blob/release-1.13.1/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/FlinkConfMountDecorator.java#L104
>
> It only adds flink-conf.yaml to the container volume mount.
>
>
>
> Could you please give me some guide or support? Thanks so much!
>
>
>
> BRs.
>
> Chenyu Zheng
>
>
>
