[ https://issues.apache.org/jira/browse/FLINK-27130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17520334#comment-17520334 ]
Adrian Zhong edited comment on FLINK-27130 at 4/11/22 7:00 AM:
---------------------------------------------------------------

[~wangyang0918] Thanks for replying.

I have tried '-m yarn-cluster' with -yD env.java.opts.client="-Dkafka.start_from_timestamp=1648828800000"; the output is:
{code:java}
-Dkafka.start_from_timestamp Not found{code}
If I instead read it from the environment configuration:
{code:java}
String s = executionEnv.getConfiguration().get(CoreOptions.FLINK_CLI_JVM_OPTIONS);
System.err.println("FLINK_CLI_JVM_OPTIONS" + s);
{code}
the output is:
{code:java}
FLINK_CLI_JVM_OPTIONS-Dkafka.start_from_timestamp=1648828800000 //Environment works!
-Dkafka.start_from_timestamp Not found //system properties still do not work!{code}
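Since the value clearly survives as far as the execution configuration, one possible client-side workaround is to copy the -D entries from env.java.opts.client into the JVM's system properties at the top of main(). This is only a minimal sketch, assuming the same getConfiguration()/CoreOptions access shown above; the helper class, method name, and parsing are mine, not part of Flink:
{code:java}
import org.apache.flink.configuration.CoreOptions;
import org.apache.flink.configuration.ReadableConfig;

public final class ClientJvmOptsWorkaround {
    /** Copies -Dkey=value tokens from env.java.opts.client into System properties (sketch only). */
    public static void applyClientJvmOpts(ReadableConfig config) {
        String jvmOpts = config.get(CoreOptions.FLINK_CLI_JVM_OPTIONS);
        if (jvmOpts == null || jvmOpts.trim().isEmpty()) {
            return;
        }
        for (String token : jvmOpts.trim().split("\\s+")) {
            if (token.startsWith("-D") && token.contains("=")) {
                String[] kv = token.substring(2).split("=", 2);
                System.setProperty(kv[0], kv[1]); // e.g. kafka.start_from_timestamp=1648828800000
            }
        }
    }
}
{code}
Called as ClientJvmOptsWorkaround.applyClientJvmOpts(executionEnv.getConfiguration()) before the System.getProperty(...) lookup, this would only help in the client JVM where main() runs, not in the JobManager or TaskManager JVMs.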
> Unable to pass custom System properties through the command line
> -----------------------------------------------------------------
>
>             Key: FLINK-27130
>             URL: https://issues.apache.org/jira/browse/FLINK-27130
>         Project: Flink
>      Issue Type: Bug
>      Components: Client / Job Submission
> Affects Versions: 1.13.0, 1.13.6
>        Reporter: Adrian Zhong
>        Priority: Major
>
> I'm using Flink's YARN per-job mode to submit a job.
> I'm wondering what is wrong and what is preventing the job class from reading system properties specified +through the command line+, which seem to be per-JVM.
>
> I have searched all related issues and read the unit tests for CliFrontend and DynamicProperties; however, I can't figure it out.
>
> Here is my job class:
> {code:java}
> public static void main(String[] args) {
>     String property = System.getProperty("kafka.start_from_timestamp");
>     if (property == null) {
>         // -Dkafka.start_from_timestamp=1648828800000
>         System.err.println("-Dkafka.start_from_timestamp Not found");
>         System.err.println("These are Properties Found in this JVM:");
>         System.err.println(System.getProperties().stringPropertyNames());
>     } else {
>         System.err.println("-Dkafka.start_from_timestamp is " + property);
>     } // ....
> } {code}
> outputs:
> {code:java}
> -Dkafka.start_from_timestamp Not found
> These are Properties Found in this JVM:
> [zookeeper.sasl.client, java.runtime.name, sun.boot.library.path, java.vm.version, java.vm.vendor, java.vendor.url, path.separator, java.vm.name, file.encoding.pkg, user.country, sun.java.launcher, sun.os.patch.level, java.vm.specification.name, user.dir, java.runtime.version, java.awt.graphicsenv, java.endorsed.dirs, os.arch, java.io.tmpdir, line.separator, java.vm.specification.vendor, os.name, log4j.configuration, sun.jnu.encoding, java.library.path, java.specification.name, java.class.version, sun.management.compiler, os.version, user.home, user.timezone, java.awt.printerjob, file.encoding, java.specification.version, log4j.configurationFile, user.name, java.class.path, log.file, java.vm.specification.version, sun.arch.data.model, java.home, sun.java.command, java.specification.vendor, user.language, awt.toolkit, java.vm.info, java.version, java.ext.dirs, sun.boot.class.path, java.vendor, logback.configurationFile, java.security.auth.login.config, file.separator, java.vendor.url.bug, sun.cpu.endian, sun.io.unicode.encoding, sun.cpu.isalist] {code}
> Environment:
> JDK: Oracle 1.8 (25.121-b13)
> Flink: 1.13.0
>
> What I have tried:
> {code:java}
> -Denv.java.opts.client="-Dkafka.start_from_timestamp=1648828800000"
> -Denv.java.opts="-Dkafka.start_from_timestamp=1648828800001"
> -Dkafka.start_from_timestamp=1648828800002
> -yD env.java.opts="kafka.start_from_timestamp=1648828800003" {code}
> Submit command:
> {code:java}
> bin/flink run -yarnjobManagerMemory 1G --yarntaskManagerMemory 1G --yarnqueue root.users.appuser --yarnslots 1 --yarnname SocketWindowWordCount -m yarn-cluster --class com.slankka.learn.rtc.SocketWindowWordCount -Denv.java.opts="-Dkafka.start_from_timestamp=1648828800001" -Dkafka.start_from_timestamp=1648828800002 -yD env.java.opts="kafka.start_from_timestamp=1648828800003" -d /data/files_upload/socketWindowWordCount.jar -hostname 10.11.159.156 --port 7890 {code}
> Another approach:
> When I put the JVM args into flink-conf.yaml, it works.
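Since the command-line -D values do not reach main() as system properties here, a sketch of an alternative that avoids JVM system properties altogether: pass the timestamp as a program argument after the jar and read it with Flink's ParameterTool. The argument name and placeholder default below are illustrative, not taken from the report.
{code:java}
// Sketch only: read the timestamp from program arguments instead of a system property.
// Assumed invocation (argument name is illustrative):
//   bin/flink run ... socketWindowWordCount.jar --kafka.start_from_timestamp 1648828800000
import org.apache.flink.api.java.utils.ParameterTool;

public class SocketWindowWordCount {
    public static void main(String[] args) {
        ParameterTool params = ParameterTool.fromArgs(args);
        // -1 is a placeholder default meaning "not provided"
        long startFrom = params.getLong("kafka.start_from_timestamp", -1L);
        if (startFrom < 0) {
            System.err.println("--kafka.start_from_timestamp not provided");
        } else {
            System.err.println("kafka.start_from_timestamp is " + startFrom);
        }
        // ... build the job as before, using startFrom for the Kafka start position
    }
}
{code}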