You can set the flink-conf.yaml options "jobmanager.rpc.address" and
"jobmanager.rpc.port" before running the program, or take a look at
RemoteStreamEnvironment, which lets you configure the host and port
programmatically.
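
A RemoteStreamEnvironment is usually obtained through the factory method on
StreamExecutionEnvironment. A minimal sketch, assuming a cluster is already
running; the host, port, and jar path are placeholders you must replace with
your own values:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Connect to a JobManager running on a different machine.
// "<jobmanager-host>", 6123, and the jar path are placeholders for your setup;
// the jar file is shipped to the cluster so it can load your job's classes.
StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
        "<jobmanager-host>", 6123, "/path/to/your-job.jar");
```

For the batch API the analogous call is
ExecutionEnvironment.createRemoteEnvironment(host, port, jarFiles...).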

Best,
tison.


Som Lima <somplastic...@gmail.com> 于2020年4月19日周日 下午5:58写道:

> Hi,
>
> After running
>
> $ ./bin/start-cluster.sh
>
> The following line of code defaults the jobmanager to localhost:6123
>
> final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>
> which is the same as in Spark:
>
> val spark =
> SparkSession.builder.master("local[*]").appName("anapp").getOrCreate
>
> However, if I wish to run the servers on a different physical computer,
> then in Spark I can do it this way, using the Spark URI in my IDE:
>
> val conf =
> new SparkConf().setMaster("spark://<hostip>:<port>").setAppName("anapp")
>
> Can you please tell me the equivalent change to make, so I can run my
> servers and my IDE on different physical computers?
