Glad to hear that.

Som Lima <somplastic...@gmail.com> wrote on Mon, Apr 20, 2020 at 8:08 AM:

> I will, thanks. Once I had it set up and working,
> I switched my computers around from client to server and server to client.
> With your excellent instructions I was able to do it in 5 minutes.
>
> On Mon, 20 Apr 2020, 00:05 Jeff Zhang, <zjf...@gmail.com> wrote:
>
>> Som, let us know if you run into any problems.
>>
>> Som Lima <somplastic...@gmail.com> wrote on Mon, Apr 20, 2020 at 2:31 AM:
>>
>>> Thanks for the info and links.
>>>
>>> I had a lot of problems; I am not sure what I was doing wrong.
>>>
>>> Maybe there were conflicts with my Apache Spark setup. I think I may need
>>> to set up separate users for each development environment.
>>>
>>>
>>> Anyway, I kept doing fresh installs, about four altogether I think.
>>>
>>> Everything works fine now,
>>> including remote access to Zeppelin from machines across the local
>>> area network.
>>>
>>> Next step: set up remote clusters.
>>> Wish me luck!
>>>
>>> On Sun, 19 Apr 2020, 14:58 Jeff Zhang, <zjf...@gmail.com> wrote:
>>>
>>>> Hi Som,
>>>>
>>>> You can take a look at Flink on Zeppelin. In Zeppelin you can connect
>>>> to a remote Flink cluster with a few configuration settings, and you don't
>>>> need to worry about the jars; the Flink interpreter will ship the necessary
>>>> jars for you. Here's a list of tutorials:
>>>>
>>>> 1) Get started: https://link.medium.com/oppqD6dIg5
>>>> 2) Batch: https://link.medium.com/3qumbwRIg5
>>>> 3) Streaming: https://link.medium.com/RBHa2lTIg5
>>>> 4) Advanced usage: https://link.medium.com/CAekyoXIg5
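The "few configuration settings" mentioned above map onto the Zeppelin Flink interpreter's properties. A minimal sketch (property names as documented for the Zeppelin Flink interpreter; the host, port, and path values are examples, not from this thread):

```properties
# Zeppelin interpreter settings (Interpreter menu -> flink) for a remote cluster
flink.execution.mode=remote                # submit to an existing cluster instead of a local one
flink.execution.remote.host=192.168.1.10   # JobManager host (example value)
flink.execution.remote.port=8081           # JobManager REST port (example value)
FLINK_HOME=/opt/flink                      # local Flink distribution the interpreter uses (example path)
```

With these set, notebook paragraphs run against the remote cluster, and the interpreter ships the required jars at submission time.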
>>>>
>>>>
>>>> Zahid Rahman <zahidr1...@gmail.com> wrote on Sun, Apr 19, 2020 at 7:27 PM:
>>>>
>>>>> Hi Tison,
>>>>>
>>>>> I think I may have found what I want in example 22.
>>>>>
>>>>> https://www.programcreek.com/java-api-examples/?api=org.apache.flink.configuration.Configuration
>>>>>
>>>>> I need to create a Configuration object first, as shown.
>>>>>
>>>>> Also, I think the flink-conf.yaml file may contain configuration for the
>>>>> client rather than the server, so editing it before starting may be
>>>>> irrelevant. I am going to play around and see, but if the Configuration
>>>>> class allows me to set configuration programmatically and override the
>>>>> yaml file, then that would be great.
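A minimal sketch of that programmatic override, assuming Flink's Java API is on the classpath (the host and port are example values; values set on a Configuration passed to an environment factory take precedence over flink-conf.yaml defaults for that environment):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.JobManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ProgrammaticConfigSketch {
    public static void main(String[] args) {
        // Build a Configuration in code instead of relying on flink-conf.yaml.
        Configuration conf = new Configuration();
        conf.setString(JobManagerOptions.ADDRESS, "192.168.1.10"); // example host
        conf.setInteger(JobManagerOptions.PORT, 6123);             // default JobManager RPC port

        // Pass the Configuration when creating the environment so it is used;
        // a local environment here just demonstrates the override mechanism.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, conf);
    }
}
```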
>>>>>
>>>>>
>>>>>
>>>>> On Sun, 19 Apr 2020, 11:35 Som Lima, <somplastic...@gmail.com> wrote:
>>>>>
>>>>>> Thanks.
>>>>>> flink-conf.yaml does allow me to do what I need to do without making
>>>>>> any changes to the client source code.
>>>>>>
>>>>>> But the
>>>>>> RemoteStreamEnvironment constructor also expects jar files as the
>>>>>> third parameter:
>>>>>>
>>>>>> RemoteStreamEnvironment(String host, int port, String... jarFiles)
>>>>>> <https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/java/org/apache/flink/streaming/api/environment/RemoteStreamEnvironment.html>
>>>>>> Creates a new RemoteStreamEnvironment that points to the master
>>>>>> (JobManager) described by the given host name and port.
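In practice this constructor is usually reached through the factory method on StreamExecutionEnvironment; a sketch, assuming Flink's Java API is on the classpath (the host, port, and jar path are example values; the jar is the one containing the job's user classes, which Flink ships to the cluster on submission):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteEnvSketch {
    public static void main(String[] args) {
        // Equivalent to new RemoteStreamEnvironment(host, port, jarFiles...):
        // the listed jars are uploaded with the job when it is submitted.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "192.168.1.10",        // JobManager host (example)
                6123,                  // JobManager RPC port (example)
                "target/my-job.jar");  // jar with the job's user classes (example path)
    }
}
```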
>>>>>>
>>>>>> On Sun, 19 Apr 2020, 11:02 tison, <wander4...@gmail.com> wrote:
>>>>>>
>>>>>>> You can change the flink-conf.yaml "jobmanager.rpc.address" or
>>>>>>> "jobmanager.rpc.port" options before running the program, or take a
>>>>>>> look at RemoteStreamEnvironment, which lets you configure the host
>>>>>>> and port.
>>>>>>>
>>>>>>> Best,
>>>>>>> tison.
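For reference, the corresponding keys in flink-conf.yaml are spelled jobmanager.rpc.address and jobmanager.rpc.port; a minimal sketch (the host is an example value; 6123 is Flink's default JobManager RPC port):

```yaml
# flink-conf.yaml on the client machine -- point at a remote JobManager
jobmanager.rpc.address: 192.168.1.10   # example host
jobmanager.rpc.port: 6123              # default RPC port
```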
>>>>>>>
>>>>>>>
>>>>>>> Som Lima <somplastic...@gmail.com> wrote on Sun, Apr 19, 2020 at 5:58 PM:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> After running
>>>>>>>>
>>>>>>>> $ ./bin/start-cluster.sh
>>>>>>>>
>>>>>>>> The following line of code defaults the jobmanager to localhost:6123:
>>>>>>>>
>>>>>>>> final ExecutionEnvironment env =
>>>>>>>> ExecutionEnvironment.getExecutionEnvironment();
>>>>>>>>
>>>>>>>> which is the same as in Spark:
>>>>>>>>
>>>>>>>> val spark =
>>>>>>>> SparkSession.builder.master("local[*]").appName("anapp").getOrCreate()
>>>>>>>>
>>>>>>>> However, if I wish to run the servers on a different physical
>>>>>>>> computer, then in Spark I can do it this way, using the Spark URI
>>>>>>>> in my IDE:
>>>>>>>>
>>>>>>>> val conf = new
>>>>>>>> SparkConf().setMaster("spark://<hostip>:<port>").setAppName("anapp")
>>>>>>>>
>>>>>>>> Can you please tell me the equivalent change to make so I can run
>>>>>>>> my servers and my IDE on different physical computers?
>>>>>>>>

-- 
Best Regards

Jeff Zhang
