Re: Flink 1.10 Out of memory

2020-04-24 Thread Som Lima
@Zahir what Apache means is: don't be like Jesse Anderson, who recommended Flink on the basis that Apache only uses maps, as seen in the video, while Flink uses ValueState and State in the Streaming API. It appears Jesse Anderson only looked as deep as the DataStream hello-world. You are required

Re: Running 2 instances of LocalExecutionEnvironments with same consumer group.id

2020-04-22 Thread Som Lima
I followed the link; maybe the same Suraj is advertising the Databricks webinar going on live right now. On Wed, 22 Apr 2020, 18:38 Gary Yao wrote: > Hi Suraj, > > This question has been asked before: > > > http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-kafka-consumers-don

Re: How to use OpenTSDB as Source?

2020-04-22 Thread Som Lima
e let me know if you have other problems. > > Best, > Jark > > [1]: > https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaTableSourceSinkFactory.java > > > On Wed, 22 Apr 2020 a

Re: How to use OpenTSDB as Source?

2020-04-22 Thread Som Lima
For the sake of brevity the code example does not show the complete code for setting up the environment using the EnvironmentSettings class: EnvironmentSettings settings = EnvironmentSettings.newInstance()... As you can see, comparatively the same protocol is not followed when showing setting up the envi
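The elided environment setup can be sketched in full. This is a minimal sketch assuming the Flink 1.10 Table API, where EnvironmentSettings is built via its fluent builder and handed to TableEnvironment.create; the class name TableEnvSetup is ours:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TableEnvSetup {
    public static void main(String[] args) {
        // Flink 1.10: select the Blink planner in streaming mode.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();

        // Create a unified TableEnvironment from the settings;
        // tables and SQL queries are then registered/executed on tEnv.
        TableEnvironment tEnv = TableEnvironment.create(settings);
    }
}
```

Running this requires the flink-table dependencies on the classpath, so it is shown here as an illustrative sketch rather than a standalone program.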

Re: Job manager URI rpc address:port

2020-04-20 Thread Som Lima
On Sun, 19 Apr 2020, 11:02 tison wrote: > You can change flink-conf.yaml "jobmanager.address" or "jobmanager.port" > options before running the program, or take a look at RemoteStreamEnvironment > which enables configuring host and port. > > Best, > tison.

Re: FLINK JOB solved

2020-04-20 Thread Som Lima
On Mon, 20 Apr 2020, 12:07 Som Lima wrote: > Yes, exactly that is the change I am having to make: changing the FLINK JOB > default localhost to the IP of the server computer in the browser. > > I followed the instructions as per your > link. > > https://medium.com/@zjffdu/flink-on-zeppel

Re: FLINK JOB

2020-04-20 Thread Som Lima
configuration on the > Zeppelin side to replace the localhost with the real IP. > > Som Lima wrote on Mon, 20 Apr 2020 at 16:44: > >> I am only running the zeppelin word count example by clicking the >> zeppelin run arrow. >> >> >> On Mon, 20 Apr 2020, 09:42 Jeff Zhang wrote:

Re: FLINK JOB

2020-04-20 Thread Som Lima
I am only running the zeppelin word count example by clicking the zeppelin run arrow. On Mon, 20 Apr 2020, 09:42 Jeff Zhang wrote: > How do you run the flink job? It should not always be localhost:8081 > > Som Lima wrote on Mon, 20 Apr 2020 at 16:33: > >> Hi, >> >> FLINK J

FLINK JOB

2020-04-20 Thread Som Lima
Hi, the FLINK JOB URL defaults to localhost, i.e. localhost:8081. I have to manually change it to server:8081 (the server's address) to get the Apache Flink Web Dashboard to display.
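One way to avoid editing the URL by hand is to tell Flink which address the Web Dashboard should advertise and bind to. A minimal flink-conf.yaml fragment, assuming the `rest.*` option keys from the Flink 1.10 configuration reference (the IP shown is a hypothetical server address):

```yaml
# flink-conf.yaml (sketch; keys per the Flink 1.10 configuration docs)
rest.address: 192.168.1.10    # address clients/browsers use to reach the Web UI
rest.bind-address: 0.0.0.0    # bind on all interfaces so remote browsers can connect
rest.port: 8081               # default Web Dashboard port
```

With this in place the dashboard is reachable at http://192.168.1.10:8081 from other machines, rather than only at localhost:8081 on the server itself.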

Re: Job manager URI rpc address:port

2020-04-19 Thread Som Lima
i=org.apache.flink.configuration.Configuration >>>> >>>> I need to create a Configuration object first, as shown. >>>> >>>> Also I think the flink-conf.yaml file may contain configuration for the client >>>> rather than the server. So before starting

Re: Job manager URI rpc address:port

2020-04-19 Thread Som Lima
>> Also I think the flink-conf.yaml file may contain configuration for the client >> rather than the server, so before starting is irrelevant. >> I am going to play around and see, but if the Configuration class allows >> me to set configuration programmatically and overrides the yaml
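The programmatic override the thread is reaching for can be sketched as follows. This is an illustrative sketch assuming Flink 1.10's Configuration and JobManagerOptions classes and the createRemoteEnvironment overload that accepts a client Configuration; the host IP and jar path are hypothetical placeholders:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.JobManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ProgrammaticJobManagerConfig {
    public static void main(String[] args) {
        // Build a Configuration in code instead of relying on flink-conf.yaml.
        Configuration conf = new Configuration();
        conf.setString(JobManagerOptions.ADDRESS, "192.168.1.10"); // hypothetical JobManager IP
        conf.setInteger(JobManagerOptions.PORT, 6123);             // default RPC port

        // Hand the configuration to a remote environment; values set in code
        // take precedence over what the client would read from flink-conf.yaml.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                conf.getString(JobManagerOptions.ADDRESS),
                conf.getInteger(JobManagerOptions.PORT),
                conf,
                "/path/to/job.jar"); // hypothetical jar containing the job classes
    }
}
```

As written this needs the Flink runtime on the classpath, so treat it as a sketch of the wiring rather than a drop-in program.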

Re: Job manager URI rpc address:port

2020-04-19 Thread Som Lima
or "jobmanager.port" > options before running the program, or take a look at RemoteStreamEnvironment > which enables configuring host and port. > > Best, > tison. > > > Som Lima wrote on Sun, 19 Apr 2020 at 17:58: > >> Hi, >> >> After running >> >>

Job manager URI rpc address:port

2020-04-19 Thread Som Lima
Hi, After running $ ./bin/start-cluster.sh the following line of code defaults the jobmanager to localhost:6123: final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); which is the same as in Spark: val spark = SparkSession.builder.master("local[*]").appName("anapp").getOrCreate() However
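To target a specific JobManager rather than the localhost default, the batch API offers createRemoteEnvironment. A minimal sketch assuming Flink 1.10's ExecutionEnvironment; the hostname and jar path are hypothetical placeholders:

```java
import org.apache.flink.api.java.ExecutionEnvironment;

public class RemoteEnv {
    public static void main(String[] args) {
        // getExecutionEnvironment() picks a local environment (hence the
        // localhost:6123 default) when the program is not submitted via a client.
        // To point explicitly at a remote JobManager instead:
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host",    // hypothetical JobManager address
                6123,                 // JobManager RPC port (Flink default)
                "/path/to/job.jar");  // hypothetical jar with the job's classes
    }
}
```

The jar argument is needed because the job's user classes must be shipped to the remote cluster; when submitting via `flink run` instead, getExecutionEnvironment() resolves the cluster address automatically.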