Re: Flink 1.10 Out of memory

2020-04-19 Thread Xintong Song
Hi Lasse, From what I understand, your problem is that the JVM tries to fork some native process (if you look at the exception stack, the root exception is thrown from a native method) but there's not enough memory for doing that. This could happen when either Mesos is using cgroup strict mode for memo
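
For reference, the Flink 1.10 memory options that carve out non-heap/native headroom inside the TaskManager budget look like the following (values are purely illustrative and not necessarily the remedy suggested in the rest of this reply):

    taskmanager.memory.process.size: 4g
    taskmanager.memory.jvm-overhead.max: 1g
    taskmanager.memory.task.off-heap.size: 512m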

Re: Modelling time for complex events generated out of simple ones

2020-04-19 Thread Salva Alcántara
In my case, the relationship between input and output events is that output events are generated by applying some rules to the input events. Essentially, output events correspond to specific patterns / sequences of input events. You can think of output events as detecting certain anomalies or abnormal

Re: Flink Conf "yarn.flink-dist-jar" Question

2020-04-19 Thread Yang Wang
Hi tison, I think I get your concerns and points. Taking both FLINK-13938[1] and FLINK-14964[2] into account, I will proceed with the following steps. * Enrich "-yt/--yarnship" to support HDFS directories * Enrich "-yt/--yarnship" to specify the local resource visibility. It is "APPLICATION" by default. It coul
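
For reference, the existing -yt/--yarnship flag ships a local directory when submitting to YARN, roughly like this (paths and job jar are hypothetical):

    ./bin/flink run -m yarn-cluster -yt /path/to/local/resources ./my-job.jar

The steps above would additionally let -yt accept an HDFS directory and a configurable resource visibility.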

Re: Re: multi-sql checkpoint fail

2020-04-19 Thread forideal
Hi Tison, Jark Wu: Thanks for your reply!!! > What statebackend are you using? Is it the heap statebackend? We use the RocksDB backend with incremental checkpoints. > Could you share the stack traces? I looked at the flame chart myself and found that it was stuck at the end of the w
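
A minimal sketch of the setup described in the answer above, i.e. the RocksDB state backend with incremental checkpoints enabled (Flink 1.10 APIs; the checkpoint URI and interval are hypothetical, and the flink-statebackend-rocksdb dependency is assumed):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class IncrementalCheckpointSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // The second constructor argument enables incremental checkpointing.
            env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
            env.enableCheckpointing(60_000); // trigger a checkpoint every 60 seconds
            // ... define the SQL/DataStream pipeline here, then call env.execute(...)
        }
    }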

Re: Job manager URI rpc address:port

2020-04-19 Thread Jeff Zhang
Glad to hear that. Som Lima wrote on Mon, Apr 20, 2020 at 8:08 AM: > I will, thanks. Once I had it set up and working, > I switched my computers around from client to server and server to client. > With your excellent instructions I was able to do it in 5 minutes. > > On Mon, 20 Apr 2020, 00:05 Jeff Zhang, wr

Re: Job manager URI rpc address:port

2020-04-19 Thread Som Lima
I will, thanks. Once I had it set up and working, I switched my computers around from client to server and server to client. With your excellent instructions I was able to do it in 5 minutes. On Mon, 20 Apr 2020, 00:05 Jeff Zhang wrote: > Som, let us know if you have any problems > > Som Lima

Re: Job manager URI rpc address:port

2020-04-19 Thread Jeff Zhang
Som, let us know if you have any problems. Som Lima wrote on Mon, Apr 20, 2020 at 2:31 AM: > Thanks for the info and links. > > I had a lot of problems; I am not sure what I was doing wrong. > > Maybe conflicts with the setup from Apache Spark. I think I may need to > set up users for each development. > > > Anyw

[ANNOUNCE] Weekly Community Update 2020/16

2020-04-19 Thread Konstantin Knauf
Dear community, happy to share this (and last) week's community update after a short Easter break. A lot has happened in the community in the meantime. Stateful Functions 2.0.0 was released, the releases of Flink 1.10.1 and 1.9.3 are around the corner, a couple of new FLIPs and blog posts... ...a

Problem getting watermark right with event time

2020-04-19 Thread Sudan S
Hi, I am having a problem getting the watermark right. The setup is: I have a Flink job which reads from a Kafka topic, uses Protobuf deserialization, uses a sliding window of (120 seconds, 30 seconds), sums up the value, and finally returns the result. The code is pasted below. The problem here is, I'
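
Since the pasted code is cut off in this preview, here is a minimal, hypothetical sketch of the kind of event-time setup described (Flink 1.10 APIs; the MyEvent type, its fields, the 5-second out-of-orderness bound, and the source are assumptions, not the poster's actual code):

    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class SlidingSumSketch {
        // Hypothetical POJO standing in for the Protobuf-decoded record.
        public static class MyEvent {
            public String key;
            public long value;
            public long eventTimeMillis;
            public MyEvent() {}
            public MyEvent(String key, long value, long eventTimeMillis) {
                this.key = key; this.value = value; this.eventTimeMillis = eventTimeMillis;
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

            env.fromElements(new MyEvent("a", 1, 1_000L), new MyEvent("a", 2, 31_000L)) // stands in for the Kafka/Protobuf source
                .assignTimestampsAndWatermarks(
                    // watermark = max seen event time minus 5 seconds (bound is an assumption)
                    new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(5)) {
                        @Override
                        public long extractTimestamp(MyEvent e) {
                            return e.eventTimeMillis; // event time in epoch millis
                        }
                    })
                .keyBy("key")
                .window(SlidingEventTimeWindows.of(Time.seconds(120), Time.seconds(30)))
                .sum("value")
                .print();

            env.execute("sliding-window sum sketch");
        }
    }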

Re: Job manager URI rpc address:port

2020-04-19 Thread Som Lima
Thanks for the info and links. I had a lot of problems; I am not sure what I was doing wrong. Maybe conflicts with the setup from Apache Spark. I think I may need to set up users for each development. Anyway, I kept doing fresh installs, about four altogether I think. Everything works fine now. Inclu

Re: Modelling time for complex events generated out of simple ones

2020-04-19 Thread Yun Tang
Hi Salva, I think this depends on the relationship between your output and input events. If the output ones are just simple wrappers of the input ones, e.g. adding some simple properties or just reading from one place and writing to another, I think the output events could hold the time which is inh

Modelling time for complex events generated out of simple ones

2020-04-19 Thread Salva Alcántara
My Flink application generates (complex) output events based on the processing of (simple) input events. The generated output events are to be consumed by other external services. My application works using event-time semantics, so I am a bit in doubt regarding what I should use as the output events'

Re: Job manager URI rpc address:port

2020-04-19 Thread Jeff Zhang
Hi Som, You can take a look at Flink on Zeppelin. In Zeppelin you can connect to a remote Flink cluster via a few configuration settings, and you don't need to worry about the jars; the Flink interpreter will ship the necessary jars for you. Here's a list of tutorials. 1) Get started https://link.medium.com/oppqD

Re: Job manager URI rpc address:port

2020-04-19 Thread Zahid Rahman
Hi Tison, I think I may have found what I want in example 22. https://www.programcreek.com/java-api-examples/?api=org.apache.flink.configuration.Configuration I need to create a Configuration object first, as shown. Also, I think the flink-conf.yaml file may contain configuration for the client rather tha
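
A minimal sketch of creating such a Configuration object and pointing it at a non-local JobManager (host and port are hypothetical):

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.configuration.JobManagerOptions;

    Configuration conf = new Configuration();
    conf.setString(JobManagerOptions.ADDRESS, "jobmanager-host");
    conf.setInteger(JobManagerOptions.PORT, 6123);
    // conf can then be passed to environments/constructors that accept a client Configuration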

Re: Job manager URI rpc address:port

2020-04-19 Thread Som Lima
Thanks. flink-conf.yaml does allow me to do what I need to do without making any changes to the client source code. But the RemoteStreamEnvironment constructor also expects a jar file as the third parameter. RemoteStreamEnvironment
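
For reference, a sketch of the factory method where the jar(s) are passed as the trailing parameters (host, port and jar path are hypothetical):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
            "jobmanager-host", 6123, "/path/to/your-job.jar");
    // build the pipeline on env, then call env.execute("job name")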

Re: Job manager URI rpc address:port

2020-04-19 Thread tison
You can change the flink-conf.yaml "jobmanager.rpc.address" or "jobmanager.rpc.port" options before running the program, or take a look at RemoteStreamEnvironment, which enables configuring the host and port. Best, tison. Som Lima wrote on Sun, Apr 19, 2020 at 5:58 PM: > Hi, > > After running > > $ ./bin/start-cluster.sh > > The
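
In flink-conf.yaml that would look like the following (values are hypothetical):

    jobmanager.rpc.address: jobmanager-host
    jobmanager.rpc.port: 6123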

Job manager URI rpc address:port

2020-04-19 Thread Som Lima
Hi, After running $ ./bin/start-cluster.sh the following line of code defaults the jobmanager to localhost:6123: final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); which is the same as in Spark: val spark = SparkSession.builder.master("local[*]").appName("anapp").getOrCreate() However

Re: How to support the below HIVE SQL in Flink

2020-04-19 Thread Rui Li
Hey Xiaohua & Jark, I'm sorry for overlooking the email. Adding to Jark's answers: DISTRIBUTE BY => the functionality and syntax are not supported. We can consider this as a candidate feature for 1.12. named_struct => you should be able to call this function with the Hive module. LATERAL VIEW => the s
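
A sketch of making named_struct callable by loading the Hive module, per the answer above (the Hive version, module name and batch-mode table environment are assumptions; the flink-connector-hive dependency is required):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.module.hive.HiveModule;

    TableEnvironment tableEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());
    // Loading the Hive module exposes Hive built-in functions such as named_struct.
    tableEnv.loadModule("hive", new HiveModule("2.3.4"));
    Table t = tableEnv.sqlQuery("SELECT named_struct('name', 'flink', 'age', 10)");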