Hi everybody,
I would like to collect the statistics and the real output cardinalities of
the execution of many jobs as JSON files. I know there is a REST interface
that can be used, but I was looking for something simpler. In practice, I would
like to get the information shown in the Web
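As a rough sketch (not from the thread): the Web UI itself is backed by the REST API, so a small client polling /jobs/<jobid> already returns the per-vertex read/write record counters as JSON. The host, port, and job id below are placeholders, and the snippet assumes Java 11+ for java.net.http.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Paths;

public class JobStatsDumper {
    public static void main(String[] args) throws Exception {
        // Placeholder job manager address and job id -- adjust to your setup.
        String jobId = args[0];
        String url = "http://localhost:8081/jobs/" + jobId;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // The response body is the same JSON the Web UI renders, including
        // per-vertex read-records / write-records counters.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        Files.write(Paths.get(jobId + ".json"), response.body().getBytes());
    }
}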
Returning the discussion to the mailing list (it accidentally went to a
side channel because of a direct reply).
What I was referring to is the event-time processing semantics, which are
based on the watermark mechanism [1].
If you are using it, the event time at your KeyedBroadcastProcessFunction
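For context (not part of the original reply), assigning event-time timestamps and watermarks in the pre-1.11 API could look roughly like the fragment below; `events`, the Event POJO, its getTimestamp() accessor, and the five-second out-of-orderness bound are all illustrative assumptions.

import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Event time only takes effect when the time characteristic is set and
// timestamps/watermarks are assigned to the stream.
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

// `events` stands in for any source stream of a hypothetical Event POJO
// that exposes a long getTimestamp() accessor.
DataStream<Event> withWatermarks = events
        .assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<Event>(Time.seconds(5)) {
                    @Override
                    public long extractTimestamp(Event element) {
                        // The timestamp comes from the record itself, not the wall clock.
                        return element.getTimestamp();
                    }
                });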
Hi,
I am running a Flink job with the following functionality:
1. I consume stream1 and stream2 from two Kafka topics and assign watermarks
to the events of both streams by extracting the timestamps from the events.
2. Then I connect the two streams and call KeyedCoProcessFunction
Here, I am registering the callback timer for an event in processing time and
calculating the timer value as event time + expiryTimeout.
Can this hybrid use of processing time and event time be the issue here?
Also, do we need any special handling if we use event-time semantics for
callback timeouts reg
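Not an answer from the thread, just a sketch of what keeping the expiry entirely in the event-time domain could look like inside the KeyedCoProcessFunction; the Event placeholder type and the 60-second timeout are assumptions based on the description above.

import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;

// Event is a placeholder element type; the key and output types are illustrative.
public class ExpiryFunction extends KeyedCoProcessFunction<String, Event, Event, String> {
    private static final long EXPIRY_TIMEOUT_MS = 60_000L; // assumed expiry

    @Override
    public void processElement1(Event value, Context ctx, Collector<String> out) throws Exception {
        // Register the expiry in the event-time domain, so it fires when the
        // watermark (not the wall clock) passes eventTime + timeout.
        long eventTime = ctx.timestamp();
        ctx.timerService().registerEventTimeTimer(eventTime + EXPIRY_TIMEOUT_MS);
    }

    @Override
    public void processElement2(Event value, Context ctx, Collector<String> out) throws Exception {
        // ... symmetric handling for the second stream ...
    }
}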
Hi Team,
I am writing my first basic Stateful Functions hello application. I am getting
the following exception.
$ ./bin/flink run -c
org.apache.flink.statefun.flink.core.StatefulFunctionsJob
./stateful-sun-hello-java-1.0-SNAPSHOT-jar-with-dependencies.jar
---
Hi Jaswin,
If I understand correctly, I think you could add the logic in the onTimer
callback. In this callback, OnTimerContext.output(outputTag, xx) could be used
to emit data to the specific side output. Besides, you would need a new state to
store the elements to output in the onTimer callback
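As a rough illustration of that suggestion (the state name, element type, tag name, and the 60-second delay below are made-up placeholders, not from the thread):

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputOnTimer extends KeyedProcessFunction<String, String, String> {
    // Anonymous subclass so the generic type survives erasure.
    public static final OutputTag<String> EXPIRED_TAG = new OutputTag<String>("expired") {};

    private transient ListState<String> buffered;

    @Override
    public void open(Configuration parameters) {
        buffered = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffered-elements", String.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
        // Keep the element until the timer decides where it should go.
        buffered.add(value);
        ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60_000L);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // Emit the stored elements to the side output when the timer fires.
        for (String element : buffered.get()) {
            ctx.output(EXPIRED_TAG, element);
        }
        buffered.clear();
    }
}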
You also have to set the boolean cancel-job parameter.
On 22/05/2020 22:47, M Singh wrote:
Hi:
I am using Flink 1.6.2 and trying to create a savepoint using the
following curl command, based on the following references
(https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/rest_a
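For reference, the savepoint trigger call with the cancel-job flag could be issued programmatically like this sketch (host, port, job id, and target directory are placeholders; assumes Java 11+ for java.net.http):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerSavepoint {
    public static void main(String[] args) throws Exception {
        // Placeholders -- replace with your job manager address, job id, and savepoint path.
        String jobId = "00000000000000000000000000000000";
        String body = "{\"target-directory\": \"file:///tmp/savepoints\", \"cancel-job\": false}";

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://localhost:8081/jobs/" + jobId + "/savepoints"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The response contains a request id that can be polled for the savepoint status.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}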
Hi Jaswin,
I think the event-time timer and the processing-time timer in Flink should be
fully decoupled: the event-time timer is triggered by the watermarks received, and
the processing-time timer is triggered by the physical clock. You may think of them
as two separate timelines with no guarantee on their
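A compact illustration of that separation, as a fragment assumed to sit inside a KeyedProcessFunction (the 60-second delays are arbitrary; TimeDomain is org.apache.flink.streaming.api.TimeDomain):

@Override
public void processElement(String value, Context ctx, Collector<String> out) {
    // Advanced by watermarks only:
    ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60_000L);
    // Advanced by the machine's wall clock only:
    ctx.timerService().registerProcessingTimeTimer(
            ctx.timerService().currentProcessingTime() + 60_000L);
}

@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
    // The context reveals which of the two timelines fired this callback.
    if (ctx.timeDomain() == TimeDomain.EVENT_TIME) {
        // a watermark passed `timestamp`
    } else {
        // the physical clock reached `timestamp`
    }
}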
Hi Jary,
What do you mean by step balance? Could you please provide a concrete example
On Fri, May 22, 2020 at 3:46 PM Jary Zhen wrote:
> Hello everyone,
>
> First, a brief pipeline introduction:
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
> consume multi kafka
Hi Team,
1. How can we enable checkpointing in Stateful Functions 2.0?
2. How do we set the parallelism?
Thanks,
Dinesh.
I am still not able to get much out of it after reading the material. Please help with
some basic code to start building this window and trigger.
Another option I am considering is to just use a RichFlatMap function with
keyed state to build this logic. Is that the correct approach?
On Fri, May 22, 2020
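If it helps frame the question above, the RichFlatMap-plus-keyed-state route usually looks roughly like this sketch (the state name, types, and the threshold of 10 are illustrative placeholders; the stream must be keyed before the flatMap):

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Keeps a running count per key; the hand-rolled "window" logic lives where
// the count is inspected.
public class CountingFlatMap extends RichFlatMapFunction<String, String> {
    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        Long current = count.value();
        long updated = (current == null ? 0L : current) + 1;
        count.update(updated);
        // Emit once the hand-rolled window condition is met (illustrative threshold).
        if (updated >= 10) {
            out.collect("key emitted after " + updated + " elements");
            count.clear();
        }
    }
}

One caveat on the design: a plain RichFlatMapFunction cannot register timers, so purely time-driven triggering would need a KeyedProcessFunction instead.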
Hi Annemarie,
You need to use an HTTP client to connect to the job manager.
// Creating an HttpClient object
CloseableHttpClient httpclient = HttpClients.createDefault();
// Creating an HttpGet object
HttpGet httpget = new HttpGet("https://${jobmanager:port}/jobs");
// Executing the GET request
CloseableHttpResponse response = httpclient.execute(httpget);
Hi Andrey,
We don't use RocksDB. As I said in the original email, I am using the file-based
state backend. Even though our cluster is on Kubernetes, our Flink cluster uses
Flink's standalone resource manager. We have not yet integrated our Flink
with Kubernetes.
Thanks,
Josson
On Fri, May 22, 2020 at 3:37 AM Andre
Hi Andrey,
To clarify the above email: I am using heap-based state and not RocksDB.
Thanks,
Josson
On Sat, May 23, 2020, 17:37 Josson Paul wrote:
> Hi Andrey,
> We don't use Rocks DB. As I said in the original email I am using File
> Based. Even though our cluster is on Kubernetes out Flin