Hi,
I was playing around with Flink using the Docker images provided,
and I noticed that the entry point is a bash script.
There is a problem with using bash as the PID 1 process in a Docker
container: on `docker stop`, Docker sends SIGTERM to PID 1, but bash
does not forward the signal to its child processes.
This means for exam
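A common fix, sketched below under the assumption that the entrypoint is a plain bash script (the script path in the comment is illustrative, not the actual image layout), is to `exec` the final command so the JVM replaces bash and receives SIGTERM directly. The runnable line at the bottom demonstrates the `exec` semantics this relies on:

```shell
#!/usr/bin/env bash
# Hypothetical entrypoint sketch: `exec` replaces this bash process with
# the child, so the JVM becomes PID 1 and gets docker's SIGTERM directly.
#
#   exec "$FLINK_HOME/bin/jobmanager.sh" start-foreground "$@"
#
# Demonstration that `exec` preserves the PID: both lines below print the
# same number, because the exec'd sh inherits the bash process's PID.
bash -c 'echo $$; exec sh -c "echo \$\$"'
```

An alternative is to run a minimal init such as `tini` as PID 1 and let it forward signals to the whole process group.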
Hi,
I have some questions regarding the Queryable State feature:
Is it possible to use the QueryClient to get a list of keys for a given state?
At the moment it is not possible to use ListState - will support for it
ever be introduced?
My first impression is that I would need one of these two to be able to
Hi,
I have a similar-sounding use case and just yesterday was
experimenting with this approach:
Use two separate streams: one for model events and one for data events.
Connect the two, key the resulting stream, and then use a
RichCoFlatMapFunction to ensure that each data event is enriched with
the lat
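The approach above can be sketched roughly as follows. This is a sketch, not tested against a cluster: `ModelEvent`, `DataEvent`, and `EnrichedEvent` are assumed placeholder types, while the Flink classes (`RichCoFlatMapFunction`, `ValueState`) are real API. The model side updates keyed state; the data side reads the latest model for the same key:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

public class EnrichWithLatestModel
        extends RichCoFlatMapFunction<ModelEvent, DataEvent, EnrichedEvent> {

    // one "latest model" per key of the connected, keyed stream
    private transient ValueState<ModelEvent> latestModel;

    @Override
    public void open(Configuration parameters) {
        latestModel = getRuntimeContext().getState(
                new ValueStateDescriptor<>("latest-model", ModelEvent.class));
    }

    @Override
    public void flatMap1(ModelEvent model, Collector<EnrichedEvent> out)
            throws Exception {
        latestModel.update(model);   // remember the newest model for this key
    }

    @Override
    public void flatMap2(DataEvent event, Collector<EnrichedEvent> out)
            throws Exception {
        ModelEvent model = latestModel.value();
        if (model != null) {         // no model yet: drop (or buffer) the event
            out.collect(new EnrichedEvent(event, model));
        }
    }
}
```

It would be wired up along the lines of `modelStream.connect(dataStream).keyBy(m -> m.getKey(), d -> d.getKey()).flatMap(new EnrichWithLatestModel())`, where the key selectors are again assumptions about the event types.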
raph();
> jobGraph.setSavepointRestoreSettings(SavepointRestoreSettings.forPath(savepointPath));
>
> boolean printUpdates = true;
> cluster.submitJobAndWait(jobGraph, printUpdates);
>
>
>
> We could think about exposing the SavepointSettings to the
> StreamGraph. Then
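Putting the quoted fragments together, a hedged sketch of the whole approach might look like this. It relies on Flink internal/testing classes whose details vary by version, so treat it as an outline rather than a stable recipe; `savepointPath` is a placeholder and the mini-cluster setup is elided:

```java
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ResumeFromSavepointLocally {
    public static void main(String[] args) throws Exception {
        String savepointPath = args[0];  // path to an existing savepoint

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // ... define the job topology on env here ...

        // Build the JobGraph by hand instead of calling env.execute(),
        // attach the savepoint path, and submit it to a local mini cluster.
        JobGraph jobGraph = env.getStreamGraph().getJobGraph();
        jobGraph.setSavepointRestoreSettings(
                SavepointRestoreSettings.forPath(savepointPath));

        // cluster is a LocalFlinkMiniCluster (internal API in Flink 1.x);
        // its construction is version-specific and omitted here:
        // cluster.submitJobAndWait(jobGraph, true);
    }
}
```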
Hi,
is it possible to restore from an external checkpoint / savepoint
while using a local stream environment?
I ask because I want to play around with some concepts from within my
IDE, without building a jar and deploying my job.
Thanks,
Kat
ment on the job to analyze traces.
>>
>> Hope this helps,
>> Fabian
>>
>> [1]
>> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/windows.html#global-windows
>> [2]
>> https://ci.apache.org/projects/flink/flink-docs-release-1.1/a
I have been playing around with Flink for a few weeks to try to
ascertain whether it meets our use cases, and also what best practices
we should be following. I have a few questions I would appreciate
answers to.
Our scenario is that we want to process a lot of event data into
cases. A cas