Thank you, Chesnay. Good to know there are a few wrappers available to get
the best of both worlds. For now I will mostly go without piggybacking, to
keep more control and to learn, but I will keep an eye out for new benefits
that piggybacking may bring in the future.
The UDF point looks like a deal breaker,
That all nodes in a Flink cluster are involved simultaneously in processing
the data? Programmatically, graphically... I need to stress CPU, memory, and
all resources to their max. How can I guarantee this is happening in the
Flink cluster? Out of 4 nodes, this is the highest resource usage I see from
"to
Hi Stephan,
Thanks for the reply. I should have been clearer: I was not actually trying
to restore a 1.0 savepoint with 1.2. I ran my job on 1.2 from scratch
(starting with no state), then took a savepoint and tried to restart it from
that savepoint - and that's when I get this exception.
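For readers following along, the savepoint round trip Josh describes can be reproduced from the command line. This is a sketch: the job ID, savepoint path, and jar path are placeholders, and the flags are the standard Flink CLI ones (`savepoint`, `cancel`, `run -s`):

```
# Trigger a savepoint for a running job (<jobID> is a placeholder)
bin/flink savepoint <jobID>

# Cancel the job, then resume from the savepoint path printed above
bin/flink cancel <jobID>
bin/flink run -s <savepointPath> path/to/your-job.jar
```

The exception Josh reports occurs at the last step, when the restarted job tries to map the savepoint state back onto its operators.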
Hi Josh!
The state code on 1.2-SNAPSHOT is undergoing pretty heavy changes right
now, in order to add the elasticity feature (changing the parallelism of
running jobs while still maintaining exactly-once guarantees).
At this point, no 1.0 savepoints can be restored in 1.2-SNAPSHOT. We will
try and add com
Hi,
I have a Flink job which uses the RocksDBStateBackend, which has been
running on a Flink 1.0 cluster.
The job is written in Scala, and I previously made some changes to the job
to ensure that state could be restored. For example, whenever I call `map`
or `flatMap` on a DataStream, I pass a na
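For context, the pattern Josh alludes to (passing a named function class to `map`/`flatMap` rather than an anonymous lambda, so that operator state can be matched up again on restore) looks roughly like this. The class name and usage here are illustrative, not taken from his actual job:

```scala
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.streaming.api.scala._

// A named function class: its class name stays stable across recompiles,
// unlike an anonymous lambda, so state associated with this operator can
// be located again when the job is restored from a savepoint.
class EnrichEvent extends RichMapFunction[String, String] {
  override def map(value: String): String = value.toUpperCase
}

// Hypothetical usage inside the job definition:
// val enriched: DataStream[String] = events.map(new EnrichEvent())
```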
Great!
If you send me your JIRA user I can add you as a contributor and assign the
issue to you.
Best, Fabian
From: jaxbihani
Is the query stream also a Flink job? Is this use case not supported by
keeping state within a single Flink job?
https://ci.apache.org/projects/flink/flink-docs-master/dev/state.html
FLINK-3779 recently added "queryable state" to allow external processes
access to operator state.
https://issue
Hi,
right now this is not possible, I'm afraid.
I'm looping in Max who has done some work in that direction. Maybe he's got
something to say.
Cheers,
Aljoscha
On Wed, 14 Sep 2016 at 03:54 Will Du wrote:
> Hi folks,
> How to stop link job given job_id through java API?
> Thanks,
> Will
>
Hi,
the log message about the leader session should be unrelated to Kafka. What
exactly do you mean by "fails to read"? You don't get elements? Or it fails
with some message?
Cheers,
Aljoscha
On Tue, 13 Sep 2016 at 17:10 Simone Robutti
wrote:
> Hello,
>
> while running a job on Flink 1.1.2 on a