> Setting the UID explicitly is usually recommended before moving to
> production. See [1].
>
> If your job code hasn’t changed across the restores, then it should be
> fine even if you didn’t set the UID.
>
>
> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/ops/produ
Does this mean I can use the same consumer group G1 for the newer version
A'? And in spite of the same consumer group, will A' receive messages from all
partitions when it's started from the savepoint?
I am using Flink 1.2.1. Does the above plan require setting a UID on the
Kafka source in the job?
Thanks,
M
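For reference, the savepoint-based restart discussed above can be sketched with the Flink 1.2 CLI. The savepoint directory, savepoint path, and jar name below are placeholders, not values from this thread:

```shell
# Take a savepoint and cancel the running job in one step
# (cancel-with-savepoint, available since Flink 1.2):
bin/flink cancel -s hdfs:///flink/savepoints <jobId>

# Start the new job version A' from that savepoint; operators keep their
# state as long as their (explicit or generated) UIDs still match:
bin/flink run -s hdfs:///flink/savepoints/savepoint-xyz new-version.jar
```

Plain checkpoints, by contrast, are discarded on a regular cancel in this Flink version, which is why the explicit savepoint step matters.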
//lists.apache.org/thread.html/
> a1a0d04e7707f4b0ac8b8b2f368110b898b2ba11463d32f9bba73968@
> %3Cuser.flink.apache.org%3E
>
>
> Nico
>
> On Thursday, 1 June 2017 20:30:59 CEST Moiz S Jinia wrote:
> > Bump..
> >
> > On Tue, May 30, 2017 at 10:17 PM, Moiz S Jinia wrote:
Bump..
On Tue, May 30, 2017 at 10:17 PM, Moiz S Jinia wrote:
> In a checkpointed Flink job, will doing a graceful restart make it resume
> from the last known internal checkpoint? Or are all checkpoints discarded when
> the job is stopped?
>
> If discarded, what will be the resume point?
>
> Moiz
>
In a checkpointed Flink job, will doing a graceful restart make it resume
from the last known internal checkpoint? Or are all checkpoints discarded when
the job is stopped?
If discarded, what will be the resume point?
Moiz
> All other operators could still benefit from a higher parallelism.
>
> > On 30.05.2017 at 09:49, Moiz S Jinia wrote:
> >
> > For a keyed stream (where the key is also the message key in the source
> kafka topic), is the parallelism of the job restricted to the number of
> partitions in the topic?
For a keyed stream (where the key is also the message key in the source
kafka topic), is the parallelism of the job restricted to the number of
partitions in the topic?
Source topic has 5 partitions, but available task slots are 12. (3 task
managers each with 4 slots)
Moiz
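The short answer in the thread is that only the source is capped by the partition count; after a keyBy, records are redistributed by key hash across all downstream subtasks. A toy, Flink-free sketch of that redistribution (plain hashCode modulo here; Flink's real assignment uses murmur hashing into key groups, so this only illustrates the idea):

```java
import java.util.HashSet;
import java.util.Set;

public class KeyDistribution {
    public static void main(String[] args) {
        int sourcePartitions = 5;   // Kafka partitions feeding the job
        int downstreamSlots = 12;   // available task slots after keyBy

        // Keys arriving from only 5 partitions still spread over 12 subtasks,
        // because the target subtask is derived from the key, not from the
        // partition the record came from.
        Set<Integer> usedSubtasks = new HashSet<>();
        for (int i = 0; i < 1000; i++) {
            String key = "order-" + i;  // hypothetical message keys
            int subtask = Math.abs(key.hashCode() % downstreamSlots);
            usedSubtasks.add(subtask);
        }
        System.out.println("distinct subtasks used: " + usedSubtasks.size());
    }
}
```

In the actual job you could give the Kafka source setParallelism(5) (more source subtasks than partitions would just idle) and leave the default parallelism at 12 for the keyed operators downstream.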
Moiz,
>
> state.clear() refers to the state that you have registered in your job,
> using the getState()
> from the runtimeContext.
>
> Timers are managed by Flink’s timer service and they are cleaned up by
> Flink itself when
> the job terminates.
>
> Kostas
>
> On May 26
A follow-on question. Since the registered timers are part of the managed
keyed state, do the timers get cancelled when I call state.clear()?
Moiz
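Kostas's point above can be modelled without Flink at all. This toy sketch (not Flink's API) mimics keyed state plus a timer service: clearing the state does not unregister the timer, so the callback still fires and has to check for missing state:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Toy model (not Flink's API) of keyed state plus a processing-time timer
// service, illustrating that clearing state does NOT cancel a pending timer.
public class TimerVsState {
    static Map<String, String> keyedState = new HashMap<>();
    static TreeMap<Long, String> timers = new TreeMap<>(); // timestamp -> key

    static int firedWithState = 0;
    static int firedWithoutState = 0;

    static void onTimer(String key) {
        // The guard a real timer callback would need: the timer still fires
        // after clear(), so check whether the state is still there.
        if (keyedState.get(key) != null) firedWithState++;
        else firedWithoutState++;
    }

    public static void main(String[] args) {
        keyedState.put("k1", "pending");
        timers.put(1000L, "k1");     // register a timer for key k1

        keyedState.remove("k1");     // the state.clear() equivalent

        // "Time" reaches 1000: the timer was not cancelled by the clear,
        // so the callback still runs, but finds no state.
        for (String key : timers.values()) onTimer(key);

        System.out.println(firedWithState + " " + firedWithoutState);
        // prints "0 1": the timer fired, but the state was already gone
    }
}
```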
On Thu, May 25, 2017 at 10:20 PM, Moiz S Jinia wrote:
> Awesome. Thanks.
>
> On Thu, May 25, 2017 at 10:13 PM, Eron Wright wrote:
Awesome. Thanks.
On Thu, May 25, 2017 at 10:13 PM, Eron Wright wrote:
> Yes, registered timers are stored in managed keyed state and should be
> fault-tolerant.
>
> -Eron
>
> On Thu, May 25, 2017 at 9:28 AM, Moiz S Jinia wrote:
>
>> With a checkpointed RocksDB based state backend, can I expect the
>> registered processing timers to be fault tolerant?
With a checkpointed RocksDB based state backend, can I expect the
registered processing timers to be fault tolerant? (along with the managed
keyed state)
Example -
A task manager instance owns the key k1 (from a keyed stream) that has
registered a processing timer with a timestamp that's a day ahead.
Best,
> Aljoscha
>
> On 3. May 2017, at 12:36, Moiz S Jinia wrote:
>
> The kind of program I intend to submit would be one that sets up
> a StreamExecutionEnvironment, connects to a stream from a Kafka topic, and
> uses a PatternStream over the kafka events. I could have …
Does Flink (with a persistent state backend such as RocksDB) work well with
long-running patterns of this type? (running into days)
Pattern.begin("start").followedBy("end").within(Time.days(3))
Are there any gotchas here or things to watch out for?
Thanks,
Moiz
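For a pattern window of days, the state backend has to survive process restarts. A minimal flink-conf.yaml sketch for the Flink 1.2/1.3 series, with placeholder paths:

```yaml
# Use RocksDB so long-lived CEP/window state spills to disk:
state.backend: rocksdb
# Durable directory (e.g. HDFS) where checkpoint data is written:
state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
```

Checkpointing itself is enabled in the job code, e.g. env.enableCheckpointing(60000) for a checkpoint every minute.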
… the REST API for submitting a program with some configuration params.
Does that sound like it'd work or am I missing something?
Moiz
On Wed, May 3, 2017 at 3:23 PM, Moiz S Jinia wrote:
> Not sure I understand Operators. What I need is to have a Pattern that
> starts consuming from a Kafka …
… added to. An existing custom operator?
>
> The REST interface only allows for managing the lifecycle of a job, not
> modifying their graph structure.
>
> On 3. May 2017, at 11:43, Moiz S Jinia wrote:
>
> Thanks for the references. Looking at the REST API, would adding new …
nalytics-king/
>
> On 3. May 2017, at 10:02, Moiz S Jinia wrote:
>
> Is there an API that allows remotely adding, modifying, and cancelling
> Flink jobs? Example - changing the time window of a deployed Pattern,
> adding new Patterns, etc.
>
> What's the best way to go about this?
Is there an API that allows remotely adding, modifying, and cancelling
Flink jobs? Example - changing the time window of a deployed Pattern,
adding new Patterns, etc.
What's the best way to go about this? To the end user, the Pattern would
manifest as rules that can be updated at any time.
Moiz
… source:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/building.html
>
> Kostas
>
> On Apr 29, 2017, at 7:15 PM, Moiz S Jinia wrote:
>
> I meant Maven dependencies that I can use by generating them from the sources.
>
> On Sat, Apr 29, 2017 at 10:31 PM, Moiz S Jinia wrote:
I meant Maven dependencies that I can use by generating them from the sources.
On Sat, Apr 29, 2017 at 10:31 PM, Moiz S Jinia wrote:
> Ok, I'll try that. It's just that I'd rather use a stable version.
> Are there any instructions for building binaries from latest sources?
>
> M
… case against that. The changes until the final release are going to be
> minor, hopefully, and we can always help you adjust your program accordingly.
>
> Hope this helps,
> Kostas
>
> On Apr 29, 2017, at 6:23 PM, Moiz S Jinia wrote:
>
> Oh ok, that's a bit far off. Is there …
The 1.3 release is scheduled for the beginning of June.
>
> Cheers,
> Kostas
>
> On Apr 29, 2017, at 6:16 PM, Moiz S Jinia wrote:
>
> Thanks Dawid!
> Yes, that's what I was expecting. I'll give it a try.
>
> When do you expect 1.3.0 stable to be out?
>
> Moiz
>
>
… so any comments,
> tests, or suggestions are welcome.
>
>
> With regards! / Cheers!
>
> Dawid Wysakowicz
>
> *Data/Software Engineer*
>
> Skype: dawid_wys | Twitter: @OneMoreCoder
>
> <http://getindata.com/>
>
> 2017-04-29 12:14 G
When using "next", this pattern works fine for both a match and a timeout:
Pattern pattern = Pattern.begin("start")
    .where(evt -> evt.value.equals("ni"))
    .next("last")
    .where(evt -> evt.value.equals("ar"))
    .within(Time.seconds(5));
1. "ni" then "ar" within 5 seconds - tri…