Hi,
I am sorry, it actually worked with the BoundedOutOfOrdernessTimestampExtractor.
I had replayed my events from Kafka, so the older events were also on
the bus and they didn't correlate with my new events.
Now I have cleaned up my code and restarted from the beginning, and it works.
Thanks a lot for
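The extractor mentioned above works by letting the watermark trail the highest timestamp seen so far, which is why replayed older records end up "late" and fail to correlate. A minimal plain-Java model of that idea (this is a sketch of the logic, not Flink's actual class; the class and method names are illustrative):

```java
// Plain-Java model of bounded-out-of-orderness watermarking: the
// watermark trails the highest event timestamp seen so far by a fixed
// bound, so replayed older records fall behind the watermark and are
// treated as late.
public class BoundedOutOfOrdernessModel {
    private final long maxOutOfOrderness;        // allowed lateness in ms
    private long maxTimestampSeen = Long.MIN_VALUE;

    public BoundedOutOfOrdernessModel(long maxOutOfOrderness) {
        this.maxOutOfOrderness = maxOutOfOrderness;
    }

    /** Record an element's timestamp, as a timestamp extractor would. */
    public void extractTimestamp(long eventTimestamp) {
        maxTimestampSeen = Math.max(maxTimestampSeen, eventTimestamp);
    }

    /** Watermark = highest timestamp seen minus the allowed lateness. */
    public long currentWatermark() {
        return maxTimestampSeen - maxOutOfOrderness;
    }

    /** An element is late if its timestamp is at or below the watermark. */
    public boolean isLate(long eventTimestamp) {
        return eventTimestamp <= currentWatermark();
    }
}
```

With a 3-second bound, an element at t=10s pushes the watermark to 7s, so a replayed record stamped 6s is late and will not join new events in an event-time window.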
Hi Kostas,
I am okay with processing time at the moment, but since my events already
have a creation timestamp added to them, and also to explore the event time
aspect of FlinkCEP further, I went ahead and evaluated event time as well.
For this I tried both:
1. AscendingTimestampExtractor: usi
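The ascending-timestamps approach mentioned in item 1 only fits when timestamps never decrease: the watermark is simply the last timestamp minus one, and any out-of-order element violates the assumption. A plain-Java sketch of that contract (a hypothetical helper modeling the behavior, not Flink's AscendingTimestampExtractor itself):

```java
// Models the ascending-timestamps assumption: timestamps must be
// monotonically non-decreasing; the watermark is the last timestamp
// minus 1, and out-of-order elements are counted as violations.
public class AscendingTimestampsModel {
    private long lastTimestamp = Long.MIN_VALUE;
    private long violations = 0;

    public void extractTimestamp(long eventTimestamp) {
        if (eventTimestamp < lastTimestamp) {
            violations++;              // assumption broken: element is out of order
        } else {
            lastTimestamp = eventTimestamp;
        }
    }

    public long currentWatermark() {
        return lastTimestamp - 1;
    }

    public long violations() {
        return violations;
    }
}
```

This is why replayed older events are problematic with an ascending extractor: they arrive behind the watermark and count as violations rather than advancing it.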
You could also remove the autoWatermarkInterval if you are satisfied with
processing time.
Keep in mind, though, that processing time assigns timestamps to elements
based on the order in which they arrive at the operator. This means that
replaying the same stream can give different results.
If you
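The replay caveat above can be shown with plain Java: under processing time an element's "timestamp" is effectively its arrival position, so replaying the same records in a different order changes the result, while ordering by the embedded event timestamps does not. A small sketch (illustrative helper names, not a Flink API):

```java
import java.util.ArrayList;
import java.util.List;

// Contrasts the two time notions: processing time orders elements by
// arrival, event time orders them by the timestamps carried inside them.
public class ReplayDeterminism {
    /** Processing time: ordering is whatever order elements arrive in. */
    public static List<Long> processingTimeOrder(List<Long> arrivals) {
        return new ArrayList<>(arrivals);
    }

    /** Event time: ordering comes from the timestamps inside the events. */
    public static List<Long> eventTimeOrder(List<Long> arrivals) {
        List<Long> sorted = new ArrayList<>(arrivals);
        sorted.sort(Long::compare);
        return sorted;
    }
}
```

Two replays of the same three events in different arrival orders disagree under processing time but agree under event time, which is exactly why event time gives reproducible results.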
Hi Kostas,
My application didn't have any timestamp extractor, nor did my events have
any timestamps. Still, I was using event time for processing, and that's
probably why it was blocked.
Now I have removed the part where I set the time characteristic to event
time, and it works.
For example:
Previously:
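The example above was cut off in the archive. A hedged reconstruction of what the before/after configuration likely looked like, assuming the standard Flink 1.2/1.3 StreamExecutionEnvironment API:

```java
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

// Previously: event time was enabled, but no timestamps/watermarks were
// ever assigned, so watermarks never advanced and the job appeared blocked.
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

// Now: the line above is removed, so the default (processing time)
// applies and results flow immediately.
```

This is a config fragment for illustration only; the original email's exact code is not recoverable from the archive.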
Hi Biplob,
Great to hear that everything worked out and that you are not blocked!
For the timestamp-assignment issue, you mean that you specified no timestamp
extractor in your job and all your elements had the Long.MIN_VALUE timestamp, right?
Kostas
> On May 31, 2017, at 1:28 PM, Biplob Biswas wrote:
Hi Dawid,
Thanks for the response. Timeout patterns work like a charm; I had seen them
previously but didn't understand what they do, thanks for explaining that.
Also, my problem with no alerts is solved now.
The problem was that I was using event time for processing, whereas my
events didn't have any timestamps.
Hi Biplob,
The message you mention should not be a problem here. It just says you
can't use your events as POJOs (e.g. you can't use keyBy("chargedAccount")).
Your code seems fine and without some example data I think it will be hard
to help you.
As for PART 2 of your first email:
In 1.3 we
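On the POJO note earlier in this email: Flink only treats a class as a POJO (and thus allows field-expression keys like keyBy("chargedAccount")) if the class is public, has a public no-argument constructor, and every field is public or reachable through getters and setters. A sketch of a compliant event type (the class and field names are assumptions taken from the thread, not Biplob's actual code):

```java
// A Flink-compliant POJO sketch: public class, public no-arg
// constructor, public fields. Types like this can be keyed with
// keyBy("chargedAccount"); types that miss one of these rules fall back
// to generic serialization and field-expression keying fails.
public class TransactionEvent {
    public String chargedAccount;     // public field: usable in keyBy("chargedAccount")
    public long creationTimestamp;

    public TransactionEvent() { }     // public no-arg constructor is required

    public TransactionEvent(String chargedAccount, long creationTimestamp) {
        this.chargedAccount = chargedAccount;
        this.creationTimestamp = creationTimestamp;
    }
}
```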
Hello Kostas,
I made the necessary changes and adapted the code to reflect the changes
in 1.4-SNAPSHOT. I still see similar behaviour: I can see that the data
is there after the partitionedInput stream, but no alerts are being raised.
I see some info log on my console as follows:
INFO o.a.f.a.java
Hi Biplob,
For the 1.4 version, the input of the select function has changed to expect a
list of matching events per pattern name (a Map<String, List<IN>> instead of a
Map<String, IN>), as we have added quantifiers.
Also, the FilterFunction has changed to SimpleCondition.
The documentation is lagging a bit behind, but it is coming soon.
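The shape change described above can be modeled with plain Java collections: before quantifiers a pattern name mapped to one event, afterwards it maps to the list of all events matched under that name. A sketch using String as a stand-in for the pattern's input type (the helper names here are illustrative, not Flink API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Models the select-input change: pattern name -> one event (old)
// versus pattern name -> list of matched events (new, with quantifiers).
public class SelectSignatureChange {
    /** Old style: one event per pattern name. */
    public static Map<String, String> oldMatch(String startEvent) {
        Map<String, String> m = new HashMap<>();
        m.put("start", startEvent);
        return m;
    }

    /** New style: a list of events per pattern name. */
    public static Map<String, List<String>> newMatch(String startEvent) {
        Map<String, List<String>> m = new HashMap<>();
        m.put("start", Collections.singletonList(startEvent));
        return m;
    }
}
```

Code written against the old signature needs a `.get(0)` (or iteration) per pattern name after upgrading, which is a common source of compile errors when moving an old CEP job forward.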
Hello Kostas,
Thanks for the suggestions.
I checked, and I am getting my events in the partitionedInput stream when I
print it, but still nothing on the alert side. I checked the Flink UI for
backpressure and all seems normal (I am having at most 1000 events per
second on the Kafka topic, so
One additional comment, from your code it seems you are using Flink 1.2.
It would be worth upgrading to 1.3. The updated CEP library includes a lot of
new features and bugfixes.
Cheers,
Kostas
> On May 26, 2017, at 3:33 PM, Kostas Kloudas
> wrote:
>
> Hi Biplob,
>
> From a first scan of the
Hi Biplob,
From a first scan of the code I cannot find anything fishy.
You are working in processing time, given that you do not
provide any time characteristic specification, right?
In this case, if you print your partitionedInput stream, do you
see elements flowing as expected?
If elements are fl