Hi,
I have a Flink job that I can trigger a savepoint for with no problem.
However, if I cancel the job and then try to run it with the savepoint, I get
the following exception. Any ideas how I can debug or fix it? I am using the
exact same jar, so I did not modify the program in any manner. Using
> ly remove the corresponding entry from ZooKeeper. If
> this is the problem, I suggest using Flink's ZooKeeper namespaces feature to
> isolate different runs of a job.
>
> Best,
> Stefan
>
>
>> On 07.12.2016 at 13:20, Al-Isawi Rami wrote:
>>
>> Hi,
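For reference, a minimal command-line sketch of the savepoint-and-resume cycle
discussed above (the job ID, paths, and namespace name are placeholders; -z is
Flink's ZooKeeper namespace option):

    bin/flink savepoint <jobId>                        # trigger a savepoint
    bin/flink cancel <jobId>                           # cancel the running job
    bin/flink run -s <savepointPath> -z run2 job.jar   # resume under a fresh ZooKeeper namespace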
Hi,
I have a faulty Flink streaming program running on a cluster that is consuming
from Kafka, so I brought the cluster down. Now I have a new version that has the
fix. If I bring the Flink cluster up again, the old faulty program will be
recovered and it will consume and stream faulty results
otherwise the
event time windows won't work.
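For context, a minimal sketch of the event-time setup this refers to (assuming
Flink 1.x; MyEvent and its timestamp field are hypothetical):

    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // without this, TumblingEventTimeWindows will never fire
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    events
        .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<MyEvent>() {
            @Override
            public long extractAscendingTimestamp(MyEvent e) {
                return e.getTimestamp(); // hypothetical event-time field
            }
        })
        .keyBy("someKey")
        .window(TumblingEventTimeWindows.of(Time.minutes(1)))
        .sum("value"); // hypothetical numeric field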
Cheers,
Till
On Tue, Aug 16, 2016 at 2:42 PM, Al-Isawi Rami <rami.al-is...@comptel.com> wrote:
Hi,
Why is this combination not possible, even though I am setting
"assignTimestampsAndWatermarks" correctly on the DataStream?
I would like Flink to be ticking on processing time, but also utilize
TumblingEventTimeWindows, which is based on event time.
It is not possible because of:
java
this with my own ReduceFunction.
stream
    .keyBy("someKey")
    .reduce(new CustomReduceFunction()) // sum whatever fields you want and return the result
I think it does make sense that Flink could provide a generic sum function that
could sum over multiple fields, though.
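For illustration, a minimal sketch of such a custom ReduceFunction (assuming a
Tuple3<String, Integer, Integer> stream keyed on the first field, with both
numeric fields summed per key):

    import org.apache.flink.api.common.functions.ReduceFunction;
    import org.apache.flink.api.java.tuple.Tuple3;

    stream
        .keyBy(0)
        .reduce(new ReduceFunction<Tuple3<String, Integer, Integer>>() {
            @Override
            public Tuple3<String, Integer, Integer> reduce(
                    Tuple3<String, Integer, Integer> a,
                    Tuple3<String, Integer, Integer> b) {
                // keep the key, sum both value fields
                return new Tuple3<>(a.f0, a.f1 + b.f1, a.f2 + b.f2);
            }
        });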
-Jamie
On Tue, Jun 7,
f "sum", you can just specify them one after the other, like:
stream.sum(1).sum(2)
This works, because summing the two fields are independent. However,
in the case of "keyBy", the information is needed from both fields at
the same time to produce the key.
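For example, a minimal sketch (hypothetical Tuple3 stream; fields 0 and 1
together form the composite key, and field 2 is summed):

    stream.keyBy(0, 1).sum(2);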
Best,
Gábor
2016-06-0
Hi,
Is there any reason why "keyBy" accepts multiple fields while, for example,
"sum" does not?
-Rami
What's wrong with doing that update in the Flink job via an HTTP REST call
(updating the customer resource), rather than writing directly to a database?
The reason I'd like to do it this way is to decouple the underlying database
from Flink.
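A minimal sketch of what such a REST update from a sink might look like (the
endpoint and JSON payload are hypothetical; no retries or batching):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    public class RestUpdateSink extends RichSinkFunction<String> {
        @Override
        public void invoke(String customerJson) throws Exception {
            // hypothetical customer service endpoint; keeps the DB behind the service
            URL url = new URL("http://customer-service/customers");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            conn.getOutputStream().write(customerJson.getBytes("UTF-8"));
            if (conn.getResponseCode() >= 300) {
                throw new RuntimeException("Update failed: HTTP " + conn.getResponseCode());
            }
            conn.disconnect();
        }
    }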
Josh
On Mon, May 23, 2016 at 2:35 PM, Al-Isawi
public void setProductId(String productId) {
    this.productId = productId;
}
}
On Mon, May 23, 2016 at 3:40 PM, Al-Isawi Rami <rami.al-is...@comptel.com> wrote:
Thanks Flavio, but as you can see in my code, I have already declared my POJO
to achieve those conditions:
public class PojoExample {
public
public fields. If the field name is foo, the getter and setter must
be called getFoo() and setFoo().
I don't know whether you actually also need to implement hashCode() and equals().
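A minimal sketch of a POJO that satisfies those rules (the field name is
hypothetical):

    public class Product {
        private String productId;

        public Product() {}  // public no-argument constructor

        public String getProductId() { return productId; }
        public void setProductId(String productId) { this.productId = productId; }
    }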
Best,
Flavio
On Mon, May 23, 2016 at 3:24 PM, Al-Isawi Rami <rami.al-is...@comptel.com> wrote:
Hi Josh,
I am no expert in Flink yet, but here are my thoughts on this:
1. What about streaming an event to Flink every time the DB of items has an
update? Then, in some background thread, you get the new data from the DB, be it
through REST (if there are only a few updates a day), and then load the res
Hi,
I was trying to test some specific issue, but now I cannot seem to get the very
basic case working. It is most likely that I am blind to something; would
anyone have a quick look at it?
https://gist.github.com/rami-alisawi/d6ff33ae2d4d6e7bb1f8b329e3e5fa77
It is just a collection of POJOs wher