Hi Chesnay,
Sorry for causing the confusion. I solved the problem by following another
person's recommendation on the other post about using a wrapper POJO.
So, I used a wrapper MonitoringTuple to wrap the Tuple and that solved my
problem with varying number of fields in the Tuple interface.
publi
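The preview cuts off before the class body. A minimal sketch of such a wrapper POJO (field and method names are my assumptions, not taken from the original post) could look like this:

import org.apache.flink.api.java.tuple.Tuple;

// Minimal wrapper POJO sketch: the concrete Tuple arity stays hidden behind a single
// field, so downstream operators and state descriptors only ever see MonitoringTuple.
public class MonitoringTuple {

    private Tuple tuple;

    // Flink POJO rules: public no-arg constructor plus getters/setters.
    public MonitoringTuple() {}

    public MonitoringTuple(Tuple tuple) {
        this.tuple = tuple;
    }

    public Tuple getTuple() {
        return tuple;
    }

    public void setTuple(Tuple tuple) {
        this.tuple = tuple;
    }
}

Because the field type is the abstract Tuple class, Flink will likely fall back to its generic (Kryo) serializer for that field.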
Hi Fabian,
Sorry, but I am still confused about your guide. If I union the Toggle
stream with the StateReportTrigger stream, would that mean I need to make
my Toggles broadcast state? Or is there some way to modify the keyed
states from within the processBroadcastElement() method?
I tried to i
Thanks Fabian,
I'm looking into a way to enrich it without having to know the internal
fields of the original event type.
Right now what I managed to do is to map Car into a TaggedEvent prior
to the SQL query, with tags empty, and then run the SQL query selecting *origin,
enrich(..) as tags*
Not sur
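A rough sketch of that workaround (TaggedEvent and its field names are my guesses, not from the original post; Car is assumed to be an existing POJO):

import java.util.ArrayList;
import java.util.List;
import org.apache.flink.api.common.functions.MapFunction;

// Wrapper type: carries the untouched original event plus a tag list that starts
// empty and is only filled by the enrich() call in the SQL query.
public class TaggedEvent {
    public Car origin;
    public List<String> tags = new ArrayList<>();

    public TaggedEvent() {}
}

// Mapper applied before registering the stream as a table.
class TagWrapper implements MapFunction<Car, TaggedEvent> {
    @Override
    public TaggedEvent map(Car car) {
        TaggedEvent event = new TaggedEvent();
        event.origin = car;
        return event;
    }
}

// In the job: DataStream<TaggedEvent> tagged = cars.map(new TagWrapper());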
You are right, thanks. But something is still not totally clear to me. I'll
reuse your diagram with a little modification:
DataStream a = ...
a.map(A).map(B).keyBy().timeWindow(C)
and execute this with parallelism 2. However, the keyBy generates only a single
key value, and assume they all go
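For reference, a concrete sketch of that pipeline (the element type and window size are arbitrary choices of mine): with a constant key, every record is routed to the same parallel instance of the window operator, even though the maps run with parallelism 2.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SingleKeySketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(2);

        env.fromElements(1L, 2L, 3L, 4L)
           .map(x -> x + 1)               // runs with parallelism 2
           .map(x -> x * 2)               // runs with parallelism 2
           .keyBy(x -> "only-key")        // a single key value for every record
           .timeWindow(Time.seconds(10))  // all windows evaluated by one subtask
           .reduce((x, y) -> x + y)
           .print();

        env.execute("single-key sketch");
    }
}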
Thanks, I didn't know that. But it is checkpointed to RocksDB, isn't it? BTW, is
this special treatment of operator state documented anywhere?
On 2019/05/09 07:39:34, Fabian Hueske wrote:
> Hi,
>
> Yes, IMO it is more clear.
> However, you should be aware that operator state is maintained on he
Hi,
In an unwindowed keyed stream using event-time semantics, is state
stored indefinitely or does it get expired eventually? (I was wondering if the
state inherits the event time of the element that updated it, and whether it
expires when the watermark goes past it.)
Thanks,
Frank
Hi,
Please find my response below.
On Fri., May 3, 2019 at 16:16, an0 wrote:
> Thanks, but it doesn't seem to cover this rule:
> --- Quote
> Watermarks are generated at, or directly after, source functions. Each
> parallel subtask of a source function usually generates its watermarks
> indep
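As a small illustration of the per-subtask behaviour that passage describes, an assigner placed directly after the source is evaluated independently by each parallel instance (the event type MyEvent, its accessor, and the 5 second bound are assumptions; events is an assumed DataStream<MyEvent>):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

// Each parallel subtask of this assigner tracks its own maximum timestamp and emits
// its own watermarks; downstream operators take the minimum over their input channels.
DataStream<MyEvent> withTimestamps = events.assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(5)) {
            @Override
            public long extractTimestamp(MyEvent event) {
                return event.getEventTime();   // assumed accessor on the event POJO
            }
        });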
Hi,
Passing a Context through a DataStream definitely does not work.
You'd need to have the keyed state that you want to scan over in the
KeyedBroadcastProcessFunction.
For the toggle filter use case, you would need to have a unioned stream
with Toggle and StateReport events.
For the output, you
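A fragmentary sketch of that union (all names here are placeholders, not from the thread): assuming the Toggle and StateReport streams are both typed as a common ControlEvent interface, they can be unioned into one keyed stream and then connected to the broadcast stream.

// toggles and stateReports are assumed to be DataStream<ControlEvent>; union()
// requires both inputs to have exactly the same type.
DataStream<ControlEvent> unioned = toggles.union(stateReports);

unioned.keyBy(ControlEvent::getKey)
       .connect(broadcastRules)                   // broadcastRules: a BroadcastStream
       .process(new MyKeyedBroadcastFunction());  // a KeyedBroadcastProcessFunction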
Thank you Congxian and Fabian.
@Fabian: could you please give a bit more details? My understanding is: to
pass the context itself and an OutputTag to the KeyedStateFunction parameter
of KeyedBroadcastProcessFunction#Context.applyToKeyedState(), and from
within that KeyedStateFunction.process() se
Hi everybody,
any news on this? For us it would be VERY helpful to have such a feature
because we need to execute a call to a REST service once a job ends.
Right now we do this after env.execute(), but this works only if the job
is submitted via the CLI client; the REST client doesn't execute anyth
Hi,
The KeyedBroadcastProcessFunction has a method to iterate over all keys of
a keyed state.
This function is available via the Context object of the
processBroadcastElement() method.
Hence you need a broadcast message to trigger the operation.
Best, Fabian
On Thu., May 9, 2019 at 08:46, C
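A sketch of the pattern described above (the state name, key and event types, and the side output tag are my assumptions): the keyed side counts events per key, and a broadcast trigger message makes the function scan all keyed state and emit a report to a side output.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.KeyedStateFunction;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

// Sketch: keyed events (String) are counted per key; a broadcast trigger (String)
// scans all keys of that state and reports them to a side output.
public class StateReportFunction
        extends KeyedBroadcastProcessFunction<String, String, String, String> {

    public static final OutputTag<String> REPORTS =
            new OutputTag<String>("state-reports") {};

    private final ValueStateDescriptor<Long> countDesc =
            new ValueStateDescriptor<>("count", Long.class);

    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(countDesc);
    }

    @Override
    public void processElement(String value, ReadOnlyContext ctx, Collector<String> out)
            throws Exception {
        Long current = count.value();
        count.update(current == null ? 1L : current + 1L);
        out.collect(value);
    }

    @Override
    public void processBroadcastElement(String trigger, Context ctx, Collector<String> out)
            throws Exception {
        // The broadcast message is only the trigger; the scan itself runs over all keys.
        ctx.applyToKeyedState(countDesc, new KeyedStateFunction<String, ValueState<Long>>() {
            @Override
            public void process(String key, ValueState<Long> state) throws Exception {
                ctx.output(REPORTS, key + " -> " + state.value());
            }
        });
    }
}

Downstream, the reports would then be read from the side output, e.g. via getSideOutput(StateReportFunction.REPORTS) on the resulting stream.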
Hi,
you can use the value construction function ROW to create a nested row (or
object).
However, you have to explicitly reference all attributes that you will add.
If you have a table Cars with (year, modelName) a query could look like
this:
SELECT
ROW(year, modelName) AS car,
enrich(year, m
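The query is cut off above; one possible completion, issued through the Table API, is sketched below. tableEnv is assumed to be a table environment with the Cars table and the enrich scalar function already registered, and the enrich arguments are guesses; note that year is a reserved keyword in Flink SQL and is therefore backtick-quoted here.

import org.apache.flink.table.api.Table;

Table result = tableEnv.sqlQuery(
        "SELECT ROW(`year`, modelName) AS car, enrich(`year`, modelName) AS tags FROM Cars");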
Hi,
Yes, IMO it is more clear.
However, you should be aware that operator state is maintained on heap only
(not in RocksDB).
Best, Fabian
On Wed., May 8, 2019 at 20:44, an0 wrote:
> I switched to using operator list state. It is more clear. It is also
> supported by RocksDBKeyedStateBacke
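For context, a generic sketch of operator list state (not the poster's actual function): the running count lives in a plain field and is copied into a ListState on every checkpoint; as noted above, this state is kept on the heap regardless of which keyed-state backend is configured.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

public class CountingMapper implements MapFunction<Long, Long>, CheckpointedFunction {

    private transient ListState<Long> checkpointedCount;
    private long count;

    @Override
    public Long map(Long value) {
        count++;
        return value;
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        checkpointedCount.clear();
        checkpointedCount.add(count);
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        checkpointedCount = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("count", Long.class));
        if (context.isRestored()) {
            for (Long c : checkpointedCount.get()) {
                count += c;   // on rescaling, list entries are redistributed and summed up
            }
        }
    }
}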
Hi,
I created FLINK-12460 to update the documentation.
Cheers, Fabian
On Wed., May 8, 2019 at 17:48, Flavio Pompermaier <pomperma...@okkam.it> wrote:
> Great, thanks Till!
>
> On Wed, May 8, 2019 at 4:20 PM Till Rohrmann wrote:
>
>> Hi Flavio,
>>
>> taskmanager.tmp.dirs is the deprecated
Ok, thanks a lot for the clarification! Adding an "Example" to the right of
"Description" would be very helpful (IMHO).
Best,
Flavio
On Wed, May 8, 2019 at 6:19 PM Xingcan Cui wrote:
> Hi Flavio,
>
> In the description, resultX is just an identifier for the result of the
> first meeting conditio
Hi Steve,
afaik there is no such thing in Flink. I agree that Flink's testing
utilities should be improved. If you implement such a source, then you
might be able to contribute it back to the community. That would be super
helpful.
Cheers,
Till
On Wed, May 8, 2019 at 6:40 PM Steven Nelson
wrote
Hi Mu,
Is there anything that looks like `Received late message for now expired
checkpoint attempt ${checkpointID} from ${taskExecutionID} of job ${jobID}` in
the JM log?
If yes, that means this task took too long to complete the checkpoint (maybe it
received the barrier too late, maybe it spent too much time doing the che