map state
with some extra value states to simulate it.
Best,
Xingcan
On Mon, Nov 21, 2022 at 9:20 PM tao xiao wrote:
> any suggestion is highly appreciated
>
> On Tue, Nov 15, 2022 at 8:50 PM tao xiao wrote:
>
>> Hi team,
>>
>> I have a Flink job that joins
Hi Timo,
Thanks for the reply! The document is really helpful. I can solve my
current problem with some workarounds. Will keep an eye on this topic!
Best,
Xingcan
On Fri, May 21, 2021, 12:08 Timo Walther wrote:
> Hi Xingcan,
>
> we had a couple of discussions around the timestamp
the Flink DataType to TIMESTAMP_WITH_LOCAL_TIME_ZONE, Iceberg
complains "timestamptz cannot be promoted to timestamp".
Does anyone have any thoughts on this?
Thanks,
Xingcan
Hi Juha and Chesnay,
I do appreciate your prompt responses! I'll also continue to investigate
this issue.
Best,
Xingcan
On Wed, Jan 27, 2021, 04:32 Chesnay Schepler wrote:
> (setting this field is currently not possible from a Flink user
> perspective; it is something I will
so that the
metrics data is buffered in memory and causes OOM.
I'm running Flink 1.11.2 on EMR-6.2.0 with flink-metrics-datadog-1.11.2.jar.
Thanks,
Xingcan
,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/cluster_setup.html
> On Sep 7, 2019, at 3:30 AM, Dipanjan Mazumder wrote:
>
> Hi Xingcan,
>
> Thank
.
Hope that helps.
Best,
Xingcan
> On Sep 7, 2019, at 1:57 AM, Dipanjan Mazumder wrote:
>
> Hi Guys,
>
> I was going through the Flink sql client configuration YAML from the
> training example and came across a section in the conf
Congrats Rong!
Best,
Xingcan
> On Jul 11, 2019, at 1:08 PM, Shuyi Chen wrote:
>
> Congratulations, Rong!
>
> On Thu, Jul 11, 2019 at 8:26 AM Yu Li wrote:
> Congratulations Rong!
>
> Best Regards,
> Yu
>
>
Yes, Mans. You can use both processing-time and event-time timers if you set
the time characteristic to event-time. They'll be triggered by their own time
semantics, separately (note that there's no watermark for processing time).
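As a plain-Java sketch (no Flink dependencies; class and method names are mine, not Flink's API), the two independent timer services can be pictured as two queues, one driven by watermarks and one by the wall clock:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class TwoTimerServices {
    final TreeMap<Long, String> eventTimeTimers = new TreeMap<>();
    final TreeMap<Long, String> procTimeTimers = new TreeMap<>();

    void registerEventTimeTimer(long ts, String tag) { eventTimeTimers.put(ts, tag); }
    void registerProcessingTimeTimer(long ts, String tag) { procTimeTimers.put(ts, tag); }

    // Called on watermark advance: fires only event-time timers.
    List<String> onWatermark(long wm) { return fire(eventTimeTimers, wm); }

    // Called on wall-clock advance: fires only processing-time timers.
    List<String> onWallClock(long now) { return fire(procTimeTimers, now); }

    private static List<String> fire(TreeMap<Long, String> timers, long upTo) {
        List<String> fired = new ArrayList<>();
        while (!timers.isEmpty() && timers.firstKey() <= upTo) {
            fired.add(timers.pollFirstEntry().getValue());
        }
        return fired;
    }

    public static void main(String[] args) {
        TwoTimerServices t = new TwoTimerServices();
        t.registerEventTimeTimer(100, "cleanup");
        t.registerProcessingTimeTimer(100, "emit");
        System.out.println(t.onWatermark(150)); // only the event-time timer fires
    }
}
```

The two queues never interact, which is why both kinds of timers can coexist in one function.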
Cheers,
Xingcan
> On Jul 9, 2019, at 11:40 AM, M Sing
Hi Aljoscha,
Thanks for your response.
With all this preliminary information collected, I’ll start a formal process.
Thank everybody for your attention.
Best,
Xingcan
> On Jul 8, 2019, at 10:17 AM, Aljoscha Krettek wrote:
>
> I think this would benefit from a FLIP, that neatly su
there's
no doubt that its concept has drifted.
As the split/select is quite an ancient API, I cc'ed this to more members. It
would be great if you could share your opinions on this.
Thanks,
Xingcan
[1]
https://lists.apache.org/thread.html/f94ea5c97f96c705527dcc809b0e2b69e87a4c5d400cb7
or but a different name) that can be used to replace the existing
split/select.
3) Keep split/select but change the behavior/semantic to be "correct".
Note that this is just a vote for gathering information, so feel free to
participate and share your opinions.
The voting time will end
Hi Flavio,
In the description, resultX is just an identifier for the result of the first
condition that is met.
Best,
Xingcan
> On May 8, 2019, at 12:02 PM, Flavio Pompermaier wrote:
>
> Hi to all,
> in the documentation of the Table Conditional functions [1] the example is
> in
Hi,
As the message said, some columns share the same names. You could first rename
the columns of one table with the `as` operation [1].
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tableApi.html#scan-projection-and-filter
> On Mar 15, 2019, at 9:03
Hi Karl,
I think this is a bug and created FLINK-11769
<https://issues.apache.org/jira/browse/FLINK-11769> to track it.
Best,
Xingcan
> On Feb 26, 2019, at 2:02 PM, Karl Jin wrote:
>
> I removed the multiset> field and the join worked fine.
> The field was create
>> |-- i_uc_pk: String
>> |-- i_uc_update_ts: TimeIndicatorTypeInfo(rowtime)
>> |-- image_count: Long
>> |-- i_data: Multiset>
>>
>> On Mon, Feb 25, 2019 at 4:54 PM Xingcan Cui wrote:
>>
>>> Hi Karl,
>>>
>>> It seems that
Hi Karl,
It seems that some field types of your inputs were not properly extracted.
Could you share the result of `printSchema()` for your input tables?
Best,
Xingcan
> On Feb 25, 2019, at 4:35 PM, Karl Jin wrote:
>
> Hello,
>
> First time posting, so please let me know if
Hi Abhijeet,
If you want to perform window-join in the DataStream API, the window
configurations on both sides must be exactly the same.
For your case, maybe you can try adding a 5 mins delay on event times (and
watermarks) of the faster stream.
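A rough sketch of the delay idea, in plain Java rather than an actual Flink TimestampAssigner (the numbers, the 10-minute window size, and all names are illustrative):

```java
public class DelayedTimestamp {
    static final long DELAY_MS = 5 * 60 * 1000L; // the suggested 5-minute delay

    // Shift an element's event time (and implicitly its watermark) back.
    static long extractTimestamp(long originalTs) {
        return originalTs - DELAY_MS;
    }

    // Start of the tumbling window that a timestamp falls into.
    static long windowStart(long ts, long windowSizeMs) {
        return ts - (ts % windowSizeMs);
    }

    public static void main(String[] args) {
        long fastTs = 999_999_700_000L;      // element on the faster stream
        long slowTs = fastTs - DELAY_MS;     // matching element, 5 minutes behind
        long w = 10 * 60 * 1000L;            // 10-minute tumbling windows
        // Without the shift these two land in different windows;
        // after shifting the faster stream they share one.
        System.out.println(windowStart(fastTs, w) == windowStart(slowTs, w));
        System.out.println(windowStart(extractTimestamp(fastTs), w) == windowStart(slowTs, w));
    }
}
```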
Hope that helps.
Best,
Xingcan
> On Nov
possible); 2) filter out some records or
reduce the number of fields in advance.
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql.html#joins
> On Nov 21, 2018, at 2:06 AM, liujiangang wrote:
>
> I am using IntervalJoin function to join two streams
Hi Henry,
In most SQL conventions, single quotes are for Strings, while double quotes are
for identifiers.
Best,
Xingcan
> On Oct 31, 2018, at 7:53 PM, 徐涛 wrote:
>
> Hi Experts,
> When I am running the following SQL in FLink
Hi Vishal,
Actually, you could provide multiple MapStateDescriptors to the `broadcast()`
method and then use them separately.
Best,
Xingcan
> On Sep 18, 2018, at 9:29 PM, Vishal Santoshi
> wrote:
>
> I could do that, but I was under the impression that 2 or more disparate
Hi Vishal,
You could try 1) merging the two rule streams first with the `union` method
if they share the same type, or 2) connecting them and encapsulating the records
from both sides in a unified type (e.g., Scala's Either).
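For option 2), a minimal Either-like wrapper could look as follows (plain Java mirroring scala.Either; the rule types and all names are illustrative):

```java
public class EitherDemo {
    // A tiny Either: exactly one of `left`/`right` is set.
    static final class Either<L, R> {
        final L left;
        final R right;
        private Either(L l, R r) { left = l; right = r; }
        static <L, R> Either<L, R> left(L l)  { return new Either<>(l, null); }
        static <L, R> Either<L, R> right(R r) { return new Either<>(null, r); }
        boolean isLeft() { return left != null; }
    }

    // Each side of the connected streams wraps its record before the merge.
    static Either<String, Integer> fromRuleA(String rule) { return Either.left(rule); }
    static Either<String, Integer> fromRuleB(int ruleId)  { return Either.right(ruleId); }

    public static void main(String[] args) {
        System.out.println(fromRuleA("r1").isLeft());  // true
        System.out.println(fromRuleB(42).isLeft());    // false
    }
}
```

Downstream operators then receive a single unified type and can dispatch on `isLeft()`.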
Best,
Xingcan
> On Sep 18, 2018, at 8:59 PM, Vishal Santoshi
>
,
Xingcan
> On Sep 18, 2018, at 6:50 AM, Rong Rong wrote:
>
> This is in fact a very strange behavior.
>
> To add to the discussion, when you mentioned: "raw Flink (windowed or not)
> nor when using Flink CEP", how were the comparisons being done?
> Also, were you
Hi John,
I’ve not dug into this yet, but IMO, it shouldn’t be the case. I just wonder
how you judged that the data in the first five seconds was not processed by
the system?
Best,
Xingcan
> On Sep 17, 2018, at 11:21 PM, John Stone wrote:
>
> Hello,
>
> I'm checking if
Hi,
I’ve tested your code in my local environment and everything worked fine. It’s
a little weird to see output like that. I wonder if you could give more
information about your environment, e.g., your Flink version and execution
settings.
Thanks,
Xingcan
> On Sep 16, 2018, at 3:19
d the `processElement()` to modify the
states. But this API does not really broadcast the states. You should keep the
logic for `processBroadcastElement()` deterministic, so that every parallel
instance derives exactly the same broadcast state from the same broadcast
stream.
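A plain-Java sketch of the point about determinism (two maps stand in for two parallel subtasks; names and the update logic are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BroadcastDeterminism {
    // One parallel instance replaying the broadcast stream into its own state.
    static Map<String, Integer> runSubtask(List<Map.Entry<String, Integer>> broadcastStream) {
        Map<String, Integer> state = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : broadcastStream) {
            state.put(e.getKey(), e.getValue()); // deterministic: same input, same effect
        }
        return state;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> stream =
                List.of(Map.entry("rule1", 1), Map.entry("rule2", 2), Map.entry("rule1", 3));
        // Same broadcast stream + deterministic logic = identical state on every subtask.
        System.out.println(runSubtask(stream).equals(runSubtask(stream))); // true
    }
}
```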
Best,
Xingcan
> On Aug 27, 2018, at 10:23 PM, Radu Tudoran wrote:
choices.
Best,
Xingcan
> On Aug 21, 2018, at 6:03 PM, 徐涛 wrote:
>
> Hi Fabian,
> Is the behavior a bit weird? Because it leads to data inconsistency.
>
> Best,
> Henry
>
>> On Aug 21, 2018, at 5:14 PM, Fabian Hueske wrote:
Hi Averell,
With the CoProcessFunction, you could get access to the time-related services
which may be useful when maintaining the elements in Stream_C and you could get
rid of type casting with the Either class.
Best,
Xingcan
> On Aug 15, 2018, at 3:27 PM, Averell wrote:
>
> Thank
Hi Averell,
I am also in favor of option 2. Besides, you could use CoProcessFunction
instead of CoFlatMapFunction and try to wrap elements of stream_A and stream_B
using the `Either` class.
Best,
Xingcan
> On Aug 15, 2018, at 2:24 PM, vino yang wrote:
>
> Hi Averell,
>
>
Hi Soheil,
That may relate to your parallelism, since each extractor instance computes its
own watermarks. Try printing the max timestamps with the current thread’s name
and you will notice this.
Best,
Xingcan
> On Jul 30, 2018, at 3:05 PM, Soheil Pourbafrani wrote:
>
> Using Flink
other Dataset as
the result. That means you cannot easily apply the model on streams, at least
for now.
There are two options to solve this: (1) train the model with another
framework to produce a simple function, or (2) adjust your model serving to be a
series of batch jobs.
Hope that helps,
Xingcan
cts/flink/flink-docs-master/dev/api_concepts.html#rich-functions>
Best,
Xingcan
> On Jul 19, 2018, at 6:56 PM, Steffen Wohlers wrote:
>
> Hi all,
>
> I’m new to Apache Flink and I have the following issue:
>
> I would like to enrich data via map function. For that I call
API [2].
Hope that helps.
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/windows.html#global-windows
[2]
https://ci.apache.org/pr
Hi Rakkesh,
Did you call `execute()` on your `StreamExecutionEnvironment`?
Best,
Xingcan
> On Jul 18, 2018, at 5:12 PM, Titus Rakkesh wrote:
>
> Dear Friends,
> I have 2 streams of the below data types.
>
> DataStream> splittedActivationTuple;
>
> Dat
Hi Soheil,
The `getSideOutput()` method is defined on the operator instead of the
datastream.
You can invoke it after any action (e.g., map, window) performed on a
datastream.
Best,
Xingcan
> On Jul 17, 2018, at 3:36 PM, Soheil Pourbafrani wrote:
>
> Hi, according to the document
Hi Ashwin,
I encountered this problem before. You should make sure that the version of
your Flink cluster and the version of the SQL Client you run are exactly the
same.
Best,
Xingcan
> On Jul 3, 2018, at 10:00 PM, Chesnay Schepler wrote:
>
> Can you provide us with the JobMan
Yes, that makes sense and maybe you could also generate dynamic intervals
according to the time spans.
Thanks,
Xingcan
> On May 16, 2018, at 9:41 AM, Dhruv Kumar wrote:
>
> As a part of my PhD research, I have been working on few optimization
> algorithms which try to jointly op
Hi Chengzhi,
more details about partitioning mechanisms can be found at
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/#physical-partitioning.
Best,
Xingcan
Hi Dhruv,
since there are timestamps associated with each record, I was wondering why you
try to replay them with a fixed interval. Can you give a little explanation
about that?
Thanks,
Xingcan
> On May 16, 2018, at 2:11 AM, Ted Yu wrote:
>
> Please see the following:
Function[3].
IMO, the watermark may not work as expected for your use case. Besides, since
the file will be updated unpredictably, it's hard to guarantee the precision of
results.
Hope that helps,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/datastream_api.html#
Hi Chengzhi,
you said that Stream B, which comes from a file, will be updated unpredictably.
I wonder if you could share more about how you judge that an item (from Stream
A, I suppose) is not in the file, and what watermark generation strategy you
chose?
Best,
Xingcan
> On May 12, 2018, at 12
) `property1` and `property2` are not arrays; (2) their
types have overridden the `hashCode()` method [2].
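For illustration, a hypothetical POJO key that satisfies rule (2) might look like this (field names are mine, not from the thread); without the `hashCode()` override, two equal-valued keys could hash differently and land on different partitions:

```java
import java.util.Objects;

public class KeyPojo {
    public String property1;
    public int property2;

    public KeyPojo() {}  // POJOs need a public no-arg constructor
    public KeyPojo(String p1, int p2) { property1 = p1; property2 = p2; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof KeyPojo)) return false;
        KeyPojo k = (KeyPojo) o;
        return property2 == k.property2 && Objects.equals(property1, k.property1);
    }

    @Override
    public int hashCode() { return Objects.hash(property1, property2); }

    public static void main(String[] args) {
        // Equal keys now agree on their hash, so they are grouped together.
        System.out.println(new KeyPojo("a", 1).hashCode() == new KeyPojo("a", 1).hashCode());
    }
}
```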
Hope that helps,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/types_serialization.html#rules-for-pojo-types
Yes, Chengzhi. That’s exactly what I mean. But you should be careful with the
semantics of your pipeline. The problem cannot be gracefully solved if there’s
a natural time offset between the two streams.
Best, Xingcan
> On 14 Apr 2018, at 4:00 AM, Chengzhi Zhao wrote:
>
> H
time offset to one
of your streams, which can make them more “close” to each other.
Best,
Xingcan
> On 13 Apr 2018, at 11:48 PM, Chengzhi Zhao wrote:
>
> Hi, flink community,
>
> I had an issue with slow watermark advances and needs some help here. So here
> is what ha
Hi Shishal,
you could manually separate the late events from the main stream in your
process function with the "side outputs"[1].
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/side_output.html
ink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/WindowWordCount.java>.
Hope that helps,
Xingcan
> On 12 Mar 2018, at 1:33 PM, Miyuru Dayarathna wrote:
>
> Hi,
>
> I need to create a sliding window of 4 events in Flink streaming application.
>
ala>
to hold back watermarks. We’ll continue improving the SQL/Table API part.
Best,
Xingcan
> On 9 Mar 2018, at 4:08 AM, Yan Zhou [FDS Science] wrote:
>
> Hi Xingcan, Timo,
>
> Thanks for the information.
> I am going to convert the result table to DataStream
Hi Dhruv,
there’s no need to implement the window logic with the low-level
`ProcessFunction` yourself. Flink has provided built-in window operators and
you just need to implement the `WindowFunction` for that [1].
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev
will be materialized as if
they were common fields (with the timestamp type). Currently, due to the
semantics problem, the time-windowed join cannot be performed on retract
streams. But you could try non-windowed join [3] after we fix this.
Best,
Xingcan
[1] https://issues.apache.org/jira/br
Hi Kant,
the non-windowed stream-stream join is not equivalent to the full-history join,
though they share the same SQL form. The retention time for records must be set
to balance storage consumption against the completeness of the results.
Best,
Xingcan
> On 7 Mar 2018, at 8:02 PM, kant kod
, 2]. The time-windowed outer join
has been supported since version 1.5 and the non-windowed outer join is still
work in progress.
Hope that helps.
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tableApi.html#joins
ocess function or
split the key space for your map state into static bins, thus you could
calculate a set of outdated keys before removing them.
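A plain-Java sketch of the static-bins idea (maps stand in for Flink's MapState; the bin span, the assumption that a key is only touched in one bin, and all names are mine):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TimeBins {
    final long binSpan;
    // For each coarse time bin, the keys last updated in that bin.
    final Map<Long, Set<String>> keysByBin = new HashMap<>();

    TimeBins(long binSpan) { this.binSpan = binSpan; }

    void touch(String key, long ts) {
        keysByBin.computeIfAbsent(ts / binSpan, b -> new HashSet<>()).add(key);
    }

    // All keys whose bin lies entirely before the cutoff time; these can be
    // removed without scanning the full map state key by key.
    Set<String> outdatedKeys(long cutoffTs) {
        Set<String> out = new HashSet<>();
        keysByBin.forEach((bin, keys) -> {
            if ((bin + 1) * binSpan <= cutoffTs) out.addAll(keys);
        });
        return out;
    }

    public static void main(String[] args) {
        TimeBins t = new TimeBins(10);
        t.touch("a", 3);   // bin 0, fully before cutoff 10
        t.touch("b", 15);  // bin 1, still live
        System.out.println(t.outdatedKeys(10)); // [a]
    }
}
```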
Hope that helps.
Best,
Xingcan
> On 5 Mar 2018, at 4:19 PM, Alberto Mancini wrote:
>
>
Hi Nagananda,
adding `flink-streaming-scala_${scala version}` to your Maven dependencies
would solve this.
Best,
Xingcan
> On 5 Mar 2018, at 2:21 PM, Nagananda M wrote:
>
> Hi All,
> Am trying to compile a sample program in apache flink using TableEnvironment
> and facin
Hi Felipe,
the `sortPartition()` method just LOCALLY sorts each partition of a dataset. To
achieve a global sorting, use this method after a `partitionByRange()` (e.g.,
`result.partitionByRange(0).sortPartition(0, Order.ASCENDING)`).
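To see why this composition yields a global order, here is a plain-Java simulation (two hand-made range partitions instead of Flink's partitionByRange; the boundary and names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class GlobalSort {
    // Range partitioning: disjoint, ordered key ranges per partition.
    static List<List<Integer>> rangePartition(List<Integer> data, int boundary) {
        List<Integer> low = new ArrayList<>(), high = new ArrayList<>();
        for (int v : data) (v < boundary ? low : high).add(v);
        return List.of(low, high);
    }

    // Local sort per partition, then concatenation in partition order.
    static List<Integer> sortPartitionsAndConcat(List<List<Integer>> parts) {
        return parts.stream()
                .map(p -> p.stream().sorted().collect(Collectors.toList()))
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(5, 1, 9, 3, 7, 2);
        // Because the ranges are disjoint and ordered, locally sorted
        // partitions concatenate into a globally sorted result.
        System.out.println(sortPartitionsAndConcat(rangePartition(data, 5)));
        // [1, 2, 3, 5, 7, 9]
    }
}
```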
Hope that helps,
Xingcan
> On 3 Mar 2018, at 9:33
Hi,
for periodically generated watermarks, you should use
`ExecutionConfig.setAutoWatermarkInterval()` to set an interval.
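Conceptually, a periodic assigner separates per-record bookkeeping from the interval-driven emission that `setAutoWatermarkInterval()` controls; a plain-Java sketch (names are mine, not Flink's API, and the out-of-orderness bound is an assumption):

```java
public class PeriodicWatermarks {
    long maxTs = Long.MIN_VALUE;
    final long maxOutOfOrderness;

    PeriodicWatermarks(long maxOutOfOrderness) { this.maxOutOfOrderness = maxOutOfOrderness; }

    // Called for every record: only tracks the maximum observed timestamp.
    void onElement(long ts) { maxTs = Math.max(maxTs, ts); }

    // Called by the framework once per auto-watermark interval: emits.
    long currentWatermark() { return maxTs - maxOutOfOrderness; }

    public static void main(String[] args) {
        PeriodicWatermarks g = new PeriodicWatermarks(1000);
        g.onElement(5000);
        g.onElement(4200);  // late element does not move the max back
        g.onElement(6100);
        System.out.println(g.currentWatermark()); // 5100
    }
}
```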
Hope that helps.
Best,
Xingcan
> On 3 Mar 2018, at 12:08 PM, sundy <543950...@qq.com> wrote:
>
>
>
> Hi, I got a problem in Flink and need your
Hi Esa and Fabian,
sorry for my inaccurate conclusion before, but I think the reason is clear now.
The org.apache.flink.streaming.api.scala._ and org.apache.flink.api.scala._
imports should not be used simultaneously due to conflicts. Just remove either
of them.
Best,
Xingcan
> On 22 Feb 2
objects (e.g., org.apache.flink.table.api.scala._) and they are the same.
@Esa, you can temporarily solve the problem by importing
org.apache.flink.streaming.api.scala.asScalaStream in your code and we'll
continue working on this issue.
Best,
Xingcan
> On 22 Feb 2018, at 4:47 PM, Esa H
Hi Esa,
just a reminder: don’t miss the dot and underscore.
Best,
Xingcan
> On 22 Feb 2018, at 3:59 PM, Esa Heikkinen
> wrote:
>
> Hi
>
> Actually I have also line “import org.apache.flink.streaming.api.scala” on my
> code, but this line seems to be highlighted wea
e() in processElement() and
once it exceeds the next window boundary, all the cached data should be
processed as if the window is fired.
Note that currently, there are only memory-based operator states provided.
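The buffering pattern described above can be sketched in plain Java (a stand-in for a ProcessFunction with state; names and the single-window-advance logic are simplifications):

```java
import java.util.ArrayList;
import java.util.List;

public class ManualWindow {
    final long windowSize;
    final List<Long> buffer = new ArrayList<>();  // stands in for operator state
    long nextBoundary;
    List<Long> lastFired = List.of();

    ManualWindow(long windowSize) {
        this.windowSize = windowSize;
        this.nextBoundary = windowSize;
    }

    void processElement(long ts) {
        if (ts >= nextBoundary) {
            // The element crossed the window boundary: "fire" the cached data.
            lastFired = new ArrayList<>(buffer);
            buffer.clear();
            nextBoundary += windowSize;
        }
        buffer.add(ts);
    }

    public static void main(String[] args) {
        ManualWindow w = new ManualWindow(10);
        w.processElement(1);
        w.processElement(5);
        w.processElement(12);            // crosses the boundary at 10
        System.out.println(w.lastFired); // [1, 5]
    }
}
```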
Hope this helps,
Xingcan
> On 19 Feb 2018, at 4:34 PM, Julien wrote:
>
> H
operators/#physical-partitioning>
to manually distribute your alert data and simulate an window operation by
yourself in a ProcessFuncton.
3. You may also choose to use some external system, such as an in-memory store,
which can work as a cache for your queries.
Best,
Xingcan
> On 19 Feb 2
with the Kafka console consumer tool to see
whether they can be consumed completely.
Besides, I wonder if you could provide the code for your Flink pipeline. That’ll
be helpful.
Best,
Xingcan
> On 18 Feb 2018, at 7:52 PM, Niclas Hedhman wrote:
>
>
> So, the producer is run (a
parallelism changes” may also refer to a parallelism
change after the job restarts (e.g., when a node crashes). Flink can make sure
that all the processing tasks and states will be safely re-distributed across
the new cluster.
Hope that helps.
Best,
Xingcan
> On 13 Feb 2018, at 5:18 PM, m
eline should be feasible I think.
However, if you want to join three streams, you may first join S1 with S2 to
produce S12 with a CoProcessFunction, and then set another CoProcessFunction to
join S12 with S3.
Hope that helps.
Best,
Xingcan
> On 10 Feb 2018, at 1:06 PM, m@xi wrote:
>
>
the "larger
stream", which should have produced complete results, and perform a nested
loop join (i.e., whenever comes a new record, join it with the fully cached
set).
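A plain-Java sketch of that nested-loop join against a fully cached side (the record layout, key position, and names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class CachedNestedLoopJoin {
    // Join one newly arriving record against the fully cached side.
    static List<String> joinNewRecord(List<String[]> cachedSide, String[] newRecord) {
        List<String> out = new ArrayList<>();
        for (String[] cached : cachedSide) {
            if (cached[0].equals(newRecord[0])) {  // join on the key at index 0
                out.add(cached[0] + ":" + cached[1] + "," + newRecord[1]);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> cache = List.of(
                new String[]{"k1", "a"},
                new String[]{"k2", "b"});
        System.out.println(joinNewRecord(cache, new String[]{"k1", "x"})); // [k1:a,x]
    }
}
```

In a real job, the cache would live in operator state and be populated from the "larger" stream before the joins begin.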
Hope this helps.
Best,
Xingcan
On Tue, Jan 30, 2018 at 7:42 PM, Marchant, Hayden
wrote:
> We have a use case wh
ink-docs-release-1.4/dev/table/sql.html#joins>.
Note that, for now, the UDTF left outer join cannot support
arbitrary conditions.
Hope that helps.
Best,
Xingcan
On 15/01/2018 6:11 PM, XiangWei Huang wrote:
Hi all,
Is it possible to join records read from a kafka stream with o
fire them when necessary.
Hope that helps.
Best,
Xingcan
On Thu, Dec 14, 2017 at 7:09 AM, Yan Zhou [FDS Science]
wrote:
> Hi,
>
>
> I am building a data pipeline with a lot of streaming join and
> over window aggregation. And flink SQL have these feature supported. However,
> th
rations since both *DataStream* and
*KeyedStream* can define their own global windows. Compared with other
windows (e.g., tumbling or sliding ones), it's more flexible to implement
your own triggers on it.
Hope that helps.
Best,
Xingcan
On Wed, Nov 15, 2017 at 2:12 AM, M Singh wrote:
> Hi:
&g
partitions (one per
key).
If the physical devices work in different time systems due to delay, the
event streams from them should be treated separately.
Hope that helps.
Best,
Xingcan
On Wed, Nov 8, 2017 at 11:48 PM, Shailesh Jain
wrote:
> Hi,
>
> I'm working on implementing a us
v/stream/state/state.html>
.
Hope that helps.
Best,
Xingcan
On Fri, Nov 3, 2017 at 2:11 PM, Maxim Parkachov
wrote:
> Hi Xingcan,
>
> On Fri, Nov 3, 2017 at 3:38 AM, Xingcan Cui wrote:
>
>> Hi Maxim,
>>
>> if I understand correctly, you actually need to JOIN the f
ually*. Maybe you can
try changing your function to be applied
on streams with such "coarse-grained" ordering. However, if the fully
ordered stream is necessary in your
application, I'm afraid you must cache and re-emit them in a user-defined
processFunction.
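The cache-and-re-emit idea can be sketched in plain Java (a priority queue stands in for keyed state inside a ProcessFunction; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class ReorderBuffer {
    // Buffered out-of-order timestamps, smallest first.
    final PriorityQueue<Long> buffer = new PriorityQueue<>();

    void add(long ts) { buffer.add(ts); }

    // On watermark advance, release everything at or below it, in order.
    List<Long> emitUpTo(long watermark) {
        List<Long> out = new ArrayList<>();
        while (!buffer.isEmpty() && buffer.peek() <= watermark) {
            out.add(buffer.poll());
        }
        return out;
    }

    public static void main(String[] args) {
        ReorderBuffer r = new ReorderBuffer();
        r.add(5);
        r.add(2);
        r.add(9);
        System.out.println(r.emitUpTo(6)); // [2, 5]
    }
}
```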
Best,
Xingcan
On Tue, Sep
certain records.
For more information, please refer to [1] and [2].
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_time.html
[2]
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_timestamps_watermarks.html
On Fri, Sep 8, 2017 at 4:24 AM
Hi Peter,
That's a good idea, but it may not be applicable to an iteration operator.
The operator cannot determine when to generate the "end-of-stream message" for
the feedback
stream.
The provided function (e.g., filter(_ > 0).map(_ - 1)) is stateless and has
no side-effec
he/flink/streaming/runtime/tasks/StreamIterationHead.java#L80>
).
Hope everything is considered this time : )
Best,
Xingcan
On Sat, Sep 2, 2017 at 10:08 PM, Peter Ertl wrote:
>
> Am 02.09.2017 um 04:45 schrieb Xingcan Cui :
>
> In your codes, all the the long values will sub
o (_ + 1) and you'll see the
infinite results. For more information,
you can refer to this
<https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/datastream_api.html#iterations>
or
read the javadoc.
Hope that helps.
Best,
Xingcan
On Fri, Sep 1, 2017 at 5:29 PM, Peter Ertl w
Hi Aljoscha,
Thanks for your explanation. I'll try what you suggest.
Best,
Xingcan
On Fri, Jun 16, 2017 at 5:19 PM, Aljoscha Krettek
wrote:
> Hi,
>
> I’m afraid that’s not possible out-of-box with the current APIs. I
> actually don’t know why the user-facing Partitioner only
,
Xingcan
ts to all index nodes (only left to left and right to
right).
Best,
Xingcan
On Fri, Mar 3, 2017 at 1:49 AM, Jain, Ankit wrote:
> If I understood correctly, you have just implemented flink broadcasting by
> hand :)
>
>
>
> You are still sending out the whole points dataset to each
is an all-to-all mapping, is it possible to
define a many-to-many mapping function that broadcasts shapes to more than
one (but not all) index area?
Best,
Xingcan
On Thu, Feb 23, 2017 at 7:07 PM, Fabian Hueske wrote:
> Hi Gwen,
>
> sorry I didn't read your answer, I was still wr
seen another thread discussing
about this, but cannot find it now. What do you think?
Best,
Xingcan
On Thu, Feb 23, 2017 at 6:41 AM, Fabian Hueske wrote:
> Hi Gwen,
>
> Flink usually performs a block nested loop join to cross two data sets.
> This algorithm spills one input to disk
Hi Vasia,
sorry, I should have read the archive before (it's already been posted
in FLINK-1526, though with an ugly format). Now everything's clear and I
think this thread should be closed here.
Thanks. @Vasia @Greg
Best,
Xingcan
On Tue, Feb 14, 2017 at 3:55 PM, Vasiliki Kala
empty)
the "pact.runtime.workset-empty-aggregator" will judge convergence of the
delta iteration and then the iteration just terminates. Is this a bug?
Best,
Xingcan
On Mon, Feb 13, 2017 at 5:24 PM, Xingcan Cui wrote:
> Hi Greg,
>
> Thanks for your attention.
>
> It
ns'
implementation? I can't think it through clearly now), so that's it for now.
Besides, I doubt anyone would really want to write
a graph algorithm with native Flink operators, and that's why Gelly was
designed, isn't it?
Best,
Xingcan
On Fri, Feb 10
) in edges during a vertex-centric
iteration. Then what can we do if an algorithm really needs that?
Thanks for your patience.
Best,
Xingcan
On Fri, Feb 10, 2017 at 4:50 PM, Vasiliki Kalavri wrote:
> Hi Xingcan,
>
> On 9 February 2017 at 18:16, Xingcan Cui wrote:
>
>> Hi Vasia,
>
6(MST Lib&Example). Considering the
complexity, the example is not
provided.)
I really appreciate all your help.
Best,
Xingcan
On Thu, Feb 9, 2017 at 5:36 PM, Vasiliki Kalavri
wrote:
> Hi Xingcan,
>
> On 7 February 2017 at 10:10, Xingcan Cui wrote:
>
>> Hi all,
>>
>
method for vertices. When no message is received, a
vertex just quits the next iteration. Should I manually send messages (like
heartbeats) to keep the vertices active?
c) I think we may need an initialization method in the ComputeFunction.
Any opinions? Thanks.
Best,
Xingcan
"Flink style".
I am glad to hear that adding the accumulator is in progress. As far
as I can see, the operations it supplies will adequately meet the demands.
I will stay focused on this topic.
Best,
Xingcan
On Wed, Jan 11, 2017 at 7:28 PM, Aljoscha Krettek
wrote:
> Hi,
> (
olate some principles of Flink (probably about states), but I insist
that unnecessary calculations should be avoided in stream processing.
So, could you give some advice? I am all ears : ) Or if you think that
is feasible, I'll think it through carefully and try to complete it.
Thank you and merry Chr
.
However, it seems that Flink deals with the window in a different way and
supplies more "formalized" APIs.
So, any tips on how to adapt these delta awareness caches in Flink or do
some refactors to make them suitable?
Thanks.
Best,
Xingcan