Thanks Till for the reply. The suggestions are really helpful for this
topic. Maybe some of what I mentioned was not clear or detailed enough.
Here is what I want to say:
1. Changing the log level is not suitable for this topic, as you said.
Because our internal log4j is old, this feature is implemented in a…
Thanks for starting this discussion. I do see the benefit of dynamically
configuring your Flink job and the cluster running it. Some of the use
cases which were mentioned here are already possible. E.g. adjusting the
log level dynamically can be done by configuring an appropriate logging
backend an…
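For reference: with the log4j2 backend that Flink ships by default since 1.11, dynamic log levels can be achieved via log4j2's file monitoring. An illustrative log4j2.properties fragment (appender names and the interval are only examples):

```properties
# With monitorInterval set, log4j2 re-reads this file periodically, so
# editing the level below takes effect without restarting the cluster.
monitorInterval = 30

rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender

appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```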
Big +1 for this feature; our use cases:
1. Resetting the Kafka offset in certain cases.
2. Stopping checkpoints in certain cases.
3. Changing the log level for debugging.
刘建刚 wrote on Friday, June 11, 2021 at 12:17 PM:
Thanks for all the discussions and suggestions. Since the topic has
been discussed for about a week, it is time to draw a conclusion, and new
ideas are welcome at the same time.
First, the topic started with use cases for the RESTful interface. The
RESTful interface supports many useful interact…
>
> 2. There are two kinds of existing special elements: special stream
> records (e.g. watermarks) and events (e.g. checkpoint barriers). They all
> flow through the whole DAG, but events need to be acknowledged by the
> downstream and can overtake records, while stream records cannot. So I'm
> wond…
> producing control events from JobMaster is similar to triggering a
> savepoint.
Paul, here is what I see as the difference: upon job or JobManager
recovery, we don't need to recover and replay the savepoint trigger signal.
On Tue, Jun 8, 2021 at 8:20 PM Paul Lam wrote:
+1 for this feature. Setting up a separate control stream is too much for
many use cases; it would be very helpful if users could leverage the
built-in control flow of Flink.
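For context, the "separate control stream" workaround mentioned above is usually built on Flink's broadcast state pattern. A minimal, self-contained sketch (the placeholder sources, key names, and job name are illustrative, not part of any proposal):

```java
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class ControlStreamSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder sources; in practice these would be Kafka topics etc.
        DataStream<String> events = env.fromElements("a", "b", "c");
        DataStream<String> controls = env.fromElements("DEBUG");

        // Broadcast state that holds the latest control value on every task.
        MapStateDescriptor<String, String> controlDesc =
                new MapStateDescriptor<>("controls", Types.STRING, Types.STRING);
        BroadcastStream<String> broadcast = controls.broadcast(controlDesc);

        events.connect(broadcast)
              .process(new BroadcastProcessFunction<String, String, String>() {
                  @Override
                  public void processElement(String value, ReadOnlyContext ctx,
                                             Collector<String> out) throws Exception {
                      // Per-record path: consult the latest control value.
                      String level = ctx.getBroadcastState(controlDesc).get("log-level");
                      out.collect(level == null ? value : value + "@" + level);
                  }

                  @Override
                  public void processBroadcastElement(String control, Context ctx,
                                                      Collector<String> out) throws Exception {
                      // Broadcast path: every parallel instance stores the message.
                      ctx.getBroadcastState(controlDesc).put("log-level", control);
                  }
              })
              .print();

        env.execute("control-stream-sketch");
    }
}
```

This works, but it forces every job to wire in an extra source and broadcast state by hand, which is exactly the overhead being discussed.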
My 2 cents:
1. @Steven IMHO, producing control events from JobMaster is similar to
triggering a savepoint. The REST API is no…
Option 2 is probably not feasible, as a checkpoint may take a long time or
may fail.
Option 1 might work, although it complicates job recovery and
checkpointing. After checkpoint completion, we need to clean up the control
signals stored in the HA service.
On Tue, Jun 8, 2021 at 1:14 AM 刘建刚 wrote:
Thanks for the reply. It is a good question. There are multiple choices, as
follows:
1. We can persist control signals in HighAvailabilityServices and replay
them after failover (sketched below).
2. Only tell users that the control signals take effect after they are
checkpointed.
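To make option 1 concrete, here is a purely hypothetical sketch; none of these interfaces exist in Flink, and all names are invented for illustration:

```java
import java.util.List;

/** Marker for a control signal; payload omitted for brevity. */
interface ControlSignal {}

/** Hypothetical store backed by HighAvailabilityServices. */
interface ControlSignalStore {

    /** Persist a control signal before injecting it into the DAG. */
    void persist(ControlSignal signal) throws Exception;

    /** After failover, replay signals not yet covered by a completed checkpoint. */
    List<ControlSignal> recover() throws Exception;

    /** On checkpoint completion, drop signals now reflected in operator state. */
    void pruneUpTo(long checkpointId) throws Exception;
}
```

The pruning hook corresponds to Steven's point above about cleaning up control signals from the HA service after checkpoint completion.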
Steven Wu wrote:
I can see the benefits of control flow. E.g., it might help the old (and
inactive) FLIP-17 side input. I would suggest that we add more details for
some of the potential use cases.
Here is one mismatch with using control flow for dynamic config: dynamic
config is typically targeted/loaded by one sp…
+1 on separating the effort into two steps:
1. Introduce a common control flow framework, with flexible interfaces
for generating / reacting to control messages for various purposes.
2. Features that leverage the control flow can be worked on
concurrently.
Meantime, keeping collectin…
Many thanks Jiangang for bringing this up, and thanks for the discussion!
I also agree with the summarization by Xintong and Jing that control flow
seems to be a common building block for many functionalities, and the
dynamic configuration framework is a representative application that
frequentl…
I'm a big +1 for this feature.
1. Limiting the input QPS.
2. Changing the log level for debugging.
In my team, the two examples above are needed.
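To illustrate why the QPS case needs runtime control: today the limit is typically baked into an operator when the job is submitted. A minimal sketch, assuming Guava's RateLimiter as a dependency (the class name is invented):

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

import com.google.common.util.concurrent.RateLimiter;

/** Throttles records to a rate that is fixed at job submission time. */
public class ThrottleMap<T> extends RichMapFunction<T, T> {

    private final double permitsPerSecond;
    private transient RateLimiter limiter;

    public ThrottleMap(double permitsPerSecond) {
        this.permitsPerSecond = permitsPerSecond;
    }

    @Override
    public void open(Configuration parameters) {
        limiter = RateLimiter.create(permitsPerSecond);
    }

    @Override
    public T map(T value) {
        // Blocks until a permit is available. Changing the rate currently
        // requires restarting the job, which is what a built-in control
        // flow would avoid.
        limiter.acquire();
        return value;
    }
}
```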
JING ZHANG wrote on Tuesday, June 8, 2021 at 11:18 AM:
Thanks Jiangang for bringing this up.
As mentioned in Jiangang's email, `dynamic configuration framework`
provides many useful functions in Kuaishou, because it can update job
behavior without relaunching the job. These functions are very popular in
Kuaishou, and we also see similar demands on the mailing list…
Thanks Xintong Song for the detailed supplement. Since Flink jobs are
long-running, they are similar to many other services, so interacting with
them or controlling them is a common desire. This was our initial thought
when implementing the feature. In our internal Flink, many configs used in
the YAML file can be adjusted by dyn…
Thanks Xintong for the summary,
I'm a big +1 for this feature.
Xintong's summary for Table/SQL's needs is correct.
The "custom (broadcast) event" feature is important to us
and even blocks further awesome features and optimizations in Table/SQL.
I have also discussed this offline with @Yun Gao several times…
Thanks Jiangang for bringing this up, and Steven & Peter for the feedback.
I was part of the preliminary offline discussions before this proposal went
public. So maybe I can help clarify things a bit.
In short, although the phrase "control mode" might be a bit misleading,
what we truly want to do…
Thank you for the reply. I have checked the post you mentioned. The dynamic
config may be useful sometimes, but it is hard to keep data consistent in
Flink; for example, what happens if the dynamic config takes effect during
failover? Since dynamic config is something users desire, maybe Flink can
support it…
I agree with Steven. This logic can be added in a dynamic config framework
that binds into Flink operators; we probably don't need to let the Flink
runtime handle it.
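A rough sketch of that approach, with an invented ConfigClient standing in for whatever configuration framework is chosen. Note that the refresh loop runs entirely in user code and is not coordinated with checkpoints, which is the consistency gap mentioned elsewhere in the thread:

```java
import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.configuration.Configuration;

import java.io.Serializable;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical client for an external config store; not a real API. */
interface ConfigClient extends Serializable {
    long getLong(String key, long defaultValue);
}

/** Filter whose threshold is refreshed from an external config store. */
public class DynamicThresholdFilter extends RichFilterFunction<Long> {

    private final ConfigClient client;
    private final AtomicLong threshold = new AtomicLong(0);
    private transient ScheduledExecutorService refresher;

    public DynamicThresholdFilter(ConfigClient client) {
        this.client = client;
    }

    @Override
    public void open(Configuration parameters) {
        refresher = Executors.newSingleThreadScheduledExecutor();
        // Poll the external store every 30 seconds, per parallel instance.
        refresher.scheduleAtFixedRate(
                () -> threshold.set(client.getLong("filter.threshold", 0L)),
                0, 30, TimeUnit.SECONDS);
    }

    @Override
    public boolean filter(Long value) {
        return value >= threshold.get();
    }

    @Override
    public void close() {
        if (refresher != null) {
            refresher.shutdownNow();
        }
    }
}
```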
On Fri, Jun 4, 2021 at 8:11 AM Steven Wu wrote:
I am not sure if we should solve this problem in Flink. This is more like a
dynamic config problem that probably should be solved by some configuration
framework. Here is one post from a Google search:
https://medium.com/twodigits/dynamic-app-configuration-inject-configuration-at-run-time-using-sprin