One way I thought of to achieve this:
1. For failures, emit a special marker record from the RichAsyncFunction
instead of failing the job.
2. Filter those marker records out of the DataStream and push them to a
separate Kafka topic.
Let me know if this makes sense.
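A rough sketch of what I mean, in case it helps. All names here are made up
for illustration (EnrichmentFunction, FAILURE_MARKER, the async client, and
the sinks are placeholders):

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Step 1: on failure, emit a marker record instead of throwing.
public class EnrichmentFunction extends RichAsyncFunction<String, String> {

    public static final String FAILURE_MARKER = "FAILED::";

    @Override
    public void asyncInvoke(String event, ResultFuture<String> resultFuture) {
        callRemoteService(event).whenComplete((enriched, error) -> {
            if (error != null) {
                // Retries exhausted: tag the event rather than failing the job.
                resultFuture.complete(Collections.singleton(FAILURE_MARKER + event));
            } else {
                resultFuture.complete(Collections.singleton(enriched));
            }
        });
    }

    // Placeholder for the real async client that retries X times internally.
    private CompletableFuture<String> callRemoteService(String event) {
        return CompletableFuture.completedFuture(event + "|enriched");
    }
}

// Step 2: split the stream and send the failures to a dead-letter topic.
// `events`, `deadLetterKafkaProducer`, and `mainSink` are placeholders.
DataStream<String> enriched = AsyncDataStream.unorderedWait(
        events, new EnrichmentFunction(), 30, TimeUnit.SECONDS, 100);

enriched.filter(r -> r.startsWith(EnrichmentFunction.FAILURE_MARKER))
        .addSink(deadLetterKafkaProducer); // e.g. a FlinkKafkaProducer
enriched.filter(r -> !r.startsWith(EnrichmentFunction.FAILURE_MARKER))
        .addSink(mainSink);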
On Fri, Jun 11, 2021 at 10:40 AM Satish Saley wrote:
> Hi,
> - I have
Hi,
- I have a Kafka consumer to read events.
- Then, I have a RichAsyncFunction that calls a remote service to get
more information about each event.
If the remote call fails after X retries, I don't want Flink to fail the
job and start processing from the beginning. Instead I would like to
Thanks for the suggestions and feedback on the PR.
A variation of hybrid source that can switch back and forth was
brought up before, and it is something that will eventually be
required. It was also suggested by Stephan that in the future there
may be more than one implementation of hybrid source.
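For readers following the PR, a minimal sketch of the one-directional
switching it currently targets, using the FLIP-150 builder shape.
NumberSequenceSource stands in for real file/Kafka sources, and details may
differ from what is finally merged:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.connector.source.lib.NumberSequenceSource;
import org.apache.flink.connector.base.source.hybrid.HybridSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Reads 0..99 from the first source, then switches to the second;
// switching back and forth would need the future variation discussed above.
HybridSource<Long> hybrid =
        HybridSource.builder(new NumberSequenceSource(0L, 99L))
                .addSource(new NumberSequenceSource(100L, 199L))
                .build();

env.fromSource(hybrid, WatermarkStrategy.noWatermarks(), "hybrid-source").print();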
Thanks Till for the reply. The suggestions are really helpful for this
topic. Perhaps some of what I mentioned was unclear or lacked detail. Here
is what I want to say:
1. Changing the log level dynamically is not suitable for us, as you said,
because our internal log4j is old, so this feature is implemented in a
hehuiyuan created FLINK-22976:
Summary: Whether to consider adding config-option to control
whether to exclude record.key value from record.value value
Key: FLINK-22976
URL: https://issues.apache.org/jira/browse/FLINK-22976
Jun Zhang created FLINK-22975:
Summary: Specify port or range for k8s service
Key: FLINK-22975
URL: https://issues.apache.org/jira/browse/FLINK-22975
Project: Flink
Issue Type: Improvement
Hi,
I apologize for forgetting the attachments in my last post. I'll repost my
question with attachments this time:
*I have successfully run the project "table-walkthrough" in IDEA (without
errors, only warnings)*. *I'm now trying to build this project using the
"docker-compose" command* as the tutorial describes.
tinawenqiao created FLINK-22974:
Summary: No execution checkpointing config desc in flink-conf.yaml
Key: FLINK-22974
URL: https://issues.apache.org/jira/browse/FLINK-22974
Project: Flink
Issue Type:
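For context, the options the issue refers to are the execution.checkpointing.*
keys. Example entries that could be described in flink-conf.yaml (the values
below are illustrative, not recommendations):

execution.checkpointing.interval: 3min
execution.checkpointing.mode: EXACTLY_ONCE
execution.checkpointing.timeout: 10min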
Thanks for starting this discussion. I do see the benefit of dynamically
configuring your Flink job and the cluster running it. Some of the use
cases which were mentioned here are already possible. E.g. adjusting the
log level dynamically can be done by configuring an appropriate logging
backend an
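As a concrete illustration of that approach: Flink's default logging setup
uses log4j2, which can poll its configuration file at runtime, so editing
conf/log4j.properties takes effect without restarting the cluster. A sketch
in the stock properties format:

# Re-read this file every 30 seconds so level changes apply at runtime.
monitorInterval = 30

appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss,SSS} %-5p %c - %m%n

# Change INFO to DEBUG while the job is running to get more detail.
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender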
Piotr Nowojski created FLINK-22973:
Summary: Provide benchmark for unaligned checkpoints performance
Key: FLINK-22973
URL: https://issues.apache.org/jira/browse/FLINK-22973
Project: Flink
Issue Type:
Hi devs,
I'd like to start a discussion about "Lifecycle of ShuffleMaster and its
Relationship with JobMaster and PartitionTracker". (These are things we
found when moving our external shuffle to the pluggable shuffle service
framework.)
The mail client may fail to display the right format. If so
Hi Lu,
longer heartbeat timeouts mean that the loss of a component
(e.g. a TaskManager) will take longer to detect. This will affect the
recovery speed of your application in such a situation. On the
upside, longer heartbeat timeouts allow working on less reliable
infrastructure.
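For reference, these are the flink-conf.yaml keys involved (the values shown
are Flink's defaults):

heartbeat.interval: 10000   # ms between heartbeat requests
heartbeat.timeout: 50000    # ms without a heartbeat before the peer is considered lost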
big +1 for this feature,
1. Reset kafka offset in certain cases.
2. Stop checkpoint in certain cases.
3. Change log level for debug.
刘建刚 wrote on Friday, June 11, 2021 at 12:17 PM:
> Thanks for all the discussions and suggestions. Since the topic has
> been discussed for about a week, it is time to
Dawid Wysakowicz created FLINK-22972:
Summary: Deprecate/Remove StreamOperator#dispose method
Key: FLINK-22972
URL: https://issues.apache.org/jira/browse/FLINK-22972
Project: Flink
Issue Type:
Xintong Song created FLINK-22971:
Summary: Initialization of Testcontainers unstable on Azure
Key: FLINK-22971
URL: https://issues.apache.org/jira/browse/FLINK-22971
Project: Flink
Issue Type: