Hi Tejas,
Yes, you can write a TypeInfoFactory for an enum. But I am assuming Flink should be
able to recognize enums by default…
Anyway, you can do something like this:
Types.ENUM(RuleType.class);
This will return a TypeInformation which can be used to construct a
TypeInfoFactory.
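A minimal sketch of that idea, assuming a hypothetical RuleType enum from the discussion (class and field names here are illustrative, not from the original code):

```java
import java.lang.reflect.Type;
import java.util.Map;
import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// Hypothetical enum standing in for the one discussed in this thread.
enum RuleType { FILTER, ENRICH }

// Hands Flink an explicit TypeInformation for the enum so it is not
// treated as a generic type and serialized with Kryo.
class RuleTypeFactory extends TypeInfoFactory<RuleType> {
    @Override
    public TypeInformation<RuleType> createTypeInfo(
            Type t, Map<String, TypeInformation<?>> genericParameters) {
        return Types.ENUM(RuleType.class);
    }
}
```

The factory would then be attached via the @TypeInfo annotation on the type that uses it.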
BTW, could you t
Updated the FieldDefinition class inline to avoid confusion. I am just listing
a few fields in the class (not all). It all follows the suggested POJO
approach.
From: Fuyao Li
Date: Thursday, May 12, 2022 at 09:46
To: Weihua Hu
Cc: User
Subject: Re: [External] : Re: How to define
TypeInformation.of(new TypeHint>() {});
}
}
But I still get the following error:
Generic types have been disabled in the ExecutionConfig and type java.util.Set
is treated as a generic type.
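For reference, an explicit type hint for a container field generally looks like the sketch below; Set&lt;String&gt; is only an assumed element type here, since the actual type parameter was cut off in the quoted snippet:

```java
import java.util.Set;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;

// Builds an explicit TypeInformation for a Set field so Flink does not
// fall back to generic (Kryo) serialization for it.
class TypeHintExample {
    static final TypeInformation<Set<String>> SET_INFO =
            TypeInformation.of(new TypeHint<Set<String>>() {});
}
```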
Thanks,
Fuyao
From: Weihua Hu
Date: Thursday, May 12, 2022 at 07:24
To: Fuyao Li
Cc: user
Su
Hi Community,
I have a POJO that has a nested, recursively resolved structure. How should I
define the @TypeInfo annotation correctly to avoid a stack overflow exception
when starting the application?
Basically,
Class Metadata
Map fields
Class FieldDefinition
Metadata parentMetadata
The Metadata c
egards,
Fuyao
From: Yun Gao
Date: Wednesday, February 16, 2022 at 00:54
To: Fuyao Li , user
Subject: Re: Re: [External] : Re: Use TwoPhaseCommitSinkFunction or
StatefulSink+TwoPhaseCommittingSink to achieve EXACTLY_ONCE sink?
Hi Fuyao,
Very sorry for the late reply.
For the question 1, I think it
mantic, it seems impossible for me to handle it on the Flink code side.
Thanks,
Fuyao
From: Fuyao Li
Date: Thursday, February 10, 2022 at 15:48
To: Yun Gao , user
Subject: Re: [External] : Re: Use TwoPhaseCommitSinkFunction or
StatefulSink+TwoPhaseCommittingSink to achieve EXACTLY_ONCE sink?
Hel
om/apache/flink/blob/release-1.14/flink-core/src/main/java/org/apache/flink/api/connector/sink/Sink.java>
you mentioned.
Thank you very much for the help!
Fuyao
From: Yun Gao
Date: Wednesday, February 9, 2022 at 23:17
To: Fuyao Li , user
Subject: [External] : Re: Use TwoPhaseCommitSin
Hello Community,
I have two questions regarding Flink custom sink with EXACTLY_ONCE semantic.
1. I have a SDK that could publish messages based on HTTP (backed by Oracle
Streaming Service --- very similar to Kafka). This will be my Flink
application’s sink. Is it possible to use this SDK
From: Fuyao Li
Date: Tuesday, November 2, 2021 at 14:14
To: David Morávek , nicolaus.weid...@ververica.com
Cc: user , Yang Wang , Robert
Metzger , tonysong...@gmail.com ,
Sandeep Sooryaprakash
Subject: [External] : Re: Possibility of supporting Reactive mode for native
Kubernetes application
uesday, November 2, 2021 at 05:53
To: Fuyao Li
Cc: user , Yang Wang , Robert
Metzger , tonysong...@gmail.com ,
Sandeep Sooryaprakash
Subject: Re: [External] : Re: Possibility of supporting Reactive mode for
native Kubernetes application mode
Similar to Reactive mode, checkpoint must be enabled to s
://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/checkpointing/#enabling-and-configuring-checkpointing
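For completeness, enabling checkpointing is a one-liner on the execution environment; the 10-second interval below is an arbitrary example, not a recommendation from the thread:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoint every 10 seconds (example value).
        env.enableCheckpointing(10_000);
    }
}
```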
Best,
Fuyao
From: David Morávek
Date: Friday, October 29, 2021 at 23:11
To: Fuyao Li
Cc: user , Yang Wang , Robert
Metzger , tonysong...@gmail.com
Subject: [External
Hello Community,
I am checking the reactive mode for Flink deployment. I noticed that this is
supported in Kubernetes environment, but only for standalone Kubernetes as of
now. I have read some previous discussion threads regarding this issue. See
[1][2][3][4][5][6].
Question 1:
It seems that
Thanks! I got your point. Will try it out.
From: Chesnay Schepler
Date: Tuesday, October 19, 2021 at 01:44
To: Fuyao Li , user
Cc: Rohit Gupta
Subject: Re: [External] : Re: How to enable customize logging library based on
SLF4J for Flink deployment in Kubernetes
1) Adding it as a dependency
Xie
Date: Monday, October 18, 2021 at 13:34
To: Arvid Heise
Cc: Fuyao Li , user@flink.apache.org
Subject: Re: [External] : Timeout settings for Flink jobs?
It's promising that I can use #isEndOfStream at the source. Is there a way I can
terminate a job from the sink side instead? We want to term
,
Fuyao
From: Chesnay Schepler
Date: Tuesday, September 28, 2021 at 07:06
To: Fuyao Li , user
Cc: Rohit Gupta
Subject: [External] : Re: How to enable customize logging library based on
SLF4J for Flink deployment in Kubernetes
Could you clarify whether this internal framework uses a custom slf4j
Hi Sharon,
I think for the DataStream API, you can override the isEndOfStream() method in the
DeserializationSchema to signal the input data source to end and thus end the
workflow.
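A minimal sketch of that approach, assuming string records and a hypothetical "END" sentinel value (the marker is a convention you would define yourself, not part of Flink):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;

// Ends the source (and thus the job) when a sentinel record arrives.
class BoundedStringSchema extends SimpleStringSchema {
    @Override
    public boolean isEndOfStream(String nextElement) {
        // "END" is an assumed application-level marker.
        return "END".equals(nextElement);
    }
}
```

You would pass this schema to the source (e.g. a FlinkKafkaConsumer) in place of the plain SimpleStringSchema.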
Thanks,
Fuyao
From: Sharon Xie
Date: Monday, October 11, 2021 at 12:43
To: user@flink.apache.org
Subject: [Extern
Hi Flink Community,
I am trying to enable a company-internal logging framework built upon SLF4J and
log4j. This logging framework has a separate jar and specific logging
configurations. After debugging, I am able to make the Flink application run
correctly in the local IDE with the internal
Hello James,
To stream real-time data out of the database, you need to spin up a CDC
instance, for example Debezium [1]. The CDC engine streams changed
data out to Kafka (for example). You can then consume the messages from Kafka using
FlinkKafkaConsumer.
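A minimal consumer sketch for that last step; the topic name "db.changes", the broker address, and the group id are all assumed values for illustration:

```java
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class CdcConsumerJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("group.id", "cdc-reader");              // assumed group id

        // Debezium emits JSON change events; read them as raw strings here.
        FlinkKafkaConsumer<String> source = new FlinkKafkaConsumer<>(
                "db.changes", new SimpleStringSchema(), props);

        env.addSource(source).print();
        env.execute("cdc-consumer");
    }
}
```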
For history data, it could be consid
Hello Aissa,
I guess you might be interested in this video:
https://www.youtube.com/watch?v=X3L75Rz64Ns&list=PL2oL9cdRCATGOSFvG3O5QbSuAcvkmr_KV&index=19
Thanks,
Fuyao
From: Aissa Elaffani
Date: Thursday, July 15, 2021 at 03:55
To: user@flink.apache.org
Subject: [External] : Big data architect
his, maybe you can share more insights
about this.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/cli/
Thanks,
Fuyao
From: Yang Wang
Date: Friday, May 7, 2021 at 20:45
To: Fuyao Li
Cc: Austin Cawley-Edwards , matth...@ververica.com
, user
Subject: Re: [Exte
/flink-native-k8s-operator/issues/4
[2]
https://github.com/apache/flink/blob/release-1.12/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/FlinkConfMountDecorator.java
Have a good weekend!
Best,
Fuyao
From: Fuyao Li
Date: Tuesday, May 4, 2021 at 19:52
To: Austin
-operator/blob/master/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkApplicationController.java
Thank you so much.
Best,
Fuyao
From: Fuyao Li
Date: Tuesday, May 4, 2021 at 19:34
To: Austin Cawley-Edwards , matth...@ververica.com
, Yang Wang
Cc: user
Subject: Re: [External] : Re
/blob/master/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkApplicationController.java#L176
Best,
Fuyao
From: Fuyao Li
Date: Tuesday, May 4, 2021 at 15:23
To: Austin Cawley-Edwards , matth...@ververica.com
Cc: user , Yang Wang , Austin
Cawley-Edwards
Subject: Re: [External
14:47
To: matth...@ververica.com
Cc: Fuyao Li , user , Yang Wang
, Austin Cawley-Edwards
Subject: [External] : Re: StopWithSavepoint() method doesn't work in Java based
flink native k8s operator
Hey all,
Thanks for the ping, Matthias. I'm not super familiar with the details of @
image, this seems not to be suitable; please
correct me if I am wrong.
For the log issue, I am still a bit confused: why is it not available in
kubectl logs? How should I get access to it?
Thanks.
Best,
Fuyao
From: Fuyao Li
Date: Sunday, May 2, 2021 at 00:36
To: user , Yang Wang
Subject
perform this image update
operation. Thanks!
Best,
Fuyao
From: Fuyao Li
Date: Friday, April 30, 2021 at 18:03
To: user , Yang Wang
Subject: [External] : Re: StopWithSavepoint() method doesn't work in Java based
flink native k8s operator
Hello Community, Yang,
I have one more questio
rversAvailable: Subscription time out
The logs stop here; the Flink application logs don’t get printed
anymore.
Best,
Fuyao
From: Fuyao Li
Date: Friday, April 30, 2021 at 16:50
To: user , Yang Wang
Subject: [External] : StopWithSavepoint() method doesn't work in
Hello Shipeng,
I am not an expert in Flink, just want to share some of my thoughts. Maybe
others can give you better ideas.
I think there is no Protobuf support available for Flink SQL out of the box. However,
you can write a user-defined format to support it [1].
If you use DataStream API, you can le
Hello Community, Yang,
I am trying to extend the Flink native Kubernetes operator by adding some new
features based on the repo [1]. I wrote a method to realize the image update
functionality. [2] I added the call
triggerImageUpdate(oldFlinkApp, flinkApp, effectiveConfig);
under the existing method.
Best,
Fuyao
From: Fuyao Li
Date: Tuesday, April 13, 2021 at 19:10
To: Yang Wang
Cc: user
Subject: Re: [External] : Re: Conflict in the document - About native
Kubernetes per job mode
Hello Yang,
I also created a PR for this issue. Please take a look.
Refer to
https://github.com/apache/flin
Hello Yang,
I also created a PR for this issue. Please take a look.
Refer to https://github.com/apache/flink/pull/15602
Thanks,
Fuyao
From: Fuyao Li
Date: Tuesday, April 13, 2021 at 18:23
To: Yang Wang
Cc: user
Subject: Re: [External] : Re: Conflict in the document - About native
Kubernetes
Hello Yang,
I tried to create a ticket https://issues.apache.org/jira/browse/FLINK-22264
I just registered as a user and I can’t find a place to assign the task to
myself… Any ideas on this Jira issue?
Thanks.
Best,
Fuyao
From: Yang Wang
Date: Tuesday, April 13, 2021 at 03:01
To: Fuyao Li
Cc
Hello Yang,
It is very kind of you to give such a detailed explanation! Thanks for the
clarification.
For the small document fix I mentioned, what do you think?
Best,
Fuyao
From: Yang Wang
Date: Monday, April 12, 2021 at 23:03
To: Fuyao Li
Cc: user , Yan Wang
Subject: [External] : Re: Conflict
Hello Community, Yang,
I noticed a conflict in the document for per-job mode support for Kubernetes.
In the doc here [1], it mentions
in a Flink Job Cluster, the available cluster manager (like YARN or Kubernetes)
is used to spin up a cluster for each submitted job and this cluster is
available
Hi Yang,
Thanks for the reply, that information is very helpful.
Best,
Fuyao
From: Yang Wang
Date: Tuesday, April 6, 2021 at 01:11
To: Fuyao Li
Cc: user
Subject: Re: [External] : Re: Need help with executing Flink CLI for native
Kubernetes deployment
Hi Fuyao,
Sorry for the late reply
, Flink SQL
will make join easier.
Reference:
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html
Best,
Fuyao
From: B.B.
Date: Monday, April 5, 2021 at 06:27
To: Fuyao Li
Subject: Re: [External] : Union of more then two streams
Hi Fuyao,
thanks for
for your help.
Best,
Fuyao
From: Fuyao Li
Date: Thursday, April 1, 2021 at 12:22
To: Yang Wang
Cc: user
Subject: Re: [External] : Re: Need help with executing Flink CLI for native
Kubernetes deployment
Hi Yang,
Thanks for sharing the insights.
For problem 1:
I think I can’t do telnet in the
Hello BB,
Just want to share some of my preliminary ideas. Maybe some experts can give
you better solutions and advice.
1. DataStream-based solution:
* To do a union, as you already know, the datastreams must be
of the same type. Otherwise, you can’t do it. There is a wo
://github.com/GoogleCloudPlatform/flink-on-k8s-operator
[2] https://github.com/lyft/flinkk8soperator
[3] https://youtu.be/pdFPr_VOWTU
Best,
Fuyao
From: Yang Wang
Date: Tuesday, March 30, 2021 at 19:15
To: Fuyao Li
Cc: user
Subject: Re: [External] : Re: Need help with executing Flink CLI for native
to find a reason, could you give me some hints?
3. In production, what is the suggested approach to list and cancel jobs?
The current manual work of “kubectl exec” into pods is not very reliable. How
can I automate this process and integrate it into CI/CD? Please share some blogs
if there are any,
Hi Community, Yang,
I am new to Flink on native Kubernetes and I am trying to do a POC for native
Kubernetes application mode on Oracle Cloud Infrastructure. I was following the
documentation here step by step: [1]
I am using Flink 1.12.1, Scala 2.11, and Java 11.
I was able to create a native Kube
Hello Xiong,
You can expose monitoring data through Flink's metric system:
https://ci.apache.org/projects/flink/flink-docs-stable/ops/metrics.html
Metrics can be exposed via a metric reporter:
https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/metric_reporters.html
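A minimal sketch of registering a custom metric inside an operator; the counter name is an arbitrary example:

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Counts processed records; the counter shows up in whichever
// metric reporter (e.g. Prometheus) is configured for the cluster.
class CountingMapper extends RichMapFunction<String, String> {
    private transient Counter counter;

    @Override
    public void open(Configuration parameters) {
        this.counter = getRuntimeContext()
                .getMetricGroup()
                .counter("recordsProcessed"); // arbitrary metric name
    }

    @Override
    public String map(String value) {
        counter.inc();
        return value;
    }
}
```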
That includes Promet
Hi Flink Community,
After configuring the JDBC timeout, I still could not get rid of the issue.
https://issues.apache.org/jira/browse/FLINK-21674
I created a JIRA task to describe the problem. Any suggestion is appreciated.
Best regards,
Fuyao
From: Fuyao Li
Date: Wednesday, March 3, 2021
Fuyao
From: XU Qinghui
Date: Tuesday, March 2, 2021 at 13:40
To: Fuyao Li
Cc: user , Timo Walther
Subject: [External] : Re: Need help with JDBC Broken Pipeline Issue after some
idle time
It sounds like the jdbc driver's connection is closed somehow, and probably has
nothing to do with fl
tFormat.java:195)
... 30 more
Thanks,
Best regards,
Fuyao
From: Fuyao Li
Date: Tuesday, March 2, 2021 at 10:33
To: user , Timo Walther
Subject: Need help with JDBC Broken Pipeline Issue after some idle time
Hi Flink Community,
I need some help with JDBC sink in Datastrea
Hi Flink Community,
I need some help with the JDBC sink in the DataStream API. I can produce some records
and sink them to the database correctly. However, if I wait for 5 minutes between
insertions, I run into a broken pipeline issue. After that, the Flink
application will restart and recover from checkpo
be an option?
> >>
> >> "Some type cast behavior of retracted streams I can't explain."
> >>
> >> toAppendStream/toRetractStream still need an update to the new type
> >> system. This is explained in FLIP-136 which will be part of Flink 1.
https://cwiki.apache.org/confluence/display/FLINK/FLIP-136%3A+Improve+interoperability+between+DataStream+and+Table+API
On 13.11.20 21:39, Fuyao Li wrote:
Hi Matthias,
Just to provide mo
7JMW06Who$
[3]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-136%3A+Improve+interoperability+between+DataStream+and+Table+API
On 13.11.20 21:39, Fuyao Li wrote:
Hi
o
On Fri, Nov 13, 2020 at 11:57 AM Fuyao Li wrote:
> Hi Matthias,
>
> One more question regarding Flink table parallelism, is it possible to
> configure the parallelism for Table operation at operator level, it seems
> we don't have such API available, right? Thanks!
>
> B
Hi Matthias,
One more question regarding Flink table parallelism, is it possible to
configure the parallelism for Table operation at operator level, it seems
we don't have such API available, right? Thanks!
Best,
Fuyao
On Fri, Nov 13, 2020 at 11:48 AM Fuyao Li wrote:
> Hi Matthias,
&g
ventTime I'm going to pull in
> Timo from the SDK team as I don't see an issue with your code right away.
>
> Best,
> Matthias
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/event_timestamps_watermarks.html#using-watermark-strategies
>
> On
The test workflow attachment was not added in the previous email, sorry
for the confusion; please refer to the described text workflow. Thanks.
On 11/12/20 16:17, fuyao...@oracle.com wrote:
Hi All,
Just to add a little more context to the problem. I have a full outer
join operation before th
Hi All,
Just to add a little more context to the problem. I have a full outer
join operation before this stage. The source data stream for full outer
join is a Kafka Source. I also added timestamp and watermarks to the
FlinkKafkaConsumer. After that, it makes no difference to the result,
stil
Hi Community,
Regarding this problem, could someone give me an explanation? Thanks.
Best,
Fuyao
On 11/10/20 16:56, fuyao...@oracle.com wrote:
Hi Kevin,
Sorry for the name typo...
On 11/10/20 16:48, fuyao...@oracle.com wrote:
Hi Kavin,
Thanks for your example. I think I have already don
Hi Experts,
I am trying to implement a KeyedProcessFunction with an onTimer()
callback. I need to use event time, and I am running into some problems making
the watermark available to my operator; I see some strange behaviors.
I have a joined retracted stream without watermark or timestamp
inf
.addSink(businessObjectSink)
.name("businessObjectSink")
.uid("businessObjectSink")
.setParallelism(1);
Best regards,
Fuyao
On Mon, Nov 2, 2020 at 4:53 PM Fuyao Li wrote:
> Hi Flink Community,
>
> I am doing some re
retracted datastream, do I need to explicitly attach it to the
stream environment? I think it is done by default, right? Just want to
confirm it. I do have the env.execute() at the end of the code.
I understand this is a lot of questions; thanks a lot for your patience
in looking through my email! If there is anything unclear, please reach out to
me. Thanks!
Best regards,
Fuyao Li