>.addSink(SinkforCondition2);
>
> *From:* Mingliang Liu
> *Sent:* Monday, June 3, 2024 1:03 PM
> *To:* mejri houssem
> *Cc:* user@flink.apache.org
> *Subject:* Re: Implementing Multiple sink
>
Hello community,
We have a use case in our Flink job that requires implementing multiple
sinks. I need to filter messages based on certain conditions (information
in the message) to determine which sink to dispatch them to.
To clarify, I would like to implement logic in the operator that routes
each message to the appropriate sink.
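A minimal sketch of that routing logic (the message field `type=A`/`type=B` and the sink names are assumptions for illustration, not details from this thread) keeps each condition as a plain predicate, which the job can then apply with `filter()` before each `addSink()`, or via side outputs in a `ProcessFunction`:

```java
// Sketch of routing predicates; the `type=A` / `type=B` message fields
// are hypothetical examples, not from the original thread.
public class MessageRouter {

    // True when the message should be dispatched to the first sink.
    public static boolean matchesCondition1(String message) {
        return message.contains("type=A");
    }

    // True when the message should be dispatched to the second sink.
    public static boolean matchesCondition2(String message) {
        return message.contains("type=B");
    }

    // In the Flink job, each predicate backs a filtered stream:
    //   stream.filter(MessageRouter::matchesCondition1).addSink(sinkForCondition1);
    //   stream.filter(MessageRouter::matchesCondition2).addSink(sinkForCondition2);
    // With many conditions, side outputs (OutputTag + ProcessFunction)
    // avoid re-evaluating every message once per sink.
}
```

For a handful of sinks, the `filter()` chain above is the simplest form; side outputs become worthwhile when the per-message classification is expensive.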
Hello Flink community,
We are currently working on a Flink job that consumes messages from
RabbitMQ, with checkpointing configured in at-least-once mode.
In our job, we make external API requests to retrieve information. If the
external API is down or a timeout occurs, we currently throw an exception.
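One common alternative to failing the whole job on the first error (a sketch under assumptions; the attempt count and exception handling are illustrative, not from this thread) is to bound retries around the external call before rethrowing, so a transient timeout does not immediately restart the job:

```java
import java.util.concurrent.Callable;

// Sketch of a bounded-retry wrapper for an external API call;
// the attempt count is illustrative, not from the original thread.
public class Retry {

    public static <T> T withRetries(Callable<T> call, int maxAttempts)
            throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // API down or timed out: try the next attempt
            }
        }
        // Every attempt failed: rethrow. With at-least-once
        // checkpointing, the RabbitMQ message is redelivered after the
        // job restarts, so no data is lost (but it may be reprocessed).
        throw last;
    }
}
```

Rethrowing after the retry budget is exhausted preserves the at-least-once guarantee: the failed message is replayed from the last checkpoint rather than silently dropped.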
1) You can use the application cluster mode; how to configure it is
described in the official Flink documentation:
https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/resource-providers/standalone/kubernetes.html#deploy-application-cluster
2) For HA you can use Kubernetes HA:
http
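For reference, the Kubernetes HA setup from the linked Flink 1.12 documentation boils down to a few flink-conf.yaml entries; the cluster id and storage path below are placeholders:

```yaml
# flink-conf.yaml fragment for Kubernetes HA (Flink 1.12);
# the cluster-id and storageDir values are placeholders.
kubernetes.cluster-id: my-flink-cluster
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://my-bucket/flink/recovery
```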
Hello,
here is part of the full GC log:
OpenJDK 64-Bit Server VM (25.232-b09) for linux-amd64 JRE (1.8.0_232-b09),
built on Oct 18 2019 15:04:46 by "jenkins" with gcc 4.8.2 20140120 (Red Hat
4.8.2-15)
Memory: 4k page, physical 976560k(946672k free), swap 0k(0k free)
CommandLine flags: -XX:Compresse
in order to debug this problem.
On Thu, Sep 9, 2021 at 22:25, houssem wrote:
> Hello ,
>
> with respect to the api-server, I don't re
>
> On 2021/09/09 11:37:49, Yang Wang wrote:
> > I think @Robert Metzger is right. You need to
> check
> > whether your Kubernetes
ytes in 161 ms)."
> > "Renew deadline reached after 60 seconds while renewing lock
> > ConfigMapLock: myNs - myJob-dispatcher-leader
> > (1bcda6b0-8a5a-4969-b9e4-2257c4478572)"
> > "Stopping SessionDispatcherLeaderProcess."
> >
> > At some
or
> > your cluster is lacking hadoop classes. Please make sure that there are
> > hadoop jars in the lib directory of Flink, or your cluster has set the
> > HADOOP_CLASSPATH environment variable.
> >
> > mejri houssem wrote on Sat, Sep 4, 2021, 12:15 AM:
> >>
> >
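The usual way to satisfy the Hadoop dependency mentioned above, without copying jars into Flink's lib/ directory, is to export the classpath before starting the cluster (this assumes a local Hadoop installation with the `hadoop` CLI on the PATH):

```shell
# Make Hadoop classes visible to Flink without copying jars into lib/;
# assumes the `hadoop` CLI is installed and on the PATH.
export HADOOP_CLASSPATH=$(hadoop classpath)
```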
the code does not directly refer to it.
> logback.qos.ch
>
>
> Regards,
> Alexis.
>
>
> From: houssem
> Sent: Wednesday, September 1, 2021 7:02 PM
> To: user@flink.apache.org
> Subject: Re: logback variable substitution in kubernetes
> Refer to the documentation[1] for how to use logback.
>
> [1].
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/advanced/logging/#configuring-logback
>
> Best,
> Yang
>
> houssem wrote on Wed, Sep 1, 2021, 5:00 PM:
>
> > Yes, I did this verification and I
> Maybe you could tunnel into the Kubernetes pod via "kubectl exec" and do
> such verification.
>
> Best,
> Yang
>
> houssem wrote on Tue, Aug 31, 2021, 7:28 PM:
>
> >
> > Hello,
> >
> > I am running a flink application cluster in standalone kubernetes mode an
Hello,
I am running a Flink application cluster in standalone Kubernetes mode and
I am using logback as the logging framework. The problem is that I am not
able to use environment variables configured in my pod inside my
logback-console.xml file.
I copied this file from my file system while building the image.
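Logback resolves `${...}` placeholders from system properties first and then from OS environment variables, so a pod environment variable can be referenced directly in the config file. A minimal sketch (the variable name LOG_LEVEL and its INFO default are assumptions, not names from this thread):

```xml
<configuration>
  <!-- Sketch: logback substitutes ${LOG_LEVEL} from the pod's
       environment; ":-INFO" supplies a default if it is unset.
       LOG_LEVEL is a hypothetical variable name. -->
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="${LOG_LEVEL:-INFO}">
    <appender-ref ref="console"/>
  </root>
</configuration>
```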
Kubernetes versions and HA configuration
> [2]? (I'm assuming you're using Kubernetes for HA, not ZK).
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#kubernetes-jobmanager-replicas
> [2]
> https://ci.apache.org/projects/flink/flink-docs-mast
Hello, I am deploying a Flink application cluster with Kubernetes HA mode,
but I am facing this recurrent problem and I don't know how to solve it.
Any help would be appreciated.
This is the log of the JobManager:
{"@timestamp":"2021-08-27T14:19:42.447+02:00","@version":"1","message":"Exception
occurr