Congratulations
Best,
Weihua
On Tue, Mar 19, 2024 at 10:56 AM Rodrigo Meneses wrote:
> Congratulations
>
> On Mon, Mar 18, 2024 at 7:43 PM Yu Chen wrote:
>
> > Congratulations!
> > Thanks to release managers and everyone involved!
> >
> > Best,
> > Yu Chen
> >
> >
> > > On Mar 19, 2024, at 01:01, J
Hi,
Which Flink version are you using? I haven't been able to reproduce this issue
using the master branch.
Best,
Weihua
On Tue, Jul 25, 2023 at 2:56 AM Allen Wang wrote:
> Hello,
>
> Our job has operators of source -> sink -> global committer. We have
> created two slot sharing groups, one for sourc
Hi Madan
The Flink UI reads the log file and displays it in the UI, so if the UI is
accurate, there shouldn't be any issues with Flink writing the log files.
You can check if there are any issues while transferring log file data to
Splunk.
Best,
Weihua
On Wed, Jul 12, 2023 at 1:02 AM Madan D via user
wrote:
Hi, Brendan
Judging from the log, it looks like it's still invoking your main method. You can
add more logs in the main method to figure out which part takes too long.
Best,
Weihua
On Tue, Jun 27, 2023 at 5:06 AM Brendan Cortez
wrote:
> No, I'm using a collection source + 20 same JDBC lookups + Kafk
> and this info came from the YARN resource manager
>
> Get Outlook for iOS <https://aka.ms/o0ukef>
> --
> *From:* tan yao
> *Sent:* Thursday, May 25, 2023 8:14:45 PM
> *To:* Weihua Hu
> *Cc:* user
> *Subject:* Re: Web UI don't show up In
Hi,
Are there any reported exceptions? Did you try using curl to query the REST
API, such as "curl http://{ip:port}/overview"?
Best,
Weihua
On Thu, May 25, 2023 at 8:49 AM tan yao wrote:
> Hi all,
> I find a strange thing with flink 1.17 deployed on yarn (CDH 6.x), flink
> web ui can not show
Hi
High-Availability in Application Mode is only supported for
single-execute() applications.[1]
And the reason is[2]:
> The added complexity stems mainly from the fact that the different jobs
> within an application may at any point be in different stages of their
> execution, e.g. some may be r
Hi Sharif,
You cannot catch exceptions globally.
For exceptions that your business logic can explicitly ignore, you need
to add a try-catch in the operators.
For exceptions that are not caught, Flink will trigger a recovery from
failure automatically[1].
[1]
https://nightlies.apache.org/fl
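For illustration, a minimal sketch of the try-catch approach inside a map
operator (the Event type, its parse()/empty() methods, and the LOG SLF4J
logger are hypothetical):

    DataStream<Event> parsed = input.map(value -> {
        try {
            return Event.parse(value);            // business parsing logic
        } catch (IllegalArgumentException e) {
            // an exception your business logic can explicitly ignore:
            // log it and emit a fallback instead of failing the task
            LOG.warn("Dropping malformed record: {}", value, e);
            return Event.empty();
        }
    });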
Hi, Luke
You can enable "execution.attached", then env.execute() will wait until the
job is finished.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#execution-attached
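For example, a sketch of how to pass it (the jar path is a placeholder); it can
also be set permanently in flink-conf.yaml:

    # one-off, on the command line
    ./bin/flink run -Dexecution.attached=true /path/to/your-job.jar

    # or in flink-conf.yaml
    execution.attached: true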
Best,
Weihua
On Fri, May 12, 2023 at 8:59 AM Shammon FY wrote:
> Hi Luke,
>
> Maybe you ca
Hi,
Checkpoints are only used in failover for one job. Once a job is cancelled,
the related checkpoint-count metadata (stored on HA) will be removed.
But the checkpoint data can be retained if you have configured it.
IIUC, redeploying/updating a job will cancel the old job and then start a new
one.
The
Hi,
How many partitions does your Kafka topic have?
One possibility is that the Kafka topic has only one partition,
and when the source parallelism is set to 2, one of the source
tasks cannot consume any data or generate a watermark, so
the downstream operator cannot align the watermark and cannot
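If you cannot match the source parallelism to the partition count, one common
mitigation (a sketch only, assuming a bounded-out-of-orderness strategy and a
hypothetical MyEvent type) is to mark idle source subtasks so they stop holding
back the watermark:

    WatermarkStrategy<MyEvent> strategy = WatermarkStrategy
        .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withIdleness(Duration.ofMinutes(1));  // idle subtasks no longer block alignment

(Duration is java.time.Duration.)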
Hi,
> if for some reason there exists a checkpoint by the same name.
>
Could you give more details about your scenario here?
From your description, I guess this problem occurred when a job restarted;
was this restart triggered manually?
In common restart processing, the job will retrieve the late
> HA store and
> start the job.
> Since the current method of running the script standalone-job.sh requires
> a job at start. If I provide the same jobId, will it restore from the HA
> first, or just start anew?
>
> Best,
> Shubham Bansal
>
>
> On Sat, May 6, 2023 at 9:34
Hi, Shubham
Which Flink version are you using?
AFAIK, the JobManager will recover the job from the HA store first, and it
won't submit the same job (identified by jobID) again if it has already been
recovered.
Best,
Weihua
On Tue, May 2, 2023 at 8:02 PM Shubham Bansal wrote:
> Hello everyone,
>
Hi,
The Flink client always exits when the application submission is complete.
Can I ask why you need to wait on the client side?
Best,
Weihua
On Sat, Apr 22, 2023 at 4:03 AM Vinay Londhe wrote:
> Hello
>
> I have started to play around with Flink recently and I had a question
> about the
Hi,
It's a DEBUG-level log.
Did this submission fail?
Best,
Weihua
On Fri, Apr 21, 2023 at 5:44 PM “岁月飞扬” <1639467...@qq.com> wrote:
>
> Hello!
>
> When we use flink version 1.14 to commit tasks to kubernetes, we get an
> error using Session mode. The following error is reported some time a
Hi, Viktor
We cannot hide "$internal.application.program-args" currently. Only
configuration options whose key contains one of the SENSITIVE_KEYS entries
will be hidden.
So, as an alternative, you might want to try passing some hidden
information through the environment variable[1], and name it
with "SENSITIVE_K
Hi, Mark
Flink will load the UI service automatically if the flink-runtime-web jar is in
the classpath.
So, adding the dependency of flink-runtime-web is the right way.
You need to reload the Maven project after pom.xml is changed.
And check whether the classpath includes flink-runtime-web classes or not.
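For example, a typical entry (the version property is an assumption; depending
on your Flink version the artifact may also carry a Scala suffix such as
flink-runtime-web_2.12):

    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-runtime-web</artifactId>
      <version>${flink.version}</version>
    </dependency>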
Hi,
AFAIK, the reactive mode always restarts the whole pipeline now.
Best,
Weihua
On Tue, Apr 11, 2023 at 8:38 AM Talat Uyarer via user
wrote:
> Hi All,
>
> We use Flink 1.13 with reactive mode for our streaming jobs. When we have
> an issue/exception on our pipeline. Flink rescheduled all ta
To unsubscribe, please send an email with any content to user-unsubscr...@flink.apache.org; see [1]
[1]
https://flink.apache.org/community.html#how-to-subscribe-to-a-mailing-list
Best,
Weihua
On Mon, Apr 10, 2023 at 9:04 AM 柒朵 <1303809...@qq.com> wrote:
> Unsubscribe
>
Hi,
Thanks for reporting this issue.
I created a ticket to track this[1].
[1]https://issues.apache.org/jira/browse/FLINK-31752
Best,
Weihua
On Fri, Apr 7, 2023 at 12:54 AM Ioannis Polyzos
wrote:
> Any chance something is broken in the Records Sent metrics in the UI in
> the 1.17 release? I
Hi, David
The jarURI is required[1], otherwise Flink doesn't know which jar should be
used.
If you are using application mode, you can set jarURI to
"local:///opt/flink/usrlib/your-job.jar", and the jar will not upload to
H/A storage.
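For example, assuming you are deploying through the flink-kubernetes-operator's
FlinkDeployment resource, the relevant part of the spec might look like this
(the image and jar name are placeholders):

    spec:
      image: my-registry/my-flink-job:latest
      job:
        jarURI: local:///opt/flink/usrlib/your-job.jar
        parallelism: 2
        upgradeMode: stateless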
Best,
Weihua
On Wed, Apr 5, 2023 at 5:51 PM David Causse w
Hi, Le
It looks like a DNS issue. Maybe you can try to ping or nslookup
'my-first-flink-cluster-rest.default'
on the Flink operator pods to check whether the DNS service is working.
Best,
Weihua
On Wed, Apr 5, 2023 at 12:43 PM Le Xu wrote:
> Hello!
>
> I'm trying out the Kubernetes sample
>
Hi,
That's because the ConfigMap volume is always read-only.
Currently, /docker-entrypoint.sh tries to update some configs in the Docker
environment, but these updates are not needed on Kubernetes.
So I think we can safely ignore those errors when using the operator/native
Kubernetes integration.
There is
Hi,
you need to send an email with any content or subject to
user-unsubscr...@flink.apache.org
[1] https://flink.apache.org/community.html#how-to-subscribe-to-a-mailing-list
Best,
Weihua
> On Apr 3, 2023, at 11:15, 风 <919417...@qq.com> wrote:
>
> Unsubscribe
Hi,
you need to send an email with any content or subject to
user-unsubscr...@flink.apache.org
[1]
https://flink.apache.org/community.html#how-to-subscribe-to-a-mailing-list
Best,
Weihua
On Fri, Mar 31, 2023 at 4:02 PM z wrote:
>
>
Hi, Sofya
The MiniClusterWithClientResource does not provide a UI by default.
But you can enable it by adding the flink-runtime-web dependency for
debugging.
Add this dependency to your pom.xml, and Flink will load the web UI
automatically.
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-runtime-web</artifactId>
  <version>${project.version}</version>
</dependency>
he/s3checkup/original-demo-helloworld-0.0.1-SNAPSHOT.jar
> and it is visible in the yarn UI like this as completed and the terminal
> looks as in the screenshots attached
>
> On Wed, 29 Mar 2023 at 09:31, Weihua Hu wrote:
>
>> Hi,
>>
>>
>>> jobs are visible in
Hi,
> jobs are visible in YARN resource manager Web UI
Which mode are you using to submit a job to YARN? Session mode or per-job
mode?
IIUC, the YARN UI only shows Flink clusters, not jobs. Instead, the jobs
will be shown in the Flink UI.
Best,
Weihua
On Tue, Mar 28, 2023 at 11:16 PM Martijn Visse
Hi,
YARN session clusters should automatically allocate new task managers if there
are not enough slots.
And a new submission should not affect running jobs.
Could you provide the failure log?
Best,
Weihua
On Tue, Mar 21, 2023 at 11:57 AM Si-li Liu wrote:
> I use this command to launch a flink yarn sessi
Hi,
Could you describe your usage scenario in detail?
Why do you need to get the broadcast stream in the sink?
And could you split an operator out of the sink to deal with the broadcast stream?
Best,
Weihua
On Mon, Mar 6, 2023 at 10:57 AM zhan...@eastcom-sw.com <
zhan...@eastcom-sw.com> wrote:
>
> h
Hi,
The cause of the error is clearly shown in the log. You need to check the
logic in 'com.flink.batch.lbs.running.hbase.udf.
BatchRunningTimesHBaseUdfRichSinkFunction#flush'
Best,
Weihua
On Fri, Feb 24, 2023 at 4:26 PM simple <1028108...@qq.com> wrote:
>
> 2023-02-24 16:05:26,104 INFO
> org
Hi,
Maybe you can use CURRENT_WATERMARK()[1] to handle some late data.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/
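For example, a sketch with a hypothetical orders table and order_time rowtime
column, keeping rows that are not behind the current watermark:

    SELECT *
    FROM orders
    WHERE CURRENT_WATERMARK(order_time) IS NULL     -- no watermark emitted yet
       OR order_time > CURRENT_WATERMARK(order_time);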
Best,
Weihua
On Tue, Feb 21, 2023 at 1:46 PM wang <24248...@163.com> wrote:
> Hi dear engineers,
>
> One question as t
> Thanks for your response, I am familiar with those calculations, the one I
> don't understand is the Maximum Non-Heap value.
>
> Regards,
> Alexis.
>
> On Tue, 21 Feb 2023, 04:45 Weihua Hu, wrote:
>
>> Hi, Alexis
>>
>> 1. With those configuration, Flink
Hi, Alexis
1. With that configuration, Flink will set the JVM parameters -Xms and -Xmx to
673185792 (642m), -XX:MaxDirectMemorySize to 67108864 (64m), and
-XX:MaxMetaspaceSize to 157286400 (150m); you can find more information in [1]
2. As the hint in the Flink UI says: "The maximum heap displayed might differ from
Hi, Meghajit
What kind of session cluster are you using? Standalone or Native?
If it's standalone, maybe you can check if TaskManager with heavy gc is
running more tasks than others. If so, we can enable
"cluster.evenly-spread-out-slots=true" to balance tasks in all task
managers.
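For reference, the option goes into flink-conf.yaml, e.g.:

    cluster.evenly-spread-out-slots: true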
Best,
Weihua
Hi
Flink 1.9 has not been updated since 2020-04-24; it's recommended to use the
latest stable version, 1.15.1.
Best,
Weihua
On Thu, Aug 18, 2022 at 5:36 AM Milind Vaidya wrote:
> Hi
>
> Trying to compile and build Flink jars based on Scala 2.12.
>
> Settings :
> Java 8
> Maven 3.6.3 / 3.8.6
>
> Many
Hi, Hemanga
> Could not acquire the minimum required resources
This log just shows that there are not enough task managers to schedule
your job.
Based on your description, maybe there was some problem with creating
the task manager. Maybe you can check the status of the task manager pod
wh
Hi, John
This just shows how many direct buffers are allocated through
'ByteBuffer.allocateDirect'.
And Used will always equal Capacity because we cannot get the real usage of
a DirectBuffer.
Best,
Weihua
On Thu, Jul 21, 2022 at 12:54 AM John Tipper
wrote:
> Sorry, pressed send too early.
>
> W
Hi,
Can you see any exception logs?
Where is this code running? Is it a standalone cluster with one TaskManager?
Best,
Weihua
On Tue, Jul 26, 2022 at 4:18 AM wrote:
> If I get it correctly this is the way how I can save to CSV:
>
> https://nightlies.apache.org/flink/flink-docs-master/docs/co
> ...'t work at the moment.
>
> I am still looking for a full working example that adds the required
> Python packages on top of a Flink 1.15.0 base image :)
>
> Gyula
>
> On Tue, Jul 5, 2022 at 5:36 PM Weihua Hu wrote:
>
>> In addition, you can try providing the D
In addition, you can try providing the Dockerfile
Best,
Weihua
On Tue, Jul 5, 2022 at 11:24 PM Weihua Hu wrote:
> Hi,
>
> The base image flink:1.15.0 is built from openjdk:11-jre, and this image
> only installs jre but not jdk.
> It looks like the package you want to install
Hi, jonas
If you restart the Flink cluster by deleting/creating the deployment directly, it
will automatically restore from the latest checkpoint[1], so maybe just
enabling checkpoints is enough.
But if you want to use a savepoint, you need to check whether the latest
savepoint was successful (check wheth
Hi,
The base image flink:1.15.0 is built from openjdk:11-jre, and this image
only installs the JRE, not the JDK.
It looks like the package you want to install (pemja) depends on the JDK. You
need to install openjdk-11-jdk in your Dockerfile;
take a look at how it is installed in the official image:
https://hub.do
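A rough sketch of such a Dockerfile (the exact package name is an assumption;
check the official image's Dockerfile linked above):

    FROM flink:1.15.0
    RUN apt-get update && \
        apt-get install -y --no-install-recommends openjdk-11-jdk-headless && \
        rm -rf /var/lib/apt/lists/*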
Hi, Kevin
I have two minor tips you can try.
1. Check the severity of the data skew and try to solve it in the job logic,
or reduce the skew by keying by multiple times.
2. Increase the checkpoint timeout appropriately.
Best,
Weihua
On Fri, Jul 1, 2022 at 9:29 AM yuxia wrote:
> Streaming
Hi, Filip
You can modify the InfluxdbReporter code to override the notifyOfAddedMetric
method and filter which metrics get reported.
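A rough sketch of the idea (the class and the filter rule are assumptions;
whether you patch the method in place or subclass as below, the reporter
internals may differ across Flink versions):

    public class FilteringInfluxdbReporter extends InfluxdbReporter {
        @Override
        public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
            // only register the metrics you actually want to report; drop the rest
            if (metricName.startsWith("numRecords")) {      // example filter rule
                super.notifyOfAddedMetric(metric, metricName, group);
            }
        }
    }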
Best,
Weihua
On Thu, Jun 30, 2022 at 8:46 PM Filip Karnicki
wrote:
> Hi All
>
> We're using the influx reporter (flink 1.14.3), which seems to create a
>
Hi, Harald
Which version of flink are you referring to?
Best,
Weihua
On Thu, Jun 30, 2022 at 11:38 AM Harald Busch wrote:
> I get an java.lang.ClassNotFoundException:
> org.apache.flink.api.common.serialization.RuntimeContextInitializationContextAdapters
> when running my Apache Flink code lo
Hi, Vishal
Reactive mode adjusts the parallelism of tasks according to the slots of the
cluster; it will not allocate new workers automatically.[1]
1. Max parallelism only works to scale up the parallelism of tasks. It
will not affect the scheduling of tasks.
2. Flink will enable slot sharing by default, u
Hi, Robin
This is a good question, but Flink can't do rolling upgrades.
I'll try to explain what it would cost for Flink to support rolling upgrades.
1. There is a shuffle connection between tasks in a Region, and in order to
ensure the consistency of the data processed by the upstream and downstream
tasks
Hi,
IMO, broadcast is a better way to do this, as it can reduce the QPS against the
external service.
If you do not want to use broadcast, try using a RichFunction: start a thread
in the open() method to refresh the data regularly, but be careful to clean
up your data and threads in the close() method, otherw
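A minimal sketch of that pattern (the Event/cache types, enrichWith(), and
loadFromExternalService() are hypothetical; imports from java.util.concurrent
and org.apache.flink assumed):

    public class EnrichFunction extends RichMapFunction<Event, Event> {
        private transient ScheduledExecutorService refresher;
        private transient volatile Map<String, String> cache;

        @Override
        public void open(Configuration parameters) {
            cache = loadFromExternalService();               // initial load
            refresher = Executors.newSingleThreadScheduledExecutor();
            refresher.scheduleAtFixedRate(
                () -> cache = loadFromExternalService(),     // periodic refresh
                1, 1, TimeUnit.MINUTES);
        }

        @Override
        public Event map(Event value) {
            return value.enrichWith(cache);                  // hypothetical enrichment
        }

        @Override
        public void close() {
            if (refresher != null) {
                refresher.shutdownNow();                     // clean up the thread
            }
        }
    }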
Hi Zain,
You can view the list of Flink applications in the YARN web UI and jump
to a specific Flink web UI from there.
Best,
Weihua
On Mon, May 23, 2022 at 7:07 PM Zain Haider Nemati
wrote:
> Hi David,
> Thanks for your response.
> When submitting a job in application mode it gives a url at
Hi,
You can get the logs from the Flink web UI if the job is running.
Best,
Weihua
> On May 19, 2022, at 10:56 PM, Zain Haider Nemati wrote:
>
> Hey All,
> How can I check logs for my job when it is running in application mode via
> yarn
Hi Harshit,
FlinkKafkaConsumer does not support consuming a particular partition of a topic.
Best,
Weihua
> On May 18, 2022, at 5:02 PM, harshit.varsh...@iktara.ai wrote:
>
> particular
Hi,
Which version of Flink are you using?
It looks like there is a conflict between the Flink version of the cluster and
the version in the user jar.
Best,
Weihua
> On May 19, 2022, at 4:49 PM, Zain Haider Nemati wrote:
>
> Hi,
> Im running flink application on yarn cluster it is giving me this error, it
> is w
Hi,
Which version of Flink are you using? And what is the start cmd?
Best,
Weihua
> On May 17, 2022, at 6:33 PM, Zain Haider Nemati wrote:
>
> main method caused an error: Failed to execute job 'Tracer Processor'.
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgr
Sorry, the command was parsed as a quote.
The real command is:
" > taskmanager.out "
Best,
Weihua
> On May 16, 2022, at 9:52 PM, Weihua Hu wrote:
>
> Hi,
>
> Flink redirects stdout to the taskmanager.out when starting TaskManager.
> If taskmanager.out is deleted, Fl
Hi,
Flink redirects stdout to taskmanager.out when starting the TaskManager.
If taskmanager.out is deleted, Flink cannot automatically create
taskmanager.out, which means any subsequent output to stdout will be lost.
If you want to clean up the content of taskmanager.out, you can try using:
> at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
> at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
>
> From the error it looks like it's falling back to Kryo serializer instead of
> POJO serializer.
>
> Thanks,
> Tej
Hi, Tejas
This code works in my IDEA environment.
Could you provide more error info or logs?
Best,
Weihua
> On May 10, 2022, at 1:22 PM, Tejas B wrote:
>
> Hi,
> I am trying to get flink schema evolution to work for me using POJO
> serializer. But I found out that if an enum is present in the POJO then
Hi, Fuyao
How did you define these classes? There are some requirements for POJOs, as the
Flink docs[1] say:
The class must be public.
It must have a public constructor without arguments (default constructor).
All fields are either public or must be accessible through getter and setter
functions. F
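For example, a class that satisfies those rules might look like this (names are
illustrative):

    public class Person {                // class is public
        public String name;              // public field, or ...
        private int age;                 // ... private with getter/setter

        public Person() {}               // public no-argument constructor

        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }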
Hi, Felipe
Sorry for the late reply.
You can try to configure taskmanager.numberOfTaskSlots = 1 and use different
slotSharingGroups to make sure tasks are not placed in the same TM.
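As a sketch (the operator names are placeholders):

    # flink-conf.yaml
    taskmanager.numberOfTaskSlots: 1

    // in the job: give the operators different slot sharing groups
    stream.map(new HeavyMapper()).slotSharingGroup("group-a");
    stream.map(new OtherMapper()).slotSharingGroup("group-b");

With one slot per TaskManager, tasks in different sharing groups cannot end up
on the same TM.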
Best
Weihua Hu
> On May 29, 2020, at 17:07, Felipe Gutierrez wrote:
>
> Using slotSharingGroup I can do some placement. howe
Hi, Felipe
Flink does not support running tasks on a specified TM.
You can use slotSharingGroup to keep tasks out of the same slot, but you cannot
specify which TM they run on.
Can you please give the reason for specifying a TM?
Best
Weihua Hu
> On May 28, 2020, at 21:37, Felipe Gutierrez wrote:
>
> For instance,
Hi Piotrek,
Thanks for your suggestions. I found some network issues which seem to be the
cause of the back pressure.
Best
Weihua Hu
> On May 26, 2020, at 02:54, Piotr Nowojski wrote:
>
> Hi Weihua,
>
> > After dumping the memory and analyzing it, I found
understanding of the Flink network transmission
mechanism.
Can someone help me? Thanks a lot.
Best
Weihua Hu
from the UI.
For now, you can increase 'akka.ask.timeout' to avoid this.
I have created a JIRA issue to improve this:
https://issues.apache.org/jira/browse/FLINK-16069
Best
Weihua Hu
> On Feb 15, 2020, at 01:54, Richard Moorhead wrote: