It would still work
From: Richard Cheung
Sent: Tuesday, May 20, 2025 4:08:00 AM
To: Zhanghao Chen
Cc: Мосин Николай ; Schwalbe Matthias
; user@flink.apache.org
Subject: Re: Apache Flink Serialization Question
Hi all,
Thanks again for the help! I have one more follow
this end.
Best,
Zhanghao Chen
From: Kamal Mittal via user
Sent: Monday, May 19, 2025 13:40
To: User
Subject: Flink task manager pod auto scaling
Hello,
Does Flink support vertical task manager pod auto scaling?
Rgds,
Kamal
eatment of these nullable cases.
Best,
Zhanghao Chen
From: Мосин Николай
Sent: Friday, May 16, 2025 0:02
To: Richard Cheung
Cc: Schwalbe Matthias ; Zhanghao Chen
; user@flink.apache.org
Subject: Re: Apache Flink Serialization Question
For List I just setup TypeI
Thanks for the insightful sharing!
Best,
Zhanghao Chen
From: Lasse Nedergaard
Sent: Thursday, May 15, 2025 13:10
To: Zhanghao Chen
Cc: mosin...@yandex.ru ; user@flink.apache.org
Subject: Re: Keyed watermarks: A fine-grained watermark generation for Apache
P-467%3A+Introduce+Generalized+Watermarks
Best,
Zhanghao Chen
From: Мосин Николай
Sent: Thursday, May 15, 2025 3:58
To: user@flink.apache.org
Subject: Keyed watermarks: A fine-grained watermark generation for Apache Flink
I found paper https://scholar.google.com/sc
[3]
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/elastic_scaling/#externalized-declarative-resource-management
Best,
Zhanghao Chen
From: Kamal Mittal via user
Sent: Monday, May 12, 2025 13:35
To: user@flink.apache.org
Subject: Flink task ma
Flink still uses the PojoSerializer for the class while only using Kryo for the UUID
field.
Best,
Zhanghao Chen
From: Richard Cheung
Sent: Wednesday, May 14, 2025 3:21
To: user@flink.apache.org
Subject: Apache Flink Serialization Question
Hi all!
I have a question
From the logs, the JobManager's off-heap memory is insufficient, causing an off-heap
OOM when the JVM performs JIT compilation. You can try increasing the
jobmanager.memory.jvm-overhead.max and jobmanager.memory.jvm-overhead.fraction
options to enlarge the JVM overhead memory region.
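As a sketch, the corresponding flink-conf.yaml change might look like this (the values are illustrative only and should be sized for your deployment):

```yaml
# Enlarge the JobManager JVM overhead region (illustrative values).
# The effective size is total process memory * fraction, clamped
# between the min and max settings.
jobmanager.memory.jvm-overhead.fraction: 0.2
jobmanager.memory.jvm-overhead.max: 2g
```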
Best,
Zhanghao Chen
From: 咖啡本色/kf <250071...@qq.com>
Sent: Monday, April 14, 2025 7:56
To
Awesome!
Thanks for all the release managers running this release and all the
contributors to contribute to it!
Luke
On Mon, Mar 24, 2025 at 4:41 PM Zakelly Lan wrote:
> Congratulations! Thanks Xintong, Jark, Jiangjie and Martijn for driving the
> 2.0.
> Thanks everyone for the great work!
Hi, Vincent. Old versions of JDK8 lack proper container awareness. It is
suggested to upgrade your JDK to at least 8u372 or 11.0.16, see [1][2] for more
details.
[1] https://bugs.openjdk.org/browse/JDK-8146115
[2] https://bugs.openjdk.org/browse/JDK-8230305
Best,
Zhanghao Chen
tring name, TypeInformation typeInfo) API to
manually specify the TypeInformation on the state side.
Best,
Zhanghao Chen
From: Sachin Mittal
Sent: Thursday, February 13, 2025 12:23
To: Zhanghao Chen
Cc: user
Subject: Re: How to register pojo type information for third
Hi,
You should register a custom type info for java.util.List with a custom list
serializer instead of class B itself in this case.
Best,
Zhanghao Chen
From: Sachin Mittal
Sent: Thursday, February 13, 2025 16:10
To: Zhanghao Chen
Cc: user
Subject: Re: How to
java.util.List
is introduced, and you should not need any additional type registrations to
disable the generic types in this case.
Best,
Zhanghao Chen
From: Sachin Mittal
Sent: Thursday, February 13, 2025 12:23
To: Zhanghao Chen
Cc: user
Subject: Re: How to
Hi, you may use the option "pipeline.serialization-config" [1] to register type
info for any custom type, which is available since Flink 1.19.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config
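For illustration, a registration under "pipeline.serialization-config" could look like the following in flink-conf.yaml (class names here are placeholders; see the linked docs for the exact schema):

```yaml
# Register a TypeInfoFactory for a custom type (placeholder names).
pipeline.serialization-config:
- org.example.MyCustomType: {type: typeinfo, class: org.example.MyCustomTypeInfoFactory}
```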
Best,
Zha
Simply put, HA metadata will only be deleted when the job reaches a terminal
state (either failed or cancelled). The ref doc is
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/task_failure_recovery/#restart-strategies
Best,
Zhanghao Chen
From
-delay
restart-strategy.fixed-delay.delay: 10s
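The truncated fragment above appears to come from a fixed-delay restart strategy configuration; a complete version might look like this (the attempt count is illustrative):

```yaml
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10s
```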
Thanks,
Chen
On Tue, Feb 4, 2025 at 5:14 PM Zhanghao Chen
wrote:
> Hi Yang,
>
> When the job failed temporarily, e.g. due to single machine failure, Flink
> will retain the HA metadata and try to recover. However, when the job has
> alre
Please send an email to user-unsubscr...@flink.apache.org if you want to
unsubscribe from user@flink.apache.org.
Best,
Zhanghao Chen
From: Mujahid Niaz
Sent: Wednesday, February 5, 2025 9:29
Cc: user@flink.apache.org
Subject: Unsubscribe
Unsubscribe
You'll need to implement a custom sink for that.
Best,
Zhanghao Chen
From: Ilya Karpov
Sent: Monday, February 3, 2025 18:30
To: user
Subject: Dead Letter Queue for FlinkSQL
Hi there,
Because sink connectors can throw exceptions in real time (for example
egies
Best,
Zhanghao Chen
From: Chen Yang via user
Sent: Wednesday, February 5, 2025 7:17
To: user@flink.apache.org
Cc: Vignesh Chandramohan
Subject: Flink High Availability Data Cleanup
Hi Flink Community,
I'm running the Flink jobs (standalone mode)
igs to keep the configmap during job cleanup. But I can't find these
configurations mentioned in any Flink docs, nor in the Flink code. Please
advise!
high-availability.cleanup-on-shutdown
or
kubernetes.jobmanager.cleanup-ha-metadata
Thanks,
Chen
--
Chen Yang
Software Engineer, Data Infrastr
[1] has been merged. I'll try to work on it under [2], looking forward to more
volunteers on it!
[1] https://issues.apache.org/jira/browse/FLINK-30478
[2] https://issues.apache.org/jira/browse/FLINK-15736
Best,
Zhanghao Chen
From: Nikola Milutinovic
,
Zhanghao Chen
From: Nikola Milutinovic
Sent: Friday, January 10, 2025 23:48
To: user@flink.apache.org
Subject: Re: table.exec.source.idle-timeout support
Hi Nic.
I do not have a solution (sorry), but have seen something similar. And have
complained about it
Hi,
In short, yes if there are no user-defined functions. For UDFs, you'll have to
ensure that they do not cache data internally (e.g., by maintaining a local hash
map); otherwise downstream ops may change the cached data and break data
integrity.
Best,
Zhanghao
[2].
[1] https://lists.apache.org/thread/qvw66of180t3425pnqf2mlx042zhlgnn
[2]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-458%3A+Long-Term+Support+for+the+Final+Release+of+Apache+Flink+1.x+Line
Best,
Zhanghao Chen
From: Anuj Jain
Sent: Sunday, January 5
Hi Jean-Marc,
Thanks for reporting this; it is a mistake. Your code is correct. I've
created an issue [1] to fix that.
[1] https://issues.apache.org/jira/browse/FLINK-36904
Best,
Zhanghao Chen
From: Jean-Marc Paulin
Sent: Thursday, December 5, 2024 21:
Unsubscribe
;Data Types & Serialization" for details of the
effect on performance and schema evolution.
I tried to create a security policy to allow setContextClassLoader, but
that didn't work. Any idea on how to fix this will be greatly appreciated.
Thanks,
--
<http://www.robinhood.com/>
ask.java:779)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
at java.lang.Thread.run(Thread.java:748)
rui chen wrote on Wednesday, October 9, 2024 at 11:38:
> We found a deadlock problem on Flink 1.13.2 when a single piece of data is too
> large: the job stops processing data. Comments from anyone familiar with the
> data trans
We found a deadlock problem on Flink 1.13.2 when a single piece of data is too
large: the job stops processing data. Comments from anyone familiar with the
data transmission layer are welcome.
-- Forwarded message -
From: rui chen
Date: Sunday, September 29, 2024 10:00
Subject: Flink has a large number of data and appears to be suspended after
restarting.
To:
1. A single record is 500 KB.
2. The job restarts after the TM fails.
/datastream/fault-tolerance/serialization/types_serialization/#defining-type-information-using-a-factory
Best,
Zhanghao Chen
From: Lasse Nedergaard
Sent: Friday, September 13, 2024 14:27
To: user
Subject: Recommendations for avoid Kryo
Hi.
I was wondering how others
In our production environment, it works fine.
Best,
Zhanghao Chen
From: Sachin Sharma
Sent: Friday, September 13, 2024 1:19
To: Oscar Perez via user
Subject: Flink 1.19.1 Java 17 Compatibility
Hi,
We are planning to use Flink 1.19.1 with kubernetes operator, I
and the resources of the
underlying host are returned instead. Check [1] for more information.
[1] https://jvmaware.com/container-aware-jvm/
Best,
Zhanghao Chen
From: Oliver Schmied
Sent: Wednesday, September 11, 2024 16:49
To: user@flink.apache.org
Subject: Tas
Hi Kartik,
The expiration time of completed jobs in a session cluster is controlled by
the config `jobstore.expiration-time` [1].
Best,
Yu Chen
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/config/#jobstore-expiration-time
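For example, to keep completed jobs in the session cluster's job store for two hours instead of the default one hour (a sketch; the value is in seconds):

```yaml
jobstore.expiration-time: 7200
```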
From:
full checkpoint size counts all files.
Best,
Zhanghao Chen
From: banu priya
Sent: Monday, August 5, 2024 16:45
To: user@flink.apache.org
Subject: Managed memory and state size
Hi All,
As my incremental rocksdb check point size is increasing continuously, I am
trying to
/
Celeborn Resources:
- Issue Management: https://issues.apache.org/jira/projects/CELEBORN
- Mailing List: d...@celeborn.apache.org
Regards,
Fu Chen
On behalf of the Apache Celeborn community
t the presence of
backpressure. Unaligned checkpoints were introduced to solve this problem:
in-flight buffers are stored in the checkpoint without the need for alignment.
Best,
Zhanghao Chen
From: Enric Ott <243816...@qq.com>
Sent: Wednesday, July 17, 2024 16:
Hi, you could increase it as follows:
Configuration config = new Configuration();
config.setString("collect-sink.batch-size.max", "10mb");
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment(config);
From: Salva Alcántara
Sent: Satur
Hi Enric,
It basically means the prioritized buffers can bypass all non-prioritized
buffers at the input gate and get processed first. You may refer to
https://issues.apache.org/jira/browse/FLINK-19026 for more details, where it
was first introduced.
Best,
Zhanghao Chen
You can try session mode with only one job, but still with adaptive scheduler
disabled. When stopping a session job, the TMs won't be released immediately
and can be reused later.
Best,
Zhanghao Chen
From: Chetas Joshi
Sent: Tuesday, June 25, 2024 1:
un/commits/master/
Best,
Zhanghao Chen
From: L. Jiang
Sent: Tuesday, June 18, 2024 4:57
To: user@flink.apache.org
Subject: Flink Stateful Functions 3.4
Hi there,
Anyone knows which Flink version that Flink Stateful Functions 3.4 is
compatible with?
https://nightlies.
anagement/#stateful-and-stateless-application-upgrades
Best,
Zhanghao Chen
From: Chetas Joshi
Sent: Thursday, June 13, 2024 6:33
To: Zhanghao Chen
Cc: Sachin Sharma ; Gyula Fóra ;
Oscar Perez via user
Subject: Re: Understanding flink-autoscaler behavior
Hi Zhang
ernal job monitoring
system to manually recover it.
Best,
Zhanghao Chen
From: Jean-Marc Paulin
Sent: Tuesday, June 11, 2024 16:04
To: Zhanghao Chen ; user@flink.apache.org
Subject: Re: Failed to resume from HA when the checkpoint has been deleted.
Thanks for you
Hi,
In this case, you could cancel the job using the flink stop command, which
will clean up Flink HA metadata, and resubmit the job.
Best,
Zhanghao Chen
From: Jean-Marc Paulin
Sent: Monday, June 10, 2024 18:53
To: user@flink.apache.org
Subject: Failed to
e the corresponding uidHash for each
suboperator. Maybe you can further investigate it and fire a JIRA issue on it.
Best,
Zhanghao Chen
From: Salva Alcántara
Sent: Sunday, June 9, 2024 14:49
To: Gabor Somogyi
Cc: user
Subject: Re: Setting uid hash for non-legacy
tlies.apache.org/flink/flink-docs-master/docs/deployment/elastic_scaling/#reactive-mode
[2]
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.8/docs/custom-resource/autoscaler/
Best,
Zhanghao Chen
From: Sachin Sharma
Sent: Saturday, June
Does Application mode support multiple submissions in HA mode?
Yes, the exact offset position will also be committed when doing the savepoint.
Best,
Zhanghao Chen
From: Lei Wang
Sent: Thursday, June 6, 2024 16:54
To: Zhanghao Chen ; ruanhang1...@gmail.com
Cc: user
Subject: Re: Force to commit kafka offset when stop a job
Hi, you could stop the job with a final savepoint [1]; Flink will trigger
a final offset commit on that savepoint.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/cli/#stopping-a-job-gracefully-creating-a-final-savepoint
Best,
Zhanghao Chen
Hi Sigalit,
Yes. Here, most of your memory is consumed by JVM heap and Flink network
memory, both are somewhat like a pre-allocated memory pool managed by JVM/Flink
Memory Manager, which typically do not return memory to the OS even if there's
some free space internally.
Best,
Zhanghao
ation. The default configuration in the Flink distribution has already
been configured such that Flink itself works on Java 17.
Best,
Zhanghao Chen
From: Rajat Pratap
Sent: Thursday, May 30, 2024 13:17
To: Zhanghao Chen
Subject: Re: Java 17 incompatibilities with Fl
Hi Rajat,
Flink releases are compiled with JDK 8 but can run on JDK 8-17. As
long as Flink runs on JDK 17 on both the server and client side, you are free
to write your Flink jobs in Java 17.
Best,
Zhanghao Chen
From: Rajat Pratap
Sent: Tuesday
,
Zhanghao Chen
From: John Smith
Sent: Thursday, May 23, 2024 22:40
To: Zhanghao Chen
Cc: Biao Geng ; user
Subject: Re: Would Java 11 cause Getting OutOfMemoryError: Direct buffer memory?
Based on these two settings...
taskmanager.memory.flink.size: 16384m
ing a larger taskmanager.memory.jvm-overhead memory,
and monitor it for a long time. If that's not the case, then there might be
native memory leakage somewhere, but that may not be related to the state.
Best,
Zhanghao Chen
From: Sigalit Eliazov
Sent: Thursd
fficient in most cases.
Best,
Zhanghao Chen
From: Maxim Senin via user
Sent: Thursday, April 18, 2024 5:56
To: user@flink.apache.org
Subject: Parallelism for auto-scaling, memory for auto-tuning - Flink operator
Hi.
Does it make sense to specify `parallelism`
instability persists?
Best,
Zhanghao Chen
From: Oscar Perez
Sent: Monday, April 15, 2024 19:24
To: Zhanghao Chen
Cc: Oscar Perez via user
Subject: Re: Flink job performance
Hei, ok that is weird. Let me resend them.
Regards,
Oscar
On Mon, 15 Apr 2024 at 14:00, Zhanghao
Hi, there seems to be something wrong with the two images attached in the latest
email. I cannot open them.
Best,
Zhanghao Chen
From: Oscar Perez via user
Sent: Monday, April 15, 2024 15:57
To: Oscar Perez via user ; pi-team ;
Hermes Team
Subject: Flink job
the JSON processing
scenario with UDFs in Java/Python under thread mode/Python under process mode.
Best,
Zhanghao Chen
From: Niklas Wilcke
Sent: Monday, April 15, 2024 15:17
To: user
Subject: Pyflink Performance and Benchmark
Hi Flink Community,
I wanted to reach
Hi Oscar,
The rebalance operation will go over the network stack, but not necessarily
involving remote data shuffle. For data shuffling between tasks of the same
node, the local channel is used, but compared to chained operators, it still
introduces extra data serialization overhead. For data s
Adding a space between -yD and the param should do the trick.
Best,
Zhanghao Chen
From: Lei Wang
Sent: Thursday, April 11, 2024 19:40
To: Zhanghao Chen
Cc: Biao Geng ; user
Subject: Re: How to enable RocksDB native metrics?
Hi Zhanghao,
flink run -m yarn-cluster
Hi Lei,
You are using an old-style CLI for YARN jobs where "-yD" instead of "-D"
should be used.
From: Lei Wang
Sent: Thursday, April 11, 2024 12:39
To: Biao Geng
Cc: user
Subject: Re: How to enable RocksDB native metrics?
Hi Biao,
I tried, it doesn't work
Hi, you may enable the Kryo fallback option first, submit the job, and
search for "be processed as GenericType". Flink will print this in most cases
where it falls back to Kryo (with a few exceptions, including the types Class,
Object, recursive types, and interfaces).
Best,
Zha
ge).
Best,
Zhanghao Chen
From: Ganesh Walse
Sent: Friday, March 29, 2024 10:42
To: Zhanghao Chen
Cc: user@flink.apache.org
Subject: Re: One query just for curiosity
You mean to say we can process 32767 records in parallel. And may I know if
this is the case
thout duplication,
you might set up a Redis service externally for that purpose.
Best,
Zhanghao Chen
From: Ganesh Walse
Sent: Friday, March 29, 2024 4:45
To: user@flink.apache.org
Subject: Flink cache support
Hi Team,
In my project my requirement is to cache data
Flink can be scaled up to a parallelism of 32767 at max. And if your record
processing is mostly IO-bound, you can further boost the throughput via
Async-IO [1].
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/asyncio/
Best,
Zhanghao Chen
Congratulations!
Best,
Zhanghao Chen
From: Yu Li
Sent: Thursday, March 28, 2024 15:55
To: d...@paimon.apache.org
Cc: dev ; user
Subject: Re: [ANNOUNCE] Apache Paimon is graduated to Top Level Project
CC the Flink user and dev mailing list.
Paimon originated
Congratulations!
Thanks to release managers and everyone involved!
Best,
Yu Chen
> On March 19, 2024 at 01:01, Jeyhun Karimov wrote:
>
> Congrats!
> Thanks to release managers and everyone involved.
>
> Regards,
> Jeyhun
>
> On Mon, Mar 18, 2024 at 9:25 AM Lincoln Lee wr
Hi Sachin,
The Flink 1.8 series has been out of support for a long time; have you tried a
newer version of Flink?
From: Sachin Mittal
Sent: Tuesday, March 12, 2024 14:48
To: user@flink.apache.org
Subject: Facing ClassNotFoundException:
org.apache.flink.api.common.Exe
r now, but there's some on-going
effort [2]. Hopefully, it would be much easier to do so in the future.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/overview/#application-mode
[2] https://issues.apache.org/jira/browse/FLINK-26541
Best,
Zha
an be satisfied with the
reduce/aggregate function pattern, which is important for large windows.
Best,
Zhanghao Chen
From: Gabriele Mencagli
Sent: Monday, March 4, 2024 19:38
To: user@flink.apache.org
Subject: Question about time-based operators with RocksDB ba
Hi Kanchi,
Could you provide more information on it? For example, at what stage this log
prints (job recovering, running, etc.), plus any more detailed job info or stacktrace.
Best,
Zhanghao Chen
From: Kanchi Masalia via user
Sent: Friday, February 16, 2024 4:07
To: Neha
with.
Best,
Zhanghao Chen
From: Brent
Sent: Saturday, February 17, 2024 3:01
To: user@flink.apache.org
Subject: Flink use case feedback request
Hey everyone,
I've been looking at Flink to handle a fairly complex use case and was hoping
for some feedback
Page: https://celeborn.apache.org/
Celeborn Resources:
- Issue Management: https://issues.apache.org/jira/projects/CELEBORN
- Mailing List: d...@celeborn.apache.org
Thanks,
Fu Chen
On behalf of the Apache Celeborn(incubating) community
Hi Patricia,
Flink will create one Kafka consumer per parallelism; however, you'll need some
testing to measure the capability of a single task. Usually, one consumer can
consume at a much higher rate than 1 record per second.
Best,
Zhanghao Chen
From: patrici
d Java 17 in
production, including ByteDance as mentioned by Xiangyu.
Best,
Zhanghao Chen
From: Deepti Sharma S via user
Sent: Friday, January 26, 2024 22:56
To: xiangyu feng
Cc: user@flink.apache.org
Subject: RE: Apache Flink lifecycle and Java 17 support
Hel
r/blob/48df9d35ed55ae8bb513d9153e9f6f668da9e1c3/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/event/LoggingEventHandler.java#L43C18-L43C18
Best,
Yu Chen
> On January 18, 2024 at 18:20, Yang LI wrote:
>
> Hello dear flink community,
>
> I noticed that there's a scaling report feature
/file/src/impl/ContinuousFileSplitEnumerator.java#L62C33-L62C54
[2] Overview | Apache Paimon
<https://paimon.apache.org/docs/master/concepts/overview/>
Best,
Yu Chen
> On January 10, 2024 at 04:31, Nitin Saini wrote:
>
> Hi Flink Community,
>
> I was using flink 1.12.7 readFile to read fil
Hi Chen,
You should tell Flink which table to insert into via "INSERT INTO xxx SELECT ...".
For a single non-insert query, Flink will collect the output to the console
automatically, so you don't need to add an insert statement.
But you must specify the target table explicitly when you need to write
Hi Elakiya,
You can try executing TableEnvironmentImpl#executeInternal for non-insert
statements, then using StatementSet.addInsertSql to add multiple insert
statements, and finally calling StatementSet#execute.
Best,
Zhanghao Chen
From: elakiya udhayanan
lection is performed with a unified config map for
doing that.
Best,
Zhanghao Chen
From: Ethan T Yang
Sent: Wednesday, December 6, 2023 5:40
To: user@flink.apache.org
Subject: Flink Kubernetes HA
Hi Flink users,
After upgrading Flink ( from 1.13.1 -> 1.18.0), I n
scale to
> requests to redeploy the job.
>
> Sorry, I didn't understand what type of benchmarking
> we should do, could you elaborate on it? Thanks a lot.
>
> Best,
> Rui
>
> On Sat, Nov 18, 2023 at 3:32 AM Mason Chen wrote:
>
>> Hi Rui,
>>
>&g
#heading=h.f5wfmsmpemd0>
Best,
Yu Chen
> On December 5, 2023 at 04:42, prashant parbhane wrote:
>
> Hi Yu,
>
> Thanks for your reply.
>
> When i run below script
>
> ```
> jeprof --show_bytes -svg `which java` /tmp/jeprof.out.301.1009.i1009.heap >
> 1009.svg
> ```
&
t=b,file=/tmp/heap.hprof
```
[1] Using jemalloc to Optimize Memory Allocation ― Sentieon Appnotes 202308.01
documentation<https://support.sentieon.com/appnotes/jemalloc/>
Best,
Yu Chen
From: prashant parbhane
Sent: November 28, 2023 1:42
To: user@flink.apach
Hi Lasse,
The default flink-conf.yaml file bundled in the distribution should already
have a preset env.java.opts.all config for Java 17. Have you tried that?
Best,
Zhanghao Chen
From: Lasse Nedergaard
Sent: Monday, November 27, 2023 21:20
To: user
Subject
to add such an interface, you can follow the ticket
FLINK-33230 [1]
[1] [FLINK-33230] Support Expanding ExecutionGraph to StreamGraph in Web UI -
ASF JIRA (apache.org)<https://issues.apache.org/jira/browse/FLINK-33230>
Best,
Yu Chen
From: rania duni
Sent: 2023
the exception of the failed restore operator id.
However, the loss of the operator state would only produce some erroneous
results and would not result in `not able to return any row`. It would be
better to provide logs from after the restore to locate the specific problem.
Yu Chen
Hi rania,
If you mean the Job Vertex ID of the JobGraph, you can try this:
http://localhost:8081/jobs/
Best,
Yu Chen
From: Zhanghao Chen
Sent: November 26, 2023 11:02
To: rania duni ; user@flink.apache.org
Subject: Re: Operator ids
It is not supported yet. Curious why
It is not supported yet. Curious why you need to get the operator IDs? They
are usually only used internally.
Best,
Zhanghao Chen
From: rania duni
Sent: Saturday, November 25, 2023 20:44
To: user@flink.apache.org
Subject: Operator ids
Hello!
I would like
Hi Rui,
I suppose we could do some benchmarking on what works well for the resource
providers that Flink relies on e.g. Kubernetes. Based on conferences and
blogs, it seems most people are relying on Kubernetes to deploy Flink and
the restart strategy has a large dependency on how well Kubernetes
Currently, 16GB of heap size is allocated to the flink-kubernetes-operator
container by setting *jvmArgs.operator*, and this didn't help either.
On Wed, Nov 8, 2023 at 5:56 PM Tony Chen wrote:
> Hi Flink Community,
>
> This is a follow-up on a previous email thread (see emai
ed in this message
>> https://lists.apache.org/thread/0odcc9pvlpz1x9y2nop9dlmcnp9v1696
>> I tried changing versions and allocated resources, as well as the number
>> of reconcile threads, but nothing helped
>>
>> --
>> *From:* Tony Chen
&g
-externalized-checkpoint-retention
Best,
Yu Chen
> On November 8, 2023 at 13:08, 梁嘉贤 wrote:
>
> Hi, let me correct my question: it is the growing number of checkpoints in the
> taskmanager that is taking up disk space. Some additional information:
> I mounted the task manager's checkpoint path locally and inspected it with du
> -h; the job keeps accumulating chk directories, so disk usage keeps growing,
> as shown in the figure below. My question is: how do I delete these historical
> chk files?
> <2dfdf...@8
caused by checkpoints (e.g., a global-window aggregation without State TTL
configured). Locating the cause of the memory growth requires more job
information. Also, to confirm whether a parameter has taken effect, you can
check the Configuration tab of the JobManager.
Best,
Yu Chen
> On November 8, 2023 at 11:56, 梁嘉贤 wrote:
>
> Hello,
> I'm using Flink 1.14, with jobmanager and taskmanager containers created
> separately via docker; the docker-compose.yml is shown in figure 1 below.
> In the configuration, I set sta
kend:
serviceName: xxx
servicePort: 8081
```
Please let me know if there are any other problems.
Best,
Yu Chen
> On November 7, 2023 at 18:40, Tauseef Janvekar wrote:
>
> Hi Chen,
>
> We are not using nginx anywhere on the server(kubernetes cluster) or on my
> client(my local
Hi Arjun,
As stated in the document, 'This regex pattern should be matched with the
absolute file path.'
Therefore, you should adjust your regular expression to match absolute paths.
Please let me know if there are any other problems.
Best,
Yu Chen
> On November 7, 2023 at 18:11, arjun s wrote:
Hi Tauseef,
The error was caused by the nginx configuration and was not a flink problem.
You can find many related solutions on the web [1].
Best,
Yu Chen
[1]
https://stackoverflow.com/questions/24306335/413-request-entity-too-large-file-upload-issue
> On November 7, 2023 at 15:14, Tauseef Janvekar
ntent part from a file,
prefix the file name with the symbol <.
The difference between @ and < is then that @ makes a file get
attached in the post as a file upload,
while the < makes a text field and just get the contents for that
text field from a fi
-- of `path` option
Best,
Yu Chen
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/filesystem/
From: arjun s
Sent: November 6, 2023 20:50
To: user@flink.apache.org
Subject: Handling Schema Variability and Applying Regex P
Hi Bo,
How about writing the data to the Print connector [1] simultaneously via
insertInto [2]? It will print the data into the TaskManager's log.
Of course, you can choose an appropriate connector according to your audit log
storage.
Best,
Yu Chen
[1]
https://nightlies.apache.org/flink/flink
erent resource managers
to deal with different usage scenarios.
Please feel free to correct me if there are any misunderstandings.
Best regards,
Yu Chen
Steven Chen wrote on Friday, November 3, 2023 at 13:28:
> Dear Flink Community,
>
>
> I am currently using Flink for my project and have a q