, the universal connector is compatible with 0.10
brokers, but I want to double check that.
Best,
Paul Lam
> 2020年8月24日 22:46,Aljoscha Krettek 写道:
>
> Hi all,
>
> this thought came up on FLINK-17260 [1] but I think it would be a good idea
> in general. The issue reminded us th
1. Stop the job with a savepoint, committing the offsets to the Kafka brokers.
2. Modify the user code, migrate to the universal connector, and change the source
operator id to discard the old connector state (sketched below).
3. Start the job with the savepoint, and read Kafka from group offsets.
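A rough sketch of what steps 2 and 3 could look like (topic, group, and uid are
placeholders; when resuming from the savepoint, --allowNonRestoredState would be
needed so the discarded source state doesn't fail the restore):
```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class MigrateToUniversalConnector {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        props.setProperty("group.id", "my-group");             // same group id as before

        // step 2: the universal connector replaces the 0.10 consumer
        FlinkKafkaConsumer<String> source =
            new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
        // step 3: start from the group offsets committed by the savepoint in step 1
        source.setStartFromGroupOffsets();

        env.addSource(source)
            .uid("kafka-source-universal") // new uid, so the old connector state is dropped
            .name("Kafka Source (universal)")
            .print();

        env.execute("migrated job");
    }
}
```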
Best,
Paul Lam
> 2020年8月
Congrats, Dian!
Best,
Paul Lam
> 2020年8月27日 17:42,Marta Paes Moreira 写道:
>
> Congrats, Dian!
>
> On Thu, Aug 27, 2020 at 11:39 AM Yuan Mei <yuanmei.w...@gmail.com> wrote:
> Congrats!
>
> On Thu, Aug 27, 2020 at 5:38 PM Xingbo Huang &
config:
- Flink 1.11.0
- YARN job cluster
- HA via zookeeper
- FsStateBackend
- Aligned non-incremental checkpoint
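In code, the checkpointing part of that setup corresponds roughly to the sketch
below (paths and intervals are placeholders; HA is configured in flink-conf.yaml):
```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // FsStateBackend: full (non-incremental) snapshots written to the filesystem
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));

        // aligned, exactly-once checkpoints every minute
        env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().enableExternalizedCheckpoints(
            ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
    }
}
```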
Any comments and suggestions are appreciated! Thanks!
Best,
Paul Lam
Hi Till,
Thanks a lot for the pointer! I tried to restore the job using the
savepoint in a dry run, and it worked!
Guess I've misunderstood the configuration option, and was confused by the
non-existent paths that the metadata contains.
Best,
Paul Lam
Till Rohrmann 于2020年9月29日周二 下午10
timeout, is there any recommended out-of-the-box approach that can reduce the
chance of getting the timeouts?
In the long run, is it possible to limit the number of taskmanager contexts
that RM creates at
a time, so that the heartbeat triggers can chime in?
Thanks!
Best,
Paul Lam
Hi Xingtong,
Thanks a lot for the pointer!
It’s good to see there would be a new IO executor to take care of the TM
contexts. Looking forward to the 1.12 release!
Best,
Paul Lam
> 2020年10月12日 14:18,Xintong Song 写道:
>
> Hi Paul,
>
> Thanks for reporting this.
>
>
Sorry for the misspelled name, Xintong
Best,
Paul Lam
> 2020年10月12日 14:46,Paul Lam 写道:
>
> Hi Xingtong,
>
> Thanks a lot for the pointer!
>
> It’s good to see there would be a new IO executor to take care of the TM
> contexts. Looking forward to the 1.12 releas
restore
the job, which rewound
the job unexpectedly.
I’ve filed an issue[1], and any comments are appreciated.
1. https://issues.apache.org/jira/browse/FLINK-19778
Best,
Paul Lam
Well done! Thanks to Gordon and Xintong, and everyone who contributed to the
release.
Best,
Paul Lam
> 2020年12月18日 19:20,Xintong Song 写道:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.11.3, which is the third bugfix release for the
Hi Antonis,
Did you try to profile the “bad” taskmanager to see what the task thread was
busy doing?
Another possible culprit might be GC, if you haven't checked that already. I’ve
seen GC threads eating up 30% of CPU.
Best,
Paul Lam
> 2020年12月14日 06:24,Antonis Papaioannou 写道:
>
>
ql.Timestamp.
Is my reasoning correct? And is there any workaround? Thanks a lot!
Best,
Paul Lam
fix the
old planner if it’s not too involved.
Best,
Paul Lam
> 在 2020年3月19日,17:13,Jark Wu 写道:
>
> Hi Paul,
>
> Are you using old planner? Did you try blink planner? I guess it maybe a bug
> in old planner which doesn't work well on new types.
>
> Best,
> Jar
Filed an issue to track this problem. [1]
[1] https://issues.apache.org/jira/browse/FLINK-16693
<https://issues.apache.org/jira/browse/FLINK-16693>
Best,
Paul Lam
> 在 2020年3月20日,17:17,Paul Lam 写道:
>
> Hi Jark,
>
> Sorry for my late reply.
>
> Yes, I’m using the
tenant configuration.
2. Still restart the job, and optimize the downtime by using session mode.
Best,
Paul Lam
> 2020年7月2日 11:23,C DINESH 写道:
>
> Hi Danny,
>
> Thanks for the response.
>
> In short without restarting we cannot add new sinks or sources.
>
> For
Finally! Thanks to Piotr and Zhijiang for being the release managers, and to
everyone who contributed to the release!
Best,
Paul Lam
> 2020年7月7日 22:06,Zhijiang 写道:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.11.0, which is the latest m
org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
Best,
Paul Lam
connectors seem to be broken. Should we use
https://repo.maven.apache.org/maven2/ <https://repo.maven.apache.org/maven2/>
instead?
Best,
Paul Lam
> 2020年7月13日 18:02,Jingsong Li 写道:
>
> Hi, It looks really weird.
>
> Is there any possibility of class conflict?
> How do you
.
- The number of keys is capped.
- Watermarks are good, no lags.
- No back pressure.
If I could directly read the states to find out what accounts for the state
size growth, that would be very intuitive and time-saving.
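For example, with the State Processor API something like the sketch below could
dump a keyed state (operator uid, key type, and state descriptor are placeholders;
window state is a different story):
```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.util.Collector;

public class StateInspection {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        ExistingSavepoint savepoint = Savepoint.load(
            env, "hdfs:///flink/savepoints/savepoint-xxxx", new FsStateBackend("hdfs:///tmp"));

        // read the keyed ValueState registered under the operator uid "my-operator"
        DataSet<String> entries = savepoint.readKeyedState("my-operator", new ReaderFn());
        entries.print();
    }

    static class ReaderFn extends KeyedStateReaderFunction<String, String> {
        transient ValueState<Long> counter;

        @Override
        public void open(Configuration parameters) {
            counter = getRuntimeContext().getState(
                new ValueStateDescriptor<>("counter", Long.class));
        }

        @Override
        public void readKey(String key, Context ctx, Collector<String> out) throws Exception {
            out.collect(key + " -> " + counter.value());
        }
    }
}
```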
Best,
Paul Lam
It turns out to be a bug in StateTTL [1]. But I’m still interested in debugging
window states.
Any suggestions are appreciated. Thanks!
1. https://issues.apache.org/jira/browse/FLINK-16581
<https://issues.apache.org/jira/browse/FLINK-16581>
Best,
Paul Lam
> 2020年7月15日 13:13,Pau
at
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:201)
Best,
Paul Lam
Best,
Paul Lam
> 2020年7月21日 16:24,Rui Li 写道:
>
> Hey Paul,
>
> Could you please share more about your job, e.g. the schema of your Hive
> table, whether it's partitioned, and the table properties you've set?
>
> On Tue, Jul 21, 2020 at 4:02 PM Paul Lam <mailt
iationUtil.java:586)
at
org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:515)
at
org.apache.flink.streaming.api.graph.StreamConfig.setStreamOperatorFactory(StreamConfig.java:260)
... 36 more
Best,
Paul Lam
> 2020年7月21日 16:59,Jingsong Li
dir was not created yet.
Best,
Paul Lam
> 2020年7月21日 18:50,Jingsong Li 写道:
>
> Hi,
>
> Sorry for this. This work around only works in Hive 2+.
> We can only wait for 1.11.2.
>
> Best,
> Jingsong
>
> On Tue, Jul 21, 2020 at 6:15 PM Rui Li <mailto:lirui.fu
:
- Flink 1.11.0 with Blink planner
- Hive 1.1.0
Best,
Paul Lam
Hi Jingsong,
Thanks for your input. Now I understand the design.
I think in my case the StreamingFileCommitter is not chained because its
upstream operator does not have parallelism 1.
BTW, it’d be better if it had a more meaningful operator name.
Best,
Paul Lam
> 2020年8月4日 17:11,Jingsong Li
.
Best,
Paul Lam
> 2021年4月2日 11:34,Sumeet Malhotra 写道:
>
> Just realized, my question was probably not clear enough. :-)
>
> I understand that the Avro (or JSON for that matter) format can be ingested
> as described here:
> https://ci.apache.org/projects/flink/flink-
through
the whole DAG, but events need to be acknowledged by downstream operators and can
overtake records, while stream records cannot). So I’m wondering if we plan to
unify the two approaches in the new control flow (as Xintong mentioned both of
them in the previous mails)?
Best,
Paul Lam
> 2021年6月8日 14
/StreamTask.java
Best,
Paul Lam
> 2021年6月30日 10:38,Yik San Chan 写道:
>
> Hi community,
>
> I have a batch job that consumes records from a bounded source (e.g., Hive),
> walk them through a BufferingSink as described in
> [docs](https://ci.apache.org/projects/flink/flink-docs-
Congrats Hequn! Well deserved!
Best,
Paul Lam
> 在 2019年8月7日,16:28,jincheng sun 写道:
>
> Hi everyone,
>
> I'm very happy to announce that Hequn accepted the offer of the Flink PMC to
> become a committer of the Flink project.
>
> Hequn has been contributing to
Hi Tison,
Big +1 for the Chinese Weekly Community Update. The content is well-organized,
and I believe it would be very helpful for Chinese users to get an overview of
what’s going on in the community.
Best,
Paul Lam
> 在 2019年8月19日,12:27,Zili Chen 写道:
>
> Hi community,
>
>
Well done! Thanks to everyone who contributed to the release!
Best,
Paul Lam
Yu Li 于2019年8月22日周四 下午9:03写道:
> Thanks for the update Gordon, and congratulations!
>
> Great thanks to all for making this release possible, especially to our
> release managers!
>
> Best Regards,
Please point me in the right direction. Thanks a lot!
Best,
Paul Lam
bell, I
might have read about the limitation that the state processor API is unable to
read states in WindowOperator currently. All in all, thanks again.
Best,
Paul Lam
> 在 2019年8月27日,17:32,Yun Tang 写道:
>
> Hi Paul
>
> Would you please share more information of the exception
Congratulations Zili!
Best,
Paul Lam
> 在 2019年9月12日,09:34,Rong Rong 写道:
>
> Congratulations Zili!
>
> --
> Rong
>
> On Wed, Sep 11, 2019 at 6:26 PM Hequn Cheng <chenghe...@gmail.com> wrote:
> Congratulations!
>
> Best, Hequn
>
> On T
Hi,
Could you confirm that you’re using POJOSerializer before and after migration?
Best,
Paul Lam
> 在 2019年10月17日,21:34,ApoorvK 写道:
>
> It is throwing the below error.
> The class I am adding variables to also has variables that are objects of
> other classes, which are also in state.
://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner.html
<https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner.html>
Best,
Pa
easily get access to the logs.
Best,
Paul Lam
> 在 2019年10月30日,16:22,SHI Xiaogang 写道:
>
> Hi
>
> Thanks for bringing this.
>
> The design looks very nice to me in that
> 1. In the new per-job mode, we don't need to compile user programs in the
> client and can
way, we avoid file
name conflicts with the previous execution (see [1]).
[1]
https://github.com/apache/flink/blob/93dfdd05a84f933473c7b22437e12c03239f9462/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Buckets.java#L276
Best,
Paul Lam
> 在 2019年11月
adoop 2.6.5.
As far as I can remember, I've seen a similar issue related to the fencing of
the JobManager, but I searched JIRA and couldn't find it. It would be great
if someone could point me in the right direction. And any comments are also
welcome! Thanks!
Best,
Paul Lam
for your help!
[1] https://issues.apache.org/jira/browse/FLINK-14010
<https://issues.apache.org/jira/browse/FLINK-14010>
Best,
Paul Lam
> 在 2019年12月16日,20:35,Yang Wang 写道:
>
> Hi Paul,
>
> I found lots of "Failed to stop Container " logs in the jobmanager.log. It
Congrats, Dian!
Best,
Paul Lam
> 在 2020年1月17日,10:49,tison 写道:
>
> Congratulations! Dian
>
> Best,
> tison.
>
>
> Zhu Zhu <reed...@gmail.com> 于2020年1月17日周五
> 上午10:47写道:
> Congratulations Dian.
>
> Thanks,
> Zhu Zhu
>
> hailon
FLIP-6? If not, would it be supported in
the future?
Best regards,
Paul Lam
Hi Chen,
Thanks for the quick reply! I’ve read the design document and it is very much
what I’m looking for. And I think the design was absorbed into FLIP-26, right? I
will keep watching this FLIP. Thanks again.
Best regards,
Paul Lam
thing suggestion to
avoid it for now?
Best regards,
Paul Lam
u mentioned.
> It caches all data for the period of time before the window is triggered.
In my understanding, window functions process elements incrementally unless the
low-level ProcessWindowFunction API is used, so caching data should not be
required in most scenarios. Would you mind giving more details of the window
caching design? And please correct me if I’m wrong. Thanks a lot.
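For instance, with an AggregateFunction the window state holds only the
accumulator, not the buffered elements (a minimal sketch, counting elements per key):
```java
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class IncrementalWindow {
    // only the running Long accumulator is kept in window state,
    // not the buffered elements
    static class CountAgg implements AggregateFunction<Tuple2<String, Long>, Long, Long> {
        @Override public Long createAccumulator() { return 0L; }
        @Override public Long add(Tuple2<String, Long> value, Long acc) { return acc + 1; }
        @Override public Long getResult(Long acc) { return acc; }
        @Override public Long merge(Long a, Long b) { return a + b; }
    }

    static DataStream<Long> countPerKey(DataStream<Tuple2<String, Long>> input) {
        return input
            .keyBy(t -> t.f0)
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            // incremental aggregation; a plain .process(...) would buffer all elements
            .aggregate(new CountAgg());
    }
}
```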
Best regards,
Paul Lam
used for distributing
a specified directory via YARN to the TaskManager nodes. And you can find its
description in the Flink client by executing `bin/flink`.
> -yt,--yarnship Ship files in the specified directory (t for transfer)
Best Regards,
Paul Lam
event time characteristic.
Best Regards,
Paul Lam
> 在 2018年8月10日,05:49,antonio saldivar 写道:
>
> Hello
>
> Sending ~450 elements per second ( the values are in milliseconds start to
> end)
> I went from:
> with Rebalance
> ++
> | AVGWINDOW |
>
. I’ve looked into the code, but
still have no clue about where these logs came from.
Could someone help me with this? Thanks!
Best Regards,
Paul Lam
without being turned into a BroadcastStream. So I’m wondering, what’s the
advantage of using BroadcastState? Thanks a lot!
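To make the comparison concrete, this is roughly the broadcast-state variant I
have in mind (a sketch; the rule keys and values are placeholders):
```java
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class BroadcastRules {
    static final MapStateDescriptor<String, String> RULES = new MapStateDescriptor<>(
        "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

    static DataStream<String> apply(DataStream<String> events, DataStream<String> rules) {
        BroadcastStream<String> broadcastRules = rules.broadcast(RULES);

        return events
            .connect(broadcastRules)
            .process(new BroadcastProcessFunction<String, String, String>() {
                @Override
                public void processElement(String event, ReadOnlyContext ctx,
                        Collector<String> out) throws Exception {
                    // the event side gets read-only access to the broadcast state
                    String rule = ctx.getBroadcastState(RULES).get("default");
                    out.collect(event + " matched " + rule);
                }

                @Override
                public void processBroadcastElement(String rule, Context ctx,
                        Collector<String> out) throws Exception {
                    // the broadcast side updates the state on every parallel instance
                    ctx.getBroadcastState(RULES).put("default", rule);
                }
            });
    }
}
```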
Best Regards,
Paul Lam
Hi Rong, Hequn
Your answers are very helpful! Thank you!
Best Regards,
Paul Lam
> 在 2018年8月19日,23:30,Rong Rong 写道:
>
> Hi Paul,
>
> To add to Hequn's answer. Broadcast state can typically be used as "a
> low-throughput stream containing a set of rules which we
Hi yuvraj,
Please try turning off the timeline server in yarn-site.xml. Currently Flink
does not ship the required dependencies for the timeline server, which I think
could be a bug.
Best Regards,
Paul Lam
> 在 2018年8月21日,22:23,yuvraj singh <19yuvrajsing...@gmail.com> 写道:
>
> Hi ,
>
Hi,
I’m trying to run Flink on a YARN cluster that’s running on JDK 7, and I think
it’s quite a common scenario, but it seems that currently there’s no way to
pass the JAVA_HOME environment variable to YARN.
Am I missing something? And should I create an issue requesting that?
Best,
Paul
e for the jobmanager too, but it doesn’t help.
Do you have any idea on this? Thanks a lot!
Best,
Paul Lam
> 在 2018年8月29日,16:08,vino yang 写道:
>
> Hi Paul,
>
> You can try: -yD yarn.taskmanager.env.JAVA_HOME=xx in the command line.
>
> Thanks, vino.
>
> Paul Lam mai
“containerized.master.env.JAVA_HOME”.
Thank you very much!
Best,
Paul Lam
> 在 2018年8月30日,10:09,vino yang 写道:
>
> Hi Paul,
>
> This exception means that the JDK version running the code is lower than
> the JDK version used to compile it.
> 52 means that the JDK version that compiles i
Hi tison,
This information is very helpful. Thank you!
Best,
Paul Lam
> 在 2018年8月30日,10:41,陈梓立 写道:
>
> Hi Paul,
>
> For your information, `yarn.taskmanager.env.JAVA_HOME` is deprecated, you
> can set env of taskmanager using the key similar to j
triggering the checkpoint.
at
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.triggerSavepoint(CheckpointCoordinator.java:377)
at
org.apache.flink.runtime.jobmaster.JobMaster.triggerSavepoint(JobMaster.java:942)
... 21 more
Best,
Paul Lam
best
practice for my case? Thanks a lot!
Best,
Paul Lam
rminator - Shutting down
remote daemon.
2018-09-13 18:00:52,936 INFO
akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon
shut down; proceeding with flushing remote transports.
2018-09-13 18:00:52,960 INFO
akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut
down.
```
Best,
Paul Lam
a lot!
Best,
Paul Lam
> 在 2018年9月14日,10:49,devinduan(段丁瑞) 写道:
>
> Hi, Paul
>https://issues.apache.org/jira/browse/FLINK-10309
> <https://issues.apache.org/jira/browse/FLINK-10309>
>
>
> From: Paul Lam <paullin3...@gmail.com>
> Sent: 20
Hi vino,
Thank you for the helpful information!
One more question: are these operations supposed to run concurrently to ensure
the JobManager receives the cancel request before the savepoint is completed?
Best,
Paul Lam
> 在 2018年9月14日,11:48,vino yang 写道:
>
> Hi Paul,
>
> It
. Suppose I’m calculating stats of
daily active users and use a userId field as the key; I want the state completely
cleared at the very beginning of each day.
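One way I could imagine doing it is a timer per key that fires at the day
boundary and clears the state, roughly like the sketch below (timezone hardcoded
to UTC as a placeholder):
```java
import java.time.LocalDate;
import java.time.ZoneOffset;

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// keyed by userId: each key registers a processing-time timer at the next
// midnight and clears its state when the timer fires
public class DailyResetCounter extends KeyedProcessFunction<String, String, Long> {
    private transient ValueState<Long> dailyCount;

    @Override
    public void open(Configuration parameters) {
        dailyCount = getRuntimeContext().getState(
            new ValueStateDescriptor<>("daily-count", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Long> out) throws Exception {
        Long count = dailyCount.value();
        long updated = (count == null ? 0L : count) + 1;
        dailyCount.update(updated);
        out.collect(updated);

        // (re)register a timer for the start of the next day; timers with the
        // same timestamp and key are de-duplicated by Flink
        long nextMidnight = LocalDate.now(ZoneOffset.UTC).plusDays(1)
            .atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli();
        ctx.timerService().registerProcessingTimeTimer(nextMidnight);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) {
        // truncate the state for this key at the day boundary
        dailyCount.clear();
    }
}
```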
Thanks a lot!
Best,
Paul Lam
> 在 2018年9月14日,10:39,vino yang 写道:
>
> Hi Paul,
>
> Maybe you can try to understand the State TTL?
all namespaces, and does it violate the principle
of keyed states?
Thanks again!
Best,
Paul Lam
> 在 2018年9月14日,16:00,David Anderson 写道:
>
> Paul,
>
> Theoretically, processing-time timers will get the job done, but yes, you'd
> need a timer per key -- and folks
Hi Kien,
Thanks for your reply!
The approaches you suggested are very useful. I'll redesign the state
structure and try these approaches. Thanks a lot!
Best,
Paul Lam
> 在 2018年9月14日,17:01,Kien Truong 写道:
>
> Hi Paul,
>
> We have actually done something like this. C
change? Thanks a lot!
Best,
Paul Lam
Hi vino,
Thanks for the reply!
I’m looking forward to Bravo too. But for now, my idea is to set the stateful
operators' ids to the same values as the auto-generated ones, so the savepoint
would still be usable. May I know your opinion on this?
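In code, I think setUidHash is meant for exactly this, pinning the operator to
the previously auto-generated hash (the hash below is a placeholder; it would
come from the old job graph or the savepoint):
```java
import org.apache.flink.streaming.api.datastream.DataStream;

public class PinOperatorHash {
    // the hash below is a placeholder for the JobVertexID that Flink
    // auto-generated for the stateful operator in the previous job
    static DataStream<String> pin(DataStream<String> input) {
        return input
            .map(String::toUpperCase) // stand-in for the real stateful operator
            .setUidHash("2e588ce1c86a9d46e2e85186773ce4fd");
    }
}
```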
Best,
Paul Lam
> 在 2018年9月18日,19
Hi Fabian,
Thanks for your reply!
It seems like there is a word missing. Did you mean it’s possible to extract
the operator ids from the savepoint? Or modify the ids in the savepoint?
Best,
Paul Lam
> 在 2018年9月18日,20:09,Fabian Hueske 写道:
>
> The auto-generated ids are includ
at
org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2240)
... 23 more
Best,
Paul Lam
)
```
I haven’t figured out why the s3a filesystem needs to be initialized twice. And
is it a bug that the bucketing sink does not use the filesystem factories to
create the filesystem?
Thank you very much!
Best,
Paul Lam
> 在 2018年9月20日,23:35,Stephan Ewen 写道:
>
> Hi!
>
> A few questions
Hi Stephan!
Unfortunately I'm on Hadoop 2.6, so I have to stick to the old bucketing
sink. I made it work by explicitly setting the Hadoop conf for the bucketing
sink in the user code.
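Roughly like this (the keys and endpoint below are placeholders):
```java
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.hadoop.conf.Configuration;

public class BucketingSinkWithExplicitConf {
    static BucketingSink<String> createSink() {
        // pass the Hadoop/S3A settings to the sink explicitly instead of
        // relying on the cluster-side configuration being picked up
        Configuration hadoopConf = new Configuration();
        hadoopConf.set("fs.s3a.access.key", "...");
        hadoopConf.set("fs.s3a.secret.key", "...");
        hadoopConf.set("fs.s3a.endpoint", "s3.example.com");

        BucketingSink<String> sink = new BucketingSink<>("s3a://my-bucket/output");
        sink.setFSConfig(hadoopConf);
        return sink;
    }
}
```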
Thank you very much!
Best,
Paul Lam
Stephan Ewen 于2018年9月21日周五 下午6:30写道:
> Hi!
>
> The old
che-flink-vs-databricks-runtime
<https://data-artisans.com/blog/curious-case-broken-benchmark-revisiting-apache-flink-vs-databricks-runtime>
Best,
Paul Lam
> 在 2018年9月20日,23:52,Stefan Richter 写道:
>
> Oh yes exactly, enable is right.
>
>> Am 20.09.2018 um 17:48 schr
Hi Stefan,
Thanks for your detailed explanation! It helps a lot!
I think I misunderstood the sentence. I thought “avoiding additional object
copying” was the default behavior.
Best,
Paul Lam
> 在 2018年9月26日,17:22,Stefan Richter 写道:
>
> Hi Paul,
>
> sure, what I mean is basi
erThread.run(ForkJoinWorkerThread.java:107)
```
Best,
Paul Lam
Hi Kostas,
Sorry, I forgot that. I'm using Flink 1.5.3.
Best,
Paul Lam
Kostas Kloudas 于2018年9月27日周四 下午8:22写道:
> Hi Paul,
>
> I am also cc’ing Till and Gary who may be able to help, but to give them
> more information,
> it would help if you told us which Flink v
mal before NoResourceAvailableException?
I will keep working on it because it’s critical to me, and any help is highly
appreciated! Thank you!
Best,
Paul Lam
> 在 2018年9月27日,20:22,Kostas Kloudas 写道:
>
> Hi Paul,
>
> I am also cc’ing Till and Gary who may be able to help, but to give them
spatcher.java:106)
at java.lang.Thread.run(Thread.java:745)
```
Best,
Paul Lam
> 在 2018年9月28日,20:32,Till Rohrmann 写道:
>
> Hi Paul,
>
> this looks to me like a Yarn setup problem. Could you check which value you
> have set for dfs.namenode.delegation.token.max-lifetime? Per defa
Hi Julien,
AFAIK, streaming jobs put data objects on the heap, so it depends on the JVM GC
to release the memory.
Best,
Paul Lam
> 在 2018年10月12日,14:29,jpreis...@free.fr 写道:
>
> Hi,
>
> My use case is :
> - I use Flink 1.4.1 in standalone cluster with 5 VM (1 VM
Hi Zhijiang,
Does the memory management apply to streaming jobs as well? A previous post [1]
said that it can only be used with the batch API, but I might have missed some
updates on that. Thank you!
[1] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=53741525
Best,
Paul Lam
> 在 2018年
Hi Niels,
Please see https://issues.apache.org/jira/browse/FLINK-249
<https://issues.apache.org/jira/browse/FLINK-249>.
Best,
Paul Lam
> 在 2018年10月17日,16:58,Niels van Kaam 写道:
>
> Hi All,
>
> I am debugging an issue where the periodic checkpointing has halted. I
>
Hi Niels,
The link was broken, it should be
https://issues.apache.org/jira/browse/FLINK-2491
<https://issues.apache.org/jira/browse/FLINK-2491>.
A similar question was asked a few days ago.
Best,
Paul Lam
> 在 2018年10月17日,19:56,Niels van Kaam 写道:
>
> Hi All,
>
> Tha
checkpoint of the previous submission although you are
using the same checkpoint root dir.
Best,
Paul Lam
> 在 2018年10月18日,09:51,chrisr123 写道:
>
> Hi Folks,
> I'm trying to restart my program with restored state from a checkpoint after
> a program failure (restart strategies tried
Hi,
Please see https://issues.apache.org/jira/browse/FLINK-10309
<https://issues.apache.org/jira/browse/FLINK-10309>.
Best,
Paul Lam
> 在 2018年10月19日,14:32,郑舒力 写道:
>
> Hi community.
>
> Cancel with savepoint on yarn-cluster mode can’t retrieve savepoint dir
> direct
Hi Bend,
The offsets would be restored from the savepoint. Flink would only fall back to
the offsets on the brokers if there are no offsets in its state.
Best,
Paul Lam
> 在 2018年10月31日,17:13,
> 写道:
>
> Hi
> Let’s say we have a job which reads from a Fli
Hi Anil,
Sure. You could have a look at FlinkKafkaConsumerBase.java.
Best,
Paul Lam
> 在 2018年11月1日,03:37,Anil 写道:
>
> Hey Paul. Can you please point me to the code in Flink. Thanks!
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
problem before? Thanks a lot!
[1] https://issues.apache.org/jira/browse/FLINK-10482
<https://issues.apache.org/jira/browse/FLINK-10482>
Best,
Paul Lam
Hi,
Wouldn't the `-yD` option do the trick? I use it to override the Kerberos
configuration for different users very often.
Best,
Paul Lam
> 在 2018年11月8日,17:33,Dawid Wysakowicz 写道:
>
> Hi Marke,
>
> AFAIK Shuyi is right, there is no such option so far. Maybe you could do
>
/flink/flink-docs-release-1.6/monitoring/rest_api.html
<https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/rest_api.html>
Best,
Paul Lam
> 在 2018年11月9日,13:55,Hao Sun 写道:
>
> Since this save point path is very useful to application updates, where is
> this
<https://issues.apache.org/jira/browse/YARN-2823>
Best,
Paul Lam
Hi Ufuk,
Thanks for your reply!
I’m afraid that my case is different. Since the Flink on YARN application has
not exited, we do not have an application exit code yet (though the job status
is already determined).
Best,
Paul Lam
> 在 2018年11月14日,16:49,Ufuk Celebi 写道:
>
> Hey Paul,
>
>
containers across application attempts.
Best,
Paul Lam
> 在 2018年11月13日,17:27,devinduan(段丁瑞) 写道:
>
> Hi Paul,
> Could you check out your YARN property
> "yarn.resourcemanager.work-preserving-recovery.enabled"?
> if value is false, set true and try it agai
Hi Devin,
Thanks for your reasoning! It’s consistent with my observation, and I fully
agree with you.
Maybe we should create an issue for the Hadoop community if it is not fixed in
the master branch.
Best,
Paul Lam
> 在 2018年11月15日,11:59,devinduan(段丁瑞) 写道:
>
> Hi Paul:
>
Hi Scott,
I think you can do it by catching the exception in the user function and logging
the current message that the operator is processing, before re-throwing (or not)
the exception to the Flink runtime.
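Something along these lines (a sketch; the business logic is a placeholder):
```java
import org.apache.flink.api.common.functions.MapFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// wraps the user logic, logs the offending record, then rethrows (or swallows)
// the exception
public class LoggingMapper implements MapFunction<String, String> {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingMapper.class);

    @Override
    public String map(String record) throws Exception {
        try {
            return doBusinessLogic(record);
        } catch (Exception e) {
            LOG.error("Failed to process record: {}", record, e);
            throw e; // or drop the record instead of failing the job
        }
    }

    private String doBusinessLogic(String record) {
        return record.trim(); // placeholder for the real transformation
    }
}
```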
Best,
Paul Lam
> 在 2018年11月22日,12:59,Scott Sue 写道:
>
> Hi all,
>
> When
records is not always helpful. What do you think?
Best,
Paul Lam
> 在 2018年11月22日,13:20,Scott Sue 写道:
>
> Hi Paul,
>
> Thanks for the quick reply. Ok does that mean that as general practice, I
> should be catching all exceptions for the purpose of logging in any of my
> O
resource specs of all nodes in the StreamGraph, but it’s problematic
because some nodes have a parallelism limit (e.g., it can’t be greater than 1).
I think I might be missing something and there should be a better way to do
this. Please give me some
pointers. Thanks a lot!
Best,
Paul Lam
onfiguration or API can be
used to adjust job resources in SQL/Table API?
In my case, approaches with the DataStream API are also viable, because I’m
submitting the jobs programmatically.
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-53%3A+Fine+Grained+Operator+Resource+Management
Best,
Pa
Hi community,
I’m trying out SQL hints on DML, but there’s not much in the docs about the
supported SQL hints.
Are SQL hints limited to source/sink tables only at the moment? And where
can I find the full list of supported SQL hints?
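For concreteness, the kind of table hint I'm referring to is a dynamic OPTIONS
hint on a source table, roughly like this (the table name is a placeholder):
```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TableHintExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // dynamic table options need to be enabled before OPTIONS hints are accepted
        tEnv.getConfig().getConfiguration()
            .setBoolean("table.dynamic-table-options.enabled", true);

        // an OPTIONS hint attached to the source table of a query
        tEnv.executeSql(
            "SELECT * FROM my_kafka_table "
                + "/*+ OPTIONS('scan.startup.mode'='earliest-offset') */");
    }
}
```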
Thanks in advance!
Best,
Paul Lam
Hi JING,
Thanks for your input! It helps a lot.
Best,
Paul Lam
> 2021年8月12日 13:13,JING ZHANG 写道:
>
> Hi Paul,
> There are Table hints and Query hints.
> Query hints are on the way, there is a JIRA to track this issue [3]. AFAIK,
> the issue is almost close to submit
om/apache/flink/blob/36ff71f5ff63a140acc634dd1d98b2bb47a76ba5/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java#L904
Best,
Paul Lam
Sorry, I meant the relation between the Flink Configuration and TableConfig, not
TableEnv.
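To make it concrete, this is the relation I'm asking about (a sketch; the option
key is just an example):
```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class TableConfigRelation {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(
            env, EnvironmentSettings.newInstance().inStreamingMode().build());

        // TableConfig keeps its own Configuration instance, separate from the
        // Configuration the StreamExecutionEnvironment was created with
        Configuration tableConf = tEnv.getConfig().getConfiguration();
        tableConf.setString("table.exec.mini-batch.enabled", "true");
    }
}
```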
Best,
Paul Lam
Paul Lam 于2021年9月24日周五 上午12:24写道:
> Hi all,
>
> Currently, Flink creates a new Configuration in TableConfig of
> StreamTableEnvironment, and synchronizes options in it
Hi Caizhi,
Thanks a lot for your clarification! Now I understand the design of TableConfig.
Best,
Paul Lam
> 2021年9月24日 15:40,Caizhi Weng 写道:
>
> Hi!
>
> TableConfig is for configurations related to the Table and SQL API,
> especially the configurations in Optimize