Hi, all
I ran into a weird single-task backpressure problem.
JobInfo:
DAG: Source (1000) -> Map (2000) -> Sink (1000), connected via rescale.
Flink version: 1.9.0
There is no related info in the jobmanager/taskmanager logs.
Through metrics, I see that Map (242)'s outPoolUsage is full
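A minimal way to pull those numbers without the Web UI is the REST API; this is just a sketch, assuming the job and vertex IDs are taken from the Web UI and that the task-level metric names are buffers.outPoolUsage / buffers.inPoolUsage (they may be scoped differently in other Flink versions):

# Placeholders: take the real job and vertex IDs from the Web UI or the /jobs endpoints.
JOB=<jobid>
VERTEX=<vertexid-of-the-Map-task>

# Aggregated buffer-usage metrics across all subtasks of the vertex.
curl "http://<jobmanager>:8081/jobs/$JOB/vertices/$VERTEX/subtasks/metrics?get=buffers.outPoolUsage,buffers.inPoolUsage"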
Hi Josson,
Thanks for the details. Sorry, I overlooked that you had indeed mentioned the file
backend.
Looking into the Flink memory model [1], I do not notice any problems related to
the types of memory consumption we model in Flink.
Direct memory consumption by the network stack corresponds to your configured
Hi Peter,
I've dealt with cross-account delegation issues in the past (with no
relation to Flink) and ran into the same ownership problems (accounts can't
access data, account A 'loses' access to its own data).
My 2 cents are that:
- The account that produces the data (A) should be the owner
Hi Weihua,
From the info below, this matches the expected behavior of credit-based flow control.
I guess one of the Sink parallel instances causes the backpressure, so you will see
that there are no available credits on the Sink side and
the outPoolUsage of Map is almost 100%. It really reflects the credit-based
Hi,
First, sorry that I'm not an expert on windows, so please correct me if I'm
wrong, but from my side it seems the assigner might also be a problem in
addition to the trigger: currently Flink's built-in window assigners are mostly based on
time (processing time or event time), and it might be hard
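If time-based assigners don't fit, one option is to take the time dimension out entirely with GlobalWindows and let a trigger drive the firing. A minimal sketch, assuming a DataStream<Event> named events with a String key (the Event type and its merge() method are placeholders), firing every 100 elements:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;

// GlobalWindows puts all elements of a key into one window and never fires on its own,
// so the behaviour is defined entirely by the trigger.
DataStream<Event> result = events
    .keyBy(e -> e.getKey())
    .window(GlobalWindows.create())
    // Fire and purge the window state every 100 elements instead of on a timer.
    .trigger(PurgingTrigger.of(CountTrigger.of(100)))
    .reduce((a, b) -> a.merge(b));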
Hi,
I have a single source of events in my system.
I need to process the events and send them to multiple destinations/sinks at the
same time [Kafka topic, S3, Kinesis, and a POST to an HTTP endpoint].
I am able to send to one sink.
Could we achieve this by adding more sink streams to the source stream?
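If it helps: a single DataStream can be consumed by several sink operators, so each addSink call adds an independent sink reading the same processed stream. A minimal sketch, assuming the individual connector instances (Kafka producer, S3 file sink, Kinesis producer, a custom HTTP SinkFunction) are already constructed; the variable names are placeholders:

import org.apache.flink.streaming.api.datastream.DataStream;

// One processed stream, fanned out to several sinks.
DataStream<String> processed = source.map(record -> process(record));

processed.addSink(kafkaProducer);       // e.g. a FlinkKafkaProducer
processed.addSink(streamingFileSink);   // e.g. a StreamingFileSink writing to S3
processed.addSink(kinesisProducer);     // e.g. a FlinkKinesisProducer
processed.addSink(httpPostSink);        // e.g. a custom SinkFunction doing the HTTP POST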
Yes, you can use kinit. But AFAIK, if you deploy Flink on Kubernetes
or Mesos, Flink will not ship the ticket cache. If you deploy Flink on
YARN, Flink will acquire delegation tokens with your ticket cache and
set tokens for the job manager and task executors. As the documentation says,
the main drawback is
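The usual way around that drawback is to point Flink at a keytab instead of relying on the client's ticket cache. A minimal flink-conf.yaml sketch; the path and principal are placeholders:

# Acquire Kerberos credentials from a keytab rather than the kinit ticket cache,
# so they can be shipped to and renewed on the cluster.
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/flink.keytab
security.kerberos.login.principal: flink-user@EXAMPLE.COM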
Glad to see that!
However, I was told that it is not the right approach to directly
extend `AbstractUdfStreamOperator` in the DataStream API. This would be
fixed at some point (maybe Flink 2.0). The JIRA link is [1].
[1] https://issues.apache.org/jira/browse/FLINK-17862
Best,
Yangze Guo
On Fri, May
Hi
1. You could first check whether 'org.apache.flink.api.java.clean' is on
your classpath.
2. Did you follow the doc [1] to deploy your local cluster and run some
of the existing examples such as WordCount?
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/cluster_setup.h
>
> Increasing network memory buffers (fraction, min, max) seems to increase
> tasks slightly.
That's weird. I don't think the number of network memory buffers has
anything to do with the number of tasks.
Let me try to clarify a few things.
Please be aware that how many tasks a Flink job has, and
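For reference, these are the knobs being discussed; they only size the network buffer pool and have no influence on the number of tasks. A flink-conf.yaml sketch with roughly the Flink 1.9 defaults (values are illustrative):

# Fraction of TaskManager memory used for network buffers, bounded by min/max.
taskmanager.network.memory.fraction: 0.1
taskmanager.network.memory.min: 64mb
taskmanager.network.memory.max: 1gb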
Hi M,
Regarding your questions:
1. Yes. The ID is fixed once the job graph is generated.
2. Yes.
Regarding YARN mode:
1. The job ID stays the same because the job graph is generated once
on the client side and persisted in DFS for reuse.
2. Yes, if high availability is enabled.
Thanks,
Zhu Zhu
M Sin
Could you try to download the binary dist from the Flink download page and
re-execute the job? It seems like something is wrong with flink-dist.jar.
BTW, please post user questions only on the user mailing list (not dev).
Best,
tison.
Guowei Ma wrote on Mon, May 25, 2020 at 10:49 AM:
> Hi
> 1. You could check whether the 'o
Hi:
I received a JSON string serialized from a Java POJO via Kafka, and I want to
use a UDTF to restore it to a table with a structure similar to the POJO.
Some member variables of the POJO use the List type or Map type
whose generic type is also a POJO.
The sample code is as below:
public class Car {
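The Car class above is cut off in the archive, so purely as an illustration: the field names, the nested Part POJO, and the flat output columns below are assumptions, but a TableFunction along these lines can parse the JSON (using Jackson) and emit one row per nested element:

import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

// Hypothetical POJOs mirroring the truncated sample above.
public class Car {
    public String name;
    public List<Part> parts;          // generic type is itself a POJO
    public Map<String, Part> extras;  // same for the map values

    public static class Part {
        public String id;
        public int quantity;
    }
}

// UDTF that restores the JSON string; emits one row per Part for simplicity.
class ParseCar extends TableFunction<Row> {
    private transient ObjectMapper mapper;

    @Override
    public void open(FunctionContext context) {
        mapper = new ObjectMapper();
    }

    public void eval(String json) throws Exception {
        Car car = mapper.readValue(json, Car.class);
        for (Car.Part part : car.parts) {
            collect(Row.of(car.name, part.id, part.quantity));
        }
    }

    @Override
    public TypeInformation<Row> getResultType() {
        return Types.ROW(Types.STRING, Types.STRING, Types.INT);
    }
}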
Hi
I am using Flink retained checkpoints along with the
jobs/:jobid/checkpoints API for retrieving the latest retained checkpoint.
Following the response of the Flink Checkpoints API:
My job's restart attempts are set to 5.
In the checkpoint API response's "latest" key, the checkpoint file name of both
"rest