Hi Chesnay,
Thank you for your reply.
I figured out that issue by using a livenessProbe on the Task Manager
deployment. But I think it is still a workaround.
I am using Flink 1.9.1 (currently the latest version)
and I am getting a "connection unexpectedly closed by remote task manager"
error on the Task Manager.
Hi Shuwen,
> When to call close() ? After every element processed? Or on
> ProcessFunction.close() ? Or never to use it?
IMO, the #close() function is used to manage the lifecycle of the #Collector,
not of a single element. I think it should not be called in a user function
unless you have som
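To make the lifecycle point concrete, here is a minimal sketch (the class name is
made up): the user function only emits via collect(), and the framework, which
owns the Collector, closes it when the operator shuts down.

import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Minimal sketch: user code never calls out.close(); Flink manages the Collector.
public class PassThroughFunction extends ProcessFunction<String, String> {
    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        out.collect(value); // emit the element; no out.close() here
    }
}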
Hi Community,
In the ProcessFunction class, in the processElement method, there is a Collector
that has 2 methods: collect() and close(). I would like to know:
1. When to call close()? After every element is processed? Or
in ProcessFunction.close()? Or never to use it? If it's been closed
already, can the co
Hi Akash,
Flink doesn't support auto-scaling in core currently; it may be supported
in the future, when the new scheduling architecture is implemented
(https://issues.apache.org/jira/browse/FLINK-10407).
You can do it externally by cancelling the job with a savepoint, updating the
parallelism, and resta
Hi Akash,
You can use the Pravega connector to integrate with Flink; the source code is
here [1].
In short, relying on its rescalable state feature [2], Flink supports
scalable streaming jobs.
Currently, the mainstream solution for auto-scaling is Flink + K8s; I can
share some resources with you [3].
Hi,
Does Flink support auto-scaling? I read that it is supported using Pravega?
Is it incorporated in any version?
Thanks,
Akash Goel
Just to add, if you use LocalStreamEnvironment, you can pass a
configuration and set "execution.savepoint.path" to point to your
savepoint, as in the sketch below.
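A minimal sketch (assuming Flink 1.9; the savepoint path is just an example):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ResumeLocallyFromSavepoint {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Point the local environment at an existing savepoint (example path).
        conf.setString("execution.savepoint.path", "file:///tmp/savepoints/my-savepoint");
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, conf);
        // Build the pipeline on env as usual, then call env.execute().
    }
}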
Best,
Arvid
On Wed, Nov 27, 2019 at 1:00 PM Congxian Qiu wrote:
> Hi,
>
> You can recover from a checkpoint/savepoint if the JM & TM can read from the
Could you try accessing :/#/overview ?
The REST API is obviously accessible, and hence the WebUI should be too.
How did you set up the session cluster? Are you using some custom Flink
build or something which potentially excluded flink-runtime-web from
the classpath?
On 28/11/2019 10:02, Jat
Thank you all for the investigation/reporting/discussion. I have merged an older PR
[1] that fixes this issue; it was previously rejected because we didn't
realise this is a production issue.
The issue should be fixed in the Flink 1.10, 1.9.2 and 1.8.3
releases.
Piotrek
[1] https:/
The akka.watch configuration options haven't been used for a while
irrespective of FLINK-13883 (but I can't quite tell atm since when).
Let's start with what version of Flink you are using, and what the
taskmanager/jobmanager logs say.
On 25/11/2019 12:05, Eray Arslan wrote:
Hi,
I have some
Hi Harrison,
One thing to keep in mind is that Flink will only write files if there
is data to write. If, for example, your partition is not active for a
period of time, then no files will be written.
Could this explain the fact that dt 2019-11-24T01 and 2019-11-24T02
are entirely skipped?
In add
Hi Li,
The S3 file sink will write data into prefixes, with as many part-files as the
degree of parallelism. This structure comes from the good ol' Hadoop days,
where an output folder also contained part-files, and it is independent of S3.
However, each of the part-files will be uploaded in a multipart fa
What do you mean by "from within a sink"? Do you have a custom sink?
If you want to write to different Kafka topics from the same sink, you can
do that using a custom KafkaSerializationSchema. It allows you to return a
ProducerRecord with a custom target topic set. (A Kafka sink can write to
multi
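To make that concrete, here is a minimal sketch of such a schema (assuming Flink
1.9+ with the universal Kafka connector; the topic names and routing rule are
made up):

import java.nio.charset.StandardCharsets;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RoutingSerializationSchema implements KafkaSerializationSchema<String> {
    @Override
    public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
        // Pick the target topic per record; this routing rule is only an example.
        String topic = element.startsWith("error") ? "error-topic" : "event-topic";
        return new ProducerRecord<>(topic, element.getBytes(StandardCharsets.UTF_8));
    }
}

You can then pass this schema to a FlinkKafkaProducer constructor that takes a
default topic, the schema, the producer properties and a semantic.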
Hi Rahul,
can you check if the KuduSink tests of Apache Bahir shed any light? [1]
Best,
Arvid
[1]
https://github.com/apache/bahir-flink/blob/55240a993df999d66aefa36e587be719c29be92a/flink-connector-kudu/src/test/java/org/apache/flink/connectors/kudu/streaming/KuduSinkTest.java
On Mon, Nov 25,
Hi Avi,
it seems to me that you do not really need any split feature. As far as
I can see in your picture, you want to apply two different windows to the
same input data.
In that case you can simply use two different subgraphs, sketched below.
stream = ...
stream1 = stream.window(...).addSink(...)
stream2 = stream.window(...).addSink(...)
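A slightly fuller sketch in Java (the source, sinks and window sizes here are
hypothetical; the point is just that the same stream feeds two independent
window pipelines):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class TwoWindowsOnOneStream {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the real source.
        DataStream<Tuple2<String, Integer>> stream =
                env.fromElements(Tuple2.of("a", 1), Tuple2.of("a", 2), Tuple2.of("b", 3));

        // Subgraph 1: short window.
        stream.keyBy(0)
              .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
              .sum(1)
              .print();

        // Subgraph 2: longer window over the same input.
        stream.keyBy(0)
              .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
              .sum(1)
              .print();

        env.execute("two-windows-on-one-stream");
    }
}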
The chk-* directory is not found. I think it is missing because the jobmanager
removes it automatically, but why is it still in ZooKeeper?
-- Original message from "Vijay Bhaskar" --
Hi Victor,
you could implement your own SinkFunction that wraps the KafkaProducer.
However, since you may need to check if the write operation is successful,
you probably need to subclass KafkaProducer and implement your own error
handling.
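Here is a minimal sketch of that idea (the bootstrap server and topic are made up;
also, instead of subclassing KafkaProducer, this version checks the result of each
write via the producer's send callback, which serves the same purpose):

import java.util.Properties;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CheckedKafkaSink extends RichSinkFunction<String> {

    private transient KafkaProducer<String, String> producer;
    private transient volatile Exception asyncError;

    @Override
    public void open(Configuration parameters) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        // Surface any failure reported by a previous asynchronous send.
        if (asyncError != null) {
            throw asyncError;
        }
        producer.send(new ProducerRecord<>("my-topic", value), (metadata, exception) -> {
            if (exception != null) {
                asyncError = exception; // your custom error handling goes here
            }
        });
    }

    @Override
    public void close() {
        if (producer != null) {
            producer.close();
        }
    }
}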
Best,
Arvid
On Mon, Nov 25, 2019 at 7:51 AM vino yang
Hi Lei,
if you use
public JobExecutionResult StreamExecutionEnvironment#execute()
you can retrieve the job id through the result:
result.getJobID()
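For example, a minimal sketch (assuming the pipeline is already defined on env):

JobExecutionResult result = env.execute("my-job"); // blocks until the job finishes
JobID jobId = result.getJobID();                   // org.apache.flink.api.common.JobID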
Best,
Arvid
On Mon, Nov 25, 2019 at 3:50 AM Ana wrote:
> Hi Lei,
>
> To add, you may use Hadoop Resource Manager REST APIs
> https://hadoop.ap
One more thing:
You configured:
high-availability.cluster-id: /cluster-test
it should be:
high-availability.cluster-id: cluster-test
I don't think this is a major issue, but in case it helps, you can check.
Can you check one more thing:
Is checkpointing happening or not?
Were you able to see the chk-* f
Hi,
Is there any difference? For me, using NAS is more convenient to test
currently.
From the docs it seems HDFS, S3, NFS, etc. will all be fine.
-- Original message from "vino yang" --
Hi,
Why do you not use HDFS directly?
Best,
Vino
曾祥才 wrote on Thu, Nov 28, 2019 at 6:48 PM:
>
> Does anyone have the same problem? Please help, thanks.
>
>
>
> -- Original Message --
> *From:* "曾祥才";
> *Sent:* Thursday, Nov 28, 2019, 2:46 PM
> *To:* "Vijay Bhaskar";
> *Cc:* "User-Flink";
> *Subject:* Re: JobGra
Hi Jatin,
Which version do you use?
Best,
Vino
Jatin Banger wrote on Thu, Nov 28, 2019 at 5:03 PM:
> Hi,
>
> I checked the log file; there is no error.
> And I checked the pods' internal ports by using the REST API.
>
> # curl : 4081
> {"errors":["Not found."]}
> 4081 is the UI port
>
> # curl :4081/config
> {"re
Does anyone have the same problem? Please help, thanks.
Hi,
I checked the log file; there is no error.
And I checked the pods' internal ports by using the REST API.
# curl : 4081
{"errors":["Not found."]}
4081 is the UI port
# curl :4081/config
{"refresh-interval":3000,"timezone-name":"Coordinated Universal
Time","timezone-offset":0,"flink-version":"","fli
Hi Yingjie,
I read this post and the next one as well (
https://flink.apache.org/2019/07/23/flink-network-stack-2.html).
I mean the bandwidth of the channels between two physical operators, when
they are on different hosts, so when the channels are network channels.
Thanks
*--*
*-- Felipe Gutie