I see, sounds good, thanks for the clarification.
On Tue, Sep 26, 2023 at 03:29, Yunfeng Zhou <
flink.zhouyunf...@gmail.com> wrote:
> Hi Alexis,
>
> Thanks for the clarification. I found the second constructor on
> Flink's master branch here[1], and maybe it was that we had been
> commenti
Hi,
I wrote to HBase using a map operator extending RichMapFunction. I initialized the connection in the overridden open method and wrote to HBase in the overridden map method.
On 22.09.2023, 05:01, "碳酸君" wrote:
> hi community:
> I'm trying to write some data to HBase in a stream job, with flink-connector-hbase-2.2. I have
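For reference, a minimal sketch of that open/map pattern (the table, column family, and qualifier names here are hypothetical placeholders, not from the original mail):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseWritingMap extends RichMapFunction<String, String> {
    private transient Connection connection;
    private transient Table table;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Create the HBase connection once per subtask, not per record.
        connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
        table = connection.getTable(TableName.valueOf("my_table")); // hypothetical table name
    }

    @Override
    public String map(String value) throws Exception {
        // Write each record as a single Put; row key and column are placeholders.
        Put put = new Put(Bytes.toBytes(value));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value));
        table.put(put);
        return value;
    }

    @Override
    public void close() throws Exception {
        if (table != null) table.close();
        if (connection != null) connection.close();
    }
}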
Hi,
Can a ProcessWindowFunction have a parallelism greater than 1? Whenever I
set it to 2, I receive the error message "The parallelism of non
parallel operator must be 1."
dataKafka = KafkaSource(datasource)
    .parallelism(2)
    .rebalance();
dataKafka.windowAll(GlobalWindows.create()).trigger(...)
Hi Patricia,
You are using the windowAll() API, which creates a global window. This
operation is inherently non-parallel, since all elements have to pass through
the same operator instance, so its parallelism cannot be set to anything larger than 1.
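If you need parallel windowing, key the stream first and use a keyed window instead. A rough sketch (the key extractor, window size, and function name are placeholders; assumes the usual imports for TumblingProcessingTimeWindows and Time):

dataKafka
    .keyBy(record -> record.getKey())                            // hypothetical key extractor
    .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))  // any keyed window assigner
    .process(new MyProcessWindowFunction())                      // your ProcessWindowFunction
    .setParallelism(2);                                          // legal on a keyed window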
Best,
Zhanghao Chen
Hi,
I have a job running with parallelism=24.
I see that only one subtask is 100% busy and the others are 100% idle.
When I checked the received message counts, I saw that they are almost
identical.
How can I figure out why this task causes backpressure and why only one
subtask is 100% busy?
Thanks
Hi Kenan,
You may check the stack trace and task-level flame graph [1] to investigate it.
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/ops/debugging/flame_graphs/
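Note that flame graphs are disabled by default; if you have not enabled them yet, you can turn them on in flink-conf.yaml:

rest.flamegraph.enabled: true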
Best,
Zhanghao Chen
From: Kenan Kılıçtepe
Sent: September 26, 2023, 23:06
To: user
Hi Антон, Hang,
Thanks for the reply.
I implemented a RichSinkFunction with the HBase client, and it works well.
I plan to copy some code from
org.apache.flink.connector.hbase.sink.HBaseSinkFunction, mainly the
scheduled batch submit, but not the converter part.
I think I can read from the Table API and write with
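For the batching part, here is a minimal sketch of a RichSinkFunction built on HBase's BufferedMutator, which buffers Puts and submits them in batches (a hypothetical simplification, not the actual HBaseSinkFunction code; table and column names are placeholders):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseBatchSink extends RichSinkFunction<String> {
    private transient Connection connection;
    private transient BufferedMutator mutator;

    @Override
    public void open(Configuration parameters) throws Exception {
        connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
        // BufferedMutator accumulates mutations and writes them out in batches.
        mutator = connection.getBufferedMutator(TableName.valueOf("my_table")); // hypothetical table
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        Put put = new Put(Bytes.toBytes(value)); // row key is a placeholder
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value));
        mutator.mutate(put); // buffered; flushed when the buffer fills or on close
    }

    @Override
    public void close() throws Exception {
        if (mutator != null) mutator.close();       // flushes any remaining mutations
        if (connection != null) connection.close();
    }
}

Note this sketch batches by buffer size only; the scheduled (interval-based) submit would additionally call mutator.flush() from a timer thread.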
I implemented some custom Prometheus metrics that were working on 1.16.2 with my configuration:

metrics.reporter.prom.factory.class: org.apache.flink.metrics.prometheus.PrometheusReporterFactory
metrics.reporter.prom.port:

I could see both Flink metrics and my custom metrics on port of
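For context, custom metrics like these are typically registered through the runtime context so the configured reporter can export them. A minimal sketch (class and metric names are hypothetical):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class CountingMap extends RichMapFunction<String, String> {
    private transient Counter myCounter;

    @Override
    public void open(Configuration parameters) {
        // Metrics registered here are exported by the configured Prometheus reporter.
        myCounter = getRuntimeContext().getMetricGroup().counter("myCustomCounter");
    }

    @Override
    public String map(String value) {
        myCounter.inc();
        return value;
    }
}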
Thanks Gabor, reducing the attack vector looks like a fair call here.
However, I am still thinking of other ways to eliminate this security concern.
Is there a way I can use the ticket cache inside my pods somehow? Maybe something
like YARN?
Just thinking out loud, but would there be a case of automating