Hi all,
Background: I created a custom metric reporter via Kafka. It works in my local IntelliJ IDEA environment, but fails when packaged and deployed in a k8s environment (Ververica by Alibaba).
Flink version: 1.12
Config:
metrics.reporter.kafka.factory.class: org.apache.flink.metrics.kafka.KafkaReporterFactory
metrics.reporter.kafka.se
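For anyone comparing against a working setup: with factory-based loading like the above, a Flink 1.12 reporter factory is a small class. A sketch below, where only the factory class name comes from the config; the reporter body is a hypothetical placeholder.

import java.util.Properties;
import org.apache.flink.metrics.Metric;
import org.apache.flink.metrics.MetricConfig;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.reporter.MetricReporter;
import org.apache.flink.metrics.reporter.MetricReporterFactory;

// Hypothetical skeleton of the factory named in the config above.
public class KafkaReporterFactory implements MetricReporterFactory {
    @Override
    public MetricReporter createMetricReporter(Properties properties) {
        return new KafkaReporter(); // hypothetical reporter implementation
    }
}

class KafkaReporter implements MetricReporter {
    @Override
    public void open(MetricConfig config) {
        // e.g. build the Kafka producer from the reporter's config values
    }

    @Override
    public void close() {
        // e.g. flush and close the Kafka producer
    }

    @Override
    public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
        // start tracking the metric for periodic reporting
    }

    @Override
    public void notifyOfRemovedMetric(Metric metric, String metricName, MetricGroup group) {
        // stop tracking the metric
    }
}

One thing worth double-checking for the works-in-IDE/fails-on-cluster symptom: in Flink 1.12, reporters declared via factory.class can be loaded as plugins, and plugin discovery goes through Java's ServiceLoader, so the reporter jar needs a META-INF/services/org.apache.flink.metrics.reporter.MetricReporterFactory entry and typically lives in its own folder under plugins/ on the cluster.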
[previous didn't cc list, sorry for dupes]
The classic connection pool pattern, where expensive connections are created relatively few times and used by many transient, short-lived tasks, each of which borrows a connection from the pool and returns it when done, would still be usable here, but as Péter points out, you can't rely on a sing
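For readers skimming the thread, a minimal illustration of that borrow/return pattern; the class, the fixed size, and the generic connection type are placeholders, not anything from this discussion:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Toy pool: connections are created once up front and then
// borrowed/returned by many short-lived tasks.
public class SimplePool<C> {
    private final BlockingQueue<C> idle;

    public SimplePool(int size, Supplier<C> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // the expensive creation happens here, few times
        }
    }

    public C borrow() throws InterruptedException {
        return idle.take(); // blocks until some task returns a connection
    }

    public void giveBack(C connection) {
        idle.add(connection); // a task returns its connection when done
    }
}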
So, when we create an EMR cluster, the NN (NameNode) service runs on the primary node of the cluster.
Now, at the time of creating the cluster, how can we specify the name of this NN in the format hdfs://*namenode-host*:8020/?
Is there a standard name by which we can identify the NN server?
Thanks
Sachin
On Fr
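Not sure about a standard host name, but one way to sidestep hard-coding it is to read the address the cluster itself advertises. A sketch with the stock Hadoop Configuration/FileSystem API, assuming it runs on a cluster node where core-site.xml is on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ShowNameNodeUri {
    public static void main(String[] args) throws Exception {
        // On an EMR node this picks up core-site.xml, where the
        // cluster's fs.defaultFS (the NN address) is written.
        Configuration conf = new Configuration();
        System.out.println(conf.get("fs.defaultFS"));      // e.g. hdfs://<namenode-host>:8020
        System.out.println(FileSystem.get(conf).getUri()); // same address via the FileSystem API
    }
}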
Hi,
After some debugging I see these in the logs:
2024-03-22 14:25:47,555 INFO org.apache.kafka.clients.NetworkClient [] -
[Consumer clientId=qubit-data-consumer-0, groupId=spflink] Disconnecting
from node 11 due to request timeout.
2024-03-22 14:25:47,647 INFO org.apache.kafka.clients.NetworkClie
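For context, that NetworkClient disconnect is governed by the consumer's request.timeout.ms. A sketch of where that property would be raised on a KafkaSource; broker, topic, and the timeout value are placeholders, only the group id comes from the log:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

public class TimeoutSketch {
    static KafkaSource<String> buildSource() {
        return KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")         // placeholder
                .setGroupId("spflink")                      // group id from the log above
                .setTopics("my-topic")                      // placeholder
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .setProperty("request.timeout.ms", "60000") // the timeout behind the disconnects
                .build();
    }
}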
Hi, Nida.
About using the connector with Java via the Flink DataStream API, you can mainly refer to these docs [1][2].
However, the Elasticsearch connector currently only supports sink. What you need is to build a custom ES connector with that patch [3] yourself. The following steps may help you (a sketch of the already-supported sink usage follows the steps).
1. Downlo
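For the sink side that is already supported, usage from the DataStream API typically looks like the sketch below; host, index, and the sample input are placeholders, and this uses the elasticsearch7 connector's builder, not the patched source connector:

import java.util.Collections;
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

public class EsSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("a", "b"); // placeholder input

        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<String>(
                Collections.singletonList(new HttpHost("localhost", 9200, "http")), // placeholder host
                (String element, RuntimeContext ctx, RequestIndexer indexer) -> {
                    IndexRequest request = Requests.indexRequest()
                            .index("my-index") // placeholder index
                            .source(Collections.singletonMap("data", element));
                    indexer.add(request);
                });
        builder.setBulkFlushMaxActions(1); // flush per record; tune for real workloads
        stream.addSink(builder.build());

        env.execute("es-sink-sketch");
    }
}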
Hi,
I was experimenting with different starting offset strategies for my Flink job, especially for cases where jobs are canceled and scheduled again. I would like to start from the last committed offset and, if that is not available, start from the latest.
So I decided to use this:
.setS
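For reference, the behavior described (start from committed offsets, fall back to latest) maps to OffsetsInitializer.committedOffsets with a reset strategy on the KafkaSource builder; a sketch, with broker/topic/group as placeholders:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

public class OffsetSketch {
    static KafkaSource<String> buildSource() {
        return KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092") // placeholder
                .setTopics("my-topic")              // placeholder
                .setGroupId("my-group")             // placeholder
                // committed offsets first; LATEST is the fallback when none exist
                .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.LATEST))
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}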