Hi folks,
I use the following connectors to interact with databases and Elasticsearch:
org.apache.flink.connector.jdbc.table.JdbcTableSource
org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink
Do these connectors provide any metrics out of the box? Metrics such as
- QPS to the database and Elasticsearch
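In case they don't expose request rates directly, Flink's metric system lets you register your own meters and counters in any rich function. That won't instrument the connectors themselves, but it gives QPS-style numbers for calls you make yourself. A minimal sketch, with made-up class, metric, and method names:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.Meter;
import org.apache.flink.metrics.MeterView;

public class InstrumentedLookup extends RichMapFunction<String, String> {

    private transient Meter requestRate; // requests/sec, averaged over 60s
    private transient Counter failures;

    @Override
    public void open(Configuration parameters) {
        requestRate = getRuntimeContext().getMetricGroup()
                .meter("dbRequestsPerSecond", new MeterView(60));
        failures = getRuntimeContext().getMetricGroup().counter("dbFailures");
    }

    @Override
    public String map(String key) throws Exception {
        requestRate.markEvent();
        try {
            return queryDatabase(key); // placeholder for the actual JDBC call
        } catch (Exception e) {
            failures.inc();
            throw e;
        }
    }

    private String queryDatabase(String key) {
        return key; // stand-in; a real implementation would query the DB
    }
}

The registered metrics show up under the operator's scope in the Flink web UI and in any configured metrics reporter.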
One way I thought to achieve this is:
1. For failures, add a special record to the output collection in the RichAsyncFunction.
2. Filter those special records out of the DataStream and push them to another Kafka topic (a sketch follows below).
Let me know if it makes sense.
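A minimal sketch of step 2, assuming a hypothetical Enriched record whose failed flag marks the special records (the broker address and topic name are made up too):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

// Hypothetical output record of the RichAsyncFunction; 'failed' marks the
// special records emitted when enrichment gave up.
class Enriched {
    public String payload;
    public boolean failed;
}

public class FailureRouting {
    public static void routeFailures(DataStream<Enriched> enriched) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // assumed address

        // Special records go to a dead-letter topic ...
        enriched.filter(e -> e.failed)
                .map(e -> e.payload)
                .addSink(new FlinkKafkaProducer<>(
                        "failed-events", new SimpleStringSchema(), props));

        // ... and everything else continues down the normal pipeline.
        DataStream<Enriched> successes = enriched.filter(e -> !e.failed);
        // process 'successes' as before
    }
}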
On Fri, Jun 11, 2021 at 10:40 AM Satish Saley wrote:
Hi,
- I have a Kafka consumer to read events.
- Then, I have a RichAsyncFunction to call a remote service to get more information about each event.
If the remote call fails after X number of retries, I don't want Flink to fail the job and start processing from the beginning. Instead, I would like to route those failed events elsewhere and keep the job running (a sketch follows below).
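One way to achieve that, sketched below under assumptions (a made-up non-blocking client and a retry cap of 3): retry inside asyncInvoke and, once retries are exhausted, complete the ResultFuture with a marker record instead of completing it exceptionally, so the job keeps running:

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Hypothetical output record: 'failed' marks events whose enrichment gave up.
class Enriched {
    public String event;
    public String info;
    public boolean failed;

    Enriched(String event, String info, boolean failed) {
        this.event = event;
        this.info = info;
        this.failed = failed;
    }
}

public class EnrichFunction extends RichAsyncFunction<String, Enriched> {

    private static final int MAX_RETRIES = 3; // the "X" from the question

    @Override
    public void asyncInvoke(String event, ResultFuture<Enriched> resultFuture) {
        callWithRetry(event, 0, resultFuture);
    }

    private void callWithRetry(String event, int attempt, ResultFuture<Enriched> resultFuture) {
        lookupRemote(event).whenComplete((info, error) -> {
            if (error == null) {
                resultFuture.complete(Collections.singleton(new Enriched(event, info, false)));
            } else if (attempt < MAX_RETRIES) {
                callWithRetry(event, attempt + 1, resultFuture);
            } else {
                // Retries exhausted: complete normally with a marker record so
                // the job keeps running; completeExceptionally would fail it.
                resultFuture.complete(Collections.singleton(new Enriched(event, null, true)));
            }
        });
    }

    // Placeholder for the real non-blocking client call.
    private CompletableFuture<String> lookupRemote(String event) {
        return CompletableFuture.completedFuture("info-for-" + event);
    }
}

Downstream you can then filter on the failed flag, as described in the follow-up above.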
Hi team,
I am trying to figure out a way to flatten events in my Flink app.
The event that I am consuming from Kafka is:

UpperLevelData {
    int upperId;
    List<ModuleData> listOfModules;
}

ModuleData {
    int moduleId;
    String info;
}
After consuming this event, I want to flatten it out into the following format -
FlinkRes
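A minimal sketch of the flattening with a flatMap, assuming the target record (named FlinkResult here purely as a guess, since the message is cut off at "FlinkRes") pairs upperId with each module's fields:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.util.Collector;

// Hypothetical flattened record: one element per module, keeping the parent id.
public class FlinkResult {
    public int upperId;
    public int moduleId;
    public String info;
}

// 'events' is assumed to be the DataStream<UpperLevelData> from the Kafka source.
DataStream<FlinkResult> flattened = events
        .flatMap(new FlatMapFunction<UpperLevelData, FlinkResult>() {
            @Override
            public void flatMap(UpperLevelData event, Collector<FlinkResult> out) {
                for (ModuleData module : event.listOfModules) {
                    FlinkResult result = new FlinkResult();
                    result.upperId = event.upperId;
                    result.moduleId = module.moduleId;
                    result.info = module.info;
                    out.collect(result);
                }
            }
        });

Each incoming event then produces one output record per entry in listOfModules.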
Hi team,
Was there a reason for not shading hadoop-common
https://github.com/apache/flink/commit/e1e7d7f7ecc080c850a264021bf1b20e3d27d373#diff-e7b798a682ee84ab804988165e99761cR38-R44
? This is leaking lots of classes, such as Guava, and causing issues in our
Flink application.
I see that hadoop-common