[ https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17688142#comment-17688142 ]
Leonid Ilyevsky commented on FLINK-30998:
-----------------------------------------

Hi [~reta] ,

> Sorry, I didn't get the context for this one. You mean using 1.x client (the
> connector's default) with OpenSearch 2.x cluster, is that right?

Correct. The 1.x client, when used against a 2.x server, failed to parse the response. I fixed it in my project by explicitly specifying the 2.5.0 client dependency.

> Add optional exception handler to flink-connector-opensearch
> ------------------------------------------------------------
>
>                 Key: FLINK-30998
>                 URL: https://issues.apache.org/jira/browse/FLINK-30998
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Opensearch
>    Affects Versions: 1.16.1
>            Reporter: Leonid Ilyevsky
>            Priority: Major
>
> Currently, when there is a failure coming from Opensearch, a
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 346).
> This makes the Flink pipeline fail. There is no way to handle the exception
> in the client code.
> I suggest adding an option to set a failure handler, similar to the way it is
> done in the Elasticsearch connector. This way the client code has a chance to
> examine the failure and handle it.
> Here is a use case example where it would be very useful. We are using
> streams on the Opensearch side, and we are setting our own document IDs.
> Sometimes these IDs are duplicated; we need to ignore this situation and
> continue (this is how it works for us with Elasticsearch).
> However, with the Opensearch connector, an error comes back saying that the
> batch failed (even though most of the documents were indexed, and only the ones
> with duplicated IDs were rejected), and the whole Flink job fails.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
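The failure-handler option proposed in this issue could look roughly like the sketch below. This is a hypothetical illustration, not the connector's actual API: the `BulkFailureHandler` interface, the `shouldContinue` helper, and the `FailureDemo` class are all invented names, and plain Java stand-ins replace the Flink/OpenSearch types so the example is self-contained. The idea mirrors the legacy Elasticsearch connector's failure-handler callback: the handler inspects each rejected bulk item and decides whether the pipeline continues or fails.

```java
// Hypothetical sketch of the proposed failure-handler option.
// BulkFailureHandler and FailureDemo are illustrative names only;
// they do not exist in flink-connector-opensearch.

@FunctionalInterface
interface BulkFailureHandler {
    // Return true to swallow the failure and continue the pipeline,
    // false to let the writer throw and fail the job.
    boolean onFailure(String documentId, int restStatusCode, String reason);
}

class FailureDemo {
    // Handler for the duplicate-ID use case from this issue: ignore
    // bulk items rejected with HTTP 409 (version conflict), fail on
    // everything else.
    static final BulkFailureHandler IGNORE_DUPLICATES =
            (id, status, reason) -> status == 409;

    // Simulates the decision the writer would make for one failed
    // bulk response item instead of unconditionally throwing
    // FlinkRuntimeException.
    static boolean shouldContinue(BulkFailureHandler handler,
                                  String id, int status, String reason) {
        return handler.onFailure(id, status, reason);
    }

    public static void main(String[] args) {
        // Duplicate ID: handler elects to continue.
        System.out.println(
                shouldContinue(IGNORE_DUPLICATES, "doc-1", 409, "version conflict"));
        // Genuine indexing failure: handler elects to fail the job.
        System.out.println(
                shouldContinue(IGNORE_DUPLICATES, "doc-2", 400, "mapper parsing exception"));
    }
}
```

In a real implementation the handler would more likely be a builder option (e.g. set on the sink builder) and receive the original bulk item response, but the decision logic — per-item inspection with a continue/fail outcome — would be the same.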