[ https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17751826#comment-17751826 ]
Andriy Redko commented on FLINK-30998:
--------------------------------------

Thank you for the contribution, [~leonidilyevsky]! Please refer to [1], which describes the release process for connectors; a short summary is below (you could probably nominate [~martijnvisser], since I am not a committer):

> Anybody can propose a release on the dev@ mailing list, giving a solid
> argument and nominating a committer as the Release Manager (including
> themselves).

Thank you.

[1] https://cwiki.apache.org/confluence/display/FLINK/Creating+a+flink-connector+release

> Add optional exception handler to flink-connector-opensearch
> ------------------------------------------------------------
>
>                 Key: FLINK-30998
>                 URL: https://issues.apache.org/jira/browse/FLINK-30998
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Opensearch
>    Affects Versions: 1.16.1
>            Reporter: Leonid Ilyevsky
>            Assignee: Leonid Ilyevsky
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: opensearch-1.0.2
>
>
> Currently, when a failure comes back from Opensearch, a FlinkRuntimeException
> is thrown from the OpensearchWriter.java code (line 346). This makes the
> Flink pipeline fail, and there is no way to handle the exception in the
> client code.
> I suggest adding an option to set a failure handler, similar to the way it is
> done in the Elasticsearch connector. This gives the client code a chance to
> examine the failure and decide how to handle it.
> Here is a use case where this would be very useful. We are using streams on
> the Opensearch side, and we set our own document IDs. Sometimes these IDs are
> duplicated; we need to ignore this situation and continue (this is how it
> works for us with Elasticsearch). However, with the Opensearch connector, an
> error comes back saying that the batch failed (even though most of the
> documents were indexed and only the ones with duplicated IDs were rejected),
> and the whole Flink job fails.
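To illustrate the shape of the feature the ticket is asking for, here is a minimal, hypothetical sketch of a per-item failure handler. All of the types below (ItemFailure, BulkItemFailureHandler, checkBulkFailures) are illustrative stand-ins invented for this example and are NOT the real flink-connector-opensearch API; the callback is only loosely modeled on the failure-handler idea from the Elasticsearch connector mentioned in the ticket.

```java
import java.util.List;

public class FailureHandlerSketch {

    /** One failed item from a bulk response (hypothetical type). */
    record ItemFailure(String docId, int restStatus, String reason) {}

    /**
     * Hypothetical callback the writer would consult per failed item:
     * return true to ignore the failure, false to fail the job.
     */
    interface BulkItemFailureHandler {
        boolean onFailure(ItemFailure failure);
    }

    /**
     * Handler for the ticket's use case: tolerate duplicate-ID rejections
     * (HTTP 409 version conflict) and fail on anything else.
     */
    static final BulkItemFailureHandler IGNORE_DUPLICATES =
            f -> f.restStatus() == 409;

    /**
     * Stand-in for the failure loop in OpensearchWriter: instead of
     * unconditionally throwing on any failed item, it asks the configured
     * handler first.
     */
    static void checkBulkFailures(List<ItemFailure> failures,
                                  BulkItemFailureHandler handler) {
        for (ItemFailure f : failures) {
            if (!handler.onFailure(f)) {
                // In the real writer this is roughly where the
                // FlinkRuntimeException is thrown, failing the pipeline.
                throw new RuntimeException(
                        "Bulk item failed: " + f.docId() + " (" + f.reason() + ")");
            }
        }
    }

    public static void main(String[] args) {
        // Two duplicate-ID rejections: swallowed, the "job" keeps running.
        checkBulkFailures(
                List.of(new ItemFailure("doc-1", 409, "version conflict"),
                        new ItemFailure("doc-2", 409, "version conflict")),
                IGNORE_DUPLICATES);
        System.out.println("duplicates ignored, pipeline continues");

        // A non-409 failure is still fatal.
        try {
            checkBulkFailures(
                    List.of(new ItemFailure("doc-3", 400, "mapping error")),
                    IGNORE_DUPLICATES);
        } catch (RuntimeException e) {
            System.out.println("fatal: " + e.getMessage());
        }
    }
}
```

The point of the sketch is the contract, not the names: with no handler configured, the writer keeps today's behavior (every bulk item failure is fatal), while a user-supplied handler can selectively ignore expected rejections such as duplicate IDs.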
-- This message was sent by Atlassian Jira (v8.20.10#820010)