[ https://issues.apache.org/jira/browse/FLINK-32028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17792024#comment-17792024 ]

Peter Schulz commented on FLINK-32028:
--------------------------------------

I gave it [a 
try|https://github.com/apache/flink-connector-elasticsearch/pull/83] to come up 
with a minimally invasive, backwards-compatible approach. If it were only about 
error handling, you could stay in elasticsearch-land and decide whether or not 
to throw based on {{BulkRequest}} and {{BulkResponse}}. However, we also needed 
metrics, and this is where I had to bridge between flink-land and 
elasticsearch-land and make the newly introduced 
{{BulkRequestInterceptorFactory}} aware of {{MetricGroup}}. This approach is 
somewhat tailored to our needs, so I would highly appreciate feedback to make 
it generally applicable.
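
To make the idea more concrete, here is a rough sketch of the shape such an 
interceptor could take. The names {{BulkRequestInterceptor}}, {{afterBulk}}, the 
example factory and the metric name are illustrative only and may not match the 
actual code in the PR:

{code:java}
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.MetricGroup;

import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.rest.RestStatus;

import java.io.Serializable;

/** Illustrative hook invoked after each bulk call; not necessarily the PR's actual interface. */
interface BulkRequestInterceptor {
    void afterBulk(BulkRequest request, BulkResponse response) throws Exception;
}

/** Factory shape hinted at above: it is handed the sink's MetricGroup. */
interface BulkRequestInterceptorFactory extends Serializable {
    BulkRequestInterceptor create(MetricGroup metricGroup);
}

/** Example: count and skip version conflicts instead of failing the whole job. */
class SkipVersionConflicts implements BulkRequestInterceptorFactory {
    @Override
    public BulkRequestInterceptor create(MetricGroup metricGroup) {
        Counter skipped = metricGroup.counter("numVersionConflictsSkipped");
        return (request, response) -> {
            if (!response.hasFailures()) {
                return;
            }
            for (BulkItemResponse item : response.getItems()) {
                if (!item.isFailed()) {
                    continue;
                }
                if (item.status() == RestStatus.CONFLICT) {
                    // Tolerated failure: count it, but do not fail the pipeline.
                    skipped.inc();
                } else {
                    // Everything else still escalates and fails the sink.
                    throw new RuntimeException(
                            "Bulk item failed: " + item.getFailureMessage());
                }
            }
        };
    }
}
{code}

The idea in this sketch is that only the factory needs to be serializable and 
shipped with the sink, while the interceptor itself is created at runtime, once 
a {{MetricGroup}} is available.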

> Error handling for the new Elasticsearch sink
> ---------------------------------------------
>
>                 Key: FLINK-32028
>                 URL: https://issues.apache.org/jira/browse/FLINK-32028
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / ElasticSearch
>    Affects Versions: 1.16.1
>            Reporter: Tudor Plugaru
>            Priority: Major
>              Labels: pull-request-available
>
> The deprecated ElasticsearchSink supports setting an error handler via a 
> [public method 
> |https://github.com/apache/flink-connector-elasticsearch/blob/8f75d4e059c09b55cc3a44bab3e64330b1246d27/flink-connector-elasticsearch7/src/main/java/org/apache/flink/streaming/connectors/elasticsearch7/ElasticsearchSink.java#L216]
>  but the new sink does not.
> Ideally there would be a way to handle ES-specific exceptions and to skip 
> items from being retried indefinitely, instead of blocking the entire 
> pipeline.  
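
For context, the error handler of the deprecated sink is typically used along 
these lines. This is only a sketch following the pattern documented for the old 
connector; the class name and the choice of handled exceptions are illustrative:

{code:java}
import org.apache.flink.streaming.connectors.elasticsearch.ActionRequestFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.util.ExceptionUtils;

import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;

/** Illustrative handler: retry queue rejections, drop malformed documents, fail otherwise. */
public class DropMalformedDocumentsHandler implements ActionRequestFailureHandler {
    @Override
    public void onFailure(
            ActionRequest action, Throwable failure, int restStatusCode, RequestIndexer indexer)
            throws Throwable {
        if (ExceptionUtils.findThrowable(failure, EsRejectedExecutionException.class).isPresent()) {
            // Bulk queue was full: re-add the request so it is retried.
            indexer.add(action);
        } else if (ExceptionUtils.findThrowable(failure, ElasticsearchParseException.class).isPresent()) {
            // Malformed document: drop it instead of blocking the pipeline.
        } else {
            // Everything else fails the sink.
            throw failure;
        }
    }
}
{code}

With the deprecated builder this is wired in via {{setFailureHandler(...)}}; the 
new sink currently offers no equivalent hook, which is what this ticket is about.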



