[ https://issues.apache.org/jira/browse/FLINK-27156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17530621#comment-17530621 ]
Martijn Visser commented on FLINK-27156:
----------------------------------------

[~almogtavor] What would be holding you back on that? You could either create your own repository (potentially based on a fork of the current Elasticsearch one) or create a PR towards the Flink repo (though that will not be merged). You can use this ticket or the Dev mailing list for discussion/questions. For implementation details and directions, I would probably reach out to the Dev mailing list. I would also check out some of the already existing implementations to get an understanding of the pattern.

> [FLIP-171] MongoDB implementation of Async Sink
> -----------------------------------------------
>
>                 Key: FLINK-27156
>                 URL: https://issues.apache.org/jira/browse/FLINK-27156
>             Project: Flink
>          Issue Type: New Feature
>          Components: Connectors / Common
>            Reporter: Almog Tavor
>            Priority: Major
>
> *User stories:*
> I'd like to use MongoDB as a sink for my data pipeline, and I think it would
> be appropriate for it to inherit AsyncSinkBase.
> *Scope:*
> * Implement an asynchronous sink for MongoDB by inheriting the AsyncSinkBase
> class. The implementation can for now reside in its own module in
> flink-connectors, or maybe we can open a dedicated repository.
> * Implement an asynchronous sink writer for MongoDB by extending the
> AsyncSinkWriter. The implementation must deal with failed requests and retry
> them, and it should batch multiple requests. The implemented Sink Writer will
> be used by the Sink class that will be created as part of this story. I'm
> currently looking for the right object in the async MongoDB client that will
> represent the request (a rough sketch of such a writer is appended below).
> h2. References
> More details can be found at
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
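
For reference, below is a minimal sketch of the sink writer described in the scope above. It is only an illustration under stated assumptions, not the actual implementation: it assumes the Flink 1.15-era AsyncSinkWriter API (a submitRequestEntries method that receives a Consumer used to hand back entries that should be retried, plus getSizeInBytes) and the MongoDB Reactive Streams driver. The class name MongoAsyncSinkWriter, the choice of org.bson.Document as the request entry type, and the retry-the-whole-batch error handling are all hypothetical.

{code:java}
// Hypothetical sketch only: class name, constructor parameters and the retry
// policy are illustrative, not part of this ticket or of any released connector.
import com.mongodb.client.result.InsertManyResult;
import com.mongodb.reactivestreams.client.MongoCollection;

import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.connector.base.sink.writer.AsyncSinkWriter;
import org.apache.flink.connector.base.sink.writer.ElementConverter;

import org.bson.Document;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;
import java.util.function.Consumer;

/** Buffers incoming elements as BSON Documents and writes each batch with insertMany(). */
public class MongoAsyncSinkWriter<InputT> extends AsyncSinkWriter<InputT, Document> {

    private final MongoCollection<Document> collection;

    public MongoAsyncSinkWriter(
            ElementConverter<InputT, Document> elementConverter,
            Sink.InitContext context,
            MongoCollection<Document> collection,
            int maxBatchSize,
            int maxInFlightRequests,
            int maxBufferedRequests,
            long maxBatchSizeInBytes,
            long maxTimeInBufferMS,
            long maxRecordSizeInBytes) {
        super(elementConverter, context, maxBatchSize, maxInFlightRequests,
                maxBufferedRequests, maxBatchSizeInBytes, maxTimeInBufferMS,
                maxRecordSizeInBytes);
        this.collection = collection;
    }

    @Override
    protected void submitRequestEntries(
            List<Document> requestEntries, Consumer<List<Document>> requestResult) {
        // The base class has already grouped the buffered entries into a batch, so a
        // single insertMany() call covers the "batch multiple requests" requirement.
        collection.insertMany(requestEntries)
                .subscribe(new Subscriber<InsertManyResult>() {
                    @Override
                    public void onSubscribe(Subscription s) {
                        s.request(1);
                    }

                    @Override
                    public void onNext(InsertManyResult result) {
                        // Nothing to do per element; completion is signalled in onComplete().
                    }

                    @Override
                    public void onError(Throwable t) {
                        // Naive retry: hand the whole batch back so the base class
                        // re-queues it. A real implementation should separate retriable
                        // from fatal errors and resubmit only the documents that failed.
                        requestResult.accept(requestEntries);
                    }

                    @Override
                    public void onComplete() {
                        // An empty list tells the base class that nothing needs a retry.
                        requestResult.accept(Collections.emptyList());
                    }
                });
    }

    @Override
    protected long getSizeInBytes(Document requestEntry) {
        // Crude size estimate; only used for the byte-based batch and buffer limits.
        return requestEntry.toJson().getBytes(StandardCharsets.UTF_8).length;
    }
}
{code}

The matching Sink class would extend AsyncSinkBase, carry the batching/retry parameters and an ElementConverter, and create this writer. Whether Document or one of the driver's write-model objects is the right request entry is exactly the open question raised in the scope above; note that AsyncSinkWriter bounds the entry type to Serializable, which org.bson.Document does satisfy.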