> -------- Original Message --------
> Subject: Re: Kafka FileStreamSinkConnector handling of bad messages
> Local Time: October 18, 2017 5:36 PM
> UTC Time: October 18, 2017 9:36 PM
> From: dhawan.gajend...@datavisor.com
> To: users@kafka.ap
Hi Marina,
We hit a similar problem with our S3 connectors. We added a level of
indirection, a JSON-validating microservice, before publishing to the Kafka
topic. The microservice published non-JSON-formatted messages to a separate
Kafka topic called error-jsons, and we flushed those messages using
Considering Ewen's response, you can open a JIRA to apply that suggestion to
the FileStreamSinkConnector.
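The validating-microservice approach described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual service: `route_message`, the `events` topic name, and the stub `produce` callable are all assumptions; only the error-jsons topic name comes from the thread.

```python
import json

def route_message(raw: bytes, produce) -> str:
    """Validate a payload as JSON and route it: well-formed messages go to
    the main input topic, malformed ones to the 'error-jsons' topic.
    `produce` stands in for a real Kafka producer's send call."""
    try:
        json.loads(raw)
        topic = "events"       # hypothetical main input topic name
    except (ValueError, UnicodeDecodeError):
        topic = "error-jsons"  # the error topic named in the thread
    produce(topic, raw)
    return topic

# Example with a stub producer that just records what was sent:
sent = []
route_message(b'{"user": 1}', lambda t, m: sent.append((t, m)))
route_message(b'not json at all', lambda t, m: sent.append((t, m)))
```

In a real deployment `produce` would wrap an actual Kafka producer, and the sink connector would then only ever see pre-validated JSON on its input topic.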
Cheers
On Wed, Oct 18, 2017 at 10:39 AM, Marina Popova wrote:
Hi,
I wanted to give this question a second try as I feel it is very important
to understand how to control error cases with Connectors.
Any advice on how to control the handling of "poison" messages in the case of
connectors?
Thanks!
Marina
Hi,
I have the FileStreamSinkConnector working perfectly fine in distributed mode
when only good messages are sent to the input event topic.
However, if I send a bad message, for example one that is not in correct JSON
format, and I am using the JSON converter for keys/values as follows:
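The actual converter configuration was cut off here; for context, JsonConverter settings for a distributed worker typically look roughly like this (a sketch of common settings, not the poster's actual configuration; `schemas.enable=false` is a common choice for plain JSON payloads without embedded schemas):

```properties
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```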