I was reading a bit about Kafka Streams and was wondering if it is
appropriate for my team's use. We ingest data using Kafka and Storm. Data
gets pulled by Storm and sent off to bolts that publish the data into HBase
and Solr. One of the things we need is something analogous to Storm's
ability to fail a message retrieved from Kafka if it was never processed to
completion. Does Kafka Streams provide similar capabilities? Basically, all
we need is the ability to consume a batch of events from Kafka and, if we
can't write them, tell the API that something failed so the offset isn't
committed and the events can be replayed.
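
To make that concrete, here is roughly the behavior I mean, sketched with
the plain consumer API rather than Streams (the broker address, group id,
topic name, and the writeToSinks helper are just placeholders; in our case
the real writes would go to HBase and Solr):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ManualCommitConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "ingest-group");            // placeholder group
        props.put("enable.auto.commit", "false");         // commit only after a successful write
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // placeholder topic

            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                try {
                    for (ConsumerRecord<String, String> record : records) {
                        writeToSinks(record); // throws if the downstream write fails
                    }
                    // Only commit once the whole batch has been written successfully.
                    consumer.commitSync();
                } catch (Exception e) {
                    // Skip the commit so the last committed offset stays put; the
                    // batch is redelivered after a restart/rebalance, or after an
                    // explicit seek back to the committed offsets.
                }
            }
        }
    }

    private static void writeToSinks(ConsumerRecord<String, String> record) {
        // placeholder for the HBase/Solr writes; would throw on failure
    }
}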

Thanks,

Mike
