Hello, I am struggling with how to design a robust implementation of a
producer.

My use case is quite simple:
I want to process a relatively big stream (~8 MB/s) with Storm. Kafka will be
used as an intermediary between the stream and Storm. The stream is sent to a
specific server on a specific port (over UDP), so Storm will be the consumer,
and I need to write a producer (in Java) that listens on that port and
forwards the messages to a Kafka topic.
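
To make the question concrete, here is a minimal sketch of the kind of
producer I have in mind: a single receive loop that forwards each datagram
to a Kafka topic. The broker address, port and topic name are placeholders,
and I'm assuming the kafka-clients Java producer API:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.util.Arrays;
    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class UdpToKafka {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka1:9092"); // placeholder broker
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");

            try (Producer<byte[], byte[]> producer = new KafkaProducer<>(props);
                 DatagramSocket socket = new DatagramSocket(9999)) { // placeholder port
                byte[] buf = new byte[65536]; // larger than the max UDP payload
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                while (true) {
                    packet.setLength(buf.length); // reset: receive() shrinks the length
                    socket.receive(packet);       // blocks until a datagram arrives
                    byte[] payload = Arrays.copyOfRange(buf, 0, packet.getLength());
                    // send() is asynchronous: the record is buffered and batched
                    // in the background by the producer's own I/O thread
                    producer.send(new ProducerRecord<>("udp-stream", payload));
                }
            }
        }
    }

Since send() only enqueues the record, a single receive loop might already
keep up with ~8 MB/s, but everything sitting in the producer's in-memory
buffer is lost if the process dies.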

Kafka and Storm are well designed and fault-tolerant: if a node goes down,
the whole environment keeps working properly. My producer, however, would be
a single point of failure in the workflow. Moreover, writing such a producer
is not that easy: I would need to write a multithreaded server to keep up
with the throughput of the stream, with no guarantee that data won't be
dropped.
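
What I picture for the multithreaded version is a bounded hand-off between
the UDP receive thread and the Kafka sender thread(s). A rough sketch (the
class name and capacity are made up), where a full queue means datagrams get
dropped, which is exactly the guarantee problem I'm worried about:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.atomic.AtomicLong;

    public class BoundedHandoff {
        // bounded queue between the receive thread and the sender threads;
        // 10_000 is an arbitrary placeholder capacity
        private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(10_000);
        private final AtomicLong dropped = new AtomicLong();

        // called from the receive loop; never blocks, so receiving keeps up
        public void enqueue(byte[] datagram) {
            if (!queue.offer(datagram)) {
                dropped.incrementAndGet(); // datagram lost: the "no guarantee" case
            }
        }

        // called from a sender thread; blocks until a datagram is available
        public byte[] next() throws InterruptedException {
            return queue.take();
        }

        public long droppedCount() {
            return dropped.get();
        }
    }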

So I would like to know whether there are best practices for writing such a
producer, or whether there is another (maybe simpler) way to do this?

Thanks,
Thibaud
