From Kafka's point of view, Spark can act as both a consumer and a producer.
You can create a Kafka consumer in Spark that subscribes to a topic and reads
the feed, and you can process data in Spark and create a producer that sends
the results to another topic.
So, Spark sits next to Kafka rather than on one particular side of it: Kafka
moves the data, and Spark consumes it, processes it, and can publish results
back.
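A toy sketch of that flow, with plain Python lists standing in for Kafka topics and hypothetical functions standing in for a Spark job, just to illustrate the consume -> process -> produce shape (not real Spark or Kafka API calls):

```python
raw_topic = ["3", "1", "4", "1", "5"]  # stand-in for the input Kafka topic
results_topic = []                     # stand-in for the output Kafka topic

def consume(topic):
    """Spark as consumer: read records from a topic."""
    return list(topic)

def process(records):
    """The Spark job itself: transform the records."""
    return [int(r) * 10 for r in records]

def produce(topic, records):
    """Spark as producer: write results to a(nother) topic."""
    topic.extend(str(r) for r in records)

# Spark sits in the middle: consumer on one side, producer on the other.
produce(results_topic, process(consume(raw_topic)))
print(results_topic)  # ['30', '10', '40', '10', '50']
```

In real Spark this pattern maps onto the Kafka integration for Structured Streaming, where you read with `readStream.format("kafka")` and write results back with `writeStream.format("kafka")`.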
I'm a little confused on how to use Kafka and Spark together. Where exactly
does Spark lie in the architecture? Does it sit on the other side of the Kafka
producer? Does it feed the consumer? Does it pull from the consumer?
Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938