Thank you all.
I'll have a look at Flume and also at akka-http and akka-streams, since the
MACs will send the data to a REST endpoint.
On 29/01/2015 16:10, "Jeff Holoman" wrote:
Yeah, if you're into Flume you can definitely do per-event
modification/routing in an interceptor with relative ease. I don't know the
total number of MAC addresses you'd have to look up (or actually why a hash
partitioning scheme wouldn't just work, but w/e, I assume you have your
reasons). There's kind of a
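To make that concrete: below is a minimal sketch of such an interceptor,
assuming the MAC arrives in a "mac" event header and the MAC -> topic
dictionary fits in memory (class, header, and topic names are made up for
illustration). The "topic" header it sets could then drive a multiplexing
channel selector, or a sink that honours a topic header.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class MacRoutingInterceptor implements Interceptor {

  private final Map<String, String> macToTopic = new HashMap<String, String>();

  @Override
  public void initialize() {
    // Load the MAC -> topic dictionary here (file, DB, whatever you have).
    macToTopic.put("00:11:22:33:44:55", "building-a");
  }

  @Override
  public Event intercept(Event event) {
    String mac = event.getHeaders().get("mac");
    String topic = macToTopic.get(mac);
    if (topic == null) {
      topic = "unknown-devices";  // catch-all for MACs not in the dictionary
    }
    event.getHeaders().put("topic", topic);
    return event;
  }

  @Override
  public List<Event> intercept(List<Event> events) {
    for (Event e : events) {
      intercept(e);
    }
    return events;
  }

  @Override
  public void close() {
  }

  public static class Builder implements Interceptor.Builder {
    @Override
    public Interceptor build() {
      return new MacRoutingInterceptor();
    }

    @Override
    public void configure(Context context) {
    }
  }
}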
Hi Toni,
1. Kafka can create topics on the fly, in case you need it.
https://kafka.apache.org/08/configuration.html
auto.create.topics.enable (default: true): Enable auto creation of topic on the server.
If this is set to true then attempts to produce, consume, or fetch metadata
for a non-existent topic will
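Just to illustrate what that setting means in practice (a sketch, not from
the thread; the broker address and topic name are assumptions, and this uses
the newer Java producer API):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AutoCreateTopicExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092");
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");

    Producer<String, String> producer = new KafkaProducer<String, String>(props);

    // With auto.create.topics.enable=true on the broker, producing to a
    // topic that does not exist yet creates it with the default number of
    // partitions and replication factor.
    producer.send(new ProducerRecord<String, String>("brand-new-topic", "key", "value"));
    producer.close();
  }
}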
Hi Toni,
Couple of thoughts.
1. Kafka behaviour need not be changed at run time. Your producers, which
push your MAC data into Kafka, should know which topic to write to.
Your producer can be Flume, Logstash, or your own custom-written
Java producer.
As long as your producer knows
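For example (just a sketch; the in-memory map stands in for your real
dictionary, and the broker address and topic names are made up), a custom
Java producer could route on the MAC like this:

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MacAwareProducer {

  // Stand-in for the dictionary defined elsewhere (MAC -> topic).
  private static final Map<String, String> MAC_TO_TOPIC = new HashMap<String, String>();
  static {
    MAC_TO_TOPIC.put("00:11:22:33:44:55", "building-a");
    MAC_TO_TOPIC.put("66:77:88:99:aa:bb", "building-b");
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092");
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");

    Producer<String, String> producer = new KafkaProducer<String, String>(props);

    String mac = "00:11:22:33:44:55";
    String payload = "reading=42";

    // Look up the topic for this device; unknown MACs go to a catch-all.
    String topic = MAC_TO_TOPIC.get(mac);
    if (topic == null) {
      topic = "unknown-devices";
    }

    producer.send(new ProducerRecord<String, String>(topic, mac, payload));
    producer.close();
  }
}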
Hi,
I'm starting to weigh different alternatives for data ingestion and
I'd like to know whether Kafka fits the problem I have.
Say we have a set of devices, each with its own MAC, and then we receive
data in Kafka. There is a dictionary defined elsewhere that says, for each MAC,
to which topic