Storm can distribute tuples to downstream processors based on a key, so that
tuples with the same key are grouped together and end up at the same
destination JVM.  This is handy when you want consistent processing (say,
updating a global state store, where only one thing should do that per key)
as well as horizontal scalability.  In Kafka Streams, each application
instance works in isolation, so multiple instances could be updating state
for the same key.  Is there an example pattern in Streams that I have
missed, or is the way to do this in the Kafka world to distribute by key to
topic partitions?
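
For concreteness, this is roughly the kind of thing I'm imagining with the
Streams DSL -- topic names, store name, and serdes below are just
illustrative, not from a real deployment:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class KeyedCountExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "keyed-count-example"); // illustrative app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Input records are partitioned by key, so all events for a given key
        // are handled by exactly one stream task (one instance) at a time.
        KStream<String, String> events = builder.stream("events"); // illustrative topic

        // Per-key aggregation; the backing state store is partitioned the
        // same way, so only one instance updates the state for a given key.
        KTable<String, Long> counts = events
                .groupByKey()
                .count();

        counts.toStream()
              .to("event-counts", Produced.with(Serdes.String(), Serdes.Long())); // illustrative output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Is relying on the key-based partitioning of the input (or repartitioning
through an intermediate topic) the intended way to get Storm-style grouping
here?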

Thanks,

Kris
