Hi all,

We are running Samza (0.12.0) in about two dozen jobs that implement several
processing pipelines. We have also begun a significant move of our company's
other services to Docker/Kubernetes. Right now our Hadoop/YARN cluster hosts a
mix of stream jobs and batch MapReduce jobs (reporting and other batch
processing). We would really like to move our stream processing off of
Hadoop/YARN and onto Kubernetes.

When I read about some of the new progress in 0.13 and 0.14, I got really
excited! We would love to run our jobs as simple libraries in our own JVM,
using the Kafka high-level consumer for partition distribution and the like.
That would let us "dockerize" our application and run/scale it in Kubernetes.

However, as I read it, this new deployment model is ONLY for the new(er) High
Level API, correct? Is there a plan and/or resources for adapting it back to
existing low-level tasks? How complicated a task would that be? Do I have any
other options to make this transition easier?
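
For reference, our existing jobs are all written against the low-level task
API, along these lines (heavily simplified, names made up):

    import org.apache.samza.system.IncomingMessageEnvelope;
    import org.apache.samza.system.OutgoingMessageEnvelope;
    import org.apache.samza.system.SystemStream;
    import org.apache.samza.task.MessageCollector;
    import org.apache.samza.task.StreamTask;
    import org.apache.samza.task.TaskCoordinator;

    public class MyLowLevelTask implements StreamTask {
      private static final SystemStream OUTPUT =
          new SystemStream("kafka", "my-output");

      @Override
      public void process(IncomingMessageEnvelope envelope,
                          MessageCollector collector,
                          TaskCoordinator coordinator) {
        // Transform the incoming message and forward it downstream.
        String transformed = ((String) envelope.getMessage()).toUpperCase();
        collector.send(new OutgoingMessageEnvelope(OUTPUT, transformed));
      }
    }

Porting a couple dozen of these to the High Level API is doable but not
trivial, which is why I'm asking whether the standalone deployment can drive
low-level tasks directly.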

Thanks in advance.
Thunder
