I understand that for a dev environment creating containers might not be
needed, and, as you said, you can just "start up an application and go"!
However, I would like to know what the proper setup is to have HA in my
environment and make it scalable.

- Would every Kafka Streams job/app require a new Docker image and
deployment of the container/service? (e.g. 10 containers for 10 instances
of the same app)
- How should I structure things differently if I had more than one (e.g.
different) Kafka Streams apps/jobs?
- What are the advantages of using Kafka Streams over Spark Streaming?
I'm asking because with Spark Streaming I don't need to create and deploy a
new Docker image every time I add a new app or change an existing one.
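For the first question, a common pattern is one image per app, scaled out to
N replicas of the same container, rather than N separate images. A minimal
Docker Compose sketch of that setup (the service name, image, and environment
variable here are hypothetical):

```yaml
# Hypothetical compose file: a single image for one Kafka Streams app.
# Every replica starts with the same application.id, so the Streams
# library spreads the input topics' partitions across the instances.
services:
  streams-app:                        # hypothetical service name
    image: myrepo/streams-app:latest  # hypothetical image
    environment:
      BOOTSTRAP_SERVERS: kafka:9092   # address of the Kafka broker
```

With this layout, `docker compose up --scale streams-app=10` runs 10
instances of the same image; a second, different app would be a second
service (and image), not a rebuild of this one.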

Best regards,
Mina

On Thu, Apr 27, 2017 at 12:03 PM, David Garcia <dav...@spiceworks.com>
wrote:

> Unlike Spark, you don’t need an entire framework to deploy your job.  With
> Kafka Streams, you just start up an application and go.  You don’t need
> Docker either…although containerizing your stuff is probably a good strategy
> for the purposes of deployment management (something you get with YARN or a
> Spark cluster)…but you’re not tied to any one framework (e.g. you can use
> Kubernetes, Mesos, YARN, or anything else).
>
> On 4/27/17, 10:52 AM, "Mina Aslani" <aslanim...@gmail.com> wrote:
>
>     Hi,
>
>     I created a Kafka Streams app and, as I was informed, I created a
>     Docker image with the app and launched it as a container. However,
>     I have a couple of questions:
>
>     - Would every Kafka Streams job require a new Docker image and
>     deployment of the container/service?
>     - How should I structure things differently if I had more than one
>     Kafka Streams app/job?
>     - What are the advantages of using Kafka Streams over Spark
>     Streaming? I'm asking because with Spark Streaming I don't need to
>     create and deploy a new Docker image every time I add or change an
>     app/job.
>
>     Best regards,
>     Mina
>
>