________________________________
From: Mich Talebzadeh <mich.talebza...@gmail.com>
Sent: Sunday, July 8, 2018 1:01 PM
To: users@kafka.apache.org
Subject: Re: Real time streaming as a microservice

Thanks Martin.

From an implementation point of view, do we need to introduce Docker for
each microservice? In other words, does it have to be artefact --> container
--> Docker for this to be a true microservice, with all these microservices
communicating through a Service Registry?
MG>for deployment, deploying through a Docker container would be the easiest means to test
MG>but first we would need to concentrate
MG>on your developing a micro-service first
MG>your development of a service registry (see the ZooKeeper sketch below)
MG>your development of a micro-services container which can look up the necessary endpoints
MG>since you pre-ordained Docker to be your deployment container I would suggest implementing OpenShift
https://www.openshift.org/
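
For the service-registry piece, a minimal sketch of registering the streaming-ingestion endpoint in ZooKeeper, assuming Apache Curator is on the classpath (the ensemble address, znode path and payload below are made up for illustration):

import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry
import org.apache.zookeeper.CreateMode

object RegisterStreamingService extends App {
  // connect to the ZooKeeper ensemble (address is an assumption for this example)
  val client = CuratorFrameworkFactory.newClient("zk1:2181,zk2:2181,zk3:2181",
    new ExponentialBackoffRetry(1000, 3))
  client.start()

  // advertise this instance as an ephemeral znode; it disappears if the service dies
  val payload = """{"host":"broker1","port":9092,"protocol":"kafka"}""".getBytes("UTF-8")
  client.create()
    .creatingParentsIfNeeded()
    .withMode(CreateMode.EPHEMERAL)
    .forPath("/services/streaming-ingest/instance-1", payload)

  Thread.sleep(Long.MaxValue)  // keep the process alive so the registration stays visible
}
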
Also, if we wanted to move from a monolithic classic design with Streaming
Ingestion (ZooKeeper, Kafka) --> Processing engine (Spark Streaming, Flink)
--> Real-time dashboard (anything built on something like D3) to
microservices, what would that entail?
MG>the simpler the function the better ... something like
MG>simple input ... the user enters 'foo'
MG>simple processing ... the process runs a Spark stream to determine what result responds to 'foo' (a sketch follows below)
MG>simple output ... the output will be the text 'bar', formatting to be decided (text/html/pdf?)
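
As a rough sketch of that foo/bar flow using Spark Structured Streaming reading from and writing back to Kafka (the topic names, broker address and checkpoint path below are made up for the example):

import org.apache.spark.sql.SparkSession

object FooBarStream extends App {
  val spark = SparkSession.builder.appName("FooBarStream").getOrCreate()
  import spark.implicits._

  // simple input: read requests such as 'foo' from a Kafka topic
  val requests = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "foo-requests")
    .load()
    .selectExpr("CAST(value AS STRING) AS request")
    .as[String]

  // simple processing: map 'foo' to 'bar'
  val replies = requests.map(req => if (req == "foo") "bar" else "unknown").toDF("value")

  // simple output: write the replies back to another Kafka topic
  val query = replies.writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("topic", "foo-replies")
    .option("checkpointLocation", "/tmp/foo-bar-checkpoint")
    .start()

  query.awaitTermination()
}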

Would one option be to have three principal microservices (each with
sub-services), one for each of these three components?
MG>concentrate on the simplest function, which would be _______________?
MG>shoehorn that simple function into a viable microservice (a minimal sketch follows below)
MG>the following inventory microservice from the Red Hat example shows how your ______? service
MG>can be incorporated into an OpenShift container
MG>and be readily deployable in a Docker container
MG>https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/


MG>the first step would involve knowing which simple function you need to deploy as a microservice?
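
For illustration, a minimal sketch of shoehorning such a simple function into a standalone service that a Docker or OpenShift container could expose, using only the JDK's built-in HTTP server (the port and path are arbitrary choices for this example):

import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
import java.net.InetSocketAddress

object BarLookupService extends App {
  val server = HttpServer.create(new InetSocketAddress(8080), 0)
  server.createContext("/lookup", new HttpHandler {
    override def handle(exchange: HttpExchange): Unit = {
      // the "simple processing": every request is answered with 'bar'
      val reply = "bar".getBytes("UTF-8")
      exchange.sendResponseHeaders(200, reply.length)
      val os = exchange.getResponseBody
      os.write(reply)
      os.close()
    }
  })
  server.start() // the container only needs to expose port 8080
}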

Regards,

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 8 Jul 2018 at 13:58, Martin Gainty <mgai...@hotmail.com> wrote:

>
>
> initial work on using Zookeeper as a Microservices container is here
>
> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
>
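> As a rough sketch of the discovery side, assuming Apache Curator and that
> services register themselves as ephemeral znodes under a path such as
> /services/streaming-ingest (the path and ensemble address here are made up
> for illustration), a client could look the registered endpoints up like this:
>
> import org.apache.curator.framework.CuratorFrameworkFactory
> import org.apache.curator.retry.ExponentialBackoffRetry
> import scala.collection.JavaConverters._
>
> object DiscoverStreamingService extends App {
>   val client = CuratorFrameworkFactory.newClient("zk1:2181",
>     new ExponentialBackoffRetry(1000, 3))
>   client.start()
>
>   // list the registered instances and print each endpoint's payload
>   val base = "/services/streaming-ingest"
>   client.getChildren.forPath(base).asScala.foreach { id =>
>     val data = client.getData.forPath(s"$base/$id")
>     println(s"$id -> ${new String(data, "UTF-8")}")
>   }
> }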
>
> once your Zookeeper Microservices container is operational,
> you would need to 'tweak' Kafka to extend and implement classes/interfaces
> to become a true microservices component ... this may help
>
>
> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
>
>
>
> let me know if i can help out
> Martin
>
>
> ________________________________
> From: Jörn Franke <jornfra...@gmail.com>
> Sent: Sunday, July 8, 2018 6:18 AM
> To: users@kafka.apache.org
> Cc: u...@flink.apache.org
> Subject: Re: Real time streaming as a microservice
>
> Yes or Kafka will need it ...
> As soon as you orchestrate different microservices this will happen.
>
>
>
> > On 8. Jul 2018, at 11:33, Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
> >
> > Thanks Jorn.
> >
> > So I gather as you correctly suggested, microservices do provide value in
> > terms of modularisation. However, there will always "inevitably" be
> > scenarios where the receiving artefact, say Flink, needs communication
> > protocol changes?
> >
> > thanks
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
> >
> >
> >
> >
> >> On Sun, 8 Jul 2018 at 10:25, Jörn Franke <jornfra...@gmail.com> wrote:
> >>
> >> That they are loosely coupled does not mean they are independent. For
> >> instance, you would not be able to replace Kafka with ZeroMQ in your
> >> scenario. Unfortunately, Kafka also sometimes needs to introduce breaking
> >> changes, and the dependent application needs to upgrade.
> >> You will not be able to avoid these scenarios in the future (this is only
> >> possible if microservices don’t communicate with each other or if they
> >> would never need to change their communication protocol - pretty
> >> impossible). However, there are ways to reduce the impact, e.g. Kafka could
> >> reduce the number of breaking changes, or you can develop a very lightweight
> >> microservice that is very easy to change and that only deals with the
> >> broker integration and your application etc.
> >>
> >>> On 8. Jul 2018, at 10:59, Mich Talebzadeh <mich.talebza...@gmail.com>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have created the Kafka messaging architecture as a microservice that
> >>> feeds both Spark Streaming and Flink. Spark Streaming uses micro-batches,
> >>> meaning "collect and process data", whereas Flink uses an event-driven
> >>> architecture (a stateful application that reacts to incoming events by
> >>> triggering computations, etc.).
> >>>
> >>> According to Wikipedia, a microservice architecture is a technique that
> >>> structures an application as a collection of loosely coupled services. In
> >>> a microservices architecture, services are fine-grained and the protocols
> >>> are lightweight.
> >>>
> >>> OK, for streaming data, among other things I have to create and configure
> >>> a topic (or topics), design a robust ZooKeeper ensemble, and create Kafka
> >>> brokers with scalability and resiliency. Then I can offer the streaming as
> >>> a microservice to subscribers, among them Spark and Flink. I can upgrade
> >>> this microservice component in isolation without impacting either Spark or
> >>> Flink.
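> >>>
> >>> (As a rough illustration of the topic-creation step, a sketch using Kafka's
> >>> AdminClient; the broker address, topic name, partition count and replication
> >>> factor below are just example values:)
> >>>
> >>> import java.util.{Collections, Properties}
> >>> import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}
> >>>
> >>> object CreateTopic extends App {
> >>>   val props = new Properties()
> >>>   props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092")
> >>>   val admin = AdminClient.create(props)
> >>>   // create the 'md' topic with 6 partitions and replication factor 3
> >>>   admin.createTopics(Collections.singleton(new NewTopic("md", 6, 3.toShort))).all().get()
> >>>   admin.close()
> >>> }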
> >>>
> >>> The problem I face here is the dependency of Flink etc. on the jar files
> >>> specific to the version of Kafka deployed. For example, kafka_2.12-1.1.0
> >>> is built on Scala 2.12 and Kafka version 1.1.0. To make this work in a
> >>> Flink 1.5 application, I need to use the correct dependencies in the sbt
> >>> build. For example:
> >>>
> >>> libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-0.11" % "1.5.0"
> >>> libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-base" % "1.5.0"
> >>> libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
> >>> libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.11.0.0"
> >>> libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % "1.5.0"
> >>> libraryDependencies += "org.apache.kafka" %% "kafka" % "0.11.0.0"
> >>>
> >>> and the Scala code needs to change:
> >>>
> >>> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
> >>> …
> >>> val stream = env
> >>>   .addSource(new FlinkKafkaConsumer011[String]("md", new SimpleStringSchema(), properties))
> >>>
> >>> So in summary, some changes need to be made on the Flink side to be able
> >>> to interact with the new version of Kafka. And, more importantly, can one
> >>> really use an abstract notion of a microservice here?
> >>>
> >>> Dr Mich Talebzadeh
> >>>
> >>>
> >>>
> >>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >>>
> >>>
> >>>
> >>> http://talebzadehmich.wordpress.com
> >>>
> >>>
> >>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
> any
> >>> loss, damage or destruction of data or any other property which may
> arise
> >>> from relying on this email's technical content is explicitly
> disclaimed.
> >>> The author will in no case be liable for any monetary damages arising
> >> from
> >>> such loss, damage or destruction.
> >>
>
