Hi, Chen Qin

We have also encountered your end-to-end use case. An RPC source and sink,
such as a Netty source/sink, can fit such requirements. I have submitted a
Netty module to the bahir-flink project, which is only a demo.
If we use a connector source instead of Kafka, how do we make the data
persistent? One choice is the DistributedLog project developed by Twitter.
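
For illustration, here is a minimal sketch (plain Java, not the actual
bahir-flink API) of what such an RPC-style source looks like: a source
function that listens on a TCP port and emits each received line as a
stream record. The real module would use Netty, and could append each
record to DistributedLog before emitting it to make the data durable; that
part is left out here.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class RpcSourceDemo {

    // Simplified stand-in for a Netty receiver source: one blocking server socket.
    public static class TcpLineSource implements SourceFunction<String> {
        private final int port;
        private volatile boolean running = true;

        public TcpLineSource(int port) { this.port = port; }

        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            try (ServerSocket server = new ServerSocket(port)) {
                while (running) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()))) {
                        String line;
                        while (running && (line = in.readLine()) != null) {
                            ctx.collect(line);  // each pushed line becomes a stream record
                        }
                    }
                }
            }
        }

        @Override
        public void cancel() { running = false; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> events = env.addSource(new TcpLineSource(9999));
        events.print();  // replace with the real transformation and sink
        env.execute("rpc-source-demo");
    }
}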

The idea of a microservice is very good. The Play Framework is a better
choice for providing a microservice around Flink than the Flink monitor,
which is implemented with Netty.
Submit the Flink job to the Mesos cluster, deploy the microservice to the
same Mesos cluster via Marathon at the same time, and enable mesos-dns for
service discovery.
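
On the mesos-dns part, a quick hedged example: Marathon apps become
resolvable under the marathon.mesos domain, so the Flink job (or anything
else in the cluster) can find the gateway by name. "flink-gateway" below is
a hypothetical Marathon app id.

import java.net.InetAddress;

public class DiscoveryDemo {
    public static void main(String[] args) throws Exception {
        // mesos-dns exposes Marathon apps as <app-id>.marathon.mesos
        InetAddress gateway = InetAddress.getByName("flink-gateway.marathon.mesos");
        System.out.println("gateway resolved to " + gateway.getHostAddress());
    }
}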

The microservice can then be an API gateway (a rough sketch follows the
list) that:
1. receives data from the device
2. sends the data to the Flink job's source (a Netty source backed by
DistributedLog)
3. at the same time, the sink sends the streaming result data back to the
API gateway
4. supports streaming invocation: the gateway sends the sink's result data
to the device channel
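
Roughly, the gateway's data path could look like the following sketch
(hypothetical names, no Play or Netty plumbing shown): it forwards a device
event to the job's Netty source, and completes the waiting device request
when the sink pushes back a result carrying the same request id.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class ApiGatewaySketch {
    // Device requests waiting for the sink's result, keyed by request id.
    private final ConcurrentHashMap<String, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // Steps 1 + 2: receive data from the device and push it to the Flink Netty source.
    public CompletableFuture<String> handleDeviceEvent(
            String requestId, String payload, String sourceHost, int sourcePort) throws Exception {
        CompletableFuture<String> result = new CompletableFuture<>();
        pending.put(requestId, result);
        try (Socket toSource = new Socket(sourceHost, sourcePort);
             PrintWriter out = new PrintWriter(toSource.getOutputStream(), true)) {
            out.println(requestId + "," + payload);  // the job carries the id through the pipeline
        }
        return result;  // completed later, when the sink pushes the result back
    }

    // Steps 3 + 4: the Flink sink connects back here; each "id,result" line completes
    // the matching device request, which the caller then streams to the device channel.
    public void acceptSinkResults(int port) throws Exception {
        try (ServerSocket server = new ServerSocket(port);
             Socket sink = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(sink.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split(",", 2);
                CompletableFuture<String> waiter = parts.length == 2 ? pending.remove(parts[0]) : null;
                if (waiter != null) {
                    waiter.complete(parts[1]);
                }
            }
        }
    }
}

In a real deployment the sink connection would reconnect and the pending map
would need timeouts; the sketch only shows the correlation idea.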

So this plan lets the end user invoke the service synchronously, without
having to care about the Flink job's data processing.

By the way, "X as a Service" is usually called SaaS/PaaS on cloud platforms
such as AWS/Azure. We can call this one a Flink microservice. :)

Best Regards
Jinkui Shi

On 2017/3/14 at 2:13 PM, "Chen Qin" <qinnc...@gmail.com> wrote:

>Hi there,
>
>I am very happy about the Flink 1.2 release. It is much more robust and
>feature rich compared to previous versions. In the following section, I
>would like to discuss an atypical use case in the Flink community.
>
>With the ever increasing popularity of microservices[1] for scaling out
>popular online services, the various aspects of the source of truth are
>stored (i.e. partitioned) behind various service RPC endpoints. There is a
>general need to manage event traversal and enrichment throughout an
>organization's SOA systems. This is no longer only part of the data
>infrastructure scope, which is traditionally batched, slow, and analytic
>(where a small percentage of loss is okay). Flink might find a fit in core
>services as well.
>
>It becomes part of online production services: serving directly from
>mobile client events and, more importantly, from services' database
>post-commit logs, and orchestrating ad hoc stream topologies to transform
>and transfer data between online services (usually backed by databases and
>serving request/response traffic with stringent latency requirements).
>
>Use case:
>User updates come from the mobile client via a Kafka topic, which is
>consumed both by the user service and by the streaming job. When the
>streaming job makes an RPC call to enrich the user information, it hits a
>race condition, because database persistence turns out not to be as fast
>as the streaming job.
>
>In general, the streaming job should consume the user service's commit
>logs instead of the Kafka topic, since the commit logs are the source of
>truth for user information. Is there a general way to cope with these
>issues?
>
>P.S. I was able to build the task manager as a jar package and deploy it
>to the production environment, instead of using YARN to manage warehouse
>machines, and to use the same Docker-based deployment environment as other
>online services. So far, it seems to be running smoothly.
>
>Thanks,
>Chen
>
>
>[1] https://en.wikipedia.org/wiki/Microservices
>[2] https://martinfowler.com/eaaDev/EventSourcing.html
