Hi Paolo,
  for use cases and a high-level overview, you can read
https://www.confluent.io/blog/iot-streaming-use-cases-with-kafka-mqtt-confluent-and-waterstream/
(disclaimer: this is marketing material from a vendor, but it gives a good
high-level view of real cases).

from your other question:

> I'm wondering whether Kafka could be the right choice to replace the
> database polling and what benefits it would bring.

Removing the database completely probably doesn't make sense, but you
could build a pipeline like: device -> mqtt broker -> kafka -> [pre-process
your data when needed] -> database
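To give a rough idea of the pre-processing step, here is a plain Python sketch of the kind of per-sensor aggregation you described (min/max/average). All the names here (SensorReading, aggregate) are made up for illustration; in practice this logic would live in a Kafka consumer or a Kafka Streams application, and out-of-order arrivals are handled naturally because the aggregates don't depend on message order:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    serial_number: str   # device serial, mapped to an internal sensor id later
    timestamp: int       # readings may arrive out of order; aggregation is order-independent
    value: float

def aggregate(readings):
    """Group a batch of readings by serial number and compute min/max/avg."""
    stats = {}
    for r in readings:
        s = stats.setdefault(r.serial_number,
                             {"min": r.value, "max": r.value, "sum": 0.0, "count": 0})
        s["min"] = min(s["min"], r.value)
        s["max"] = max(s["max"], r.value)
        s["sum"] += r.value
        s["count"] += 1
    # reduce to the values you would write to the database
    return {sn: {"min": s["min"], "max": s["max"], "avg": s["sum"] / s["count"]}
            for sn, s in stats.items()}

batch = [SensorReading("A1", 100, 2.0),
         SensorReading("A1", 90, 4.0),   # older reading, delivered late
         SensorReading("B7", 101, 1.5)]
print(aggregate(batch))
```

In a Streams topology this would typically be a groupByKey + aggregate over a time window, with the result written to the database by a sink connector instead of a polling job.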

would that help?

-- Pere


On Thu, Feb 23, 2023 at 7:13 AM paolo francia <paolo.francia1...@gmail.com>
wrote:

> Hello,
> I'd like to ask whether there are cases/examples in which Kafka has been
> used in the backend of an ingestion pipeline for IoT data, with the purpose
> of making it scalable.
>
> Briefly, my setup looks like this:
> - a web API waits for data from IoT devices (10 million messages expected
> per day). The data are not ordered in time; we may receive old data that a
> device couldn't send earlier.
> - the data are then stored in a database to be processed
> - a database job pulls and processes the data (computing the average, min,
> max, over/under quota, adding the internal sensor ID from the serial
> number, ...)
>
> I'm wondering whether Kafka could be the right choice to replace the
> database polling and what benefits it would bring.
> I'd really appreciate an example or case study.
>
> Thank you very much
> Paolo
>


-- 
Pere Urbon-Bayes
Software Architect
https://twitter.com/purbon
https://www.linkedin.com/in/purbon/
