Depends on your requirements, but I like the Firebase idea as a way to avoid almost the whole "back-end effort". Also as-a-Service:
https://www.pubnub.com/
https://cloud.google.com/pubsub/
Redis-as-a-Service: https://www.quora.com/What-are-the-most-economic-redis-cloud-hosting-service-providers
Amazon Kinesis would also work. Anything really that would "outsource" the initial effort until you're ready to commit to Kafka.

Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact <http://sematext.com/about/contact.html>
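The "outsource the pub/sub now, commit to Kafka later" approach suggested above is easiest when the web app talks to messaging through one narrow interface that a hosted service can implement today and Kafka can implement later. Below is a minimal sketch in Java; the interface and method names are invented for illustration and do not come from this thread.

import java.util.function.Consumer;

/**
 * A thin seam between the web app and whichever pub/sub backs it today
 * (PubNub, Google Pub/Sub, a hosted Redis, Kinesis, ...). Swapping in a
 * Kafka-backed implementation later only touches the implementing class,
 * not the callers.
 */
public interface ChangeEventBus {

    /** Publish one change event (e.g. a JSON patch) for a document or room. */
    void publish(String channel, String changeEvent);

    /**
     * Subscribe a connected client; the handler is invoked for every new
     * event on the channel. Closing the returned handle unsubscribes.
     */
    AutoCloseable subscribe(String channel, Consumer<String> handler);
}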
On Tue, Mar 22, 2016 at 1:34 AM, Mark van Leeuwen <m...@vl.id.au> wrote:

> Thanks for your reply Marko.
>
> Do you have any simpler products in mind which might fit the requirements?
>
> From what I could see, the promise of Kafka Streams is a reduction in
> engineering effort compared to what has been required with Kafka in the
> past. But I'm only going off the blog - not from experience.
>
> Cheers.
>
> On 22/03/16 05:12, Marko Bonaći wrote:
>
>> Given your requirements, I think the most important question here is
>> *volume*. How many clients/events per day do you expect?
>> Unless you expect a huge amount of events right away, I would suggest
>> that you start with a minimum viable product (including pub/sub), since
>> it wouldn't be too hard to later replace whatever you choose to use with
>> Kafka, once such a need arises.
>> It's just that Kafka, being open source, requires a bit more engineering
>> effort than some other products.
>> But if you're willing to tackle a bug or two and don't mind sparing a
>> bit more engineering time, go ahead and use Kafka regardless of the load.
>>
>> Marko Bonaći
>> Monitoring | Alerting | Anomaly Detection | Centralized Log Management
>> Solr & Elasticsearch Support
>> Sematext <http://sematext.com/> | Contact <http://sematext.com/about/contact.html>
>>
>> On Mon, Mar 21, 2016 at 6:25 PM, Ben Stopford <b...@confluent.io> wrote:
>>
>>> It sounds like a fairly typical pub-sub use case where you'd likely be
>>> choosing Kafka because of its scalable data retention and built-in
>>> fault tolerance. As such it's a reasonable choice.
>>>
>>>> On 21 Mar 2016, at 17:07, Mark van Leeuwen <m...@vl.id.au> wrote:
>>>>
>>>> Hi Sandesh,
>>>>
>>>> Thanks for the suggestions. I've looked at them now :-)
>>>>
>>>> The core problem that needs to be solved with my app is keeping a full
>>>> replayable history of changes, transmitting the latest state to web
>>>> apps when they start, then keeping them in sync with the latest state
>>>> as changes are made by all current clients, preferably without polling.
>>>> That's why keeping track of offsets with each client seemed the way to go.
>>>>
>>>> Not sure how stream processing engines help with that - but happy to
>>>> be advised otherwise.
>>>>
>>>> Cheers.
>>>>
>>>> On 22/03/16 02:35, Sandesh Hegde wrote:
>>>>
>>>>> Hello Mark,
>>>>>
>>>>> Have you looked at one of the streaming engines like Apache Apex or
>>>>> Flink?
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Mon, Mar 21, 2016 at 7:56 AM Gerard Klijs <gerard.kl...@dizzit.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Mark,
>>>>>>
>>>>>> I don't think it would be a good solution, with the latencies to and
>>>>>> from the server you're running on in mind. This is less of a problem
>>>>>> if your app is mainly used in one region.
>>>>>>
>>>>>> I recently went to a Firebase event, and it seems a lot more fitting.
>>>>>> It also allows users to see their own changes in real time, provides
>>>>>> several authentication options, and has servers world-wide.
>>>>>>
>>>>>> On Mon, Mar 21, 2016 at 7:53 AM Mark van Leeuwen <m...@vl.id.au>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm soon to begin design and dev of a collaborative web app where
>>>>>>> changes made by one user should appear to other users in near real
>>>>>>> time.
>>>>>>>
>>>>>>> I'm new to Kafka, but having read a bit about Kafka Streams I'm
>>>>>>> wondering if it would be a good solution. Change events produced by
>>>>>>> one user would be published to multiple consumer clients over a
>>>>>>> websocket, each having their own offset.
>>>>>>>
>>>>>>> Would this be viable?
>>>>>>>
>>>>>>> Are there any considerations I should be aware of?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Mark
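To make the pattern Mark describes concrete, one consumer per websocket client that replays the change topic from the beginning to rebuild current state and then streams new events, here is a rough sketch against the plain kafka-clients consumer API (a recent client version is assumed for poll(Duration)). The topic name "changes", the bootstrap address and the sendToClient helper are illustrative assumptions, not details from this thread.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ChangeFeedForOneClient implements Runnable {

    private final String clientId;
    private volatile boolean running = true;

    public ChangeFeedForOneClient(String clientId) {
        this.clientId = clientId;
    }

    @Override
    public void run() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Each websocket client tracks its own position, so no group offsets are committed.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Assign all partitions of the change topic directly instead of joining a
            // consumer group, so this client's offsets are independent of other clients'.
            List<TopicPartition> partitions = consumer.partitionsFor("changes").stream()
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toList());
            consumer.assign(partitions);

            // Replay the full retained history first so the client can rebuild current state...
            consumer.seekToBeginning(partitions);

            // ...then keep polling; new change events stream out as they are produced.
            while (running) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    sendToClient(clientId, record.value());
                }
            }
        }
    }

    public void stop() {
        running = false;
    }

    // Hypothetical helper: push one change event down this client's websocket,
    // e.g. session.getAsyncRemote().sendText(changeEventJson) in a JSR-356 container.
    private void sendToClient(String clientId, String changeEventJson) {
    }
}

Whether one broker connection per connected websocket client holds up at the expected number of clients is exactly the volume question Marko raises earlier in the thread.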