Hi,

As I understand it, you are trying to create an operational data store from your upstream transactional database(s)?
Do you have stats on the rate of DML in the primary source? Those inserts/updates/deletes will need to pass to Kafka as messages. Beyond what Kafka itself can handle (which depends largely on the architecture you have created and on your Kafka cluster plus Zookeeper ensemble), you will be constrained by the rate at which you are allowed to extract data from the mainframe, and by your shared network. Also, for Kafka and Zookeeper you may want to consider deploying these as microservices.

HTH,

Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

*Disclaimer:* Use it at your own risk. Any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on this email's technical content is explicitly disclaimed. The author will in no case be liable for any monetary damages arising from such loss, damage or destruction.

On Wed, 12 Sep 2018 at 08:32, Chanchal Chatterji <chanchal.chatte...@infosys.com> wrote:

> Hi,
>
> In the process of mainframe modernization, we are attempting to stream
> mainframe data to AWS Cloud using Kafka. We are planning to use the Kafka
> Producer API on the mainframe side and the Connector API on the cloud side.
> Our data is processed by a module called 'Central Dispatch' located on the
> mainframe, and is then sent to Kafka. We want to know what rate of volume
> Kafka can handle. The other end of Kafka is connected to an AWS S3 bucket
> as the sink. Please help us with this information, or else please connect
> us with a relevant person who can help us understand this.
>
> Thanks and regards,
>
> Chanchal Chatterji
> Principal Consultant,
> Infosys Ltd.
> Electronic City Phase-1,
> Bangalore-560100
> Contacts: 9731141606 / 8105120766
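P.S. Until you have the real DML stats, the sizing question can at least be framed as a back-of-envelope calculation. A minimal sketch follows; the DML rate, average record size, and replication factor below are hypothetical placeholders, not measurements, so substitute your own numbers:

```python
# Back-of-envelope throughput sizing for a CDC-style Kafka pipeline.
# ALL numbers below are HYPOTHETICAL placeholders -- replace them with
# the measured DML rate and record size from your primary source.

dml_rate_per_sec = 20_000    # inserts/updates/deletes per second (assumed)
avg_record_bytes = 1_024     # average serialized change record (assumed)
replication_factor = 3       # a common Kafka replication setting (assumed)

# Raw ingest rate the producers must sustain across the shared network.
ingest_mb_per_sec = dml_rate_per_sec * avg_record_bytes / 1_000_000

# Traffic inside the cluster is multiplied by the replication factor.
cluster_mb_per_sec = ingest_mb_per_sec * replication_factor

print(f"Producer ingest: ~{ingest_mb_per_sec:.1f} MB/s")
print(f"Cluster-wide (with replication): ~{cluster_mb_per_sec:.1f} MB/s")
```

Whichever figure is smallest in practice — what the brokers can absorb, the rate you are permitted to extract from the mainframe, or the shared-network bandwidth — will be the effective ceiling for the pipeline.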