I can't really answer your question directly, but you don't mention your network layout/hardware. You may want to add that as a data point in your decision (you wouldn't want to overrun the network device(s) on the brokers).
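For a rough sense of what that traffic looks like on the wire, here is a back-of-envelope sketch. The only number taken from the question below is the 4-5 TB/day estimate; the replication factor of 3 and the 5-broker cluster size are my own assumptions, so substitute whatever you actually plan to run:

public class BrokerBandwidthSketch {
    public static void main(String[] args) {
        double bytesPerDay = 5e12;                      // 5 TB/day, upper end of the estimate
        double ingestMBps = bytesPerDay / 86400 / 1e6;  // ~58 MB/s into the cluster
        int replicationFactor = 3;                      // assumed, not stated in the thread
        int brokers = 5;                                // assumed cluster size
        // Each broker takes its share of producer traffic plus replica fetch traffic.
        double perBrokerInMBps = ingestMBps * replicationFactor / brokers;
        System.out.printf("cluster ingest: %.1f MB/s%n", ingestMBps);
        System.out.printf("per-broker inbound: %.1f MB/s (~%.0f Mb/s)%n",
                perBrokerInMBps, perBrokerInMBps * 8);
    }
}

With those assumed numbers that works out to roughly 35 MB/s inbound per broker before consumer reads and traffic bursts, which is comfortable on 10 GbE and tighter on 1 GbE. The fewer, bigger brokers you run, the more of that load lands on each NIC, which is why the network hardware belongs in the comparison.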
On Wed, Jan 13, 2016 at 7:09 PM, Vladoiu Catalin <vladoiu.cata...@gmail.com> wrote:
> Hi guys,
>
> I've run into a long conversation with my colleagues when we discussed the
> size of the brokers for our new Kafka cluster, and we still haven't reached
> a final conclusion.
>
> Our main concern is the size of the requests, 10-20 MB per request (the
> producer will send big requests), maybe more, and we estimate that we will
> have 4-5 TB per day.
>
> Our debate is between:
> 1. Having a smaller cluster (not so many brokers) but with a big config,
> something like this: Disk: 11 x 4 TB, CPU: 48 cores, RAM: 252 GB. We chose
> this configuration because our Hadoop cluster has that config and can
> easily handle that amount of data.
> 2. Having a bigger number of brokers but with a smaller broker config.
>
> I was hoping that somebody with more experience in using Kafka could
> advise on this.
>
> Thanks,
> Catalin
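One more thing that applies to either hardware option above: with 10-20 MB requests, the default limits (around 1 MB on both the producer and the broker) will reject your messages, so the cluster has to be configured for large messages regardless of broker count. A minimal producer sketch using the standard Java client, with a placeholder broker address and topic name, and a 20 MB limit chosen purely to match the numbers in the question:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LargeMessageProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // placeholder address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Producer default max.request.size is about 1 MB; raise it for 20 MB requests.
        props.put("max.request.size", Integer.toString(20 * 1024 * 1024));
        // The brokers need matching limits in server.properties:
        //   message.max.bytes=20971520
        //   replica.fetch.max.bytes=20971520
        // and consumers need max.partition.fetch.bytes at least that large.
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", new byte[10 * 1024 * 1024]));
        }
    }
}

Large messages also put pressure on broker heap and page cache, so whichever configuration you pick, it is worth load-testing with realistic request sizes rather than sizing on daily volume alone.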