Thanks all,

OK, so it's per broker. As you say, it really depends on the use case. Even if
it seems huge to me, it will really depend on the usage within each IS

and on the throughput needed to carry out a project.


Thanks again,


Adrien

________________________________
From: Svante Karlsson <svante.karls...@csi.se>
Sent: Thursday, 1 March 2018 19:09:52
To: users@kafka.apache.org
Subject: Re: Hardware Guidance

It's per broker. Usually you run with 4-6GB of Java heap. The rest is used
as disk cache, and it's more that 64GB seems to be a sweet spot between
memory cost and performance.
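
As a minimal sketch of that split (my own illustration, not from the slides,
assuming the stock Kafka start scripts, which honor KAFKA_HEAP_OPTS):

  # Illustrative only: pin the broker JVM to a ~6GB heap and leave the
  # remaining RAM on the machine to the OS page cache, which Kafka
  # relies on heavily for reads of recently written data.
  export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
  bin/kafka-server-start.sh config/server.properties

So on a 64GB box you'd end up with roughly 6GB of heap and most of the rest
available to the page cache.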

/svante

2018-03-01 18:30 GMT+01:00 Michal Michalski <michal.michal...@zalando.ie>:

> I'm quite sure it's per broker (that's the standard way to provide
> recommendations on node sizes in systems like Kafka), but you should
> definitely read it in the context of the data size and traffic the cluster
> has to handle. I didn't read the presentation, so I'm not sure if it contains
> such information (if it doesn't, maybe the video does?), but this context
> is necessary to size Kafka properly (and that includes cost efficiency). To
> put that in perspective: in the past I've run a small Kafka cluster on AWS
> m4.xlarge instances with no issues (a low number of terabytes stored in
> total, low single-digit thousands of messages produced per second at peak)
> - I actually think it was oversized for that use case.
>
> On 1 March 2018 at 17:09, adrien ruffie <adriennolar...@hotmail.fr> wrote:
>
> > Hi all,
> >
> >
> > on slide 5 of the following link:
> >
> > https://fr.slideshare.net/HadoopSummit/apache-kafka-best-practices/1
> >
> >
> >
> > The "Memory" mentions that "24GB+ (for small) and 64GB+ (for large)"
> Kafka
> > Brokers
> >
> > but is it 24 or 64 GB spread over all brokers ? Or 24 GB for example for
> > each broker ?
> >
> >
> > Thank you very much,
> >
> >
> > and best regards,
> >
> >
> > Adrien
> >
>
