The memory that a Kafka broker uses is the Java heap plus the OS page cache. If you're able to split your memory metrics into memory-used and memory-cached, you should see that the majority of a broker's memory usage is cached memory.
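As a rough illustration of that used/cached split (a minimal sketch, assuming a Linux host where /proc/meminfo is available; the "used = total - free - cached" breakdown is one common dashboard convention, not the only one):

```python
# Minimal sketch: split memory into "used" and "cached" on a Linux host.
# Assumes /proc/meminfo is readable (Linux-specific).

def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a dict of field name -> value in kB."""
    info = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # values are reported in kB
    return info

def memory_split_mb(info):
    """Return (used_mb, cached_mb) the way many dashboards chart them."""
    cached = info["Cached"] + info.get("Buffers", 0)
    used = info["MemTotal"] - info["MemFree"] - cached
    return used // 1024, cached // 1024

if __name__ == "__main__":
    used, cached = memory_split_mb(read_meminfo())
    print(f"used: {used} MB, cached: {cached} MB")
```

On a busy broker you would expect the cached number to dwarf the used number, since the heap is typically a few GB while the page cache takes everything left over.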
As a broker receives data from producers, the data first enters the page cache. If you have consumers reading this data as it comes in, they will be able to read it directly from the page cache and avoid the latency of having to fetch it from disk. By default, Kafka leaves it to the OS to flush this data to disk. The page cache can only use free memory (memory not committed to other processes), and it will relinquish memory to other processes that need it. On the metrics dashboards that I've built for my clusters, I always include cached and used memory charts, as well as disk read and write charts. If I see too many disk reads, I know I have lagging consumers.

-- Peter (from phone)

> On Apr 12, 2019, at 8:20 PM, Rammohan Vanteru <ramz.moha...@gmail.com> wrote:
>
> Hi Steve,
>
> We are using the Prometheus JMX exporter and Prometheus to scrape the
> memory metrics we are measuring.
>
> JMX exporter:
> https://github.com/prometheus/jmx_exporter/blob/master/README.md
>
> Thanks,
> Ramm.
>
> On Fri, Apr 12, 2019 at 12:43 PM Steve Howard <steve.how...@confluent.io>
> wrote:
>
>> Hi Rammohan,
>>
>> How are you measuring "Kafka seems to be reserving most of the memory"?
>>
>> Thanks,
>>
>> Steve
>>
>> On Thu, Apr 11, 2019 at 11:53 PM Rammohan Vanteru <ramz.moha...@gmail.com>
>> wrote:
>>
>>> Hi Users,
>>>
>>> As per the article here:
>>> https://docs.confluent.io/current/kafka/deployment.html#memory the memory
>>> requirement is roughly calculated with the formula: write throughput * 30
>>> (buffer time in seconds), which fits our experiment, i.e. 30 MB/s * 30 ≈
>>> 900 MB. Follow-up questions:
>>>
>>> - How do we estimate the figure of '30 seconds' for our buffer time?
>>> - Kafka seems to be reserving most of the memory even when none of the
>>> messages are being streamed. How does that play out?
>>>
>>> Any information provided would be helpful.
>>>
>>> Thanks,
>>> Rammohan.
>>
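The sizing rule quoted from the Confluent docs above can be sketched as a quick calculation (a minimal sketch; the 30-second buffer window is exactly the assumption the thread is asking about, not a hard rule):

```python
# Rough page-cache sizing from the Confluent deployment docs:
#   memory needed ≈ write throughput * buffer time in seconds.

def page_cache_estimate_mb(write_throughput_mb_s, buffer_seconds=30):
    """Estimate the page-cache memory (MB) needed to buffer writes."""
    return write_throughput_mb_s * buffer_seconds

# The thread's example: 30 MB/s of producer writes with a 30 s buffer.
print(page_cache_estimate_mb(30))  # -> 900, matching the ~900 MB above
```

The buffer time is essentially how far behind a consumer can lag and still be served from the page cache rather than from disk, which is why the right value depends on your consumers, not just your producers.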