Hi,

I sometimes use the CSV metrics reporter to get metrics from Kafka with
more precision, and without the hassle of configuring our central metrics
system to fetch them from JMX, especially for one-off tests.
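For reference, this is roughly how I enable it in the broker config (property names as I recall them from the 0.8 server.properties; the directory path is just my own choice):

```properties
# Enable the Yammer CSV metrics reporter on the broker
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
kafka.csv.metrics.reporter.enabled=true
# Directory where one .csv file per metric is written
kafka.csv.metrics.dir=/tmp/kafka/metrics
# How often metrics are sampled, in seconds
kafka.metrics.polling.interval.secs=5
```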

However, every time I start a broker, I get errors from Kafka like this
one:

java.io.IOException: Unable to create /tmp/kafka/metrics/NumDelayedRequests.csv
	at com.yammer.metrics.reporting.CsvReporter.createStreamForMetric(CsvReporter.java:141)
	at com.yammer.metrics.reporting.CsvReporter.getPrintStream(CsvReporter.java:257)
	at com.yammer.metrics.reporting.CsvReporter.access$000(CsvReporter.java:22)
	at com.yammer.metrics.reporting.CsvReporter$1.getStream(CsvReporter.java:156)
	at com.yammer.metrics.reporting.CsvReporter.processGauge(CsvReporter.java:229)
	at com.yammer.metrics.reporting.CsvReporter.processGauge(CsvReporter.java:22)
	at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
	at com.yammer.metrics.reporting.CsvReporter.run(CsvReporter.java:163)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:722)

If I manually delete NumDelayedRequests.csv, I get the same error for
PurgatorySize.csv. After deleting the latter as well, the errors are gone.

Also, until I delete both files a first time, the set of CSV files created
is limited to the following:

ls -l /tmp/kafka/metrics/
total 36
-rw-rw-r-- 1 mlabbe mlabbe 119 Nov 10 22:53 AllExpiresPerSecond.csv
-rw-rw-r-- 1 mlabbe mlabbe 119 Nov 10 22:53 IsrExpandsPerSec.csv
-rw-rw-r-- 1 mlabbe mlabbe 119 Nov 10 22:53 ISRShrinksPerSec.csv
-rw-rw-r-- 1 mlabbe mlabbe  27 Nov 10 22:53 NumDelayedRequests.csv
-rw-rw-r-- 1 mlabbe mlabbe  27 Nov 10 22:53 Processor-0-ResponseQueueSize.csv
-rw-rw-r-- 1 mlabbe mlabbe  27 Nov 10 22:53 Processor-1-ResponseQueueSize.csv
-rw-rw-r-- 1 mlabbe mlabbe  27 Nov 10 22:53 PurgatorySize.csv
-rw-rw-r-- 1 mlabbe mlabbe  27 Nov 10 22:53 RequestQueueSize.csv
-rw-rw-r-- 1 mlabbe mlabbe 119 Nov 10 22:53 UncleanLeaderElectionsPerSec.csv

After that, a bunch of other files appear, which I would expect to happen
initially.

I never really cared about those two metrics, so my workaround is to delete
both files once, after which I get access to the rest of the metrics I
need. I verified with jconsole that this does not affect JMX (which, I
suppose, would have been reported long ago if it were a problem there).

I never knew exactly what the cause was, but after looking a bit at the
purgatory code, I figure it is possibly because there are two purgatories
derived from the same base class, in which the metric is created.
Especially since the files aren't prefixed with Fetch/Produce.
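To illustrate what I suspect is going on (this is a sketch, not the actual CsvReporter or purgatory code): the reporter creates one CSV file per metric *name*, and java.io.File.createNewFile() returns false when the file already exists. So if two purgatory instances both register an unprefixed "PurgatorySize" gauge, the second registration trips exactly this IOException:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of CsvReporter's file-per-metric behavior.
public class DuplicateMetricFileSketch {

    // Stand-in for CsvReporter.createStreamForMetric(): one .csv per metric name,
    // failing if the file already exists.
    static File createFileForMetric(File dir, String metricName) throws IOException {
        File csv = new File(dir, metricName + ".csv");
        if (!csv.createNewFile()) {
            throw new IOException("Unable to create " + csv.getAbsolutePath());
        }
        return csv;
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "metrics-sketch");
        dir.mkdirs();
        new File(dir, "PurgatorySize.csv").delete(); // start clean

        // Two purgatories sharing a base class both register "PurgatorySize"
        // (no Fetch/Produce prefix), so the reporter handles the name twice:
        createFileForMetric(dir, "PurgatorySize");      // first one: file created fine
        try {
            createFileForMetric(dir, "PurgatorySize");  // second one: file already exists
        } catch (IOException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```

That would also explain why deleting the file once "fixes" it: after the collision, only one of the duplicate registrations remains live.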

I have seen KAFKA-541, which talks about this. I am using the Kafka 0.8
branch (not much behind latest); Yammer metrics is still at 2.2.0.

Marc
