Hey All,
We've got a couple of consumers that are run outside of zookeeper. Is
there any way to list them using a Kafka client, metrics or anything?
Willing to put in a patch too if it's necessary, just please point me to
the code which holds the consumer/fetchRequest state.
Using Kafka 0.8.2.
Hey Jens,
I'm not sure I understand why increasing the session timeout is not an
option. Is the issue that there's too much uncertainty about processing
time to set an upper bound for each round of the poll loop?
One general workaround would be to move the processing into another thread.
For exam
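For reference, the hand-off pattern described above can be sketched without any Kafka classes (the `String` batch here is a stand-in for a polled `ConsumerRecords`, and the queue is just a way to observe completion): the poll loop only enqueues work, so each iteration stays fast and the consumer keeps heartbeating within the session timeout.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollHandOff {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final BlockingQueue<String> processed = new LinkedBlockingQueue<>();

    // Called from the poll loop: enqueue the batch and return immediately,
    // so the loop gets back to poll() well inside the session timeout.
    public void handOff(String batch) {
        worker.submit(() -> {
            // Slow per-record processing happens here, off the poll thread.
            processed.add("processed:" + batch);
        });
    }

    // Test hook: wait for the worker to finish one batch.
    public String awaitResult(long millis) {
        try {
            return processed.poll(millis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public void shutdown() {
        worker.shutdown();
    }
}
```

Note that in a real consumer you still have to coordinate offset commits with the worker thread, since a record is not safely "done" until the worker has processed it.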
Andrew,
Yes, in 0.8.2.x, we cleaned up the metric names to put meaningful attributes in
tags, instead of in the name. This makes it easier for monitoring
applications to parse those metric names. It seems that we need a
CSVMetricsReporter that can deal with tagged names better. Many people also
use jm
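Until such a reporter exists, one workaround is to sanitize the tagged names before they are used as file names. This is a hypothetical helper, not part of Kafka or the metrics library; the idea is just to map characters that commonly break file creation (commas, equals signs, etc.) to underscores.

```java
public class MetricFileNames {
    // Tagged metric names like "BytesInPerSec,topic=my-topic" contain
    // characters some filesystems reject in file names; replace anything
    // outside a conservative safe set with an underscore.
    static String sanitize(String metricName) {
        return metricName.replaceAll("[^A-Za-z0-9._-]", "_");
    }
}
```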
Now that it has been a bit longer, the spikes I was seeing are gone but the
CPU and network in/out on the three brokers that were showing the spikes
are still much higher than before the upgrade. Their CPUs have increased
from around 1-2% to 12-20%. The network in on the same brokers has gone up
fr
I upgraded one of our Kafka clusters (9 nodes) from 0.8.2.3 to 0.9
following the instructions at
http://kafka.apache.org/documentation.html#upgrade
Most things seem to work fine based on our metrics. Something I noticed is
that the network out on 3 of the nodes goes up every 5-6 minutes. I see a
c
At the moment, there is no direct way to do this, but you could use the
commit API to include metadata with each committed offset:
public void commitSync(final Map<TopicPartition, OffsetAndMetadata> offsets);
public OffsetAndMetadata committed(TopicPartition partition);
The OffsetAndMetadata object contains a metadata string field
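For example, you could pack a wall-clock commit timestamp into that metadata string at commit time and parse it back out later. The `encodeTimestamp`/`decodeTimestamp` helpers below are hypothetical (the actual `commitSync` call is elided since it needs a live broker); the metadata field is just a free-form string, so the encoding is entirely up to you.

```java
public class OffsetMetadataTimestamps {
    // Hypothetical helper: pack the commit time into the free-form
    // metadata string passed to commitSync(...).
    static String encodeTimestamp(long commitTimeMs) {
        return "ts=" + commitTimeMs;
    }

    // Hypothetical helper: recover the commit time from the metadata
    // string returned by committed(partition).metadata().
    static long decodeTimestamp(String metadata) {
        if (metadata == null || !metadata.startsWith("ts=")) {
            throw new IllegalArgumentException("no timestamp in metadata: " + metadata);
        }
        return Long.parseLong(metadata.substring(3));
    }
}
```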
Scratch that. On more careful observation I do see this in the logs:
inter.broker.protocol.version = 0.8.2.X
On Mon, Dec 14, 2015 at 10:25 AM, Rajiv Kurian wrote:
> I am in the process of updating to 0.9 and had another question.
>
> The docs at http://kafka.apache.org/documentation.html#upgra
Hey Brian,
I think we've made these methods public again in trunk, but that won't help
you with 0.9. Another option would be to write a parser yourself since the
format is fairly straightforward. This would let you remove a dependence on
Kafka internals which probably doesn't have strong compatibi
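A parser along those lines might start like the sketch below. The key layout assumed here (an int16 schema version, then int16-length-prefixed group and topic strings, then an int32 partition) is an assumption you should verify against `GroupMetadataManager` in the 0.9 source before relying on it; the `buildKey` helper exists only so the round-trip can be tested without a broker.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OffsetKeyParser {
    // Reads an int16-length-prefixed UTF-8 string, the convention used by
    // Kafka's internal message schemas (assumption: verify against the source).
    static String readString(ByteBuffer buf) {
        short len = buf.getShort();
        byte[] bytes = new byte[len];
        buf.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // Assumed key layout: version(int16), group(string), topic(string), partition(int32).
    static String parseKey(ByteBuffer buf) {
        short version = buf.getShort();
        String group = readString(buf);
        String topic = readString(buf);
        int partition = buf.getInt();
        return group + "/" + topic + "-" + partition + " (v" + version + ")";
    }

    // Test-only helper: build a key buffer matching the assumed layout.
    static ByteBuffer buildKey(short version, String group, String topic, int partition) {
        byte[] g = group.getBytes(StandardCharsets.UTF_8);
        byte[] t = topic.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + g.length + 2 + t.length + 4);
        buf.putShort(version);
        buf.putShort((short) g.length).put(g);
        buf.putShort((short) t.length).put(t);
        buf.putInt(partition);
        buf.flip();
        return buf;
    }
}
```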
Hello,
I've been trying to monitor a Kafka broker using the CSVMetricsReporter.
However, when I start the broker, only a few csv files are created in the
directory, and then there are repeated IOExceptions in the kafka.out log
stating that CsvReporter.createStreamForMetric cannot create the file
{
I am in the process of updating to 0.9 and had another question.
The docs at http://kafka.apache.org/documentation.html#upgrade say that to
do a smooth upgrade from 0.8.2.X to 0.9 we can use the
inter.broker.protocol.version config to control what protocol to use. After
upgrading how can we tell t
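Per those upgrade docs, the rolling upgrade pins the inter-broker protocol in each broker's `server.properties` until all brokers are on the new code, then bumps it with a second rolling restart (the exact version strings should match your own cluster):

```properties
# server.properties during the rolling upgrade: keep speaking the old protocol.
inter.broker.protocol.version=0.8.2.X

# After every broker is running 0.9 code, switch and restart one broker at a time:
# inter.broker.protocol.version=0.9.0.0
```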
All,
I'm running into a bit of a road-block consuming the offsets topic in 0.9.
In 0.8, I was able to use kafka.server.OffsetManager.readMessageKey(..)
and readMessageValue(..) to deserialize the offset messages. In 0.9, the
equivalent methods in kafka.coordinator.GroupMetadataManager are priva
Gary,
I was asking on the dev list last week about performance in 0.9.x and how
best to achieve optimal message throughput, and I find your results rather
interesting.
Is producing 7142 msg/sec a fairly typical rate for your test environment?
(I realize you're just using your laptop, though.)
Hi
Is there any way to get the commit timestamp of messages retrieved using
the Kafka consumer API?
t
SunilKalva