Hi All,
For Kafka 0.8.2, I was using kafka-consumer-offset-checker.sh to get a
consumer group's offsets. Now I am testing against Kafka 0.9, but the same
tool always gives me:
Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException:
KeeperErrorCode = NoNode for /consumers/
I saw "C
st:9092 --list
>
> Guozhang
>
> On Mon, Jan 25, 2016 at 5:03 PM, Yifan Ying wrote:
>
> > Hi All,
> >
> > For Kafka 0.8.2, I was using kafka-consumer-offset-checker.sh to get a
> > consumer group's offset . Now I am testing against Kafka 0.9 , but the
>
That works, thanks.
Yifan
On Mon, Jan 25, 2016 at 7:27 PM, Guozhang Wang wrote:
> I forgot to mention you need to add "--new-consumer" as well.
>
> Guozhang
>
> On Mon, Jan 25, 2016 at 7:11 PM, Yifan Ying wrote:
>
> > Thanks for reply, Guozhang. I got '
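For reference, the thread above resolves to using the ConsumerGroupCommand with the "--new-consumer" flag, which queries the brokers instead of ZooKeeper. A hedged sketch of the 0.9-era invocation (host, port, and group name are placeholders; this needs a running broker):

```shell
# List consumer groups that store offsets in Kafka rather than ZooKeeper.
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 --list

# Describe one group to see per-partition offsets and lag
# ("my-group" is a placeholder group id).
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 --describe --group my-group
```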
Hi All,
I was using the new Kafka Consumer to fetch messages in this way:
while (true) {
    ConsumerRecords records = kafkaConsumer.poll(Long.MAX_VALUE);
    // do nothing if records are empty
}
Then I realized that blocking until new messages are fetched might add a
little overhead. So I l
iod
> indicating the error. Does that solve your case?
>
> Guozhang
>
>
> On Thu, Jan 28, 2016 at 1:18 PM, Yifan Ying wrote:
>
> > Hi All,
> >
> > I was using the new Kafka Consumer to fetch messages in this way:
> >
> > while (true) {
> >
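A minimal sketch of the alternative being considered in the thread above: polling with a bounded timeout so the loop regains control periodically instead of blocking on Long.MAX_VALUE. The broker address, group id, topic name, and deserializers are assumptions, and this needs a running broker and the kafka-clients jar on the classpath:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BoundedPollExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption
        props.put("group.id", "example-group");           // assumption
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("example-topic"));
        while (true) {
            // Return after at most 1 second even if no records arrived,
            // so the loop can do other work between polls.
            ConsumerRecords<String, String> records = consumer.poll(1000L);
            for (ConsumerRecord<String, String> record : records) {
                // process record
            }
        }
    }
}
```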
Please check out consumer configs.
http://kafka.apache.org/082/documentation.html#consumerconfigs
On Tue, Feb 9, 2016 at 1:16 PM, Joe San wrote:
> Can I do automatic offset commit using the high-level consumer? If so, where
> is the offset being committed?
>
> On Tue, Feb 9, 2016 at 10:13 PM, Ewen
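Per the 0.8.2 consumer configs linked above, auto-commit for the high-level consumer is controlled by the settings below. By default offsets are committed to ZooKeeper; setting offsets.storage=kafka commits them to Kafka instead. The values here are illustrative, not recommendations:

```
auto.commit.enable=true
auto.commit.interval.ms=60000
offsets.storage=zookeeper
```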
Hi all,
We implemented a MetricsReporter to collect metrics from the new producer
and the new consumer. For the producer, we did see a bunch of topic-specific
metrics under the "producer-topic-metrics" group. But the consumer
doesn't seem to provide any per-topic metrics. All I saw are consumer-metrics,
c
One thing to add: the old consumer (0.8) does provide per-topic metrics,
and they are under the group 'ConsumerTopicMetrics'.
On Tue, Feb 9, 2016 at 3:20 PM, Yifan Ying wrote:
> Hi all,
>
> We implemented a MetricsReporter to collect metrics from the new producer
> and the
Hi All,
We deleted a deprecated topic on a Kafka cluster (0.8) and started observing
constant 'Expanding ISR for partition' and 'Shrinking ISR for partition'
log messages for other topics. As a result we saw a huge number of
under-replicated partitions and very high request latency from Kafka. And it
doesn't seem
connect+attempts+to+NetworkdClient
>>
>> mailing thread:
>>
>> https://www.mail-archive.com/dev@kafka.apache.org/msg46868.html
>>
>>
>>
>> Guozhang
>>
>> On Tue, Apr 5, 2016 at 2:34 PM, Yifan Ying wrote:
>>
>> > Some updates
Hi,
We are seeing these logs constantly:
ZkUtils$:68 - conflict in /consumers/.
ZkUtils$:68 - I wrote this conflicted ephemeral node ... a while back in a
different session, hence I will backoff for this node to be deleted by
Zookeeper and retry
As a result, some partitions lose consumers
Hi all,
As one partition only allows one consumer thread to claim it, there would be
cases where some consumer threads are idle when we over-assign consumer
threads. Is there any way I can monitor the number of active threads (not
active threads in an ExecutorService, but threads that are actually consuming
me
Hi all,
I am trying to use MirrorMaker on the 0.10 cluster to mirror the
0.8.2.1 cluster into the 0.10 cluster. Then I got a bunch of consumer errors
as follows:
Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@f9533ee
(kafka.consumer.ConsumerFetcherThread)
java.nio.BufferUnderf
We tried to upgrade the Kafka clients dependency from 0.8.2.1 to 0.10.0.0.
As I understand, the producer client doesn't have major changes in 0.10, so
we kept the same producer config in the upgrade:
retries 3
retry.backoff.ms 5000
timeout.ms 1
block.on.buffer.full false
linger.ms 5000
metadat
i, Sep 2, 2016 at 1:12 AM Yifan Ying wrote:
>
> > We tried to upgrade the Kafka clients dependency from 0.8.2.1 to
> 0.10.0.0.
> > As I understand, the producer client doesn't have major changes in 0.10,
> so
> > we kept the same producer config in the upgrade:
&
h a dramatic difference. Is compression
> being
> > used and did you paste the full producer config?
> >
> > Ismael
> >
> > On Fri, Sep 2, 2016 at 6:31 PM, Yifan Ying wrote:
> >
> >> The load before and after the upgrade are pretty similar. And all thes
with 0.8.2.1 clients
> (there should be similar metrics but with different metrics name)?
>
> Guozhang
>
> On Mon, Sep 5, 2016 at 11:17 PM, Yifan Ying wrote:
>
> > Hi Ismael,
> >
> > Thanks for replying.
> >
> > Yes. It's the comparison between
Hi all,
0.10 consumers use the poll() method to heartbeat to Kafka brokers. Is there
any way that I can make the consumer heartbeat without polling any messages?
The javadoc says the recommended way is to move message processing to another
thread. But when message processing keeps failing (because a third p
; https://kafka.apache.org/090/javadoc/index.html?org/apache/
> > kafka/clients/consumer/KafkaConsumer.html
> >
> > > Le 28 sept. 2016 à 06:21, Yifan Ying a écrit :
> > >
> > > Hi all,
> > >
> > > 0.10 consumers use poll() method to heartbeat Kafka
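One commonly suggested pattern for the question above: keep calling poll() so the consumer heartbeats, but pause() the assigned partitions so no records are returned while processing is stuck, then resume() afterwards. A fragment, not a complete program: 'consumer' is an existing KafkaConsumer, processingIsStuck() is a hypothetical helper, and this assumes a 0.10.1.x client (where pause/resume take a Collection; earlier 0.10 clients take varargs):

```java
// Heartbeat without consuming: paused partitions return no records,
// but poll() still drives the heartbeat machinery.
consumer.pause(consumer.assignment());
while (processingIsStuck()) {   // hypothetical helper
    consumer.poll(100L);
}
consumer.resume(consumer.assignment());
```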
Hi,
In old consumers, we use the following command line tool to manually update
offsets stored in zk:
*./kafka-run-class.sh kafka.tools.UpdateOffsetsInZK [latest | earliest]
[consumer.properties file path] [topic]*
But it doesn't work with offsets stored in Kafka. How can I update the
Kafka offs
at store the offsets?
>
> On Wed, Oct 12, 2016 at 2:26 PM, Yifan Ying wrote:
>
> > Hi,
> >
> > In old consumers, we use the following command line tool to manually
> update
> > offsets stored in zk:
> >
> > *./kafka-run-class.sh kafka.tools.UpdateOffset
ted the real consumers.
>
> Hope that helps. (Or elicits a more effective response. ;))
>
> On Fri, Oct 14, 2016 at 10:53 AM, Yifan Ying wrote:
>
> > Hi Jeff,
> >
> > Could you explain how you send messages to __consumer_offsets to
> overwrite
> > offs
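For offsets stored in Kafka (where UpdateOffsetsInZK doesn't apply), an approach often suggested instead of writing to __consumer_offsets directly is to start a consumer with the target group.id, assign() the partitions, and commitSync() the desired offsets. A hedged fragment ('consumer' is an existing KafkaConsumer created with the target group.id; topic, partition, and offset are placeholders; the real group's members should be shut down first, or the commit will be immediately overwritten):

```java
import java.util.Collections;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Placeholder topic/partition/offset; repeat for each partition.
TopicPartition tp = new TopicPartition("example-topic", 0);
consumer.assign(Collections.singletonList(tp));
consumer.commitSync(
    Collections.singletonMap(tp, new OffsetAndMetadata(12345L)));
consumer.close();
```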
Hi all,
We have some integration tests running on EC2. The test will send some
messages to Kafka (0.10.0.0) running on the same EC2 instance. The
topic is created automatically when it gets its first message. However, the
test has become flaky and we are seeing this error:
NetworkClient:600 - Er
Hi,
Initially, we had only one Kafka cluster shared across all teams. But now
this cluster is very close to running out of resources (disk space, number of
partitions, etc.). So we are considering adding another Kafka cluster. But
what's the best practice for topic discovery, so that applications know
which cl
> your applications code and configurations
> >
> > On Mon, Dec 5, 2016 at 9:43 PM Yifan Ying wrote:
> >
> > > Hi,
> > >
> > > Initially, we have only one Kafka cluster shared across all teams. But
> > now
> > > this cluste
:
> @Yifan Ying Why not add more brokers in your cluster? That will not
> increase the partitions. Does increasing the number of brokers cause you
> any problem? How many brokers do you have in the cluster already?
>
> On Wed, Dec 7, 2016 at 12:35 AM, Yifan Ying wrote:
>
>
w many topics do you have and how many
> partitions per topic do you have? What is your resource utilization for
> bandwidth, CPU, and memory? How many average consumers do you have for
> each topic?
>
> brian
>
>
>
> On 06.12.2016 21:23, Yifan Ying wrote:
>
>> Hi Aseem,
Hi Kafka users,
I am trying to expose Kafka client metrics to Prometheus via
*MetricsReporter*. And it looks like the Kafka clients don't expose the
*Measurable* objects, so I can only call *KafkaMetric.value()* and use it
as a Gauge in Prometheus even if the metric could be a Percentile in the Kafka
clie
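A minimal sketch of the reporter pattern described above, using only KafkaMetric.value(). The org.apache.kafka.common.metrics.MetricsReporter interface and KafkaMetric.metricName() are real client APIs; storing the metrics in a map stands in for registering Prometheus gauges, which is an assumption about the surrounding setup:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

public class SimpleGaugeReporter implements MetricsReporter {
    // metric key -> latest KafkaMetric; a Prometheus gauge would read
    // metric.value() from here on each scrape.
    private final Map<String, KafkaMetric> metrics = new ConcurrentHashMap<>();

    private static String key(KafkaMetric m) {
        return m.metricName().group() + "." + m.metricName().name();
    }

    @Override public void init(List<KafkaMetric> initial) {
        for (KafkaMetric m : initial) metricChange(m);
    }
    @Override public void metricChange(KafkaMetric metric) {
        metrics.put(key(metric), metric);
    }
    @Override public void metricRemoval(KafkaMetric metric) {
        metrics.remove(key(metric));
    }
    @Override public void close() {}
    @Override public void configure(Map<String, ?> configs) {}
}
```

The reporter is enabled via the client's metric.reporters config, which takes a list of class names.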
Hi,
We are running a group of Kafka consumers on 200 Mesos instances and we are
observing constant Revoking/Rejoining in our consumer logs. But it's hard
to tell which consumer member initially caused this issue, as every consumer
needs to re-join in this case. Is there a good way to find out that 'ba
Some more details about the consumer setup:
- version 0.10.1.1
- the topic has 400 partitions
- consumers are running on 200 mesos instances, and each instance is
running 2 KafkaConsumer threads
Yifan
On Wed, Mar 8, 2017 at 11:46 PM, Yifan Ying wrote:
> Hi,
>
> We are
Hi Kafka users,
We are seeing some InterruptedExceptions when calling KafkaProducer.flush().
The client version is 0.10.1.1. The error percentage is less than 1% and we
can't see any correlation with the volume of traffic. Here's the stack trace:
org.apache.kafka.common.errors.InterruptException: F