OK,
thanks for the heads up!
Is this requirement documented somewhere?
Would it make sense then to have AdminUtils call setZkSerializer on the
zkClient passed to it? Or maybe provide factory method(s) for ZkClient in
AdminUtils, which would ensure the ZkSerializer is appropriate.
Kind regards,
Stevo Sla
You have (at least) two problems there: processing that gets stuck, and
processing retry that blocks indefinitely.
For the first issue there is no general answer; it depends on what kind
of processing you're doing that gets stuck, and that will influence how you
can terminate/interrupt it.
I assume you use Java or
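If it is Java, one common pattern is to run each unit of work under a
Future and bound it with a timeout. A minimal sketch (process() is a
placeholder for whatever work can get stuck, and the 30s timeout is
illustrative):

    import java.util.concurrent.*;

    public class BoundedWorker {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Future<?> result = pool.submit(new Runnable() {
                public void run() { process(); }
            });
            try {
                result.get(30, TimeUnit.SECONDS); // bound the processing time
            } catch (TimeoutException e) {
                // Only helps if the stuck code responds to interruption;
                // blocking I/O without timeouts may ignore this.
                result.cancel(true);
            }
            pool.shutdownNow();
        }

        static void process() { /* placeholder for the real work */ }
    }

Note that cancel(true) only interrupts; whether the work actually stops
depends on what it is blocked on.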
Hi All,
Using Kafka's high-level consumer API I have bumped into a situation where
launching a consumer process P1 with X consuming threads on a topic with X
partitions kicks out all other existing consumer threads that consumed prior
to launching the process P1.
That is, consumer process P1 is stealing al
We measure the max lag of each follower replica (
http://kafka.apache.org/documentation.html#monitoring). Could you see if
the lag is widening? Also, could you enable the request log on the leader
and see if the follower is still issuing fetch requests?
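If it is easier, the same lag number is exposed over JMX. A rough sketch
(broker-host:9999 is a placeholder, and the MBean name below is the
0.8.2-style one, so check the monitoring docs for your version):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class MaxLagCheck {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Max lag in messages between the follower and the leader.
            ObjectName name = new ObjectName(
                "kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica");
            System.out.println("Follower max lag: "
                + mbsc.getAttribute(name, "Value"));
            connector.close();
        }
    }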
Thanks,
Jun
On Fri, Oct 24, 2014 at 9:17 A
Hello folks,
I recently noticed our message volume in Kafka seems to have dropped
significantly. I didn't see any exceptions on my consumer side. Since the
producer is not within my control, I am trying to get some guidance on how
I could debug this issue.
Our individual message size recently has inc
I think it will depend on how your producer application logs things, but
yes, I have historically seen exceptions in the producer logs when messages
exceed the max message size.
-Mark
On Mon, Oct 27, 2014 at 10:19 AM, Chen Wang
wrote:
> Hello folks,
> I recently noticed our message amount in kafka s
Hi team,
I am testing async publishing + acknowledgement.
Assume all settings are the defaults and queue.buffering.max.messages is 10k.
I use a simple for loop to publish 100k messages. Partway through, I
unplugged the network cable.
What should I expect in this case?
I assume the send will be bloc
Hi Neha,
I have two problems; any help is greatly appreciated.
1) java.lang.IllegalStateException: Iterator is in failed state
ConsumerConnector consumerConnector = Consumer
    .createJavaConsumerConnector(getConsumerConfig());
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
This is what I get -
bin/kafka-topics.sh --zookeeper localhost:2181 --describe
Topic:Heartbeat    PartitionCount:2    ReplicationFactor:1    Configs:
    Topic: Test    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    Topic: Test    Partition: 1    Leader: 0    Replicas: 0    Isr: 0
The Apache Kafka community is pleased to announce the beta release for Apache
Kafka 0.8.2.
The 0.8.2-beta release introduces many new features, improvements and fixes
including:
- A new Java producer for ease of implementation and enhanced performance.
- Delete topic support.
- Per topic conf
Hello Chen,
You can look in the broker logs for "message size too large" exceptions if
you cannot access the producer logs (both of them should have this in their
log files). Also, which ack mode is your producer using?
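For reference, the ack mode on the 0.8 producer is the
request.required.acks property. A minimal sketch of the three values (the
props object is assumed from your producer setup):

    // request.required.acks:
    //    0 -> producer does not wait for an ack, so broker-side
    //         "message size too large" rejections go unnoticed
    //    1 -> wait for the leader's ack only
    //   -1 -> wait for all in-sync replicas
    props.put("request.required.acks", "1");

With acks=0 in particular, oversized messages can disappear without any
error showing up on the producer side.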
Guozhang
On Mon, Oct 27, 2014 at 10:31 AM, Mark Roberts wrote:
> I think it will
The add-partition command essentially writes a new JSON value under the
topic path in ZK. You can figure out the format and write the JSON yourself.
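For illustration, the value stored at /brokers/topics/<topic> looks roughly
like the following (a sketch from memory, mapping partition id to the list
of replica broker ids; compare against an existing topic's znode before
writing anything):

    {"version":1,"partitions":{"0":[0],"1":[0],"2":[1]}}

Adding a partition then amounts to adding a new entry to that map.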
Thanks,
Jun
On Fri, Oct 24, 2014 at 11:35 AM, David Charle wrote:
> hi kafka'ers
>
> I couldn't find anything useful where I could add partitions th
This seems to be a bug in 0.8.1.
More info:
queue.enqueue.timeout.ms is explicitly set to -1.
queue.buffering.max.messages is explicitly set to 1.
But still, when the network cable was unplugged, the producer did not block
at all and all messages were "sent".
> From: yu_l...@hotmail.com
> To: u
Congrats! When do you think the final 0.8.2 will be released?
> To: annou...@apache.org; users@kafka.apache.org; d...@kafka.apache.org
> Subject: [ANNOUNCEMENT] Apache Kafka 0.8.2-beta Released
> Date: Tue, 28 Oct 2014 00:50:35 +
> From: joest...@apache.org
>
> The Apache Kafka community is pl
I think AdminUtils just takes a zkClient as its parameter, and the zkClient
should have its ZkSerializer set at the time it is initialized.
You can take a look at TopicCommand, which triggers AdminUtils.createTopic
by initializing a zkClient and passing it in. I agree that we probably have
to make it clear
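For example, TopicCommand does roughly the following (a sketch of calling
the 0.8 admin API from Java; the connection string and timeouts are
placeholders):

    import java.util.Properties;
    import kafka.admin.AdminUtils;
    import kafka.utils.ZKStringSerializer$;
    import org.I0Itec.zkclient.ZkClient;

    // The important part is constructing the ZkClient with
    // ZKStringSerializer; otherwise AdminUtils writes values that the
    // rest of Kafka cannot read back.
    ZkClient zkClient = new ZkClient("localhost:2181", 30000, 30000,
        ZKStringSerializer$.MODULE$);
    AdminUtils.createTopic(zkClient, "my-topic", 2, 1, new Properties());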
You may also need to set the retries to something high. I think the default
is something like 1 or 3, so it will try a few times and then give up.
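For example (a sketch of the relevant 0.8 producer properties; the values
are illustrative, not recommendations):

    import java.util.Properties;

    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092");
    props.put("producer.type", "async");
    props.put("queue.buffering.max.messages", "10000");
    // -1 blocks the caller when the queue is full instead of dropping
    props.put("queue.enqueue.timeout.ms", "-1");
    // retry failed sends more times before giving up
    props.put("message.send.max.retries", "10");
    props.put("retry.backoff.ms", "1000");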
-Jay
On Mon, Oct 27, 2014 at 6:01 PM, Libo Yu wrote:
> This seems to be a bug of 0.8.1
> More info:
> queue.enqueue.timeout.ms is explicitly set
You can take a look at the "consumer rebalancing algorithm" part in
http://kafka.apache.org/documentation.html. Basically, partitions are
evenly distributed among all consumers in the same group. If there are more
consumers in a group than partitions, some consumers will never get any
data.
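For example, with the 0.8 high-level consumer the per-topic thread count is
what enters the rebalance (a sketch; "my-topic" and numThreads are
placeholders, and connector is the ConsumerConnector returned by
createJavaConsumerConnector):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import kafka.consumer.KafkaStream;

    // With X partitions, at most X threads across the whole group will
    // own a partition after a rebalance; any extra threads get nothing.
    Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
    topicCountMap.put("my-topic", numThreads);
    Map<String, List<KafkaStream<byte[], byte[]>>> streams =
        connector.createMessageStreams(topicCountMap);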
Thanks
>> Sometimes it gives the following exception.
It will help to have a more specific test case that reproduces the failed
iterator state.
Also, the consumer threads block if the fetcher queue is full. The queue
can fill up if your consumer thread dies or slows down. I'd recommend you
ensure that all you
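For what it's worth, the depth of that fetcher queue in the 0.8 high-level
consumer is controlled by the queued.max.message.chunks consumer property,
counted in fetched chunks rather than messages (a sketch; check the default
for your version):

    import java.util.Properties;

    Properties props = new Properties();
    props.put("zookeeper.connect", "localhost:2181");
    props.put("group.id", "my-group");
    // The fetcher enqueues fetched message chunks here; consumer
    // iterators drain it, and the fetcher blocks when it is full.
    props.put("queued.max.message.chunks", "10");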
Thanks, Jay. A large retry number can help in this case.
> Date: Mon, 27 Oct 2014 18:12:12 -0700
> Subject: Re: question about async publishing for 0.8.1
> From: jay.kr...@gmail.com
> To: users@kafka.apache.org
>
> You may also need to set the retries to something high, I think. I think
> the def
Hi Neha,
If I solve problem number 1, I think number 2 will be solved as well
(problem 1 is causing problem number 2, the blocking). Can you please let me
know what controls the queue size for the ConsumerFetcherThread?
Please see the attached Java source code, which reproduces the problem.
You
Hi Kafka Team,
Does compression happen on the producer side (on the application thread
that calls the send method, or on a background Kafka thread), and where does
decompression happen on the consumer side?
Is there any compression/decompression happening on the broker side when
receiving messages from producers and
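For context, the producer-side settings I am looking at are the 0.8
compression configs (a sketch; the topic list is optional):

    import java.util.Properties;

    Properties props = new Properties();
    props.put("compression.codec", "gzip");          // or "snappy"
    props.put("compressed.topics", "topicA,topicB"); // empty = all topics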