I will try to reproduce this problem later this week.
Bouncing the broker fixed the issue, but it surfaced again after a
period of time. For a little more context: the cluster was deployed to VMs,
and I discovered that the issue appeared whenever CPU wait time was
extremely high.
Hi Harsha,
Thanks for the note. Sorry it took some time to reply…I was out of the
office the last few days. To pick up the thread: using
https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-HTTPREST, I
assume I can configure the corporate HTTPS to redirect REST calls to a Kafka
HTTPS proxy.
Great, thanks for the link, Mike.
From what I can tell, the only time opening a segment file would be slow
is in the event of an unclean shutdown, where a segment file may not have been
fsync'd and Kafka needs to CRC it and rebuild its index. This should really
only be a problem for the "newest" log segment.
The default range partition assignment algorithm assigns partitions on a
per-topic basis. If you have more consumer threads than partitions in a
topic, some threads will not be assigned any partitions.
If you are consuming from multiple topics, you might want to set
partition.assignment.strategy to roundrobin, as in the sketch below.
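A minimal sketch of that configuration, assuming the 0.8.2 high-level consumer API; the topic name, group id, and ZooKeeper address are placeholders for illustration:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class RoundRobinConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // placeholder address
        props.put("group.id", "example-group");             // placeholder group id
        // With the default "range" strategy, assignment is computed per topic,
        // so extra threads can sit idle; "roundrobin" spreads partitions from
        // all subscribed topics across all threads.
        props.put("partition.assignment.strategy", "roundrobin");
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // Four streams (threads) for one topic, just as an example.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("test-topic", 4));
        // ... iterate over the streams in worker threads ...
        connector.shutdown();
    }
}

Note that the round-robin assignor only applies when every consumer instance in the group subscribes to the same set of topics with the same number of streams.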
If you can reproduce this problem reliably, then once you see the issue, can
you grep the controller log for the topic partition in question and see if
there is anything interesting?
Thanks.
Jiangjie (Becket) Qin
On 5/14/15, 3:43 AM, "tao xiao" wrote:
>Yes, it does exist in ZK and the node that had the
>NotLeaderForPartitionException is the leader of the topic
I think I figured out what the problem is, though I'm not sure how to fix
it.
I've managed to debug through the embedded broker's callback for
TopicChangeListener#handleChildChange() in the PartitionStateMachine class.
The line from that function that is failing looks like this:
val adde
Thanks Guozhang. It worked.
On Thu, May 14, 2015 at 4:59 PM Guozhang Wang wrote:
> Hello,
>
> This behavior has been changed since 0.8.2.0, you can find the details in
> the following KIP discussion:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1+-+Remove+support+of+request.required.acks
The JSON-encoded blob definitely appears to be going in as a JSON string. The
partition assignment JSON seems to be the only thing that is being prefixed
by these bytes. Any ideas?
On Thu, May 14, 2015 at 5:17 PM, Corey Nolet wrote:
> I think I figured out what the problem is, though I'm not sure how to fix it.
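One possible cause, offered only as a guess rather than something confirmed in this thread: a ZkClient created with its default serializer will Java-serialize strings, which prepends a few header bytes to whatever it writes, including assignment JSON. Kafka's own tooling avoids this by passing ZKStringSerializer. A sketch of doing the same from Java (the factory class and timeout values here are made up):

import kafka.utils.ZKStringSerializer$;
import org.I0Itec.zkclient.ZkClient;

public class ZkClientFactory {
    public static ZkClient create(String zkConnect) {
        // 30s session timeout, 30s connection timeout; the serializer writes
        // plain UTF-8 strings instead of Java-serialized objects.
        return new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer$.MODULE$);
    }
}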
I have started multiple consumers with some time delay. Even after a long
period of time, the consumers that joined later are not getting any
partitions; only one consumer is loaded with all the partitions. I don't see
any configuration parameter to change this behavior.
Did anyone face a similar issue?
Hi All,
I have experience setting up a Kafka cluster on physical servers. Currently I
set up two VMs and fired up one broker on each VM (broker 0 and broker 2). I
created a topic test-rep-1:
Topic:test-rep-1  PartitionCount:2  ReplicationFactor:1  Configs:
Topic: test-rep-1
Thank you Andrey.
On Thu, May 14, 2015 at 11:21 AM, Andrey Yegorov
wrote:
> As I remember, you can simply stop old broker, start the new one with the
> same broker id as the old one.
> It will start syncing replicas from other brokers and eventually will get
> all of them
> After this is done (all replicas are in sync) you can trigger leader
> election (or preferred replica election)
Hello,
This behavior has been changed since 0.8.2.0, you can find the details in
the following KIP discussion:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1+-+Remove+support+of+request.required.acks
And the related ticket is KAFKA-1555.
For your use case you could set min.insync.replicas.
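A hedged sketch of what that looks like with the 0.8.2+ Java producer (broker address and topic are placeholders): acks only accepts 0, 1, or all/-1, and the old "require N acks" behavior is approximated by acks=all together with min.insync.replicas=N on the broker or topic.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        // Replaces numeric request.required.acks values such as 2.
        props.put("acks", "all");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        producer.close();
        // On the broker or topic, min.insync.replicas=2 makes acks=all fail
        // fast when fewer than 2 replicas are in sync.
    }
}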
Hi,
The documentation for the new producer allows passing acks=2 (or any other
numeric value), but when I actually pass anything other than 0, 1, or -1, I
see the following warning in the broker log:
Client producer-1 from /X.x.x.x:50105 sent a produce request with
request.required.acks of 2, which is now deprecated
I raised the log levels to try to figure out what happens. I see log
statements on the broker stating:
"New topic creation callback for "
"New partition creation callback for "
"Invoking state change to NewPartition for partitions "
"Invoking state change to OnlinePartitions for partitions "
"Erro
Hi Raja,
Thanks a lot for that input. It definitely was a problem with the
__consumer_offsets topic not getting updated (though I'm not sure why that
didn't happen when upgrading Kafka to a new release).
But I deleted the __consumer_offsets topic, it was auto-created, and the
consumer offset checker worked.
Darn, I was looking at the defaults for 0.8.2, which is why I thought it
was enabled. Thanks for the help. Works fine now that I enabled it.
Steve
On Wed, May 13, 2015 at 3:25 PM, Jiangjie Qin
wrote:
> Automatic preferred leader election hasn't been turned on in 0.8.1.1. It's
> been turned on in 0.8.2
Currently, we retain the last replica in the ISR only when unclean leader
election is disabled. We probably should always retain the last replica in
the ISR. Could you file a JIRA to track this?
Thanks,
Jun
On Wed, Apr 8, 2015 at 9:30 AM, Valentin wrote:
>
> Hi all,
>
> I have faced a strange situatio
As I remember, you can simply stop the old broker and start the new one with
the same broker id as the old one.
It will start syncing replicas from the other brokers and eventually will get
all of them.
After this is done (all replicas are in sync), you can trigger leader
election (or preferred replica election).
Replying to Jay's message, though some mailing list snafu meant I could not
see it except through the archive. Apologies if this breaks threading:
I've confirmed that the Kafka process itself is running 64-bit; the information
is included below. At this point I'm thinking it could be ulimit so w
Hi all,
Sometimes we need to replace a Kafka broker because it turns out to be a
bad instance. What is the best way of doing this?
We have been using kafka-reassign-partitions.sh to migrate all topics
to the new list of brokers, which is (old list + the new instance - the
bad instance). T
Regarding the issue that adding more partitions kills performance: I
would suspect it may be due to insufficient batching. Note that in the new
producer, batching is done per partition, and if the linger.ms setting is low,
partition data may not be batched enough before it gets sent to the
brokers. Al
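To illustrate the point above, a minimal sketch with assumed values (the broker address, serializers, and specific numbers are placeholders, not recommendations):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class BatchingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Batches are per partition, so with many partitions each batch fills
        // slowly; allow more time and room for batching before a send.
        props.put("linger.ms", "50");       // wait up to 50 ms to fill a batch
        props.put("batch.size", "65536");   // 64 KB per-partition batch target
        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        // ... producer.send(...) as usual ...
        producer.close();
    }
}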
Thanks guys, I got the point.
We ended up handling the ser/de in custom wrappers over the generic
byte[],byte[] producer, so the application keeps a single producer instance
without losing type safety.
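A minimal sketch of the kind of wrapper described above (the class and interface names are made up for illustration): one shared byte[]/byte[] producer, with type-safe send methods that apply the application's own serializers.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TypedProducer<K, V> {
    // Hypothetical application-level serializer abstraction.
    public interface Serde<T> { byte[] serialize(T value); }

    private final KafkaProducer<byte[], byte[]> producer;  // single shared instance
    private final Serde<K> keySerde;
    private final Serde<V> valueSerde;

    public TypedProducer(KafkaProducer<byte[], byte[]> producer,
                         Serde<K> keySerde, Serde<V> valueSerde) {
        this.producer = producer;
        this.keySerde = keySerde;
        this.valueSerde = valueSerde;
    }

    public void send(String topic, K key, V value) {
        // Serialize with the typed serdes, then hand off to the shared producer.
        producer.send(new ProducerRecord<>(topic,
                keySerde.serialize(key), valueSerde.serialize(value)));
    }
}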
On Wed, May 13, 2015 at 11:31 PM, Guozhang Wang wrote:
> Hello Mohit,
>
> When we orig
I'm firing up a KafkaServer (using some EmbeddedKafkaBroker code that I
found on GitHub) so that I can run an end-to-end test ingesting data
through a Kafka topic, with consumers in Spark Streaming pushing to
Accumulo.
Thus far, my code is doing this:
1) Creating a MiniAccumuloCluster and KafkaSer
Can you try bouncing that broker?
Thanks,
Mayuresh
On Thu, May 14, 2015 at 3:43 AM, tao xiao wrote:
> Yes, it does exist in ZK and the node that had the
> NotLeaderForPartitionException
> is the leader of the topic
>
> On Thu, May 14, 2015 at 6:12 AM, Jiangjie Qin
> wrote:
>
> > Does this topic exist in Zookeeper?
Hi All,
I am running kafka_2.10-0.8.1.1, and when I run the
reassign-partitions.sh script, I get this:
Partitions reassignment failed due to Partition reassignment currently in
progress for Map(). Aborting operation
kafka.common.AdminCommandFailedException: Partition reassignment currently
in p
Hi Meghana,
We also faced a similar issue and found that it always returned
ConsumerCoordinatorNotAvailableCode for one broker (server id 3), and the
leader for all partitions of the __consumer_offsets topic was that same
broker id 3. So we wiped the Kafka data dir on that broker and restarted it.
After that, Consume
Hi Mayuresh,
A few more inputs that I can provide at the moment after some testing are
as follows.
1. The error returned by the consumer offset checker's
ConsumerMetadataResponse is "ConsumerCoordinatorNotAvailableCode". Could it
somehow be related to the offsets being written to zookeeper and not
Yes, it does exist in ZK and the node that had the
NotLeaderForPartitionException
is the leader of the topic
On Thu, May 14, 2015 at 6:12 AM, Jiangjie Qin
wrote:
> Does this topic exist in Zookeeper?
>
> On 5/12/15, 11:35 PM, "tao xiao" wrote:
>
> >Hi,
> >
> >Any updates on this issue? I keep s