Sent: Tuesday, February 25, 2014 4:03 PM
To: users@kafka.apache.org
Subject: Re: ConsumerRebalanceFailedException
Could you send around the consumer log from when it throws ConsumerRebalanceFailedException? It should state the reason for the failed rebalance attempts.
Thanks,
Neha
On Tue, Feb 25, 2014 at 12:01 PM, Yu, Libo wrote:
Hi all,
I tried to reproduce this exception. In case one, when no broker was running, I launched all consumers and got this exception. In case two, while the consumers and brokers were running, I shut down all brokers one by one and did not see this exception. I wonder why in case two this except
-register upon
receiving the session timeout. You can reproduce this issue by signal-pausing (SIGSTOP) the ZK process.
Guozhang
On Fri, Feb 14, 2014 at 12:15 PM, Yu, Libo wrote:
> Hi team,
>
> We have three brokers on our production cluster. I noticed two of them
> somehow got offline and then re-registered with zookeeper and got back online.
Hi team,
We have three brokers on our production cluster. I noticed two of them somehow got offline and then re-registered with zookeeper and got back online. It seems the issue was caused by some zookeeper issue. So I want to know what may be the possible causes of the issue. If I want to reprod
Which version are you using? Pre-0.8.1 there is a bug that can cause a registration path to be deleted:
https://issues.apache.org/jira/browse/KAFKA-992
This has been fixed in 0.8.1.
Guozhang
On Tue, Feb 11, 2014 at 1:16 PM, Yu, Libo wrote:
> Hi team,
>
> This is an issue that has frustrated me for quite some time.
Hi team,
This is an issue that has frustrated me for quite some time. One of our clusters has three hosts. In my startup script, three zookeeper processes are brought up first, followed by three kafka processes. The problem we have is that after three kafka processes are up, only one broker has b
When I telnet to the zookeeper and type "status", this is what I got:
Zookeeper version: 3.3.3-1203054, built on 11/17/2011 05:47 GMT
Is that 3.3.4? So 0.8 final also uses 3.3.4, is that right? Thanks.
Regards,
Libo
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
lable.
Yes bouncing the process will allow you to consume again. Also would you mind
giving 0.8 final a try? It is much more stable compared to 0.8 beta.
Thanks,
Neha
On Feb 7, 2014 6:49 AM, "Yu, Libo" wrote:
> We are using 0.8 beta1. Our zookeeper had some issue which in turn
will trigger rebalances.
Thanks,
Jun
On Thu, Feb 6, 2014 at 10:46 AM, Yu, Libo wrote:
> While the broker is not available (caused by zookeeper issue), the
> rebalance will fail. Should rebalance succeed in this case? Thanks.
>
>
> Regards,
>
> Libo
>
>
> -Original Message-
Once the consumer runs out of the retries, it needs to be restarted to consume again.
On Thu, Feb 6, 2014 at 9:05 AM, Yu, Libo wrote:
> Hi folks,
>
> This is what we experienced recently:
> Some zookeeper issue made the broker unavailable for a short period of time.
> On the consumer side, this triggered a rebalance.
> On Thu, Feb 6, 2014 at 9:05 AM, Yu, Libo wrote:
>
> > Hi folks,
> >
> > This is what we experienced recently:
> > Some zookeeper issue made the broker unavailable for a short period of
> > time.
> > On the consumer side, this triggered a rebalance.
Hi folks,
This is what we experienced recently:
Some zookeeper issue made the broker unavailable for a short period of time.
On the consumer side, this triggered a rebalance, and the rebalance failed after four tries.
So what should we expect while the broker is not up? Should the consumer keep trying to rebalance?
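For anyone hitting the same failure, these are the 0.8-era consumer settings that control how hard the consumer tries before giving up with ConsumerRebalanceFailedException. A minimal sketch, assuming the 0.8 high-level consumer (host names and values are illustrative; check the defaults for your version):

import java.util.Properties;
import kafka.consumer.ConsumerConfig;

public class RebalanceTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181");  // placeholder address
        props.put("group.id", "mygroup");
        // "can't rebalance after 4 retries" corresponds to the default of 4 here.
        props.put("rebalance.max.retries", "10");
        // Back off between attempts so a bounced broker has time to re-register.
        props.put("rebalance.backoff.ms", "10000");
        props.put("zookeeper.session.timeout.ms", "6000");
        ConsumerConfig config = new ConsumerConfig(props);
        // pass 'config' to Consumer.createJavaConsumerConnector(...)
    }
}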
Hi team,
I believe num.partitions is for automatic topic creation. Is that right?
The default number of partitions for kafka-create-topic.sh is 1. So
will num.partitions impact kafka-create-topic.sh? Thanks.
Regards,
Libo
You would need to first stop the consumer, update the offset in ZK and then
restart the consumer. Also, have you looked at the tool ImportZkOffsets?
Thanks,
Jun
On Tue, Jan 14, 2014 at 12:38 PM, Yu, Libo wrote:
> Hi folks,
>
> I am writing a tool to "purge" the pending topics
Hi folks,
I am writing a tool to "purge" the pending topics for a user. Assume the user has never consumed this topic previously. If I create all the nodes on the path /consumers/[myuser]/offsets/[mytopic]/[partition] and put the maximum available offset in each node, is that enough to let the con
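If ImportZkOffsets does not fit, a hand-rolled version of the idea above might look like the following sketch. It assumes the 0.8 layout of /consumers/[group]/offsets/[topic]/[partition] storing the offset as a UTF-8 string, uses the zkclient library that Kafka 0.8 ships with, and, as Jun notes, requires the consumer to be stopped first. Group, topic, partition count, and offset are placeholders:

import java.nio.charset.StandardCharsets;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.BytesPushThroughSerializer;

public class OffsetPurger {
    public static void main(String[] args) {
        String group = "myuser", topic = "mytopic";   // hypothetical names
        int partitions = 6;
        // In reality each partition has its own latest offset,
        // e.g. obtainable with kafka.tools.GetOffsetShell.
        long targetOffset = 12345L;

        ZkClient zk = new ZkClient("zkhost:2181", 30000, 30000,
                new BytesPushThroughSerializer());
        try {
            for (int p = 0; p < partitions; p++) {
                String path = "/consumers/" + group + "/offsets/" + topic + "/" + p;
                if (!zk.exists(path)) {
                    zk.createPersistent(path, true);  // create parent nodes too
                }
                // Kafka stores the offset as a plain UTF-8 string in the node.
                zk.writeData(path,
                        String.valueOf(targetOffset).getBytes(StandardCharsets.UTF_8));
            }
        } finally {
            zk.close();
        }
    }
}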
Hi Jun,
zookeeper.session.timeout.ms is used in a broker's configuration and manages the broker's registration with zk.
Does it apply to the consumer as well? Thanks.
Regards,
Libo
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Monday, December 30, 2013 11:13 AM
To: users@
0.8.1 is working stably at LinkedIn now.
>
> Guozhang
>
>
> On Thu, Dec 19, 2013 at 10:52 AM, Yu, Libo wrote:
>
> > I also want to know how stable 0.8.1 will be, compared with 0.8
> > or 0.8-beta1.
> >
> > Regards,
> >
> > Libo
> &g
is 0.8.1, will there be a 'release' of this soon, or are there still
significant open issues?
Thanks,
Jason
On Thu, Dec 19, 2013 at 12:17 PM, Guozhang Wang wrote:
> Libo, yes the upgrade from 0.8 to 0.8.1 can be done in place.
>
> Guozhang
>
>
> On Thu, Dec 1
beta1 to 0.8.1
Libo, yes the upgrade from 0.8 to 0.8.1 can be done in place.
Guozhang
On Thu, Dec 19, 2013 at 8:57 AM, Yu, Libo wrote:
> Hi folks,
>
> As the tools in 0.8 are not stable and we don't want to take the risk,
> we want to skip 0.8 and upgrade from beta1 to 0.8.1 directly.
Hi folks,
As the tools in 0.8 are not stable and we don't want to take the risk, we want to skip 0.8 and upgrade from beta1 to 0.8.1 directly. So my question is whether we can do an in-place upgrade and let 0.8.1 use beta1's zk and kafka data. Assume that we will disable log compaction. Thanks.
Regards,
will return false instead of
> throwing
> > an exception.
> >
> > Guozhang
> >
> >
> > On Tue, Dec 17, 2013 at 11:53 AM, Yu, Libo wrote:
> >
> > > Sorry, a typo. Correct my question. When consumer.timeout.ms is
> > > set to
> > 0,
Jun
On Tue, Dec 17, 2013 at 4:57 PM, Guozhang Wang wrote:
> If there are no more messages, hasNext will return false instead of
> throwing an exception.
>
> Guozhang
>
>
> On Tue, Dec 17, 2013 at 11:53 AM, Yu, Libo wrote:
>
> > Sorry, a typo. Correct my question.
Sent: Tuesday, December 17, 2013 12:40 AM
To: users@kafka.apache.org
Subject: Re: a consumer question
If there is a message, hasNext() returns true, not throwing an exception.
Thanks,
Jun
On Mon, Dec 16, 2013 at 11:29 AM, Yu, Libo wrote:
> Hi folks,
>
> For this parameter, if consumer.t
Hi folks,
For this parameter, if consumer.timeout.ms is set to 0, whenever I call
ConsumerIterator's hasNext(),
if there is a message available, a timeout exception will be thrown. Is my
understanding correct? Thanks.
consumer.timeout.ms (default: -1): Throw a timeout exception to the consumer if no message is available for consumption after the specified interval.
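Putting the replies in this thread together, my reading of the 0.8 high-level consumer is: hasNext() blocks for up to consumer.timeout.ms, returns true once a message is available, and throws ConsumerTimeoutException if the timeout expires first (it never throws when a message is present). A sketch:

import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.message.MessageAndMetadata;

public class TimeoutExample {
    // 'stream' comes from ConsumerConnector.createMessageStreams(...), with
    // consumer.timeout.ms set to e.g. "100" in the consumer Properties.
    static void drain(KafkaStream<byte[], byte[]> stream) {
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        try {
            while (it.hasNext()) {  // blocks up to consumer.timeout.ms
                MessageAndMetadata<byte[], byte[]> m = it.next();
                System.out.println("got message at offset " + m.offset());
            }
        } catch (ConsumerTimeoutException e) {
            // no message arrived within consumer.timeout.ms; the stream is
            // still usable and hasNext() can be called again later
        }
    }
}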
evenly distributed on all six brokers.
If I use reassignment tool in 0.81 with 0.8 broker, will that work and get
around the bugs?
Your broker also needs to be on 0.8.1 for it to work correctly.
On Mon, Dec 16, 2013 at 9:06 AM, Yu, Libo wrote:
> If we have six brokers, and a topic has three partitions
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Monday, December 16, 2013 10:26 AM
To: users@kafka.apache.org
Subject: RE: cluster expansion
They will be evenly distributed across the nodes in the cluster.
Thanks,
Neha
On Dec 16, 2013 6:42 AM, "Yu, Libo" wrote:
will remain on the original brokers.
> > You
> > > could either reassign some partitions from all topics to the new
> brokers
> > or
> > > you could add partitions to the new brokers for each topic. In
> > > 0.8.0
> > there
> > > is now
Hi folks,
There are three brokers running 0.8-beta1 in our cluster currently. Assume all the topics have six partitions.
I am going to add another three brokers to the cluster and upgrade all of them to 0.8. My question is: after the cluster is up, will the partitions be evenly distributed to all six brokers?
To: users@kafka.apache.org
Subject: Re: error from adding a partition
Tried this on the 0.8.0 release and it works for me. Could you make sure there
are no duplicated kafka jars?
Thanks,
Jun
On Tue, Dec 10, 2013 at 7:08 AM, Yu, Libo wrote:
> Hi folks,
>
> I got this error when I tri
Hi folks,
I got this error when I tried to test the partition addition tool.
bin/kafka-add-partitions.sh --partition 1 --topic libotesttopic --zookeeper
xx.xxx.xxx.xx:
adding partitions failed because of
kafka.admin.AdminUtils$.assignReplicasToBrokers(Lscala/collection/Seq;)Lscala/collec
+release+plan
Thanks,
Jun
On Wed, Dec 4, 2013 at 8:15 AM, Yu, Libo wrote:
> Thanks for the clarification. I am just curious about how this works out.
> If we can change the retention size with "kafka-topics.sh --alter",
> will the new retention size be updated to the server
g tool. 0.8.1 makes all per-topic
> configuration dynamic and updatable via a command line tool.
>
> -Jay
>
>
> On Tue, Dec 3, 2013 at 1:23 PM, Yu, Libo wrote:
>
> > Hi Neha,
> >
> > "0.8.1 includes the ability to dynamically change per topic configuration
t 10:21 AM, Yu, Libo wrote:
> Hi folks,
>
> For 0.8, it is possible to add a partition dynamically. Is it possible
> to increase the retention size on the fly? This feature will be very
> useful for operation. I know a rolling restart can pick up the change but
> it takes too much effort. Thanks.
>
> Libo
>
>
Hi folks,
For 0.8, it is possible to add a partition dynamically. Is it possible to increase the retention size on the fly? This feature would be very useful for operations. I know a rolling restart can pick up the change, but it takes too much effort. Thanks.
Libo
Thanks,
Jun
On Mon, Dec 2, 2013 at 6:57 AM, Yu, Libo wrote:
> Actually, I saw this line in the log : can't rebalance after 4 retries.
> What should I expect in this case? All consumers threads failed or
> only some of them?
> If I increase the number of retries or delay between retries
, Nov 29, 2013 at 6:35 AM, Yu, Libo wrote:
> Hi team,
>
> Currently we are using 0.8-beta1. We plan to upgrade to 0.8. My
> concern is whether we need to purge all existing kafka and zookeeper
> data on the hard drive for this upgrade. In other words, can 0.8 use
> 0.8-beta1 kafka and zookeeper data on the hard drive? Thanks.
will not be
consumed by any consumers.
Thanks,
Jun
On Fri, Nov 29, 2013 at 10:44 AM, Yu, Libo wrote:
> You are right, Joe. I checked our brokers' log. We have three brokers.
> All of them failed to connect to zk at some point.
> So they were offline and later reregistered themselves
On Fri, Nov 29, 2013 at 11:31 AM, Yu, Libo wrote:
> We found our consumer stopped working after this exception occurred.
> Can the consumer recover from such an exception?
>
> Regards,
>
> Libo
>
>
> -Original Message-
We found our consumer stopped working after this exception occurred.
Can the consumer recover from such an exception?
Regards,
Libo
-Original Message-
From: Florin Trofin [mailto:ftro...@adobe.com]
Sent: Tuesday, July 16, 2013 4:20 PM
To: users@kafka.apache.org
Subject: Re: ConsumerRebalanceFailedException
Hi team,
Currently we are using 0.8-beta1. We plan to upgrade to 0.8. My concern
is whether we need to purge all existing kafka and zookeeper data on the
hard drive for this upgrade. In other words, can 0.8 use 0.8-beta1 kafka
and zookeeper data on the hard drive? Thanks.
Regards,
Libo
, 2013 at 12:28 PM, Yu, Libo wrote:
> Hi team,
>
> I am reading this link:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-5.AddPartitionTool
> and this JIRA
> https://issues.apache.org/jira/i#browse/KAFKA-1030. I have a couple of
Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
On Thu, Nov 28, 2013 at 4:24 PM, Yu, Libo wrote:
> Hi team,
>
> For the current 0.8 branch, is it recommended to compile it with Scala 2.10?
Hi team,
For the current 0.8 branch, is it recommended to compile it with Scala 2.10?
I remember someone said previously it is best to compile the broker with Scala 2.8.0. Thanks.
Regards,
Libo
Hi team,
I am reading this link:
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-5.AddPartitionTool
and this JIRA https://issues.apache.org/jira/i#browse/KAFKA-1030. I have a
couple of questions.
After adding a partition by using the tool, should the consumer
I did a restart and the issue was gone. It could be that we changed the
retention size and did not restart
the brokers to pick up the change. Thanks for your help.
Regards,
Libo
-Original Message-
From: Yu, Libo [ICG-IT]
Sent: Friday, November 22, 2013 10:53 AM
To: '
The producer sticks to a partition for the metadata refresh period, so if your test run isn't long enough some partitions may be more loaded than the others.
On Thu, Nov 21, 2013 at 06:28:39PM +0000, Yu, Libo wrote:
> Hi team,
>
> We have 3 brokers in a cluster. The replication factor i
Hi team,
We have 3 brokers in a cluster. The replication factor is 2. I set the default retention size to 3GB. I published 12GB of data to a topic, which is enough to fully load all partitions. I assume on each broker the partition size should be 3GB. However, it is only 1.4GB for one partition.
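If the uneven partition sizes come from the sticky-partition behavior described in the reply above, shortening the producer's metadata refresh makes it re-pick a partition more often. A sketch of the relevant 0.8 producer setting (broker list is a placeholder; the default interval is 10 minutes, but verify for your version):

import java.util.Properties;
import kafka.producer.ProducerConfig;

public class StickyPartitionTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // The 0.8 producer re-picks a partition for keyless messages only when
        // topic metadata is refreshed, so short test runs can skew the load.
        props.put("topic.metadata.refresh.interval.ms", "60000");
        ProducerConfig config = new ProducerConfig(props);
        // pass 'config' to new Producer<K, V>(config)
    }
}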
Hi team,
Still, this is from beta1. I notice this exception occurs frequently in our broker logs.
[2013-11-14 21:09:58,714] INFO Got user-level KeeperException when processing sessionid:0x24250e816b000df type:create cxid:0x26b zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/con
...@gmail.com]
Sent: Thursday, November 14, 2013 12:27 PM
To: users@kafka.apache.org
Subject: Re: broker exception
Are you using ZK 3.3.4? This seems to be caused by a bug in 3.3.3 and 3.3.0.
https://issues.apache.org/jira/browse/ZOOKEEPER-1115
Thanks,
Jun
On Thu, Nov 14, 2013 at 5:25 AM, Yu, Libo wrote:
r log?
Guozhang
On Thu, Nov 14, 2013 at 8:23 AM, Yu, Libo wrote:
> Hi team,
>
> I saw this line within a long time span in our logs for the same topic
> and partition.
> [2013-11-14 11:13:41,647] WARN [KafkaApi-1] Produce request with
> correlation id 529240 from client
Hi team,
I saw this line repeatedly over a long time span in our logs for the same topic and partition.
[2013-11-14 11:13:41,647] WARN [KafkaApi-1] Produce request with correlation id
529240 from client on partition [mytopic,2] failed due to Leader not local for
partition [mytopic,2] on broker 1 (kafka
Hi team,
We are using beta1. I am going to delete all topics and create them with more partitions, but I don't want to lose any messages.
Assume the consumers are online all the time for the following steps. The consumer's auto.offset.reset is set to largest.
1. Stop publishing to the brokers.
Hi team,
This exception occurs regularly on our brokers. When it occurs, a broker will lose its leader role but remain in the ISR. Running the preferred-leader-election script may rebalance the leadership, but in some cases it does not help.
[2013-11-14 08:04:40,001] INFO Processed session termination
I read it and tried to understand it. It would be great to add a summary
at the beginning about what it is and how it may impact a user.
Regards,
Libo
-Original Message-
From: Joel Koshy [mailto:jjkosh...@gmail.com]
Sent: Friday, November 08, 2013 2:01 AM
To: users@kafka.apache.org
Sub
Thanks for your reply, Joel.
Regards,
Libo
-Original Message-
From: Joel Koshy [mailto:jjkosh...@gmail.com]
Sent: Thursday, November 07, 2013 5:00 PM
To: users@kafka.apache.org
Subject: Re: add partition tool in 0.8
>
> kafka-add-partitions.sh is in 0.8 but not in 0.8-beta1. Therefore we cannot use this tool with 0.8-beta1.
Hi team,
Here is what I want to do:
We are using 0.8-beta1 currently. We already have some topics and want to add
partitions
for them.
kafka-add-partitions.sh is in 0.8 but not in 0.8-beta1. Therefore we cannot use this tool with 0.8-beta1. If I download the latest 0.8 and compile it, can I use its
Got it. Thanks.
Regards,
Libo
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Thursday, October 24, 2013 10:09 AM
To: users@kafka.apache.org
Subject: Re: question about default key
The default key is null.
Thanks,
Neha
On Oct 24, 2013 6:47 AM, "Yu, Libo" wrote:
Hi team,
If I don't specify a key when publishing a message, a default key will be
generated.
In this case, how long is the default key and will the consumer get this
default key?
Thanks.
Libo
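Per Neha's reply above, no key is generated at all: the message is sent with a null key, and the consumer sees key() == null. A sketch against the 0.8 Java producer API (broker address and topic are placeholders):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class NullKeyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        // The two-argument KeyedMessage sends no key at all.
        producer.send(new KeyedMessage<String, String>("mytopic", "payload, no key"));
        producer.close();
    }
}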
Hi team,
For the message type in KeyedMessage, I can use String or byte[].
Is there any difference in terms of the actual data transferred?
Regards,
Libo
Hi team,
According to the document, the default partitioner hashes the key string and assigns the message to a partition. Could you give a brief introduction to the hash algorithm? If a long timestamp (in hex format) is used as the key, will the messages be distributed evenly to all partitions?
Assume
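For reference, the 0.8 default partitioner amounts to a modulo over the key's hashCode. A sketch of the equivalent logic (not the actual Kafka source):

public class DefaultPartitionerSketch {
    // Roughly what kafka.producer.DefaultPartitioner does in 0.8:
    // abs(key.hashCode) % numPartitions, where abs masks the sign bit
    // (Math.abs(Integer.MIN_VALUE) would remain negative).
    static int partition(Object key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // A hex timestamp key is hashed via String.hashCode(); that spreads
        // reasonably well, but perfectly even distribution is not guaranteed.
        String key = Long.toHexString(System.currentTimeMillis());
        System.out.println(partition(key, 6));
    }
}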
, 2013 at 9:14 AM, Yu, Libo wrote:
> Hi team,
>
> Is it possible to use a single producer with more than one thread? I
> am not sure if its send() is thread safe.
>
> Regards,
>
> Libo
>
>
--
-- Guozhang
Hi team,
Is it possible to use a single producer with more than one thread? I am not sure if its send() is thread safe.
Regards,
Libo
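To my knowledge the 0.8 producer's send() is thread safe, so a single shared instance is the common pattern. A sketch (broker list and topic are placeholders):

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SharedProducer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        final Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int id = i;
            pool.submit(new Runnable() {
                public void run() {  // all threads share the one producer
                    producer.send(new KeyedMessage<String, String>(
                            "mytopic", "from thread " + id));
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        producer.close();
    }
}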
message 'markedForCommit', and not the last 'consumed' offset, which may or may not have succeeded. This way, consumer code can just call 'markForCommit()' after successfully processing each message.
Does that make sense?
On Mon, Sep 9, 2013 at 5:21 PM
You can use a thread pool to write to HBase and create another pool of consumer threads, or add more consumer processes. The bottleneck in this case is writing to HBase.
Regards,
Libo
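A sketch of the two-pool handoff suggested above, so slow HBase writes do not stall the Kafka fetch path (writeToHbase is a hypothetical stand-in for the actual HBase client call):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.message.MessageAndMetadata;

public class HbaseHandoff {
    static void writeToHbase(byte[] payload) {
        // hypothetical stand-in for the real HBase write
    }

    static void start(Iterable<KafkaStream<byte[], byte[]>> streams) {
        ExecutorService consumers = Executors.newFixedThreadPool(4);
        final ExecutorService writers = Executors.newFixedThreadPool(16);
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            consumers.submit(new Runnable() {
                public void run() {
                    ConsumerIterator<byte[], byte[]> it = stream.iterator();
                    while (it.hasNext()) {
                        final MessageAndMetadata<byte[], byte[]> m = it.next();
                        writers.submit(new Runnable() {
                            public void run() { writeToHbase(m.message()); }
                        });
                    }
                }
            });
        }
    }
}

One trade-off to note: once a write is queued to the second pool, the consumed offset may be committed before the HBase write actually finishes.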
-Original Message-
From: Graeme Wallace [mailto:graeme.wall...@farecompare.com]
Sent: Wednesday, Oct
Hi team,
Here is a usage case: assume each host in a kafka cluster has a gigabit network adapter, the incoming traffic is 0.8 Gbps, and at one point all the traffic goes to one host. The remaining bandwidth is not enough for the followers to replicate messages from this leader.
To make sure no b
Hi team,
Is it safe to apply the 0.8 patch to 0.8 beta1?
Regards,
Libo
-Original Message-
From: Joe Stein [mailto:crypt...@gmail.com]
Sent: Friday, September 13, 2013 4:10 PM
To: d...@kafka.apache.org; users@kafka.apache.org
Subject: Re: [jira] [Updated] (KAFKA-1046) Added support for
Answer my own question:
When I tried to apply the patch to 0.8 beta1, I got many errors and had to skip
it.
Regards,
Libo
-Original Message-
From: Yu, Libo [ICG-IT]
Sent: Tuesday, September 17, 2013 3:33 PM
To: 'users@kafka.apache.org'; 'd...@kafka.apache.org'
Sent: Tuesday, September 10, 2013 8:48 PM
To: users@kafka.apache.org
Subject: Re: monitoring followers' lag
It should be "kafka.server":type="ReplicaFetcherManager",name="Replica-MaxLag"
- can you confirm and mind updating the wiki if this is the case?
Thanks,
Joel
On Tue,
Hi team,
I wonder if anybody can give detailed instructions on how to monitor
the followers' lag by using JMX. Thanks.
Regards,
Libo
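The MBean name Joel suggests in this thread can be polled with the standard JMX remote API. A sketch, assuming the broker was started with JMX_PORT exposed (host, port, and the exact MBean name should be verified with jconsole first):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MaxLagProbe {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Name as suggested in this thread; 0.8-era mbean names are quoted.
            ObjectName name = new ObjectName(
                    "\"kafka.server\":type=\"ReplicaFetcherManager\",name=\"Replica-MaxLag\"");
            System.out.println("replica max lag = " + conn.getAttribute(name, "Value"));
        } finally {
            connector.close();
        }
    }
}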
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Tuesday, September 10, 2013 11:01 AM
To: users@kafka.apache.org
Subject: Re: monitoring followers' lag
Have you looked at the updated docs in
http://kafka.apache.org/documentation.html#monitoring ?
Thanks,
Jun
On Tue, Sep 10, 2013 at 7:59 AM, Yu, Libo wrote:
>
o messages translation
yourself.
As for setting replica.lag.max.messages, you can observe the max lag in the
follower and set replica.lag.max.messages to be a bit larger than that. I am
curious to know the observed max lag in your use case.
Thanks,
Jun
On Tue, Sep 10, 2013 at 6:46 AM, Yu, Libo wrote:
http://kafka.apache.org/documentation.html#monitoring ?
Thanks,
Jun
On Tue, Sep 10, 2013 at 7:59 AM, Yu, Libo wrote:
> Hi team,
>
> I wonder if anybody can give detailed instructions on how to monitor
> the followers' lag by using JMX. Thanks.
>
> Regards,
>
> Libo
>
>
Hi team,
For the default broker configuration, replica.lag.max.messages is 4000 and message.max.bytes is 1MB.
In the extreme case, the follower(s) could lag by 4000 messages. The leader must keep at least 4000 messages to allow the follower(s) to catch up, so the minimum retention size is 4000 x 1MB = 4000MB, about 4GB.
I
.
Thanks,
Neha
On Mon, Sep 9, 2013 at 9:08 AM, Yu, Libo wrote:
> If one connector is used for a single stream, when there are many
> topics/streams, will that cause any performance issues, e.g. too many
> connections, too much memory, or high latency?
>
> Regards,
>
> Libo
ocessed (seems a change to the connector itself
> > > might expose a way to
> > use
> > > auto offset commit, and have it never commit a message until it is
> > > processed). But that would be a change to the
> > > ZookeeperConsumerConnectorEssenti
the same time?
This is a better approach as there is no complex locking involved.
Thanks,
Neha
On Thu, Aug 29, 2013 at 10:28 AM, Yu, Libo wrote:
> Hi team,
>
> This is our current use case:
> Assume there is a topic with multiple partitions.
> 1 Create a connector first and
Hi team,
This is our current use case:
Assume there is a topic with multiple partitions.
1. Create a connector first and create multiple streams from the connector for a topic.
2. Create multiple threads, one for each stream. You can assume each thread's job is to save the messages into the database (see the sketch below).
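Steps 1 and 2 in 0.8 Java API terms, roughly (group and topic names are placeholders):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class StreamPerThread {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181");
        props.put("group.id", "dbwriters");
        // Step 1: one connector, multiple streams for the topic.
        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        int numThreads = 4;
        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put("mytopic", numThreads);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCount);

        // Step 2: one thread per stream.
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        for (final KafkaStream<byte[], byte[]> stream : streams.get("mytopic")) {
            pool.submit(new Runnable() {
                public void run() {
                    ConsumerIterator<byte[], byte[]> it = stream.iterator();
                    while (it.hasNext()) {
                        byte[] payload = it.next().message();
                        // save the payload into the database here
                    }
                }
            });
        }
    }
}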
On Wed, Aug 28, 2013 at 1:09 PM, Yu, Libo wrote:
> Hi team,
>
> We notice when the incoming throughput is very high, the broker has to
> delete old log files to free up disk space. That caused some kind of
> blocking
> (latency) and
> frequently the broker's zookeeper s
nted in
> > http://kafka.apache.org/documentation.html#brokerconfigs
> >
> > "Note that all per topic configuration properties below have the
> > format
> of
> > csv (e.g., "topic1:value1,topic2:value2")."
> > Thanks,
> > Jun
> >
Hi team,
We notice that when the incoming throughput is very high, the broker has to delete old log files to free up disk space. That causes some blocking (latency), and frequently the broker's zookeeper session times out. Currently our zookeeper timeout threshold is 4s. We can increase it. Bu
Hi Jun,
In a previous email thread
http://markmail.org/search/?q=kafka+log.retention.bytes#query:kafka%20log.retention.bytes+page:1+mid:qnt4pbq47goii2ui+state:results,
you said log.retention.bytes is for each partition. Could you clarify that?
Say if I have a topic with three partitions. I wa
, having a large
replica.lag.time.max.ms may delay the committing of a message.
Thanks,
Jun
On Tue, Aug 27, 2013 at 6:37 AM, Yu, Libo wrote:
> Thanks, Jun. That is very helpful. However, I still have a couple of
> questions. "We have a min fetch rate JMX in the broker". How t
Hi,
We have three brokers in our kafka cluster. For all topics, the replication factor is two.
Here is the distribution of leaders. After I ran the leader election tool, nothing happened. In this list, the first broker in the ISR is the leader. I assume after running the tool, the first broker is repli
> That's right. You shouldn't need to restart the whole cluster for a
> broker
> > to rejoin ISR. Do you see many ZK session expirations in the brokers
> > (search for "(Expired)"? If so, you may need to tune the GC on the
> broker.
> >
> > T
ISR
When a broker is restarted, it will automatically catch up from the leader and
will join ISR when it's caught up. Are you not seeing this happening?
Thanks,
Jun
On Fri, Aug 23, 2013 at 11:33 AM, Yu, Libo wrote:
> Hi,
>
> When a broker is not in a topic's ISR, will it
-729
Thanks,
Neha
On Fri, Aug 23, 2013 at 10:52 AM, Yu, Libo wrote:
> I will give it a try. I know how to delete log files. But to delete
> the zookeeper data, do I only need to run the delete script?
>
> Regards,
>
> Libo
>
>
> -Original Message
Hi,
When a broker is not in a topic's ISR, will it try to catch up to get back into the ISR by itself, or do we have to restart it?
We can increase replica.lag.time.max.ms and replica.lag.max.messages to let brokers stay longer in the ISR. Is that good practice? Still, this is related to the first question. W
to bounce the entire cluster once you've deleted the
zookeeper and kafka data for the topic in question.
Can you give it a try and let us know how it went?
Thanks,
Neha
On Fri, Aug 23, 2013 at 10:15 AM, Yu, Libo wrote:
> Hi Neha,
>
> One more question. Assume I want to dele
shrinking ISR or electing a new leader for the same partition.
Could you please file a JIRA to improve the quality of logging in this case?
Thanks,
Neha
On Fri, Aug 23, 2013 at 10:28 AM, Yu, Libo wrote:
> Hi team,
>
> During normal operation, all of a sudden, we found many exceptions
Hi team,
During normal operation, all of a sudden, we found many exceptions in the log like this. It seems one thread's zookeeper data was overwritten unexpectedly by some other thread. Any expertise will be appreciated.
[2013-08-23 13:17:00,622] INFO Partition [our.own.topicone.default,0] on bro
-1021
Thanks,
Neha
On Fri, Aug 23, 2013 at 6:34 AM, Yu, Libo wrote:
> Hi Neha,
>
> "Wipe out the cluster" Do you mean you uninstall the cluster and
> reinstall it?
> Or you just delete all kafka data and zookeeper data for the cluster?
> This is not a blocking issue
An auto-increment index can be assigned to a message as a key when it is published. The consumer can monitor this index when receiving. If the expected message does not show up, buffer all received messages in a hashtable (using the index as the hash key) until it is received. Then handle all messages in order.
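A sketch of that buffering scheme, assuming the producer stamps each message with a monotonically increasing long used as the key:

import java.util.HashMap;
import java.util.Map;

public class ReorderBuffer {
    private final Map<Long, byte[]> pending = new HashMap<Long, byte[]>();
    private long nextExpected = 0;

    // Call for every received message, in whatever order it arrives.
    public void onMessage(long index, byte[] payload) {
        pending.put(index, payload);
        // Drain in order for as long as the next expected index is present.
        while (pending.containsKey(nextExpected)) {
            handle(pending.remove(nextExpected));
            nextExpected++;
        }
    }

    private void handle(byte[] payload) {
        // process the message here, now guaranteed to be in index order
    }
}

An unbounded map is a real risk if a message never arrives; a production version would cap the buffer or time out.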
Hi team,
Right now, from a stream, an iterator can be obtained which has a blocking hasNext(). So what is the implementation behind the iterator? I assume there must be a queue that the iterator monitors, and a separate thread that fetches data and feeds it to the queue when it is almost empty.
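That guess matches my understanding of the 0.8 consumer: background fetcher threads fill a blocking queue with fetched chunks, and the iterator's blocking hasNext() is essentially a (possibly timed) poll on that queue. A conceptual sketch, not the actual Kafka internals:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingIteratorSketch {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<byte[]>();
    private byte[] peeked;

    // Fetcher threads call this as data arrives from the brokers.
    public void feed(byte[] chunk) throws InterruptedException {
        queue.put(chunk);
    }

    // Blocking hasNext(): waits up to timeoutMs for the fetchers to deliver.
    public boolean hasNext(long timeoutMs) throws InterruptedException {
        if (peeked == null) {
            peeked = queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
        }
        return peeked != null;
    }

    public byte[] next() {
        byte[] item = peeked;
        peeked = null;
        return item;
    }
}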
> Right now it appears to work but doesn't, which is clearly not good.
>
> -Jay
>
> On Thu, Aug 22, 2013 at 10:57 AM, Neha N
This is from broker 3's log:
[2013-08-22 15:40:02,984] WARN [KafkaApi-3] Fetch request: Partition [test.replica1.default,0] doesn't exist on 3 (kafka.server.KafkaApis)
Here is what the list topic command shows:
topic: test.replica1.default    partition: 0    leader: 3    replicas: 3    isr:
feature in Kafka 0.8. So
> > any manual attempts to do so might have a negative impact on functionality.
> >
> > Thanks,
> > Neha
> >
> >
> > On Thu, Aug 22, 2013 at 10:30 AM, Yu, Libo wrote:
> >
> > > Hi team,
> > >
>
:
https://issues.apache.org/jira/browse/KAFKA-1019
Guozhang
On Wed, Aug 21, 2013 at 11:27 AM, Yu, Libo wrote:
> We never deleted it. Either it was never created or deleted somehow.
>
> Regards,
>
> Libo
>
>
> -Original Message-
> From: Guozhang Wang [mailto
Any manual attempts to do so might have a negative impact on functionality.
Thanks,
Neha
On Thu, Aug 22, 2013 at 10:30 AM, Yu, Libo wrote:
> Hi team,
>
> When I delete a topic, the topic is deleted from zookeeper but its log
> files are not deleted from Brokers.
>
> When I restart a broker,
Hi team,
When I delete a topic, the topic is deleted from zookeeper but its log files are not deleted from the brokers.
When I restart a broker, the broker will try to sync the log files whose topic has been deleted. Manually deleting the log files resolves the issue. Should the broker ignore log
]
Sent: Thursday, August 22, 2013 12:01 AM
To: users@kafka.apache.org
Subject: Re: ordering
Actually, I am not sure I understand the trouble that you mentioned.
Could you elaborate on that a bit more?
Thanks,
Jun
On Wed, Aug 21, 2013 at 12:30 PM, Yu, Libo wrote:
> Hi,
>
> This is f