Jun, yes, I am using 0.8.0-beta1.
On Fri, Jun 28, 2013 at 9:26 PM, Jun Rao wrote:
> In this case, the warning is for the fetch request from the replica, not
> regular consumers. I assume this is the log for the restarted broker. Is this
> transient? Are you using 0.8.0-beta1?
>
> Thanks,
>
> Jun
>
Using Kafka 0.8, when specifying a starting offset and an appropriate
fetchSize, SimpleConsumer will only return messages up to the highest offset
of the log segment containing the starting offset.
For example,
log segment #1 contains offsets 1 - 10
log segment #2 contains offsets 11 - 100
A fetch request
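For illustration, a minimal sketch of such a fetch using the 0.8 SimpleConsumer Java API (the host, port, topic, partition, and client id are placeholders):

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.javaapi.message.ByteBufferMessageSet;
import kafka.message.MessageAndOffset;

public class FetchSketch {
  public static void main(String[] args) {
    // socket timeout 100000 ms, receive buffer 64 KB
    SimpleConsumer consumer =
        new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "test-client");
    FetchRequest req = new FetchRequestBuilder()
        .clientId("test-client")
        .addFetch("my-topic", 0, 5L, 100000) // start at offset 5 with a large fetchSize
        .build();
    FetchResponse resp = consumer.fetch(req);
    // Per the observation above, even though the fetchSize spans both
    // segments, the returned messages stop at the highest offset of the
    // segment containing offset 5 (offset 10 in the example).
    ByteBufferMessageSet messages = resp.messageSet("my-topic", 0);
    for (MessageAndOffset mo : messages)
      System.out.println("offset: " + mo.offset());
    consumer.close();
  }
}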
Since broker A persists all messages on disk, the buffer is the on-disk
file (with recent data cached in the file system buffer).
Thanks,
Jun
On Fri, Jun 28, 2013 at 5:41 PM, Yu, Libo wrote:
> Hi,
>
> Assume A and B are two brokers in a Kafka cluster, and there is long network
> latency between A
In this case, the warning is for the fetch request from the replica, not
regular consumers. I assume this is the log for the restarted broker. Is this
transient? Are you using 0.8.0-beta1?
Thanks,
Jun
On Fri, Jun 28, 2013 at 10:57 AM, Vadim Keylis wrote:
> Good morning. I have a cluster of 3 kafka nodes
I'm assuming this is somewhat related to your previous question on
cross-DC replication. This is not an ideal setup, as mentioned there.
If the replica lags, then it will fall out of the "in-sync replica"
set. You could tune parameters that effectively allow a high (but
bounded over time) lag between the leader and its replicas.
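As a hedged sketch, the relevant 0.8 broker settings might look like this (the values are illustrative, not recommendations):

# server.properties on each broker
# messages a follower may trail the leader before being dropped from ISR
# (0.8 default: 4000)
replica.lag.max.messages=10000
# time a follower may go without issuing a fetch before being dropped from
# ISR (0.8 default: 10000 ms)
replica.lag.time.max.ms=30000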
The second method (replication across DCs) is not recommended.
The first setup would work provided the set of topics you are
mirroring from A->B is disjoint from the set of topics you are
mirroring from B->A (i.e., to avoid a mirroring loop).
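For illustration, a hedged sketch of disjoint whitelists with the stock MirrorMaker tool (the config file names and topic patterns are placeholders):

# in DC B, mirroring A -> B:
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config consumerA.properties \
  --producer.config producerB.properties \
  --whitelist="appA\..*"

# in DC A, mirroring B -> A (a disjoint topic set, so no loop):
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config consumerB.properties \
  --producer.config producerA.properties \
  --whitelist="appB\..*"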
Joel
On Fri, Jun 28, 2013 at 5:29 PM, Yu, Libo wrote
Looks good overall - thanks a lot for the improvements.
A couple of comments: clicking on the 0.7 link goes to the migration
page (which should probably be on the 0.8 link).
Also, for the configuration.html file, I used to find the old Scala
docs pointing to the actual *Config classes more current an
Hi,
Assume A and B are two brokers in a Kafka cluster, and there is long network
latency between A and B. For a partition with two replicas, one replica
is assigned to A and the other to B. The number of acknowledgements is set
to one. Assume the partition is handled by broker A.
After
Hi,
I can think of two failover strategies. I am not sure which one is the right
way to go.
First method: set up Kafka server A on cluster 1 and another server B on
cluster 2. The two clusters are in different data centers. Use a customized
MirrorMaker to sync between the two servers. Use
On 6/28/13 2:48 PM, "Sriram Subramanian" wrote:
>1. I have moved the FAQ to a wiki. I have separated the sections into
>producer-, consumer-, and broker-related questions. I would still need to
>add a replication FAQ. The main FAQ will now link to this. Let me know if
>you guys have better ways of
1. I have moved the FAQ to a wiki. I have separated the sections into
producer-, consumer-, and broker-related questions. I would still need to
add a replication FAQ. The main FAQ will now link to this. Let me know if
you guys have better ways of representing the FAQ.
https://cwiki.apache.org/confluen
Leader election occurs when brokers are bounced or lose their
ZooKeeper registration. Do you have a state-change.log on your
brokers? Also, can you see what's in the following ZK paths:
get /brokers/topics/meetme
get /brokers/topics/meetme/partitions/0/state
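For reference, one way to run those commands (a sketch, assuming ZooKeeper is reachable at localhost:2181) is the zookeeper-shell script shipped with Kafka:

bin/zookeeper-shell.sh localhost:2181
get /brokers/topics/meetme
get /brokers/topics/meetme/partitions/0/state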
On Fri, Jun 28, 2013 at 1:40 PM, Vadim K
Joel, my problem, following your explanation, is that the leader for some
reason did not get elected and the exception has been thrown for hours now.
What is the best way to force leader election for that partition?
Vadim
On Fri, Jun 28, 2013 at 12:26 PM, Joel Koshy wrote:
> Just wanted to clarify: the topic
Oh, I take it back - I forgot to clear out the ZK and Kafka log folders.
Once I did that and changed log.dir to \\tmp\\kafka-logs, we're up
and running. Thanks!
On Fri, Jun 28, 2013 at 7:53 AM, Jun Rao wrote:
> Any error in state-change.log or controller.log?
>
> Thanks,
>
> Jun
>
>
> On
Subscribe by sending an email to users-subscr...@kafka.apache.org
On Fri, Jun 28, 2013 at 1:47 AM, Yavar Husain wrote:
Just wanted to clarify: topic.metadata.refresh.interval.ms would apply
to producers - and mainly with ack = 0. (If ack = 1, then a metadata
request would be issued on this exception, although even with ack > 0 it is
useful to have the metadata refresh for refreshing information about how
many partitions
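For reference, a hedged sketch of the relevant 0.8 producer settings (the broker list and values are illustrative):

# producer.properties
metadata.broker.list=broker1:9092,broker2:9092
# the ack level discussed above
request.required.acks=1
# how often topic metadata is refreshed (default: 10 minutes)
topic.metadata.refresh.interval.ms=600000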
Unless I'm misreading something, that is controlled by the
topic.metadata.refresh.interval.ms variable (defaults to 10 minutes),
and I've not seen it run longer than that (unless there were other
problems going on besides that).
I would check the JMX values for things under
"kafka.server":type="
David, what is the expected time frame for the exception to continue? An
hour has passed since the short downtime and I still see the exception in
the Kafka service logs.
Thanks,
Vadim
On Fri, Jun 28, 2013 at 11:25 AM, David DeMaagd wrote:
> Getting kafka.common.NotLeaderForPartitionException for a
Getting kafka.common.NotLeaderForPartitionException for a time after a
node is brought back online (especially if it is a short downtime) is
normal - that is because the consumers have not yet completely picked up
the new leader information. It should settle shortly.
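In SimpleConsumer-style code, the usual recovery pattern (a hedged sketch; the topic, partition, and method names below are illustrative) is to check the fetch error code and re-discover the leader with a metadata request:

import java.util.Collections;
import kafka.common.ErrorMapping;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

// Returns true if the fetch failed because leadership moved; the caller
// should then use the metadata response to reconnect to the new leader
// and retry the fetch.
static boolean leaderMoved(SimpleConsumer consumer, FetchResponse resp) {
  if (resp.errorCode("my-topic", 0) != ErrorMapping.NotLeaderForPartitionCode())
    return false;
  TopicMetadataRequest mdReq =
      new TopicMetadataRequest(Collections.singletonList("my-topic"));
  TopicMetadataResponse mdResp = consumer.send(mdReq);
  // inspect mdResp.topicsMetadata() for the new leader's host and port
  return true;
}

(The high-level ZookeeperConsumerConnector handles this re-discovery automatically.)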
--
Dave DeMaagd
ddema...@l
I want to clarify that I restarted only one Kafka node; all others were
running and did not require a restart.
On Fri, Jun 28, 2013 at 10:57 AM, Vadim Keylis wrote:
> Good morning. I have a cluster of 3 kafka nodes. They were all running at
> the time. I needed to make a configuration change in the
Good morning. I have a cluster of 3 kafka nodes. They were all running at
the time. I needed to make a configuration change in the property file and
restart Kafka. I don't have the broker shutdown tool, but simply used pkill
-TERM -u ${KAFKA_USER} -f kafka.Kafka. That suddenly caused the exception. How t
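For what it's worth, 0.8 ships a controlled-shutdown admin tool; as a sketch (the broker id and ZooKeeper address below are placeholders):

bin/kafka-run-class.sh kafka.admin.ShutdownBroker \
  --zookeeper zkhost:2181 --broker 1

This moves partition leadership off the broker before stopping it, which should reduce the burst of NotLeaderForPartitionException that a plain kill can trigger.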
Joe,
Thanks for getting this done. Do you know if we have the beta1 jar in Maven
yet?
Jun
On Thu, Jun 27, 2013 at 3:48 PM, Joe Stein wrote:
> The Apache Kafka team is pleased to announce the release of Kafka
> 0.8.0-beta1
>
> Apache Kafka is a distributed publish-subscribe messaging system.
Any error in state-change.log or controller.log?
Thanks,
Jun
On Fri, Jun 28, 2013 at 12:09 AM, Denny Lee wrote:
> Quick follow up - I snagged the 0.8.0-beta1-candidate1 from github but I'm
> still getting the leader error issue. The good news is that it's not
> pumping out a lot of warning m
Hi,
I want to start using Kafka. I've written some test consumer/producer code
with the Maven version from Conjars (0.7.0), but it is old.
I want to build a new Kafka 0.7.2 (then Kafka 0.8 when it is released for
production usage).
I've looked at several topics/forums on the web; there is no kafka versio
I can confirm that they are using 2.2.0 in the code and that's all they
depend on (we too use Yammer Metrics, and currently it's integrating
nicely with our metrics).
To Jun's question: it would seem 3.0.0 was in fact released recently;
see https://groups.google.com/forum/#!topic/metrics-user/U9cJsrSEq2
+1 to 3.4.5.
I also ran it with 3.3.5 at one point and didn't experience your
issue, but that was the Cloudera release of 3.3.5, so it may have had other
fixes and patches in it. I know that 3.3.4 gets a lot of votes as the
most stable of the 3.3.x line, so I would recommend either going to
3.4.5 or goi
Quick follow up - I snagged the 0.8.0-beta1-candidate1 from GitHub, but I'm
still getting the leader error issue. The good news is that it's not
pumping out a lot of warning messages. Below is an excerpt of the
kafka-server-start.bat output - note that initially a leader
was successfully elected, but