Hello Apache Kafka community,
auto.create.topics.enable configuration option docs state:
"Enable auto creation of topic on the server. If this is set to true then
attempts to produce, consume, or fetch metadata for a non-existent topic
will automatically create it with the default replication fact
only writers should trigger auto topic creation, but not the
> readers. So, a topic can be auto created by the producer, but not the
> consumer.
>
> Thanks,
>
> Jun
>
> On Thu, Oct 2, 2014 at 2:44 PM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
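The setting under discussion, as a broker-side sketch (property names from the broker configs; values are illustrative defaults, not a recommendation):

```properties
# server.properties (broker side)
# When true, a produce request for a non-existent topic creates the topic
# with the defaults below; per the discussion above, only writers trigger
# creation, consumers/readers do not.
auto.create.topics.enable=true
num.partitions=1
default.replication.factor=1
```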
Hello Apache Kafka community,
When trying to publish (using the high-level sync producer) a message to a
non-existent topic (with implicit topic creation enabled), with
message.send.max.retries set to 1, sending will fail with
FailedToSendMessageException (and LeaderNotAvailableException swallowed).
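A hedged sketch of the old (0.8.x) sync producer settings involved; increasing retries and backoff usually rides out the leader election that follows implicit topic creation (values illustrative):

```properties
# producer.properties (0.8.x producer) - with auto topic creation the first
# send typically fails with LeaderNotAvailableException until leader election
# finishes, so a single retry is often not enough.
message.send.max.retries=5
retry.backoff.ms=200
```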
> Jun
>
> On Fri, Oct 3, 2014 at 1:31 AM, Stevo Slavić wrote:
>
> > OK, thanks,
> >
> > Do you agree then that the docs for auto topic creation configuration
> > parameter are misleading and should be changed?
> >
> > Another issue is that when the topic
8.1/core/src/main/scala/kafka/cluster/Broker.scala#L29
>
> On Fri, Oct 10, 2014 at 10:47 AM, Stevo Slavić wrote:
> > Hello Apache Kafka community,
> >
> > Attached trivial Maven built project with Kafka code fails to compile,
> with
> > error:
> >
> > "
Hello Apache Kafka community,
The current (Kafka 0.8.1.1) high-level API's KafkaConsumer is not a
lightweight object: its creation takes some time and resources, and it does
not seem to be thread-safe. Its API also does not support reuse for
consuming messages from different consumer groups.
I see e
Hello Apache Kafka users,
Using Kafka 0.8.1.1 (single instance with single ZK 3.4.6 running locally),
with auto topic creation disabled, in a test I have topic created with
AdminUtils.createTopic (AdminUtils.topicExists returns true) but
KafkaProducer on send request keeps throwing
UnknownTopicOrP
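One workaround in this situation is to retry the send with backoff until leader election for the freshly created topic completes. A library-agnostic sketch of that retry loop (`send_fn` and the simulated `flaky_send` are hypothetical stand-ins, not Kafka API):

```python
import time

def send_with_retries(send_fn, max_retries=5, backoff_s=0.2):
    """Retry send_fn until it succeeds or retries are exhausted.

    send_fn is any zero-arg callable that raises on failure (e.g. a wrapper
    around a producer send for a freshly created topic whose leader election
    has not finished yet).
    """
    last_error = None
    for attempt in range(max_retries):
        try:
            return send_fn()
        except Exception as e:  # e.g. UnknownTopicOrPartition
            last_error = e
            time.sleep(backoff_s * (attempt + 1))  # linear backoff
    raise last_error

# Simulated flaky send: fails twice (leader not yet elected), then succeeds.
attempts = {"n": 0}
def flaky_send():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("UnknownTopicOrPartition")
    return "ok"
```

With this simulation, `send_with_retries(flaky_send)` succeeds on the third attempt instead of surfacing the transient error.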
e correct, and it is indeed weird
> that the producer gets the exception after the topic is created. Could you use
> the kafka-topics command to check if the leaders exist?
>
> kafka-topics.sh --zookeeper XXX --topic [topic-name] describe
>
> Guozhang
>
> On Wed, Oct 22, 201
(ArrayBuffer.scala:47)
at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:127)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:56)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
On Wed, Oct 22, 2014 at 9:45 PM, Stevo Slavić wrote:
> kafka-topics.sh execution, from latest tr
Wed, Oct 22, 2014 at 10:03 PM, Stevo Slavić wrote:
> Output on trunk is clean too, after clean build:
>
> ~/git/oss/kafka [trunk|✔]
> 22:00 $ bin/kafka-topics.sh --zookeeper 127.0.0.1:50194 --topic
> 059915e6-56ef-4b8e-8e95-9f676313a01c --describe
> Error while executing top
It seems that the ZkSerializer used has to be aligned with the
KafkaProducer's configured key.serializer.class.
On Thu, Oct 23, 2014 at 1:13 AM, Stevo Slavić wrote:
> Still have to understand what is going on, but when I set
> kafka.utils.ZKStringSerializer to be ZkSerializer for ZkClient u
afka/clients/consumer/KafkaConsumer.html
> >
> .
>
> Can you explain why you would like to poll messages across consumer groups
> using just one instance?
>
> Thanks,
> Neha
>
> On Tue, Oct 14, 2014 at 1:03 AM, Stevo Slavić wrote:
>
> > Hello Apache Ka
Have group 1 act like a filter: publish to a new topic all messages that
group 2 should process, and then have group 2 actually consume only the new
topic.
Kind regards,
Stevo Slavic
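The two-stage flow above can be modeled with plain lists standing in for topics (a sketch of the idea, not Kafka client code):

```python
def filter_stage(source_topic, predicate):
    """Stage 1 (consumer group 1): republish matching messages to a new topic."""
    return [msg for msg in source_topic if predicate(msg)]

def process_stage(filtered_topic, handler):
    """Stage 2 (consumer group 2): consume only the filtered topic."""
    return [handler(msg) for msg in filtered_topic]

# Example: group 2 should only see even-numbered messages.
source = [1, 2, 3, 4, 5, 6]
filtered = filter_stage(source, lambda m: m % 2 == 0)
processed = process_stage(filtered, lambda m: m * 10)
```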
On Oct 26, 2014 2:36 AM, "Srinivas Reddy Kancharla"
wrote:
> Hi,
>
> I have a scenario where I produce messages for a
a path.
>
> Guozhang
>
> On Wed, Oct 22, 2014 at 5:36 PM, Stevo Slavić wrote:
>
> > It seems that used ZkSerializer has to be aligned with KafkaProducer
> > configured key.serializer.class.
> >
> > On Thu, Oct 23, 2014 at 1:13 AM, Stevo Slavić
take control of that partition so that it can read
> other pending messages?
>
> Thanks and regards,
> Srini
>
> On Sun, Oct 26, 2014 at 12:40 AM, Stevo Slavić wrote:
>
> > Have group 1 act like a filter, publish to a new topic all messages that
> > group 2 should p
scaladoc is built it can h
>
> Stevo, could you file a JIRA for the comments improvements?
>
> Guozhang
>
>
> On Mon, Oct 27, 2014 at 1:55 AM, Stevo Slavić wrote:
>
> > OK,
> >
> > thanks for heads up!
> >
> > Is this requirement documented som
Hello Apache Kafka community,
Is it already possible to configure/use a different metadata store (topics,
consumer groups, consumer to partition assignments, etc.) instead of
ZooKeeper?
If not, are there any plans to make it pluggable in the future?
Kind regards,
Stevo Slavic
>
> > On 14-Nov-2014, at 5:18 pm, Stevo Slavić wrote:
> >
> > Hello Apache Kafka community,
> >
> > Is it already possible to configure/use a different metadata store
> (topics,
> > consumer groups, consumer to partition assignments, etc.) instead of
I have no experience with it, but https://github.com/gerritjvv/kafka-fast
seems to fit your description.
Kind regards,
Stevo Slavic
On Fri, Dec 12, 2014 at 7:18 PM, Surendranauth Hiraman <
suren.hira...@velos.io> wrote:
>
> Basically, don't want to use ZK, for the reasons driving the new client
> of
Hello Apache Kafka community,
Is the currently active vote for 0.8.2.0-RC1 or 0.8.2.0?
If the vote is for 0.8.2.0-RC1, why isn't that reflected in the artifact
metadata? The version should be 0.8.2.0-RC1, 0.8.2-RC1 or something similar
(0.8.2 beta release had "-beta" and no ".0" suffix - see
http://repo1.mave
istent in terms of versioning format. So picking 0.8.2.0 is
> intended to fix that inconsistency.
>
> Thanks,
>
> Jun
>
> On Wed, Jan 14, 2015 at 9:12 AM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
> > Is currently active vote for 0.8.2.0
Hello Apache Kafka community,
In Kafka 0.8.1.1, are Kafka metrics updated/tracked/marked by the simple
consumer implementation or only by the high-level one?
Kind regards,
Stevo Slavic.
Have you considered including order information in messages that are sent
to Kafka, and then restoring order in logic that is processing messages
consumed from Kafka?
http://www.enterpriseintegrationpatterns.com/Resequencer.html
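A minimal sketch of that resequencer idea, assuming each message carries a contiguous sequence number starting at 0 (pure Python, not tied to any Kafka client):

```python
def resequence(messages):
    """Yield messages in sequence order, buffering gaps until they fill.

    Each message is a (seq, payload) pair; sequence numbers are assumed
    contiguous from 0.
    """
    buffer = {}
    next_seq = 0
    for seq, payload in messages:
        buffer[seq] = payload
        while next_seq in buffer:
            yield buffer.pop(next_seq)
            next_seq += 1

# Messages arrive out of order (e.g. interleaved from different partitions):
arrived = [(1, "b"), (0, "a"), (3, "d"), (2, "c")]
in_order = list(resequence(arrived))
```

Note the buffer grows with the size of the largest gap, so in a real consumer it needs a bound or a timeout.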
Kind regards,
Stevo Slavic.
On Wed, Mar 4, 2015 at 12:15 AM, Josh Ra
Hello Apache Kafka community,
On Apache Kafka website home page http://kafka.apache.org/ it is stated
that Kafka "can be elastically and transparently expanded without downtime."
Is that really true? More specifically, can one just add one more broker,
have another partition added for the topic, h
Lots more
> features already in there... we are also in progress to auto balance
> partitions when increasing/decreasing the size of the cluster and some more
> goodies too.
>
> ~ Joe Stein
> - - - - - - - - - - - - - - - - -
>
> http://www.stealth.ly
> - - - -
-do that out of the box... folks do
> this
> > > elastic scaling today with AWS CloudFormation and internal systems
> they
> > > built too.
> > >
> > > So, it can be done... you just have to do it.
> > >
> > > ~ Joe Stein
> > > - - - - - - - - - - - - - - - - -
> > >
distributed load
> balancing. That will sufficiently isolate the resources required to run the
> various consumers. But probably you have a specific use case in mind for
> running several consumer groups on the same machine. Would you mind giving
> more details?
>
> On Thu, Oct 23
* Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
> On Fri, Feb 27, 2015 at 2:35 AM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
> > In Kafka 0.8.1.1, are Kafka metrics updated/tracked/marked by simple
>
Hello Apache Kafka community,
I like that kafka-clients is now a separate module, and has no Scala
dependency even. I'd like to propose that the kafka.admin package gets
published as a separate module too.
I'm writing some tests, and to be able to use kafka.admin tools/utils in
them I have to bring in to
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
> JIRA - https://issues.apache.org/jira/browse/KAFKA-1694
> Mailing list - thread "[DISCUSS] KIP-4 - Command line and centralized
> administrative operations"
+1 for dropping 2.9.x support
Kind regards,
Stevo Slavic.
On Fri, Mar 27, 2015 at 3:20 PM, Ismael Juma wrote:
> Hi all,
>
> The Kafka build currently includes support for Scala 2.9, which means that
> it cannot take advantage of features introduced in Scala 2.10 or depend on
> libraries that re
Hello Apache Kafka community,
Please correct me if I'm wrong, but AFAIK currently (Kafka 0.8.2.x) offset
management is mainly a client/consumer-side responsibility.
Wouldn't it be better if it were a broker-side-only responsibility?
E.g. now if one wants to use custom offset management, any of
Please correct me if I'm wrong, but I think it is really not a hard
constraint that one cannot have more consumers (from the same group) than
partitions on a single topic - all the surplus consumers will not be assigned
any partition to consume, but they can be there, and as soon as one active consumer
from sam
I noticed similar behavior on a similarly small cluster of 3 Kafka brokers
and 3 ZooKeeper nodes, Kafka 0.8.1.1 and ZooKeeper 3.4.6, with ~5K topics,
most of them with a single partition, replication factor of 1, and most of
them unused for a long time, but the brokers are busy and performance, especially
producer
Nice, thanks for sharing!
Is 30k msgs/sec publishing or push throughput? Will check, hopefully
performance tests are included in sources.
Does Hermes have same max number of topics limitations as Kafka or does it
include a solution to have that aspect scalable as well?
On May 16, 2015 8:02 AM, "
then you
> should monitor them yourself, simple and plain, right?
>
> 2015-04-22 14:36 GMT+08:00 Stevo Slavić :
>
> > Hello Apache Kafka community,
> >
> > Please correct me if wrong, AFAIK currently (Kafka 0.8.2.x) offset
> > management responsibility is mainly cl
Hello Kafka community,
We had a ton of test topics, and deleted them using Kafka admin scripts -
then our metrics error log started filling up with exceptions.
Kafka metric reporter is trying to read LogStartOffset gauge value, and
that throws NoSuchElementException.
java.util.NoSuchElementExcept
PM, Stevo Slavić wrote:
> Hello Kafka community,
>
> We had a ton of test topics, and deleted them using Kafka admin scripts -
> then our metrics error log started filling up with exceptions.
> Kafka metric reporter is trying to read LogStartOffset gauge value, an
Hello Apache Kafka community,
In current (v0.8.2.1) documentation at
http://kafka.apache.org/documentation.html#configuration I cannot find
anything about two configuration properties used in
kafka.metrics.KafkaMetricsConfig, namely "kafka.metrics.reporters" and
"kafka.metrics.polling.interval.sec
jira?
>
> Aditya
>
> ____
> From: Stevo Slavić [ssla...@gmail.com]
> Sent: Sunday, May 31, 2015 3:57 PM
> To: users@kafka.apache.org
> Subject: KafkaMetricsConfig not documented
>
> Hello Apache Kafka community,
>
> In current (v0.8.2.1) document
Hello Jakub,
Maybe it will work for you to combine MirrorMaker
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330
and Burrow: https://github.com/linkedin/Burrow
See recent announcement for Burrow
http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCACrdVJpS3k
guess adding a new component will increase the complexity of the
> system
> > > structure. And if the new component consists of one or a few nodes, it
> > may
> > > become the bottleneck of the whole system, if it consists of many
> nodes,
> > > i
With auto-commit one can only have an at-most-once delivery guarantee: after
the commit but before the message is delivered for processing, or even after
it is delivered but before it is processed, things can fail, causing the
event not to be processed, which is basically the same outcome as if it was not delivered.
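That failure window can be modeled directly: in this sketch the commit-before-process branch stands in for what auto-commit risks, while committing only after processing gives at-least-once instead (all names are hypothetical, not Kafka API):

```python
def consume(messages, process, commit_before_process, crash_at):
    """Simulate one consumer run; returns (processed, committed_offset).

    crash_at is the offset at which the consumer dies mid-message;
    committed_offset is the offset of the last committed message (-1 if none).
    """
    processed = []
    committed = -1
    for offset, msg in enumerate(messages):
        if commit_before_process:
            committed = offset          # auto-commit style: commit first
            if offset == crash_at:
                break                   # crash before processing -> msg lost
            processed.append(process(msg))
        else:
            if offset == crash_at:
                break                   # crash before commit -> msg redelivered
            processed.append(process(msg))
            committed = offset          # commit only after processing
    return processed, committed

msgs = ["m0", "m1", "m2"]
# At-most-once: offset 1 is committed but never processed (lost on restart).
p1, c1 = consume(msgs, str.upper, commit_before_process=True, crash_at=1)
# At-least-once: offset 1 is not committed, so a restart re-reads it.
p2, c2 = consume(msgs, str.upper, commit_before_process=False, crash_at=1)
```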
Hello Marina,
There's Kafka API to fetch and commit offsets
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
- maybe it will work for you.
Kind regards,
Stevo Slavic.
On Fri, Jun 19, 2015 at 3:23 PM, Marina wrote:
> Hi,
>
> in older Kafka vers
I believe it's: users-unsubscr...@kafka.apache.org
Maybe http://kafka.apache.org/contact.html could be updated with
unsubscribe info.
Kind regards,
Stevo Slavic.
On Thu, Jun 25, 2015 at 9:25 AM, Monika Garg wrote:
> Hi
>
> I want to unsubscribe from this mailing list.
>
> I have sent the unsub
Hello Apache Kafka community,
Couldn't the broker return a special error code in FetchResponse for a given
partition(s) where it detects that there was something to return/read from
the partition, but the actual FetchResponse contains no messages for that
partition since the fetch size in the FetchRequest for that par
hen you will know that your fetch size needs
> to increase.
>
> Thanks,
>
> Joel
>
> On Thu, Jul 02, 2015 at 05:32:20PM +0200, Stevo Slavić wrote:
> > Hello Apache Kafka community,
> >
> > Couldn't broker return a special error code in FetchResponse fo
Hello Emanuele,
From the logs it seems that auto.create.topics.enable is not overridden for the
embedded broker.
It also seems that test is explicitly creating topic before publishing
message to it.
Consider commenting out explicit topic creation and rely on implicit topic
creation.
Kind regards,
Ste
Hello Apache Kafka community,
Documentation for min.insync.replicas in
http://kafka.apache.org/documentation.html#brokerconfigs states:
"When used together, min.insync.replicas and request.required.acks allow
you to enforce greater durability guarantees. A typical scenario would be
to create a to
"different replica" can ACK the second batch of messages
> while not having the first - from what I can see, it will need to be
> up-to-date on the latest messages (i.e. correct HWM) in order to ACK.
>
> On Tue, Jul 7, 2015 at 7:13 AM, Stevo Slavić wrote:
> > Hello Apache K
licasReachOffset(...) has this logic. A
> replica can't reach offset of second batch, without first having
> written the first batch. So I believe we are safe in this scenario.
>
> Gwen
>
> On Tue, Jul 7, 2015 at 8:01 AM, Stevo Slavić wrote:
> > Hello Gwen,
> >
>
th-apache-kaf
> ka-49753844
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On 7/7/15, 9:28 AM, "Stevo Slavić" wrote:
>
> >Thanks for heads up and code reference!
> >
> >Traced back required offset to
> >
> https://github.com/apache/kafka/
Hello Gaurav,
There are several Zookeeper client connection configuration properties you
can tune for Kafka brokers. They are documented at
http://kafka.apache.org/documentation.html#brokerconfigs (all begin with
"zookeeper." prefix)
Did you change any already? I'd start with defaults (they can and
Hello Apache Kafka Community,
Is it possible to store and retrieve additional custom topic metadata along
with the existing Kafka-managed metadata, using some Kafka API? If not, would
it be a problem (e.g. for the Kafka broker or some client APIs) if I were to
store/retrieve additional custom topic metadata usi
Hello Sivananda,
Calling AdminUtils.deleteTopic just requests that the topic be deleted; it
does not actually delete the topic immediately. Requests for topic deletion
get saved in ZooKeeper as a node (named by topic name), under the
/admin/delete_topics node.
If brokers in the cluster are configured with top
Hello Apache Kafka community,
In new KafkaConsumer API on trunk, it seems it's only possible to define
consumer group id at construction time of KafkaConsumer, through property
with group.id key.
Would it make sense and be possible to support setting/changing consumer
group id after construction,
g Wang wrote:
> Hi Stevo,
>
> Hmm this is interesting, do you have any use cases in mind that need
> dynamic group changing?
>
> Guozhang
>
> On Fri, Jul 17, 2015 at 11:13 PM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
> > In new
>
> Guozhang
>
> On Mon, Jul 20, 2015 at 11:06 AM, Jason Gustafson
> wrote:
>
> > Hey Stevo,
> >
> > The new consumer doesn't have any threads of its own, so I think
> > construction should be fairly cheap.
> >
> > -Jason
> >
> >
Hello Apache Kafka community,
The new HLC poll returns ConsumerRecords.
Does ConsumerRecords contain records for every partition that the HLC is
actively subscribed to on every poll request, or does it contain only records
for partitions which had messages retrieved in that poll request?
If lat
Hello Apache Kafka community,
I find the new consumer's poll/seek javadoc a bit confusing. Just by reading
the docs I'm not sure what the outcome will be, what is expected in the
following scenario:
- kafkaConsumer is instantiated with auto-commit off
- kafkaConsumer.subscribe(someTopic)
- kafkaConsumer.positi
Hello Apache Kafka community,
Just noticed that:
- a message is successfully published using the new 0.8.2.1 producer
- and then Kafka is stopped
The next attempt to publish a message using the same instance of the new
producer hangs forever, and the following stacktrace gets logged repeatedly:
[WARN ] [o.a.kafka.comm
Hello Apache Kafka community,
It seems the new high-level consumer coming in 0.8.3 will support only
offset storage in a Kafka topic.
Can somebody please confirm/comment?
Kind regards,
Stevo Slavic.
Hello Apache Kafka community,
In the new consumer I encountered unexpected behavior. After constructing a
KafkaConsumer instance with a configured consumer rebalance callback handler,
and subscribing to a topic with "consumer.subscribe(topic)", retrieving
subscriptions would return an empty set and callb
ions which had messages.
> Would you mind creating a jira for the feature request? You're welcome to
> submit a patch as well.
>
> -Jason
>
> On Tue, Jul 21, 2015 at 2:27 AM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
> > New HLC poll r
Hello Sreenivasulu,
It's automatic: just start them, and as each HLC starts it registers in ZK,
and rebalancing of HLC-to-partition assignments happens.
Be gentle when starting consumers; there is a reported bug that if multiple
HLCs are started in a short time, some of them may end up without any
unclean leader election is not
> enabled).
>
> On Tue, Jul 21, 2015 at 9:11 PM, James Cheng wrote:
>
> >
> > > On Jul 21, 2015, at 9:15 AM, Ewen Cheslack-Postava
> > wrote:
> > >
> > > On Tue, Jul 21, 2015 at 2:38 AM, Stevo Slavić
> wrote:
>
Strange; if after seek I make several poll requests, eventually it will
read/return messages from the offset that seek set.
On Thu, Jul 23, 2015 at 11:03 AM, Stevo Slavić wrote:
> Thanks Ewen for heads up.
>
> It's great that seek is not needed in between poll when business goes as
>
ubscriptions() other than to keep it
> non-blocking. Perhaps you can open a ticket and we can get feedback from
> some other devs?
>
> Thanks,
> Jason
>
> On Wed, Jul 22, 2015 at 2:09 AM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
>
Hello Apache Kafka community,
Say there is only one topic with a single partition and a single message on
it.
The result of calling poll with the new consumer will return a ConsumerRecord
for that message, and it will have offset 0.
After processing the message, the current KafkaConsumer implementation expects o
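For reference, the offset arithmetic under discussion (as also noted later in the thread, there is a +1 difference: the committed offset is the next offset to consume, not the last processed one):

```python
def offset_to_commit(last_processed_offset):
    """Offset to commit so a restarted consumer resumes *after* the last
    processed message: the next offset, not the processed one."""
    return last_processed_offset + 1

# Single message at offset 0: after processing it, commit 1, so a consumer
# that restarts and seeks to the committed offset does not re-read it.
```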
set(partition))?
>
> Thanks,
> Jason
>
> On Fri, Jul 24, 2015 at 10:11 AM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
> > Say there is only one topic with single partition and a single message on
> > it.
> > Result of calling a poll with
Hello David,
It's a known issue, see https://issues.apache.org/jira/browse/KAFKA-1788
and https://issues.apache.org/jira/browse/KAFKA-2120
Kind regards,
Stevo Slavic.
On Fri, Jul 31, 2015 at 10:15 AM, David KOCH wrote:
> Hello,
>
> The new producer org.apache.kafka.clients.producer.KafkaProduc
Hello Apache Kafka community,
If I recall well, two weeks ago it was mentioned in a discussion that Kafka
0.8.3 might be released in a month time.
Is this still Kafka dev team goal, in few weeks time to have Kafka 0.8.3
released? Or is more (re)work (e.g. more new consumer API changes) planned
fo
updated
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan).
>
> As we are getting closer, we will clean up the 0.8.3 jiras and push
> non-critical ones to future releases.
>
> Thanks,
>
> Jun
>
> On Mon, Aug 3, 2015 at 5:52 AM, Stevo Slavić wrote:
>
>
+1 (non-binding) for 0.8.2.2 release
Would be nice to include in that release new producer resiliency bug fixes
https://issues.apache.org/jira/browse/KAFKA-1788 and
https://issues.apache.org/jira/browse/KAFKA-2120
On Fri, Aug 14, 2015 at 4:03 PM, Gwen Shapira wrote:
> Will be nice to include Ka
> committed yet.
> >
> > Hope that helps the effort.
> >
> > Thanks,
> > Grant
> >
> > On Mon, Aug 17, 2015 at 12:09 AM, Grant Henke
> wrote:
> >
> >> +1 to that suggestion. Though I suspect that requires a committer to do.
> >
Hello Apache Kafka community,
Current unclean leader election docs state:
"In the future, we would like to make this configurable to better support
use cases where downtime is preferable to inconsistency. "
If I'm not mistaken, since 0.8.2, unclean leader election strategy (whether
to allow it or
mentions ability to disable
unclean leader election, so likely just this one reference needs to be
updated.
On Sat, Sep 12, 2015 at 1:05 AM, Guozhang Wang wrote:
> Hi Stevo,
>
> Could you point me to the link of the docs?
>
> Guozhang
>
> On Fri, Sep 11, 2015 at 5:47 AM, Stev
Have a look at https://github.com/allegro/hermes
On Mon, Sep 14, 2015, 01:28 David Luu wrote:
> The toy project idea is good. Another option I think could be to look at
> the various Kafka client language bindings and/or utilities (like
> kafkacat). And from there, another option is to build a c
Hello Jason,
Maybe this answers your question:
http://mail-archives.apache.org/mod_mbox/kafka-dev/201509.mbox/%3CCAFc58G-UScVKrSF1kdsowQ8Y96OAaZEdiZsk40G8fwf7iToFaw%40mail.gmail.com%3E
Kind regards,
Stevo Slavic.
On Mon, Sep 14, 2015 at 8:56 AM, Jason Rosenberg wrote:
> Hi Jun,
>
> Can you cl
has "Affects Version/s" set to 0.9, maybe "Fix
Version/s" should be set to that value instead.
Kind regards,
Stevo Slavic.
On Mon, Sep 14, 2015 at 9:43 AM, Stevo Slavić wrote:
> Hello Jason,
>
> Maybe this answers your question:
> http://mail-archives.apache.or
Created https://issues.apache.org/jira/browse/KAFKA-2551
On Mon, Sep 14, 2015 at 7:22 PM, Guozhang Wang wrote:
> Yes you are right. Could you file a JIRA to edit the documents?
>
> Guozhang
>
> On Fri, Sep 11, 2015 at 4:41 PM, Stevo Slavić wrote:
>
> > That sente
Hello Damian,
Yes, there's a +1 difference. See related discussion
http://mail-archives.apache.org/mod_mbox/kafka-users/201507.mbox/%3CCAOeJiJh2SMzVn23JsoWiNk3sfsw82Jr_-kRLcNRd-oZ7pR1yWg%40mail.gmail.com%3E
Kind regards,
Stevo Slavic.
On Tue, Sep 15, 2015 at 3:56 PM, Damian Guy wrote:
> I turn
Hello Apache Kafka community,
In my integration tests, with single 0.8.2.2 broker, for newly created
topic with single partition, after determining through topic metadata
request that partition has lead broker assigned, when I try to reset offset
for given consumer group, I first try to discover o
ers/topics/__consumer_offsets
>
> get /brokers/topics/__consumer_offsets
>
>
> Thanks,
> Grant
>
> On Mon, Oct 5, 2015 at 10:44 AM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
> > In my integration tests, with single 0.8.2.2 broker, for newly
o Slavic.
On Tue, Oct 6, 2015 at 10:02 AM, Stevo Slavić wrote:
> Thanks Grant for quick reply!
>
> I've used AdminUtils.topicExists("__consumer_offsets") check and even
> 10sec after Kafka broker startup, the check fails.
>
> When, on which event, does this inte
ation factor for
offsets topic.
Does this make sense?
As workaround, I guess I will have to resort to explicitly creating offsets
topic if it doesn't exist already.
Kind regards,
Stevo Slavic.
On Tue, Oct 6, 2015 at 11:34 AM, Stevo Slavić wrote:
> Debugged, and found in KafkaApis.handl
32 PM, Stevo Slavić wrote:
> There's another related bug - triggering offsets topic creation through
> requesting metadata about that topic does not work in case of single broker
> clean (no topics created yet) Kafka cluster running. In that case sequence
> returned by KafkaApis.ge
Hello Kiran,
Check how many brokers you have in the cluster. The consumer offsets topic
by default requires a replication factor of at least 3. In a dev environment
you could lower the replication factor for that topic (see broker config options).
Kind regards,
Stevo Slavic.
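For a single-broker dev setup, the broker-side override mentioned above would look roughly like this (a sketch; see the broker config docs for exact names and defaults):

```properties
# server.properties - lower the requirements of the internal offsets topic
# for a single-broker dev cluster (the defaults assume >= 3 brokers).
offsets.topic.replication.factor=1
offsets.topic.num.partitions=50
```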
On Fri, Oct 16, 2015, 07:31 Kiran Singh wrote:
> Hel
Hello Apache Kafka community,
I'm trying to use the new producer, from kafka-clients 0.8.2.2, together with
the simple consumer to fetch and commit offsets stored in Kafka, and I'm seeing
strange behavior: a committed offset/message gets read multiple times, and
offset fetch requests do not always see commit
Hello David,
In short, the problem is not with your topic, it is with consumer offsets
topic initialization.
You could modify your code to just retry fetching offsets (until successful,
where successful also includes a return of -1 offset, or until timeout), or
alternatively you could trigger consumer offsets topic
m
is not with your topic, but with internal consumer offsets topic.
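The retry suggestion can be sketched as a poll-until-ready loop with a deadline; `fetch_offset` here is a hypothetical stand-in for the offset fetch call, and a -1 return ("no offset committed yet") also counts as success:

```python
import time

def wait_for_offset(fetch_offset, timeout_s=10.0, poll_s=0.1):
    """Poll fetch_offset until the offsets topic is initialized.

    fetch_offset returns an offset (-1 for 'none committed') once the
    coordinator is ready, and raises while it is still initializing.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            return fetch_offset()
        except Exception:
            if time.monotonic() >= deadline:
                raise TimeoutError("offsets topic not ready in time")
            time.sleep(poll_s)

# Simulated coordinator that becomes ready on the third call:
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("coordinator still initializing")
    return -1
```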
On Mon, Nov 2, 2015 at 1:56 AM, Stevo Slavić wrote:
> Hello David,
>
> In short, problem is not with your topic, it is with consumer offsets
> topic initialization.
>
> You could modify your code to just retry fetchin
Hello Apache Kafka community,
I'm seeing some cyclic load on the cluster and trying to determine the cause.
I couldn't determine from the docs: is there some internal scheduled job in
Kafka (0.8.2.1) executing on every broker every two hours?
Kind regards,
Stevo Slavic.
Hello Apache Kafka community,
I'm using simple consumer with Kafka 0.8.2.2 and noticed that under some
conditions fetch response message set for a partition can contain at least
one (if not all) MessageAndOffset with nextOffset being equal to current
(committed) offset, offset used in fetch reque
would get triggered only when consumer lag is of
appropriate size - I guess async producer made this issue more likely,
because of batch sending multiple messages to same partition.
On Wed, Nov 11, 2015 at 9:40 AM, Stevo Slavić wrote:
> Hello Apache Kafka community,
>
>
> I'm usin
It's
> the responsibility of the client to skip over those messages. Note that the
> high level consumer handles that logic already.
>
> Thanks,
>
> Jun
>
> On Wed, Nov 11, 2015 at 12:40 AM, Stevo Slavić wrote:
>
> > Hello Apache Kafka community,
> >
>
ystem? Will it affect how logs are managed,
> and when “older” messages are purged? Or are they two independent systems?
>
> On 11/2/15, 03:51, "Stevo Slavić" wrote:
>
> >Here is a bit longer and more detailed, not necessarily better
> >understandable explanatio
Delete was actually considered to be working since Kafka 0.8.2 (although
there are still not easily reproducible edge cases where it doesn't work
well even in 0.8.2 or newer).
In 0.8.1 one could request topic to be deleted (request gets stored as
entry in ZooKeeper), because of presence of the re
Hello Rakesh,
log.cleanup.policy is a broker configuration property, while cleanup.policy
is a topic configuration property (see
http://kafka.apache.org/documentation.html#topic-config ). Since you are
configuring a particular topic, you need to use the second one.
Kind regards,
Stevo Slavic.
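The two scopes side by side, as a sketch (names per the docs linked above):

```properties
# Broker-wide default, set in server.properties:
log.cleanup.policy=compact

# Per-topic override uses the topic-level name instead, passed as a topic
# config (e.g. via kafka-topics.sh --alter --config):
cleanup.policy=compact
```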
On Mon, Dec
Have you considered deleting and recreating the topic used in the test?
Once the topic is clean, read/poll once - any committed offset should be
outside of the range, and the consumer should reset its offset.
On Tue, Dec 29, 2015 at 4:11 PM, Han JU wrote:
> Hello,
>
> For local test purpose I need to frequently res
and consumers all the messages.
>
> 2015-12-29 16:19 GMT+01:00 Stevo Slavić :
>
> > Have you considered deleting and recreating topic used in test?
> > Once topic is clean, read/poll once - any committed offset should be
> > outside of the range, and consumer should res