Yes, the docs can be improved. Could you file a JIRA? For the second issue, the new Java producer handles this better.
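For illustration, a rough sketch of how the failure cause becomes visible with the new Java producer (org.apache.kafka.clients.producer). This is only a sketch, assuming the 0.8.2-style client; the broker address, topic name, and class name below are placeholders:

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class NewProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        try {
            // send() returns a Future; get() rethrows the real failure cause
            // (wrapped in an ExecutionException) rather than a generic
            // FailedToSendMessageException.
            RecordMetadata metadata = producer.send(
                    new ProducerRecord<String, String>("non-existent-topic", "key", "value")).get();
            System.out.println("Wrote to partition " + metadata.partition()
                    + " at offset " + metadata.offset());
        } catch (ExecutionException e) {
            // The cause shows what actually went wrong on the client/broker side.
            System.err.println("Send failed: " + e.getCause());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            producer.close();
        }
    }
}

Whether the cause is an unknown-topic error or a metadata/timeout error depends on the client version and broker settings, but the underlying error is no longer hidden behind a generic FailedToSendMessageException.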
Thanks,

Jun

On Fri, Oct 3, 2014 at 1:31 AM, Stevo Slavić <ssla...@gmail.com> wrote:

> OK, thanks.
>
> Do you agree then that the docs for the auto topic creation configuration parameter are misleading and should be changed?
>
> Another issue is that when topic auto creation is disabled, attempts to publish a message to a non-existing topic using the high-level API will throw a generic FailedToSendMessageException (even when message.send.max.retries is 0) without having UnknownTopicOrPartitionException at least as the cause. Is this a feature or a bug, and more importantly, could it be improved?
>
> Kind regards,
> Stevo Slavic.
>
> On Oct 3, 2014 6:30 AM, "Jun Rao" <jun...@gmail.com> wrote:
>
> > In general, only writers should trigger auto topic creation, not readers. So, a topic can be auto-created by the producer, but not by the consumer.
> >
> > Thanks,
> >
> > Jun
> >
> > On Thu, Oct 2, 2014 at 2:44 PM, Stevo Slavić <ssla...@gmail.com> wrote:
> >
> > > Hello Apache Kafka community,
> > >
> > > The auto.create.topics.enable configuration option docs state:
> > > "Enable auto creation of topic on the server. If this is set to true then attempts to produce, consume, or fetch metadata for a non-existent topic will automatically create it with the default replication factor and number of partitions."
> > >
> > > I read this as meaning that a topic should be created on any attempt to consume a non-existing topic.
> > >
> > > With auto.create.topics.enable left at the default or explicitly set to true, attempts to consume a non-existing topic, using a blocking consumer or a non-blocking consumer with a positive consumer.timeout.ms configured, do not result in topic creation (I cannot see one registered in ZooKeeper).
> > >
> > > Additionally, for a non-blocking consumer with a timeout, no offset will be recorded. This further means that if such a consumer had auto.offset.reset set to largest, it will miss at least one message (the initial one that, when published, creates the topic), even though the consumer attempted to read before the first message was published.
> > >
> > > I'm using Kafka 0.8.1.1, but I see the same issue exists in current trunk.
> > >
> > > Is this a known issue? Or are my expectations/assumptions wrong and this is expected behavior?
> > >
> > > Kind regards,
> > > Stevo Slavic.
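For anyone reproducing the consumer behavior described above, a minimal sketch using the old high-level consumer API (kafka.javaapi.consumer), assuming an 0.8.1.x client; the ZooKeeper address, group id, topic name, and class name are placeholders:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class HighLevelConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder ZooKeeper address
        props.put("group.id", "example-group");           // placeholder group id
        props.put("consumer.timeout.ms", "5000");         // makes the consumer non-blocking
        props.put("auto.offset.reset", "largest");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, Integer> topicCountMap = Collections.singletonMap("non-existent-topic", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCountMap);
        ConsumerIterator<byte[], byte[]> it =
                streams.get("non-existent-topic").get(0).iterator();
        try {
            while (it.hasNext()) {
                System.out.println(new String(it.next().message()));
            }
        } catch (ConsumerTimeoutException e) {
            // hasNext() throws once consumer.timeout.ms elapses with no message.
            System.out.println("Timed out with no messages.");
        } finally {
            connector.shutdown();
        }
    }
}

With consumer.timeout.ms set, hasNext() throws ConsumerTimeoutException rather than blocking; in the scenario described above, the non-existent topic is not auto-created by this read attempt and no offset gets recorded for the group.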