Jaikiran,
What about the following (a rough sketch is below):
1) create topic
2) create consumer1 and do consumer1.partitionsFor() until it succeeds
3) close consumer1
4) create consumer2 and do consumer2.subscribe()
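In code, those steps would look roughly like this sketch (the topic name, bootstrap.servers, group.id and the retry/sleep handling are just placeholders to illustrate the idea, not exact production code):

import java.util.Collections;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

public class TopicReadyWorkaround {

    public static void main(String[] args) throws InterruptedException {
        // Placeholder topic and connection settings, for illustration only.
        String topic = "my-topic";
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Step 1 (creating the topic) is assumed to have happened already.

        // Steps 2 and 3: a throwaway consumer that asks for the topic's
        // partition metadata until it shows up, then gets closed.
        try (KafkaConsumer<String, String> probe = new KafkaConsumer<>(props)) {
            List<PartitionInfo> partitions = null;
            while (partitions == null || partitions.isEmpty()) {
                try {
                    partitions = probe.partitionsFor(topic);
                } catch (org.apache.kafka.common.errors.TimeoutException e) {
                    // metadata not available yet, fall through and retry
                }
                if (partitions == null || partitions.isEmpty()) {
                    Thread.sleep(100);
                }
            }
        }

        // Step 4: the real consumer subscribes now that the brokers have
        // the topic metadata.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList(topic));
            // ... normal poll loop goes here ...
        }
    }
}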
-James
An update on this. This workaround has worked out fine, and our initial
tests so far show that it gets us past the issue reported in this
thread. It does have a slight performance penalty (since we create a
temporary throwaway consumer, which internally involves creating and
closing connections and so on), but it is the only reliable option, and
the best of the workarounds we discussed in this thread.
I would have liked it if Kafka itself provided an (admin?) API that
guaranteed that, once a topic is created through the API, any
subsequently created KafkaConsumer would be aware of that topic (and
have its metadata) without having to resort to these workarounds.
However, I don't have a concrete proposal for it, so for now I'm going
to stick with this workaround. If anyone from the dev team thinks
there's a proper way to deal with this internally in Kafka in a
subsequent release, I would be glad to hear the details.
James, thanks very much for the help and the suggestions you provided,
which ultimately got us to a usable workaround.
-Jaikiran