Thanks so much, Jun. That seems to have fixed the problem. I increased both
message.max.bytes and replica.fetch.max.bytes on the broker.
For the benefit of future Kafka users, how hard would it be to build out
some clearer error messaging for this case?
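For reference, the two broker-side overrides mentioned above would look roughly like this in `server.properties` (the 10 MB value is an illustrative assumption; both settings need to be at least as large as the biggest message you produce, and `replica.fetch.max.bytes` must be >= `message.max.bytes` so replicas can still fetch large messages):

```properties
# Illustrative values only -- size these to your largest expected message.
# Maximum message size the broker will accept (~10 MB here):
message.max.bytes=10485760
# Must be >= message.max.bytes, or replication of large messages fails:
replica.fetch.max.bytes=10485760
```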
On Mon, Sep 22, 2014 at 10:38 PM, Jun Rao wrote:
Hi Valentin,
I see your point. Would the following work for you then: you can
maintain the broker metadata as you already do and then use a 0.9 Kafka
consumer for each broker; by calling subscribe / de-subscribe, the
consumer would not close / re-connect to the broker if it is implemented
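Jun's per-broker suggestion can be sketched with a stub; the `BrokerConsumer` class below is hypothetical, and it only illustrates the intended behavior that subscribe / de-subscribe mutate the subscription set while the broker connection stays open:

```java
import java.util.HashSet;
import java.util.Set;

public class BrokerConsumer {
    private final String broker;
    private final Set<String> subscriptions = new HashSet<>();
    private boolean connected = false;

    BrokerConsumer(String broker) {
        this.broker = broker;
    }

    // Connect lazily on first subscribe; later subscribe/unsubscribe calls
    // only mutate the subscription set and never tear down the connection.
    void subscribe(String topicPartition) {
        if (!connected) {
            connected = true; // stand-in for opening the socket to `broker`
        }
        subscriptions.add(topicPartition);
    }

    void unsubscribe(String topicPartition) {
        subscriptions.remove(topicPartition); // connection stays open
    }

    public static void main(String[] args) {
        BrokerConsumer c = new BrokerConsumer("broker1:9092");
        c.subscribe("logs-0");
        c.subscribe("logs-1");
        c.unsubscribe("logs-0");
        System.out.println(c.connected + " " + c.subscriptions.size());
    }
}
```

With one such long-lived object per broker, an HTTP front end would avoid the connect/disconnect churn of creating a SimpleConsumer per request.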
Hi Jun,
yes, that would theoretically be possible, but it does not scale at all.
That is, in the current HTTP REST API use case, I have 5 connection pools on
every Tomcat server (as I have 5 brokers), and each connection pool holds
up to 10 SimpleConsumer connections. So all in all I get a maximum of 5
Hello,
For your use case, with the new consumer you can still create a new
consumer instance for each topic / partition and remember the mapping of
topic / partition => consumer. Then upon receiving the HTTP request you can
decide which consumer to use. Since the new consumer is single-threaded
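Guozhang's mapping pattern can be sketched as follows. The `Consumer` stub here is a hypothetical stand-in for the real 0.9 consumer (which needs a live broker); the point is only the lazily-built topic/partition => consumer registry, reused across HTTP requests:

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerRegistry {
    // Hypothetical stub for a single-threaded consumer instance.
    static class Consumer {
        final String topic;
        final int partition;
        Consumer(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
    }

    private final Map<String, Consumer> consumers = new HashMap<>();

    // One consumer per topic/partition, created on first use and then
    // reused, so connections are not torn down between requests.
    Consumer consumerFor(String topic, int partition) {
        return consumers.computeIfAbsent(topic + "-" + partition,
                k -> new Consumer(topic, partition));
    }

    public static void main(String[] args) {
        ConsumerRegistry registry = new ConsumerRegistry();
        Consumer a = registry.consumerFor("events", 0);
        Consumer b = registry.consumerFor("events", 0);
        System.out.println(a == b); // second lookup returns the same instance
    }
}
```

Because the real consumer is single-threaded, each mapped instance would still have to be confined to one thread at a time.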
Hi Jun,
On Mon, 22 Sep 2014 21:15:55 -0700, Jun Rao wrote:
> The new consumer api will also allow you to do what you want in a
> SimpleConsumer (e.g., subscribe to a static set of partitions, control
> initial offsets, etc), only more conveniently.
Yeah, I have reviewed the available javadocs f
Hi Guozhang,
On Mon, 22 Sep 2014 10:08:58 -0700, Guozhang Wang wrote:
> 1) The new consumer clients will be developed under a new directory. The
> old consumer, including the SimpleConsumer, will not be changed, though it
> will be retired in the 0.9 release.
So that means that the SimpleConsumer