Thanks Valentin!
Guozhang
On Sun, Sep 28, 2014 at 3:49 PM, Valentin wrote:
>
> Hi Jun,
>
> ok, I created:
> https://issues.apache.org/jira/browse/KAFKA-1655
>
> Greetings
> Valentin
>
> On Sat, 27 Sep 2014 08:31:01 -0700, Jun Rao wrote:
> > Valentin,
> >
> > That's a good point. We didn't have …
Hi Jun,
ok, I created:
https://issues.apache.org/jira/browse/KAFKA-1655
Greetings
Valentin
On Sat, 27 Sep 2014 08:31:01 -0700, Jun Rao wrote:
> Valentin,
>
> That's a good point. We didn't have this use case in mind when designing the
> new consumer api. A straightforward implementation could …
Valentin,
That's a good point. We didn't have this use case in mind when designing the
new consumer api. A straightforward implementation could be removing the
locally cached topic metadata for unsubscribed topics. It's probably
possible to add a config value to avoid churn in caching the metadata …
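The eviction-with-retention idea Jun describes could look something like the sketch below. This is not Kafka source code: the class, the retention parameter, and the config name it stands in for are all hypothetical, just illustrating how a retention window keeps rapid subscribe/unsubscribe cycles from thrashing the metadata cache.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not the Kafka implementation): metadata for an
// unsubscribed topic is dropped only after a configurable retention,
// so quickly re-subscribing does not force a fresh metadata fetch.
class TopicMetadataCache {
    private final long retentionMs;  // stands in for a made-up "metadata retention" config
    private final Map<String, String> metadata = new HashMap<>();
    private final Map<String, Long> unsubscribedAt = new HashMap<>();

    TopicMetadataCache(long retentionMs) { this.retentionMs = retentionMs; }

    void put(String topic, String meta) {
        metadata.put(topic, meta);
        unsubscribedAt.remove(topic);  // topic is (re)subscribed, cancel pending eviction
    }

    void markUnsubscribed(String topic, long nowMs) {
        unsubscribedAt.putIfAbsent(topic, nowMs);
    }

    // Drop metadata only for topics that stayed unsubscribed past the retention.
    void expire(long nowMs) {
        unsubscribedAt.entrySet().removeIf(e -> {
            boolean expired = nowMs - e.getValue() >= retentionMs;
            if (expired) metadata.remove(e.getKey());
            return expired;
        });
    }

    boolean contains(String topic) { return metadata.containsKey(topic); }
}
```

With a retention of, say, one second, an unsubscribe followed by a quick re-subscribe never touches the cluster for metadata again; only a topic that stays unsubscribed past the window is evicted.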
Hi Jun, Hi Guozhang,
hm, yeah, if the subscribe/unsubscribe is a smart and lightweight
operation this might work. But if it needs to do any additional calls to
fetch metadata during a subscribe/unsubscribe call, the overhead could get
quite problematic. The main issue I still see here is that an …
Valentin,
As Guozhang mentioned, to use the new consumer in the SimpleConsumer way,
you would subscribe to a set of topic partitions and then issue poll(). You
can change subscriptions on every poll since it's cheap. The benefit you
get is that it does things like leader discovery and maintaining c…
Hi Valentin,
I see your point. Would the following work for you then: you can
maintain the broker metadata as you already did and then use a 0.9 kafka
consumer for each broker, and hence by calling subscribe / de-subscribe the
consumer would not close / re-connect to the broker if it is impleme…
Hi Jun,
yes, that would theoretically be possible, but it does not scale at all.
I.e. in the current HTTP REST API use case, I have 5 connection pools on
every tomcat server (as I have 5 brokers) and each connection pool holds
up to 10 SimpleConsumer connections. So all in all I get a maximum of 5…
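The per-broker pooling Valentin describes (5 brokers, one pool each, up to 10 SimpleConsumers per pool, so at most 5 x 10 = 50 connections per tomcat server) can be sketched as a bounded pool keyed by broker. The class and its API are hypothetical, not a real Kafka or commons-pool type; the connection type is left generic.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of one bounded connection pool per broker:
// total connections are capped at brokers * perBrokerLimit.
class BrokerConnectionPool<C> {
    private final int perBrokerLimit;
    private final Map<String, ArrayDeque<C>> idle = new HashMap<>();
    private final Map<String, Integer> open = new HashMap<>();

    BrokerConnectionPool(int perBrokerLimit) { this.perBrokerLimit = perBrokerLimit; }

    // Hand out an idle connection, or create a new one up to the per-broker limit.
    C borrow(String broker, Supplier<C> factory) {
        ArrayDeque<C> q = idle.computeIfAbsent(broker, b -> new ArrayDeque<>());
        if (!q.isEmpty()) return q.poll();
        int n = open.getOrDefault(broker, 0);
        if (n >= perBrokerLimit) throw new IllegalStateException("pool exhausted for " + broker);
        open.put(broker, n + 1);
        return factory.get();
    }

    void release(String broker, C conn) {
        idle.computeIfAbsent(broker, b -> new ArrayDeque<>()).push(conn);
    }

    int openConnections(String broker) { return open.getOrDefault(broker, 0); }
}
```

The point of the comparison in the thread: with pooling, connection count grows with the number of brokers, not with the number of topic/partitions being served.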
Hello,
For your use case, with the new consumer you can still create a new
consumer instance for each topic / partition, and remember the mapping of
topic / partition => consumer. Then upon receiving the http request you can
decide which consumer to use. Since the new consumer is single threaded …
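Guozhang's topic/partition => consumer mapping could be sketched as a small registry that routes each incoming HTTP request to the consumer owning that partition. Everything here is hypothetical illustration: the consumer type is left generic (in practice it would be the new consumer instance), and the key class is a stand-in, not Kafka's own TopicPartition.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical sketch: one consumer instance per topic/partition,
// looked up per request via a (topic, partition) key.
class ConsumerRegistry<C> {
    static final class TopicPartition {
        final String topic;
        final int partition;
        TopicPartition(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof TopicPartition)) return false;
            TopicPartition tp = (TopicPartition) o;
            return tp.partition == partition && tp.topic.equals(topic);
        }
        @Override public int hashCode() { return Objects.hash(topic, partition); }
    }

    private final Map<TopicPartition, C> consumers = new HashMap<>();

    void register(String topic, int partition, C consumer) {
        consumers.put(new TopicPartition(topic, partition), consumer);
    }

    // Find the consumer that owns the requested topic/partition.
    C route(String topic, int partition) {
        C c = consumers.get(new TopicPartition(topic, partition));
        if (c == null) throw new IllegalArgumentException(
                "no consumer registered for " + topic + "-" + partition);
        return c;
    }
}
```

Because each consumer is single threaded, the HTTP layer would also need to serialize access per consumer (e.g. one consumer per worker, or a lock per registry entry); the registry itself only solves the routing half.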
Hi Jun,
On Mon, 22 Sep 2014 21:15:55 -0700, Jun Rao wrote:
> The new consumer api will also allow you to do what you want in a
> SimpleConsumer (e.g., subscribe to a static set of partitions, control
> initial offsets, etc), only more conveniently.
Yeah, I have reviewed the available javadocs f…
Hi Guozhang,
On Mon, 22 Sep 2014 10:08:58 -0700, Guozhang Wang
wrote:
> 1) The new consumer clients will be developed under a new directory. The
> old consumer, including the SimpleConsumer, will not be changed, though it
> will be retired in the 0.9 release.
So that means that the SimpleConsumer …
The new consumer api will also allow you to do what you want in a
SimpleConsumer (e.g., subscribe to a static set of partitions, control
initial offsets, etc), only more conveniently.
Thanks,
Jun
On Mon, Sep 22, 2014 at 8:10 AM, Valentin wrote:
>
> Hello,
>
> I am currently working on a Kafka …
Hello,
1) The new consumer clients will be developed under a new directory. The
old consumer, including the SimpleConsumer will not be changed, though it
will be retired in the 0.9 release.
2) I am not very familiar with HTTP wrappers on the clients; could someone
who has done so comment here?
3) …
Hello,
I am currently working on a Kafka implementation and have a couple of
questions concerning the road map for the future.
As I am unsure where to put such questions, I decided to try my luck on
this mailing list. If this is the wrong place for such inquiries, I
apologize. In this case it would …