Thanks, Manikumar.
best,
Colin
On Tue, Jul 17, 2018, at 19:44, Manikumar wrote:
Closing this KIP in favor of adding filtering support to the Metadata API
and KIP-142. Will open a new KIP when ready.
Thanks for your reviews.
On Mon, Jul 16, 2018 at 8:38 AM Colin McCabe wrote:
Thanks, Manikumar. I've been meaning to bring up KIP-142 again. It would
definitely be a nice improvement.
best,
Colin
On Sat, Jul 14, 2018, at 08:51, Manikumar wrote:
> Hi Jason and Colin,
>
> Thanks for the feedback. I agree that having filtering support to the
> Metadata API would be useful...
Hi Stephane,
Pagination would be useful. But I think the more immediate need is to stop
sending stuff over the wire that we don't even use.
For example, imagine that you have a cluster with 50,000 topics and your
Consumer subscribes to abracadabra*. Perhaps there are actually only 3 topics
that match that pattern...
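
For context, here is a minimal sketch of the client-side pattern subscription
being described (the bootstrap address, group id, and topic pattern below are
made up for illustration). Today the consumer fetches metadata for every topic
in the cluster and applies the regex locally:

    import java.util.Collection;
    import java.util.Properties;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PatternSubscribeSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // made-up address
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // made-up group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // The regex is evaluated on the client: metadata for all topics is fetched
                // and matched locally, even if only a handful of topics actually match.
                consumer.subscribe(Pattern.compile("abracadabra.*"), new ConsumerRebalanceListener() {
                    @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
                    @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
                });
                consumer.poll(100); // triggers the cluster-wide metadata fetch
            }
        }
    }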
Hi Jason and Colin,
Thanks for the feedback. I agree that having filtering support to the
Metadata API would be useful and would solve the scalability issues.
But to implement the specific use case of "describe all topics", regex support
won't help. In any case, the user needs to call listTopics() to get the topic
names first...
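
As a concrete sketch of that two-step flow with the current AdminClient
(assuming a made-up bootstrap address), "describe all topics" today means
listing every name first and then describing them:

    import java.util.Map;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class DescribeAllTopicsSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // made-up address

            try (AdminClient admin = AdminClient.create(props)) {
                // Step 1: fetch every topic name in the cluster.
                Set<String> names = admin.listTopics().names().get();

                // Step 2: describe all of them, pulling full partition/replica metadata
                // for each topic back to the client.
                Map<String, TopicDescription> descriptions = admin.describeTopics(names).all().get();
                descriptions.forEach((topic, desc) ->
                        System.out.println(topic + ": " + desc.partitions().size() + " partitions"));
            }
        }
    }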
What if the broker crashes before all the pages can be returned?
Cheers
On Sat, Jul 14, 2018 at 1:07 AM Stephane Maarek <
steph...@simplemachines.com.au> wrote:
Why not paginate? Then one can retrieve as many topics as desired?
On Sat., 14 Jul. 2018, 4:15 pm Colin McCabe wrote:
Good point. We should probably have a maximum number of results like
1000 or something. That can go in the request RPC as well...
Cheers,
Colin
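
To make the point concrete: with the current API any cap can only be applied
on the client, after the broker has already returned the full name list, which
is why the limit would have to live in the request itself. A minimal sketch
(the bootstrap address and the 1000 cutoff are illustrative):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class ClientSideCapSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // made-up address

            try (AdminClient admin = AdminClient.create(props)) {
                // listTopics() still returns every topic name from the broker; the cap
                // below is applied purely on the client side.
                List<String> names = new ArrayList<>(admin.listTopics().names().get());
                List<String> capped = names.subList(0, Math.min(1000, names.size()));
                System.out.println("Would describe " + capped.size() + " of " + names.size() + " topics");
            }
        }
    }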
On Fri, Jul 13, 2018, at 18:15, Ted Yu wrote:
bq. describe topics by a regular expression on the server side
Should caution be taken if the regex doesn't filter ("*")?
Cheers
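
On the "*" question: in Java regex terms "*" on its own is not even a valid
pattern, while ".*" matches every topic name, so a server-side filter would
presumably have to validate the expression and treat the match-everything
case specially. A quick illustration:

    import java.util.regex.Pattern;
    import java.util.regex.PatternSyntaxException;

    public class RegexFilterCheck {
        public static void main(String[] args) {
            // ".*" matches any topic name.
            System.out.println(Pattern.compile(".*").matcher("any-topic-name").matches()); // true

            // "*" alone is not a valid Java regex; it fails at compile time.
            try {
                Pattern.compile("*");
            } catch (PatternSyntaxException e) {
                System.out.println("Invalid pattern: " + e.getMessage());
            }
        }
    }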
On Fri, Jul 13, 2018 at 6:02 PM Colin McCabe wrote:
As Jason wrote, this won't scale as the number of partitions increases. We
already have users who have tens of thousands of topics, or more. If you
multiply that by 100x over the next few years, you end up with this API
returning full information about millions of topics, which clearly doesn't
scale.
The KIP looks good to me.
However, if there is willingness in the community to work on a Metadata
request with pattern support, the feature proposed here would be redundant
with filtering by '*' or '.*'.
Andras
On Fri, Jul 13, 2018 at 12:38 AM Jason Gustafson wrote:
Hey Manikumar,
As Kafka begins to scale to larger and larger numbers of topics/partitions,
I'm a little concerned about the scalability of APIs such as this. The API
looks benign, but imagine you have a few million partitions. We already
expose similar APIs in the producer and consumer, so...
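
For reference, the existing consumer APIs being alluded to already fetch
metadata at cluster or topic scope (the bootstrap address and topic name
below are made up); the producer has a partitionsFor() equivalent:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ExistingMetadataApisSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // made-up address
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // listTopics() returns partition info for every topic in the cluster, so
                // with a few million partitions this single map becomes enormous.
                Map<String, List<PartitionInfo>> all = consumer.listTopics();
                System.out.println("Topics in the cluster: " + all.size());

                // partitionsFor() is the per-topic flavour of the same metadata lookup.
                List<PartitionInfo> one = consumer.partitionsFor("some-topic");
                System.out.println("Partitions for some-topic: " + (one == null ? 0 : one.size()));
            }
        }
    }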
Very useful. LGTM.
Thanks,
Harsha
On Thu, Jul 12, 2018, at 9:56 AM, Manikumar wrote:
> Hi all,
>
> I have created a KIP to add describe all topics API to AdminClient.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-327%3A+Add+describe+all+topics+API+to+AdminClient
>
> Please take a look.