We had a situation where the Kafka 0.8.2 broker would not come up
Hope this helps someone in the same situation! Be aware of the downsides of
increasing this number though.
- neelesh
On Wed, May 6, 2015 at 10:07 AM, Neelesh wrote:
> We had a situation where the Kafka 0.8.2 broker would not come up
>
... without losing data? I understand the throughput will be low.
Thanks!
-Neelesh
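The question above is truncated; assuming it asks about producing one record at a time synchronously so that nothing is lost, a minimal sketch of that pattern with the Java producer follows. acks=all and blocking on the returned Future are standard producer usage, while the broker address, topic, and serializers are illustrative, not taken from the thread.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SyncSend {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address
            props.put("acks", "all"); // wait for the full ISR to acknowledge each write
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Blocking on the Future makes the send synchronous: the call only
                // returns once the broker acknowledges (or rejects) the record,
                // which avoids silent loss at the cost of throughput.
                RecordMetadata meta = producer
                        .send(new ProducerRecord<>("my-topic", "key", "value"))
                        .get();
                System.out.println("partition=" + meta.partition() + " offset=" + meta.offset());
            }
        }
    }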
> ...requests to your webservice will get
> batched and sent to the broker which will increase the throughput of the
> Producer and in turn your webservice.
>
> On Fri, Aug 14, 2015 at 6:10 PM Gwen Shapira wrote:
>
> > Hi Neelesh :)
> >
> > The new producer has configuration...
> // Class name and imports are assumed; the original snippet was truncated.
> import javax.servlet.http.HttpServletRequest;
> import javax.servlet.http.HttpServletResponse;
>
> import org.apache.kafka.clients.producer.Callback;
> import org.apache.kafka.clients.producer.RecordMetadata;
>
> class ProduceCallback implements Callback {
>
>     HttpServletRequest request;
>     HttpServletResponse response;
>
>     @Override
>     public void onCompletion(RecordMetadata metadata, Exception exception) {
>         // Check exception and send appropriate response
>     }
> }
>
> On Mon, Aug 17, 2015 at 10
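For reference, a minimal sketch of the batching configuration Gwen refers to above, assuming she means the standard new-producer settings batch.size and linger.ms. The broker address, topic, values, and the lambda callback are illustrative; the callback plays the same role as the servlet callback in the snippet above.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class BatchingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address
            props.put("acks", "all");
            // batch.size and linger.ms control how many records the new producer
            // accumulates per partition before sending, which is what lets requests
            // from many servlet threads be combined into fewer broker requests.
            props.put("batch.size", "16384"); // bytes per partition batch
            props.put("linger.ms", "5");      // wait up to 5 ms to fill a batch
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            // close() at the end of try-with-resources flushes any pending batches.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                        (metadata, exception) -> {
                            // Same idea as the servlet callback above: inspect the
                            // exception and decide what response to send the client.
                            if (exception != null) {
                                exception.printStackTrace();
                            }
                        });
            }
        }
    }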
Is this something related to
https://issues.apache.org/jira/browse/KAFKA-2096 ?
Thanks!
-neelesh
I searched for this and could not get a definitive answer: when do logs get
deleted after a topic is deleted? What happens if I delete a topic and
recreate it immediately?
So far I know that deleting a topic marks it for deletion in ZooKeeper, and
some sweeper eventually deletes the topic metadata and logs.
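A minimal sketch of where that deletion marker lives, assuming a ZooKeeper-based broker of that era: /admin/delete_topics is Kafka's standard znode for topics pending deletion, while the connection string and class name here are illustrative.

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class DeleteMarkerCheck {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // Illustrative connection string; point it at your ZooKeeper ensemble.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await();
            try {
                // Topics marked for deletion appear as child znodes here until the
                // controller finishes removing the topic metadata and log segments.
                List<String> pending = zk.getChildren("/admin/delete_topics", false);
                System.out.println("Topics pending deletion: " + pending);
            } finally {
                zk.close();
            }
        }
    }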
Hi,
Can I use the alter topic command to change the offsets.retention.minutes
setting on the __consumer_offsets topic while the broker is running?
Thanks!
-neelesh
The command succeeds, but does not have an impact. Setting it to a minute
does not clear the logs for this topic. The code in GroupMetadataManager
also does not seem to support it.
On Feb 15, 2017 10:31 PM, "Manikumar" wrote:
> Yes, we can change.
>
> On Thu, Feb 16, 2017
> ...window for offsets topic". It is for discarding
> offsets older than the retention period.
>
> On Thu, Feb 16, 2017 at 9:56 PM, Neelesh wrote:
>
> > The command succeeds, but does not have an impact. Setting it to a minute
> > does not clear the logs for this topic. The code in GroupMetadataManager
> > also does not seem to support it.
...topics and several tens of thousands (even hundreds of thousands) of
partitions on a single Kafka cluster.
I remember reading that you are effectively limited by file handles.
Has anyone tried such a setup?
Thanks!
-Neelesh
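A rough back-of-the-envelope sketch of the file-handle concern, assuming the broker keeps a .log and .index file open for every segment of every partition it hosts (as the brokers of that era do); the counts below are made-up placeholders, not measurements.

    public class FdEstimate {
        public static void main(String[] args) {
            long partitionsPerBroker = 50_000;  // assumption: partitions hosted on one broker
            long segmentsPerPartition = 5;      // assumption: depends on segment.bytes and retention
            long filesPerSegment = 2;           // .log + .index per segment
            long descriptors = partitionsPerBroker * segmentsPerPartition * filesPerSegment;
            System.out.println("Approximate open file descriptors: " + descriptors);
            // Compare against the broker's ulimit -n; at these partition counts the
            // limit has to be raised far beyond the common default of 1024.
        }
    }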
Thanks Todd. That's the current thinking. We use multiple clusters in a
single data center for Solr to avoid a similar problem: the number of
collections per cluster, in Solr's case.
Your numbers are encouraging. I will go ahead with this design for now.
Thanks!
Neelesh
On Mar 21, 20