Do we have an ETA on when y'all think 2.3.1 will land?
On Sat, Sep 28, 2019 at 1:55 PM Matthias J. Sax wrote:
> There was a recent report about vulnerabilities of some dependent
> libraries: https://issues.apache.org/jira/browse/KAFKA-8952
>
> I think we should fix this for 2.3.1.
>
> Furthermore …
Check out the kafka-consumer-groups command line tool. You can set the
current offsets for any inactive consumer group to whatever value you like
using that tool. I think that'll do what you want.
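For example (a sketch; the group, topic, and target offset are placeholders, the group has to be inactive while you do this, and you need 0.11+ tooling for --reset-offsets):

    # dry-run first to see what would change
    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --group my-group --topic my-topic \
      --reset-offsets --to-offset 42 --dry-run

    # then apply it for real
    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --group my-group --topic my-topic \
      --reset-offsets --to-offset 42 --execute

There are also --to-earliest, --to-latest, and --shift-by variants if you don't want a literal offset.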
On Thu, Nov 15, 2018 at 12:17 PM Parth Gandhi <
parth.gan...@excellenceinfonet.com> wrote:
> Hi Team
Hey everyone,
I wanted to share a small tool I developed last weekend named Kafka Hawk.
Kafka Hawk monitors the __consumer_offsets topic in Kafka and reports on
the number of commits it sees from each consumer group and topic. It can
also optionally report information on the deltas between offset …
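If you just want to peek at that topic by hand, the console consumer can decode it too (a sketch from memory, so double-check it; the formatter class has moved packages between versions, and this path is the 0.11+ one):

    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic __consumer_offsets \
      --consumer-property exclude.internal.topics=false \
      --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"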
We've run into some unexpected behavior around this as well, though I
forgot to send in a note when we found it so I'm fuzzy on the details at
the moment. I'll chime back in if I can dig up exactly what we were doing,
but I'd also welcome a ruling from someone with knowledge of the code. I
seem to …
I will also +1 the JMX Prometheus exporter. It's capable of running as a
-javaagent so it's really easy to get up and running. And we happen to use
Prometheus anyways so it's pretty convenient for us.
Link: https://github.com/prometheus/jmx_exporter
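For the record, wiring the agent into a broker is a one-liner (paths, the port, and the config file name here are made up for illustration):

    KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=7071:/opt/kafka.yml" \
      bin/kafka-server-start.sh config/server.properties
    # Prometheus then scrapes http://<broker>:7071/metrics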
On Fri, Aug 10, 2018 at 1:55 AM Ishwor Gurung wrote: …
I don’t think this is a terrible idea. It’s really the only way to know what
events were before and what events were after an event across all partitions.
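Something like this is the shape I'd imagine (a hypothetical sketch; the topic name and marker payload are stand-ins):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.PartitionInfo;

    // Fan one control marker out to every partition of the "events" topic
    // so every partition sees the same "before/after" boundary.
    static void sendControlMarker(KafkaProducer<byte[], byte[]> producer, byte[] marker) {
        for (PartitionInfo p : producer.partitionsFor("events")) {
            producer.send(new ProducerRecord<>("events", p.partition(), null, marker));
        }
        producer.flush(); // don't proceed until every marker is actually written
    }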
> On Feb 26, 2018, at 11:18 AM, Ryan Worsley wrote:
>
> Hey everyone,
>
> I believe I have a use-case for writing a control message periodically …
We recently scaled up the number of brokers we had in our cluster. Instead of
adding partitions, we just reassigned the partitions to distribute them better
across all the brokers we now had. We did this for internal streams topics,
too, and things went pretty smoothly.
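Roughly, the stock tooling flow for that looks like this (a sketch; the ZK address, file names, and broker ids are placeholders):

    # generate a candidate plan for moving the listed topics onto brokers 1-4
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --topics-to-move-json-file topics.json --broker-list "1,2,3,4" --generate

    # save the proposed assignment as plan.json, then execute and verify it
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file plan.json --execute
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file plan.json --verify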
You can find documentation …
… wrote:
>
> Thanks, Matt. Have you done any benchmarking to see how using different
> Serializers may impact throughput/latency?
>
> Regards,
> Ali
>
> On Wed, Jan 10, 2018 at 7:55 AM, Matt Farmer wrote:
>
>> We use the default byte array serializer provided with Kafka and it works great for us.
We use the default byte array serializer provided with Kafka and it works great
for us.
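Concretely, the setup is just this (a sketch; the broker address, topic, and payload are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.ByteArraySerializer;

    public class ByteProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

            byte[] payload = "raw bytes".getBytes(); // whatever bytes you already have
            try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                // the bytes pass straight through; no extra serialization cost
                producer.send(new ProducerRecord<>("my-topic", payload));
            }
        }
    }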
> On Jan 9, 2018, at 8:12 AM, Ali Nazemian wrote:
>
> Hi All,
>
> I was wondering whether there is any best practice/recommendation for
> publishing byte messages to Kafka. Is there any specific Serializer …
This is “normal” as far as I know. We’ve seen this behavior after unclean
shutdowns of 0.10.1.1.
In the event of an unclean shutdown, Kafka has to rebuild some indexes, and for
large data directories this takes some time. We got bit by this a few times
recently when we had boxes that po…
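One knob that helped us recover faster (the value here is illustrative; the default is 1, and you'd size it to your disks):

    # server.properties -- threads per data dir used to rebuild indexes on
    # startup after an unclean shutdown
    num.recovery.threads.per.data.dir=8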
I’ll give a +1 for jmxtrans here. We use it with great success paired with
Graphite and Grafana for monitoring.
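A minimal config with the Graphite output writer looks roughly like this (written from memory, so double-check field names against the jmxtrans wiki; the hosts and the MBean are just examples):

    {
      "servers": [{
        "host": "kafka-broker-1", "port": "9999",
        "queries": [{
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
          "attr": ["Count", "OneMinuteRate"],
          "outputWriters": [{
            "@class": "com.googlecode.jmxtrans.model.output.GraphiteWriter",
            "settings": { "host": "graphite.example.com", "port": 2003 }
          }]
        }]
      }]
    }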
On December 6, 2017 at 7:43:31 AM, Subhash Sriram (subhash.sri...@gmail.com)
wrote:
Hi Irtiza,
Have you looked at jmxtrans? It has multiple output writers for the metrics
and one of t…
Bump, still haven't seen anything here. Betting this problem isn't unique
to us. Would love to hear how other folks are managing controlled restarts
of their clusters. =)
On Tue, Nov 28, 2017 at 4:40 PM Matt Farmer wrote:
> Hey all,
>
> So, I'm curious to hear how
Hey all,
So, I'm curious to hear how others have solved this problem.
We've got quite a few brokers and rolling all of them to pick up new
configuration (which consists of triggering a clean shutdown, then
restarting the service and waiting for replication to catch up before
moving on) ultimately …
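In shell-sketch form, the loop is roughly this (host names, the service manager, and the ZK address are placeholders):

    for broker in kafka1 kafka2 kafka3; do
      # clean shutdown + restart (controlled.shutdown.enable=true on the broker)
      ssh "$broker" 'systemctl restart kafka'

      # wait for replication to catch up before touching the next broker
      while [ -n "$(bin/kafka-topics.sh --zookeeper zk1:2181 --describe \
                    --under-replicated-partitions)" ]; do
        sleep 10
      done
    done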
> …ion: even if you use an In-Memory store, it's still backed by a
> changelog topic, right?
>
>
> -Matthias
>
> On 11/14/17 3:07 PM, Matt Farmer wrote:
> > Hey everyone,
> >
> > We ran across a little bit of a landmine in Kafka Streams 0.11.0.1.
> >
> > …
Thanks,
Matt Farmer
The JIRA ticket for its implementation still appears to be open, so I'd
guess it's not in 1.0.
On Fri, Nov 10, 2017 at 12:28 PM Artur Mrozowski wrote:
> Hi,
> I have a question about KIP-150. Has that functionality been released in
> version 1.0 or is it planned for version 1.1?
>
> Here it says …
> The above log keeps printing.
>
> Regards,
> Madhukar
>
> On Mon, Nov 6, 2017 at 2:05 AM, Matt Farmer wrote:
>
> > If you could provide more details about the kind of issues you saw when
> > you downgraded (e.g. error logs, behavior, etc) it might help folks help you.
If you could provide more details about the kind of issues you saw when you
downgraded (e.g. error logs, behavior, etc) it might help folks help you.
At the moment, I wouldn't know where to begin with the issue you've
described as there's not a ton of detail about what, exactly, went wrong.
On Sun …
I can confirm that the message size check in the producer works on the
uncompressed size as of 0.11.0.1, as I had to investigate this internally.
:)
I've got a similar problem with messages that can occasionally exceed this
limit. We're taking the approach of enforcing a hard size limit when event…
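If it helps anyone, the producer-side knob is max.request.size (a sketch; the 5 MB value is arbitrary, and the broker's message.max.bytes / per-topic max.message.bytes have to allow the message as well):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    Properties props = new Properties();
    // this cap is checked against the *uncompressed* serialized record size,
    // so size your events before compression
    props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, Integer.toString(5 * 1024 * 1024));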