Any timeline on an official 0.8.2.1 release? Were there any issues found
with rc2? Just checking in because we are anxious to update our brokers but
waiting for the patch release. Thanks.
On Thu, Mar 5, 2015 at 12:01 AM, Neha Narkhede wrote:
> +1. Verified quick start, unit tests.
>
> On Tue, Ma…
+1
> > …levels of CPU usage as with 0.8.1.1 (though with an
> > additional broker), so everything looks pretty great.
> >
> > We're using acks = "all" (-1) by the way.
> >
> > Best regards,
> > Mathias
> >
> > On Sat Feb 14 2015 at 4:40:31 AM Sol…
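Mathias mentions producing with acks = "all" (-1) above. For anyone comparing against a similar setup, here is a minimal sketch of that producer configuration, assuming the new Java producer that ships with 0.8.2; the broker list and topic name below are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksAllExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers
        // "all" (equivalent to -1) waits for the full ISR to acknowledge each write.
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        try {
            producer.send(new ProducerRecord<String, String>("test-topic", "key", "value"));
        } finally {
            producer.close();
        }
    }
}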
> > https://issues.apache.org/jira/browse/KAFKA-1952
> >
> > I would recommend people hold off on 0.8.2 upgrades until we have a
> > handle on this.
> >
> > -Jay
> >
> > On Fri, Feb 13, 2015 at 1:47 PM, Solon Gordon wrote:
> >
> > > The partit…
> …see which methods used the most CPU?
>
> Thanks,
>
> Jun
>
> On Thu, Feb 12, 2015 at 3:19 PM, Solon Gordon wrote:
>
> > I saw a very similar jump in CPU usage when I tried upgrading from 0.8.1.1
> > to 0.8.2.0 today in a test environment. The Kafka cluster there is…
I saw a very similar jump in CPU usage when I tried upgrading from 0.8.1.1
to 0.8.2.0 today in a test environment. The Kafka cluster there is two
m1.larges handling 2,000 partitions across 32 topics. CPU usage rose from
40% into the 150%–190% range, and load average from under 1 to over 4.
Downgrad…
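Jun's question above about which methods are using the most CPU can be answered without a full profiler by sampling per-thread CPU time and stacks over the broker's JMX port. A rough sketch; the broker host, port 9999, and the 10-second window are assumptions, not values from this thread:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.HashMap;
import java.util.Map;
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerCpuSampler {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; the broker must be started with JMX enabled (e.g. JMX_PORT=9999).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ThreadMXBean threads = JMX.newMXBeanProxy(
                    conn, new ObjectName(ManagementFactory.THREAD_MXBEAN_NAME), ThreadMXBean.class);

            // First per-thread CPU-time sample (requires CPU time measurement to be
            // supported and enabled on the target JVM).
            Map<Long, Long> before = new HashMap<Long, Long>();
            for (long id : threads.getAllThreadIds()) {
                before.put(id, threads.getThreadCpuTime(id));
            }

            Thread.sleep(10000); // sampling window

            // Report threads that burned significant CPU in the window, with their stacks.
            for (long id : threads.getAllThreadIds()) {
                Long start = before.get(id);
                long end = threads.getThreadCpuTime(id);
                if (start == null || start < 0 || end < 0) {
                    continue;
                }
                long cpuMillis = (end - start) / 1000000;
                if (cpuMillis < 1000) {
                    continue; // skip threads using less than roughly 10% of one core
                }
                ThreadInfo info = threads.getThreadInfo(id, 10);
                if (info == null) {
                    continue;
                }
                System.out.println(info.getThreadName() + ": " + cpuMillis + " ms of CPU");
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
            }
        } finally {
            connector.close();
        }
    }
}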
> > …either bump up your heap or decrease your fetch size.
> >
> > -jay
> >
> > On Wed, Dec 10, 2014 at 7:48 AM, Solon Gordon wrote:
> >
> >> I just wanted to bump this issue to see if anyone has thoughts. Based on
> >> the error message it seems like the broker…
> …to support really large messages and increase these values,
> you may run into OOM issues.
>
> Gwen
>
> On Wed, Dec 10, 2014 at 7:48 AM, Solon Gordon wrote:
> > I just wanted to bump this issue to see if anyone has thoughts. Based on
> > the error message it seems like the broker…
I just wanted to bump this issue to see if anyone has thoughts. Based on
the error message it seems like the broker is attempting to consume nearly
2GB of data in a single fetch. Is this expected behavior?
Please let us know if more details would be helpful or if it would be
better for us to file…
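To make the heap/fetch-size trade-off above concrete: a fetcher can buffer up to its maximum fetch size for every partition it is fetching, so the worst case grows with the product of the two. A back-of-the-envelope sketch; the 1 MiB fetch size, 1,000 partitions, and 4 GiB heap below are illustrative assumptions, not numbers from this thread:

// Rough worst-case estimate of replica-fetch buffering on a broker.
public class FetchMemoryEstimate {
    public static void main(String[] args) {
        long fetchMaxBytesPerPartition = 1L * 1024 * 1024; // e.g. replica.fetch.max.bytes
        long partitionsFollowed = 1000;                     // partitions this broker replicates
        long heapBytes = 4L * 1024 * 1024 * 1024;           // broker heap (-Xmx)

        long worstCase = fetchMaxBytesPerPartition * partitionsFollowed;
        System.out.println("Worst-case fetch buffering: " + worstCase / (1024 * 1024) + " MiB");
        System.out.println("Fraction of heap: " + (100.0 * worstCase / heapBytes) + "%");
        // Raising the per-partition fetch size to support very large messages scales this
        // linearly, which is why the advice above is to either grow the heap or keep the
        // fetch size down.
    }
}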
> …controlled shutdown. Would you mind trying out 0.8.2-beta?
>
> On Fri, Nov 7, 2014 at 11:52 AM, Solon Gordon wrote:
>
> > We're using 0.8.1.1 with auto.leader.rebalance.enable=true.
> >
> > On Fri, Nov 7, 2014 at 2:35 PM, Guozhang Wang wrote:
> > …
We're using 0.8.1.1 with auto.leader.rebalance.enable=true.
On Fri, Nov 7, 2014 at 2:35 PM, Guozhang Wang wrote:
> Solon,
>
> Which version of Kafka are you running and are you enabling auto leader
> rebalance at the same time?
>
> Guozhang
>
> On Fri, Nov 7, …
Hi all,
My team has observed that if a broker process is killed in the middle of
the controlled shutdown procedure, the remaining brokers start spewing
errors and do not properly rebalance leadership. The cluster cannot recover
without major manual intervention.
Here is how to reproduce the problem…
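One way to see whether the remaining brokers ever converge after an event like this is to poll the UnderReplicatedPartitions gauge each broker exposes over JMX; it should drop back to zero once leadership and replication have recovered. A small sketch, assuming JMX is enabled on the broker; the host and port are placeholders, and the MBean is located by substring because its exact name differs across 0.8.x releases:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class UnderReplicatedCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; start the broker with JMX enabled (e.g. JMX_PORT=9999).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Match by substring rather than hard-coding the ObjectName, since the JMX
            // naming scheme changed between 0.8.1.1 and 0.8.2.
            Set<ObjectName> names = conn.queryNames(null, null);
            for (ObjectName name : names) {
                if (name.toString().contains("UnderReplicatedPartitions")) {
                    // The Yammer gauge publishes its current reading as the "Value" attribute.
                    Object value = conn.getAttribute(name, "Value");
                    System.out.println(name + " = " + value);
                }
            }
        } finally {
            connector.close();
        }
    }
}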
> > …an issue: basically when we get a request timeout from the broker, we
> > would avoid trying to re-connect to it when refreshing metadata. Could you
> > file a JIRA for this?
> >
> > Guozhang
> >
> >
> > On Tue, Nov 4, 2014 at 10:43 AM, Solon Gordon wrote:
Hi all,
I've been investigating how Kafka 0.8.1.1 responds to the scenario where
one broker loses connectivity (due to something like a hardware issue or
network partition). It looks like the brokers themselves adjust within a
few seconds to reassign leaders and shrink ISRs. However, I see produce…
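How producers behave while a broker is unreachable depends largely on their retry and metadata-refresh settings. A minimal sketch of those knobs, assuming the 0.8.x "old" producer's Java API; the broker list, topic, and retry values are placeholders:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class OldProducerRetryExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "-1");   // wait for the full ISR, as discussed above
        props.put("message.send.max.retries", "5"); // retries after a failed or timed-out send
        props.put("retry.backoff.ms", "200");       // wait before retrying; metadata is refreshed between retries

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<String, String>("test-topic", "key", "value"));
        } finally {
            producer.close();
        }
    }
}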