Hello Kafka users!
Here is a blog post on how we migrated our 0.7 traffic over to 0.8 at
Knewton. I hope this is useful for anyone considering a similar cutover.
http://tech.knewton.com/blog/2015/09/how-knewton-cutover-the-core-of-its-infrastructure-from-kafka-0-7-to-kafka-0-8/
> We decided to skip the leap second problem (even though we're
> supposedly on a version that doesn't have that bug) by shutting down ntpd
> everywhere and then allowing it to slowly adjust the time afterwards
> without sending the leap second.
>
> -Todd
Hi Kafka users,
ZooKeeper in our staging environment was running on a very old Ubuntu
version that was exposed to the "leap second causes spuriously high CPU
usage" bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1020285
As a result, when the leap second arrived, the ZooKeeper CPU usage spiked.
> The ISR fluctuation is normal when the follower's consumer gets
> disconnected from the leader.
>
> In the coming 0.8.3 release there is a new feature added (KAFKA-1546
> <https://issues.apache.org/jira/browse/KAFKA-1546>) that handles traffic
> spikes better.
>
> Guozhang
Hi Kafka users,
Every ~3-4 days we are seeing a broker logging "Shrinking ISR for
partition..." only to log "Expanding ISR for partition..." a few seconds
later.
- The broker logging "Shrinking ISR..." is the leader of all partitions for
which the ISR is shrunk.
- The number of partitions is always
I think this question might relate to the very recently posted "callback
handler is not getting called if cluster is down" topic from "ankit tyagi".
I am using the 0.8.2.1 new producer send(ProducerRecord record,
Callback callback) with a Callback and never calling .get() on the
Future. I have not
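
For anyone reading along, here is a minimal sketch of the pattern being
described: an async send on the 0.8.2.1 new producer, relying on the
Callback instead of Future.get(). The topic name and broker address are
placeholders, not taken from the thread.

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AsyncSendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Async send: success or failure is reported through the callback,
        // so the returned Future is never inspected.
        producer.send(new ProducerRecord<>("my-topic", "key", "value"),
            new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception e) {
                    if (e != null) {
                        e.printStackTrace();  // e.g. cluster unreachable
                    }
                }
            });

        producer.close();
    }
}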
Hi Kafka users!
I was just migrating a cluster of 3 brokers from one set of EC2 instances
to another, but ran into replication problems. The migration method was to
stop one broker and let a new broker join with the same broker.id.
Replication started, but after ~4 of ~15 GB th
Hi Kafka users,
Was there ever a JIRA ticket filed for this?
"Re: Stale TopicMetadata"
http://mail-archives.apache.org/mod_mbox/kafka-users/201307.mbox/%3ce238b018f88c39429066fc8c4bfd0c2e019be...@esv4-mbx01.linkedin.biz%3E
As far as I can tell this is still an issue in 0.8.1.1
Using the python
I have a custom Decoder for my messages (Thrift). I want to be able to
handle "bad" messages that I can't decode. When the ConsumerIterator
encounters a bad message, the exception thrown by my Decoder bubbles up and
I can catch it and handle it. Subsequent calls to the ConsumerIterator give
me IllegalStateExceptions.
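
One possible workaround, sketched below: catch the decode failure inside
the Decoder itself and return null, so the iterator never enters its failed
state. MyThriftEvent and ThriftEventDecoder are hypothetical stand-ins for
your Thrift-generated class and existing decoder.

import kafka.serializer.Decoder;

// Wraps the real decoder and turns decode failures into null markers.
public class SafeThriftDecoder implements Decoder<MyThriftEvent> {
    private final ThriftEventDecoder inner = new ThriftEventDecoder();

    @Override
    public MyThriftEvent fromBytes(byte[] bytes) {
        try {
            return inner.fromBytes(bytes);
        } catch (Exception e) {
            // Bad message: swallow the error and return a null marker
            // instead of poisoning the iterator.
            return null;
        }
    }
}

The consuming loop can then test for null and keep iterating instead of
tearing down the stream.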
>
> Guozhang
>
>
> On Wed, Feb 26, 2014 at 11:04 AM, Christofer Hedbrandh <
> christo...@knewton.com> wrote:
>
> > Thanks for your response Guozhang.
> >
> > I did make sure that new metadata is fetched before taking out the old
> > broker. I set
> If you do a rolling bounce of the cluster to,
> for example, do an in-place upgrade, you need to make sure at least one
> broker in the list is alive during the rolling bounce.
>
> Hope this helps.
>
> Guozhang
>
> On Wed, Feb 26, 2014 at 8:19 AM, Christofer Hedbrandh <
1. Broker discovery is a producer feature, and it has a bug.
2. Broker discovery is not a producer feature, in which case I think many
people might benefit from clearer documentation.
3. I am doing something dumb, e.g. forgetting about some important
configuration.
Please let me know what you make of this.
Thanks,
Christofer Hedbrandh
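
For reference, a minimal 0.8 old-producer setup showing the broker list and
metadata refresh settings under discussion. Broker addresses and the topic
name are placeholders, not from the thread.

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class BrokerListExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Used for metadata fetches only; at least one of these brokers
        // must stay alive during a rolling bounce.
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // How often topic metadata is refreshed (default is 10 minutes).
        props.put("topic.metadata.refresh.interval.ms", "60000");

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("my-topic", "value"));
        producer.close();
    }
}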