Hi Jun,
Setting acks to -1 may solve this issue.
However, under load testing it causes the producer buffer to fill up, resulting
in failures and dropped messages on the client (producer) side.
Hence, this does not actually solve the problem.
I need to fix this on the Kafka broker side, so that there is no impact on the
producer.
I see a bug raised for the same issue, which is still open.
Do we have any solution for this?
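For what it's worth, the new-producer settings below are the ones that govern
behavior when the buffer fills under load (names from the 0.9.x producer
config; the values are placeholders for illustration, not a tuned config):

```properties
# Total memory (bytes) the producer may use to buffer unsent records
buffer.memory=33554432
# Larger batches and a small linger reduce buffer pressure under load
batch.size=16384
linger.ms=5
# 0.9.x: send() blocks when the buffer is full instead of throwing
block.on.buffer.full=true
```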
https://issues.apache.org/jira/browse/KAFKA-3916
http://mail-archives.apache.org/mod_mbox/kafka-dev/201606.mbox/%3cjira.12984498.146714867.10722.1467148737...@atlassian.jira%3E
Regards,
Mazhar Shaikh
Hi Jun,
No, Kafka doesn't delete these topics from Zookeeper ever, unless you've
run a delete command against the cluster. I'd expect either an issue with
Zookeeper or an admin having manually deleted the configuration.
Thanks
Tom Crayford
Heroku Kafka
On Thu, Aug 18, 2016 at 2:42 AM, Jun MA wrote:
Hi Guozhang,
Hm... I hadn't thought of the repartitioning involvement.
I'm not confident I understand completely, but I believe you're saying the
decision to process data in this way is made before the data being processed
is available, because the partition *may* change, because the groupBy
Hi users,
Does someone know how I could do something similar to this command
"kafka-consumer-offset-checker.sh --group mygroup --topic mytopic
--zookeeper 10.1.2.:2181" in Java, using kafka_2.10-0.8.2.0?
I need to know the last offset consumed and the logSize by group and topic.
Thanks,
Sergio
Mazhar,
There is probably a misunderstanding. Ack=-1 (or all) doesn't mean waiting
for all replicas. It means waiting for all replicas that are in sync. So,
if a replica is down, it will be removed from the in-sync replicas, which
allows the producer to continue with fewer replicas.
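As a side note, acks=-1 is usually paired with the broker/topic setting
min.insync.replicas, which bounds how far the ISR may shrink before writes are
rejected (a sketch of the common pairing, not something prescribed in this
thread):

```properties
# Producer side: wait for acknowledgement from all in-sync replicas
acks=-1
# Broker/topic side: fail produce requests once the ISR drops below 2,
# so acks=-1 cannot silently degrade to a single replica
min.insync.replicas=2
```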
For the conn
Hi Jun,
Thanks for the clarification, I'll give ack=-1 a try (in the producer).
However, I fell back to an older version of Kafka (*kafka_2.10-0.8.2.1*),
and I don't see this issue (loss of messages).
It looks like kafka_2.11-0.9.0.1 has a bug during replication.
Thanks,
Regards,
Mazhar Shai
Mazhar,
With ack=1, whether you lose messages or not is not deterministic. It
depends on the timing of when the broker receives/acks a message, when the
follower fetches the data, and when the broker fails. So, it's possible that
you got lucky in one version and unlucky in another.
Thanks,
Jun
On Thu, Aug 18,
Hello,
Wanted to check if these JIRAs are on track for 0.10.1.0.
https://issues.apache.org/jira/browse/KAFKA-3478
https://issues.apache.org/jira/browse/KAFKA-3705
Now that 0.10.0.1 is out, will the next release be 0.10.1.0 or another
bug-fix release?
Srikanth
Kafka users,
I want to resurface this post, since it has become crucial for our team to
understand the recent Samza throughput issues we are facing.
Any help is appreciated.
Thanks,
David
On Tue, Aug 2, 2016 at 10:30 PM David Yu wrote:
> I'm having a hard time finding documentation explaining the
Hi,
Can I create a Kafka Streams app that consumes from a set of topics sharing a
common prefix, the way it's possible using createMessageStreamsByFilter? If so,
how?
Best,
Drew
This doc link may help:
http://kafka.apache.org/documentation.html#new_producer_monitoring
On Fri, Aug 19, 2016 at 2:36 AM, David Yu wrote:
> Kafka users,
>
> I want to resurface this post since it becomes crucial for our team to
> understand our recent Samza throughput issues we are facing.
>
Hey Drew,
You can use a Whitelist, passing your regex pattern as a parameter, e.g.:
Whitelist filter = new Whitelist("topic_\\d+");
consumer.createMessageStreamsByFilter(filter, 1);
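The whitelist is an ordinary Java regex, so you can sanity-check what the
pattern matches with the JDK alone (no Kafka dependency; the topic names here
are made up for illustration):

```java
import java.util.regex.Pattern;

public class WhitelistCheck {
    public static void main(String[] args) {
        // Same pattern as passed to the Whitelist above
        Pattern topicPattern = Pattern.compile("topic_\\d+");

        // Hypothetical topic names, just to show which ones match
        System.out.println(topicPattern.matcher("topic_42").matches());
        System.out.println(topicPattern.matcher("topic_abc").matches());
    }
}
```

Running it prints true for "topic_42" and false for "topic_abc", confirming
the prefix-plus-digits intent of the pattern.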
> On Aug 19, 2016, at 2:46 AM, Drew Kutcharian wrote:
>
> Hi,
>
> Can I create a Kafka Streams app that co