[
https://issues.apache.org/jira/browse/KAFKA-3845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ewen Cheslack-Postava updated KAFKA-3845:
-
Status: Patch Available (was: Open)
> Support per-connector converters
>
Jun,
The motivation for this KIP is to handle joins and windows in Kafka Streams
better, and since Streams supports time-based windows, the KIP suggests
combining time-based deletion and compaction.
It might make sense to do the same for size-based windows, but can you
think of a concrete use case?
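(As a concrete sketch of the combined policy: assuming a broker that accepts
both policies at once, a topic could be configured roughly as below. The
AdminClient API shown postdates this thread and the topic name is
hypothetical, so treat this as illustrative only.)

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class CompactAndDeleteSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // "compact,delete" keeps the latest value per key (compaction)
                // while still deleting whole segments older than retention.ms.
                ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "windowed-changelog");
                Config config = new Config(Arrays.asList(
                    new ConfigEntry("cleanup.policy", "compact,delete"),
                    new ConfigEntry("retention.ms", String.valueOf(24 * 60 * 60 * 1000L))));
                admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
            }
        }
    }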
[
https://issues.apache.org/jira/browse/KAFKA-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418299#comment-15418299
]
ASF GitHub Bot commented on KAFKA-4034:
---
GitHub user hachikuji opened a pull request
[
https://issues.apache.org/jira/browse/KAFKA-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Gustafson reassigned KAFKA-4034:
--
Assignee: Jason Gustafson
> Consumer need not lookup coordinator when using manual assi
Hi Jun,
Correct me if I am wrong.
If the response size includes throttled and unthrottled replicas, I am
wondering if this is possible:
The leader broker B1 receives a fetch request for partitions P1 and P2 of a
topic from replica broker B2. In this case, let's say that only P2 is
throttled on the leade
Jason Gustafson created KAFKA-4034:
--
Summary: Consumer need not lookup coordinator when using manual
assignment
Key: KAFKA-4034
URL: https://issues.apache.org/jira/browse/KAFKA-4034
Project: Kafka
Hey Gwen,
I think this was more than a verification step; it was a building step
towards backwards-compatible clients, or clients that can select features
based on the brokers they are talking to. Are we now against the idea of
having smarter clients? This adds complexity to enable clients to inform
Hi, Damian,
Thanks for the proposal. It makes sense to use time-based deletion
retention and compaction together, as you mentioned in the KStreams case.
Is there a use case where we want to combine size-based deletion retention
and compaction together?
Jun
On Thu, Aug 11, 2016 at 2:00 AM, Damian Guy
Hi, Joel,
Yes, the response size includes both throttled and unthrottled replicas.
However, the response is only delayed up to max.wait if the response size
is less than min.bytes, which matches the current behavior. So, there is no
extra delay due to throttling, right? For replica fetchers, the d
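(The same min.bytes/max.wait semantics are exposed to ordinary consumers; a
minimal sketch, with placeholder broker address and topic, is below. The
broker parks the fetch in purgatory until fetch.min.bytes of data are
available or fetch.max.wait.ms elapses.)

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class FetchTuningSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024); // min.bytes
            props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);     // max.wait
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                consumer.poll(1000); // held on the broker per the rules above
            }
        }
    }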
It's probably worth filing a ticket in JIRA. Please also include a bit of
context why it's important for the consumers to tolerate system clock
changes.
Ismael
On Thu, Aug 11, 2016 at 7:54 PM, Gabriel Ibarra <
gabriel.iba...@tallertechnologies.com> wrote:
> Thanks Ismael,
> I agree with you, It
[
https://issues.apache.org/jira/browse/KAFKA-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417824#comment-15417824
]
Aishwarya Ganesan edited comment on KAFKA-4009 at 8/11/16 7:50 PM:
-
[
https://issues.apache.org/jira/browse/KAFKA-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417824#comment-15417824
]
Aishwarya Ganesan commented on KAFKA-4009:
--
The way I test is to stop the cluster af
[
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417813#comment-15417813
]
Jason Gustafson commented on KAFKA-3916:
Could we be hitting the idle connection t
[
https://issues.apache.org/jira/browse/KAFKA-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-4033 started by Vahid Hashemian.
--
> KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscriptio
Vahid Hashemian created KAFKA-4033:
--
Summary: KIP-70: Revise Partition Assignment Semantics on New
Consumer's Subscription Change
Key: KAFKA-4033
URL: https://issues.apache.org/jira/browse/KAFKA-4033
Hello,
Thanks to everyone who voted, and provided feedback on this KIP.
The KIP has passed with a total of 9 "+1" votes.
Feedback and suggestions on the KIP are still welcome!
Regards,
--Vahid
From: Vahid S Hashemian/Silicon Valley/IBM@IBMUS
To: dev@kafka.apache.org
Date: 08/08/2016
Thanks Ismael,
I agree with you; it seems to be a problem related to absolute timers.
So, how do we continue? Do you agree with reporting this as a bug?
In our system this issue has a great impact, and maybe this particular
issue could be fixed without seriously decreasing performance.
On Thu, A
Hi Jun,
I'm not sure that would work unless we have separate replica fetchers,
since this would cause all replicas (including ones that are not throttled)
to get delayed. Instead, we could just have the leader populate the
throttle-time field of the response as a hint to the follower as to how
lon
[
https://issues.apache.org/jira/browse/KAFKA-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417675#comment-15417675
]
Tom Crayford commented on KAFKA-3894:
-
Hi Jun,
We're probably going to start on b. fo
Mayuresh,
That's a good question. I think if the response size (after leader
throttling) is smaller than min.bytes, we will just delay the sending of
the response up to max.wait as we do now. This should prevent frequent
empty responses to the follower.
Thanks,
Jun
On Wed, Aug 10, 2016 at 9:17
I think we do not need to make the same guarantee for "how old a Kafka
version you can upgrade to the latest in one shot" (just call it
"upgrade maintenance" for short) as for "how old a Kafka version still
gets backported critical bug fixes from the latest version" (call it
Hi,
We've written a Kafka consumer, and on each restart it reads messages from
the start of the partition. Please help me with how we can make it read from
the last offset.
Also, please subscribe me to the forum.
Thanks, Suresh
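(A likely cause is that no offsets are being committed, or the group.id
changes between runs; with no committed offset, the consumer falls back to
auto.offset.reset. Below is a minimal sketch of committing explicitly so
restarts resume from the last committed position; broker address, group, and
topic are placeholders.)

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ResumeFromCommittedSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "my-group");        // must stay the same across restarts
            props.put("enable.auto.commit", "false"); // we commit explicitly below
            props.put("auto.offset.reset", "latest"); // used only when no committed offset exists
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        // process record.value() here
                    }
                    consumer.commitSync(); // persist progress so a restart resumes here
                }
            }
        }
    }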
Ismael,
Thanks for running the release.
Jun
On Wed, Aug 10, 2016 at 5:01 PM, Ismael Juma wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 0.10.0.1.
> This is a bug fix release that fixes 53 issues in 0.10.0.0.
>
> All of the changes in this release can
[
https://issues.apache.org/jira/browse/KAFKA-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417518#comment-15417518
]
Damian Guy commented on KAFKA-3777:
---
We need to support concurrent access (Queryable Sta
Thanks Ismael for managing the release!
Guozhang
On Thu, Aug 11, 2016 at 12:15 AM, Ismael Juma wrote:
> Thank you Gwen. :) Also, thanks to Jun for copying the artifacts to the SVN
> release repo (requires a PMC member) and to Gwen for answering my
> questions.
>
> Ismael
>
> On Thu, Aug 11, 20
Hi Kafka Team,
I'm using Kafka (kafka_2.11-0.9.0.1) with the librdkafka (0.8.1) API for the
producer.
During a run of 2 hours, I noticed that the total number of messages ack'd by
the librdkafka delivery report is greater than the max offset of a partition
on the Kafka broker.
I'm running the Kafka broker with replication facto
Kafka code uses System.currentTimeMillis in a number of places, so it would
not surprise me if it misbehaves when the clock is turned back by an hour.
System.nanoTime is meant to handle this issue, but there are questions
about the performance impact of using that (
https://github.com/apache/kafka/
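(A minimal illustration of the difference, not code from Kafka itself: a
deadline computed from currentTimeMillis shifts when the wall clock is set
back, while nanoTime keeps measuring elapsed time monotonically.)

    public class ClockSketch {
        public static void main(String[] args) throws InterruptedException {
            long wallDeadline = System.currentTimeMillis() + 30_000; // moves with the system clock
            long monoStart = System.nanoTime();                      // immune to clock changes
            Thread.sleep(1000);
            // If the clock is turned back an hour here, wallExpired stays false
            // for an extra hour, while the nanoTime measurement is unaffected.
            boolean wallExpired = System.currentTimeMillis() >= wallDeadline;
            boolean monoExpired = (System.nanoTime() - monoStart) / 1_000_000 >= 30_000;
            System.out.println("wall expired: " + wallExpired + ", mono expired: " + monoExpired);
        }
    }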
Are you running with unclean leader election on? Are you setting
min.insync.replicas at all?
Can you attach controller and any other logs from the brokers you have?
They would be crucial in debugging this kind of issue.
Thanks
Tom Crayford
Heroku Kafka
On Thursday, 11 August 2016, Mazhar Shaik
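(For context, a sketch of the durability settings these questions are probing;
these are generic illustrative values, not taken from the reporter's setup.
On the broker/topic side: unclean.leader.election.enable=false so an
out-of-sync replica is never elected leader, and min.insync.replicas=2 so
that, combined with acks=all, a write needs two in-sync copies before it is
acknowledged.)

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class DurableProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("acks", "all"); // wait for all in-sync replicas (min.insync.replicas)
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            }
        }
    }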
Thanks for answering; all help is welcome.
Yes, I tested without changing the clock, and it works well.
Actually, both consumers are running in different processes,
so I think it is not the case that you mention.
I even tested this using two different Kafka clients,
using the Java client and using lib
GitHub user sven0726 opened a pull request:
https://github.com/apache/kafka/pull/1719
Sentences are not fluent
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sven0726/kafka trunk
Alternatively you can review and apply these cha
Hi Jason,
Thanks for your input - appreciated.
1. Would it make sense to use this KIP in the consumer coordinator to
> expire offsets based on the topic's retention time? Currently, we have a
> periodic task which scans the full cache to check which offsets can be
> expired, but we might be able
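(A toy model of the periodic scan being described, not Kafka's actual
coordinator code: every minute, drop any committed offset whose commit
timestamp has fallen outside the retention window.)

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class OffsetExpirySketch {
        private final Map<String, Long> commitTimeMs = new ConcurrentHashMap<>();
        private final long retentionMs = 24 * 60 * 60 * 1000L;

        public void start() {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                long now = System.currentTimeMillis();
                // full scan of the cache, as the current implementation does
                commitTimeMs.entrySet().removeIf(e -> now - e.getValue() > retentionMs);
            }, 1, 1, TimeUnit.MINUTES);
        }
    }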
Hi Mayuresh,
That's a good question and something that should be covered.
I think if the leader throttles partitions so the response gets too small, it
should be automatically delayed in purgatory. Likewise, on the follower, if
the request contains no partitions the request will again be
[
https://issues.apache.org/jira/browse/KAFKA-3096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ismael Juma updated KAFKA-3096:
---
Status: In Progress (was: Patch Available)
> Leader is not set to -1 when it is shutdown if followers
Thank you Gwen. :) Also, thanks to Jun for copying the artifacts to the SVN
release repo (requires a PMC member) and to Gwen for answering my questions.
Ismael
On Thu, Aug 11, 2016 at 4:37 AM, Gwen Shapira wrote:
> Woohoo!
>
> Thank you, Ismael! You make a great release manager :)
>
> On Wed, A
[
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416682#comment-15416682
]
Michal Turek edited comment on KAFKA-3916 at 8/11/16 7:14 AM:
--
Do we need to make a decision on this particular point? Could we gauge
community demand (people tend to ask for fixes to be backported in JIRA)
and decide then?
If we do a good job of avoiding regressions, then it seems that each
branch should really only need one or a maximum of two bug fix
[
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416682#comment-15416682
]
Michal Turek commented on KAFKA-3916:
-
Production of messages was failing for a few seco
Hi Joel,
I think my suggestion was misunderstood. :) I suggested that we should
support upgrades to the latest release for a reasonable period (and I used
2 years as an example). That doesn't mean supporting all of those branches
for that period. It simply means that we maintain the code necessary