e and we probably should
not optimize for that.
Thanks,
Jiangjie (Becket) Qin
On Fri, Aug 11, 2017 at 2:08 PM, Apurva Mehta wrote:
> Thanks for your email Becket. I would be interested in hearing others
> opinions on which should be a better default between acks=1 and acks=all.
>
> One
ProducerConfig.forSemantic(Semantic semantic);
Where the semantics are "AT_MOST_ONCE, AT_LEAST_ONCE, EXACTLY_ONCE". So
users could just pick the one they want. This would be as if we had more
than one default config set.
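For illustration, usage could then look like the sketch below (Semantic and
forSemantic are the proposed, not-yet-existing names, and the per-semantic
settings are just one plausible mapping):

import java.util.Properties;

public final class SemanticConfigs {
    public enum Semantic { AT_MOST_ONCE, AT_LEAST_ONCE, EXACTLY_ONCE }

    // Hypothetical factory from the proposal: bundle a default config set
    // per delivery semantic instead of asking users to tune each knob.
    public static Properties forSemantic(Semantic semantic) {
        Properties props = new Properties();
        switch (semantic) {
            case AT_MOST_ONCE:   // fire and forget
                props.put("acks", "0");
                props.put("retries", "0");
                break;
            case AT_LEAST_ONCE:  // durable, but duplicates are possible
                props.put("acks", "all");
                props.put("retries", String.valueOf(Integer.MAX_VALUE));
                break;
            case EXACTLY_ONCE:   // durable and duplicate free
                props.put("acks", "all");
                props.put("enable.idempotence", "true");
                break;
        }
        return props;
    }
}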
Thanks,
Jiangjie (Becket) Qin
On Fri, Aug 11, 2017 at 5:26 PM,
out we do have to put an upper
bound for the max.in.flight.requests.per.connection, maybe it should be
something like 500 instead of 5?
Thanks,
Jiangjie (Becket) Qin
On Sat, Aug 12, 2017 at 2:04 PM, Jay Kreps wrote:
> Becket,
>
> I think this proposal actually does a great deal to addr
Ah, never mind, my last calculation actually forgot to take the number of
partitions into account. So it does seem to be a problem if we keep the info
of the last N appended batches on the broker.
On Sat, Aug 12, 2017 at 9:50 PM, Becket Qin wrote:
> Hi Jay and Apurva,
>
> Thanks for the reply. I agre
uld limit the total memory of the sequence buffered on the broker. In
the worst case, falling back to do a disk search may not be that bad
assuming people are not doing insane things.
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 14, 2017 at 1:36 PM, Guozhang Wang wrote:
> Just want to clarif
discuss this in a separate
thread.
Thanks,
Jiangjie (Becket) Qin
On Tue, Aug 15, 2017 at 1:46 PM, Guozhang Wang wrote:
> Hi Jay,
>
> I chatted with Apurva offline, and we think the key of the discussion is
> that, as summarized in the updated KIP wiki, whether we should conside
batch from a
request also introduces some complexity. Again, personally I think it is
fine to expire a little bit late. So maybe we don't need to expire a batch
that is already in flight. In the worst case we will expire it with delay
of request.timeout.ms.
Thanks,
Jiangjie (Becket) Qin
duplicates, and we may not need to reset the PID.
That said, batch expiration is probably already rare enough that
it may not be necessary to optimize for that.
Thanks,
Jiangjie (Becket) Qin
On Wed, Aug 23, 2017 at 5:01 PM, Jun Rao wrote:
> Hi, Becket,
>
> If a message ex
Thanks everyone!
On Thu, Aug 24, 2017 at 11:27 AM, Jason Gustafson
wrote:
> Congrats Becket!
>
> On Thu, Aug 24, 2017 at 11:15 AM, Ismael Juma wrote:
>
> > Congratulations Becket!
> >
> > On 24 Aug 2017 6:20 am, "Joel Koshy" wrote:
> >
> > H
-flight batch immediately,
but wait for the produce response. If the batch has been successfully
appended, we do not expire it. Otherwise, we expire it.
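In pseudocode, the rule would be roughly the following (Batch and
ProduceResponse are minimal stand-ins for the producer internals, not the
real classes):

interface Batch { boolean deliveryTimeoutElapsed(); }
interface ProduceResponse { boolean successful(); }

// Never expire an in-flight batch immediately; wait for the produce
// response, and expire only if the broker did not append the batch.
static boolean shouldExpire(Batch batch, ProduceResponse responseOrNull) {
    if (!batch.deliveryTimeoutElapsed()) {
        return false; // still within the delivery timeout
    }
    // Timed out while in flight: a null response means the request itself
    // failed or timed out, so as far as we know the batch was not appended.
    return responseOrNull == null || !responseOrNull.successful();
}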
Thanks,
Jiangjie (Becket) Qin
On Thu, Aug 24, 2017 at 11:26 AM, Jason Gustafson
wrote:
> @Becket
>
> Good point about unnecessarily
of min(remaining delivery.timeout.ms, request.timeout.ms)?
Thanks,
Jiangjie (Becket) Qin
On Fri, Aug 25, 2017 at 9:34 AM, Jun Rao wrote:
> Hi, Becket,
>
> Good point on expiring inflight requests. Perhaps we can expire an inflight
> request after min(remaining deliver
delivery.timeout.ms in this case is that if there are many batches to be
expired in the queue, we may end up with continuous expirations and PID
resets.
Thanks,
Jiangjie (Becket) Qin
On Sun, Aug 27, 2017 at 12:08 PM, Jun Rao wrote:
> Hi, Jiangjie,
>
> If we want to enforce delivery.timeout.ms, we nee
cause currently
when a TimeoutException is thrown, there is no guarantee whether the messages
are delivered or not.
Thanks,
Jiangjie (Becket) Qin
On Tue, Aug 29, 2017 at 12:33 PM, Jason Gustafson
wrote:
> I think I'm with Becket. We should wait for request.timeout.ms for each
> produce
ut.ms, but we will still wait for the
response to be returned before sending the next request.
Thanks,
Jiangjie (Becket) Qin
On Tue, Aug 29, 2017 at 4:00 PM, Jun Rao wrote:
> Hmm, I thought delivery.timeout.ms bounds the time from a message is in
> the
> accumulator (i.e., when send() r
Sounds good to me as well.
On Tue, Aug 29, 2017 at 2:43 AM, Ismael Juma wrote:
> Sounds good to me too. Since this is a non controversial change, I suggest
> starting the vote in 1-2 days if no-one else comments.
>
> Ismael
>
> On Thu, Aug 24, 2017 at 7:32 PM, Jason Gustafson
> wrote:
>
> > See
(Becket) Qin
On Wed, Sep 6, 2017 at 3:14 PM, Jun Rao wrote:
> Hi, Sumant,
>
> The diagram in the wiki seems to imply that delivery.timeout.ms doesn't
> include the batching time.
>
> For retries, probably we can just default it to MAX_INT?
>
> Thanks,
>
> Jun
>
+1. Thanks for the KIP, Sumant and Joel.
On Fri, Sep 8, 2017 at 11:33 AM, Jason Gustafson wrote:
> +1. Thanks for the KIP.
>
> On Fri, Sep 8, 2017 at 8:17 AM, Sumant Tambe wrote:
>
> > Updated.
> >
> > On 8 September 2017 at 02:04, Ismael Juma wrote:
> >
> > > Thanks for the KIP. +1 (binding)
ber of 5 max.in.flight.requests.per.connection
3. bounded memory footprint on the cached sequence/timestamp/offset entries.
Hope it's not too late to have the changes if that makes sense.
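For concreteness, the client-side setup this discussion centers on would be
something like the sketch below (the broker address is a placeholder, and
whether 5 remains the bound is exactly the open question):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public final class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Idempotence requires the broker to cache sequence/timestamp/offset
        // entries for recent batches, which is where the 5-in-flight cap and
        // the bounded-memory concern above come from.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send records as usual...
        }
    }
}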
Thanks,
Jiangjie (Becket) Qin
On Mon, Sep 11, 2017 at 11:21 AM, Apurva Mehta wrote:
> Thanks for th
urgent to fix
the upgrade path.
Thanks,
Jiangjie (Becket) Qin
On Mon, Sep 11, 2017 at 4:13 PM, Apurva Mehta wrote:
> Hi Becket,
>
> Regarding the current implementation: we opted for a simpler server side
> implementation where we _don't_ snapshot the metadata of the last 5
fault. The metadata of JoinGroupRequests is likely similar, so the
aggregated metadata should be highly compressible.
Thanks,
Jiangjie (Becket) Qin
On Mon, May 23, 2016 at 9:17 AM, Guozhang Wang wrote:
> The original concern is that regex may not be efficiently supported
> across-languag
.
4. SyncGroupResponse will read the message, extract the assignment part and
send back the partition assignment. We can compress the partition
assignment before sending it out if we want.
Jiangjie (Becket) Qin
On Mon, May 23, 2016 at 5:08 PM, Jason Gustafson wrote:
> >
> > Jason, d
Awesome!
On Tue, May 24, 2016 at 9:41 AM, Jay Kreps wrote:
> Woohoo!!! :-)
>
> -Jay
>
> On Tue, May 24, 2016 at 9:24 AM, Gwen Shapira wrote:
>
> > The Apache Kafka community is pleased to announce the release for Apache
> > Kafka 0.10.0.0.
> > This is a major release with exciting new features,
format is the following:
MemberMetadata => Version Generation ClientId Host Subscription Assignment
So DescribeGroupResponse will just return the entire compressed
GroupMetadataMessage. SyncGroupResponse will return the corresponding inner
message.
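Rendered as a plain struct just to make the fields explicit (illustrative
only; the real definition lives in the group metadata message schema):

// Illustrative rendering of the MemberMetadata layout above.
public final class MemberMetadata {
    short version;        // Version
    int generation;       // Generation
    String clientId;      // ClientId
    String host;          // Host
    byte[] subscription;  // serialized Subscription (opaque to the broker)
    byte[] assignment;    // serialized Assignment (what SyncGroupResponse returns)
}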
Thanks,
Jiangjie (Becket) Qin
On Tue, May
lag" a bad thing?
Thanks,
Jiangjie (Becket) Qin
On Tue, May 24, 2016 at 4:21 PM, Gwen Shapira wrote:
> +1 (binding)
>
> Thanks for responding to all my original concerns in the discussion thread.
>
> On Tue, May 24, 2016 at 1:37 PM, Eric Wasserman
> wrote:
>
> > Hi,
r may lose
messages if auto commit is enabled, or the manual commit might fail after a
consumer.poll() because the partitions might have been reassigned. So
having a separate rebalance timeout also potentially means a big change for
users.
Thanks,
Jiangjie (Becket) Qin
On Fri, Jun 3, 20
by user thread? This
is an implementation detail but may be worth thinking about a bit more.
Thanks,
Jiangjie (Becket) Qin
On Mon, Jun 6, 2016 at 11:27 AM, Guozhang Wang wrote:
> Jiangjie:
>
> About doing the rebalance in the background thread, I'm a bit concerned as
> it will ch
> Sorry - I'm officially confused. I think it may not be required - since the
bytes, we always return the first message even if it is bigger than
max fetch size. Otherwise we only return up to fetch max bytes. We only do
this for __consumer_offsets topic so no user topic will be impacted.
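As a sketch, the proposed sizing rule is essentially the following
(hypothetical helper; broker internals simplified):

// Always return at least the first message of __consumer_offsets, even if
// it exceeds the fetch size; otherwise respect the normal byte bound.
static int bytesToReturn(int firstMessageSize, int availableBytes, int fetchMaxBytes) {
    if (firstMessageSize > fetchMaxBytes) {
        return firstMessageSize;                    // oversize first message wins
    }
    return Math.min(availableBytes, fetchMaxBytes); // normal bounded read
}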
Thanks,
Jiangjie (Becket) Qin
On Thu, Jun 9, 2016 at 2:40 PM, Jason Gustafson wrote:
le. 2) On some Linux systems, the file create time is not available, so
using the segment file create time may not always work.
Do people have any concern for this change? I will update the KIP if people
think the change is OK.
Thanks,
Jiangjie (Becket) Qin
On Tue, Apr 19, 2016 at 6:27 PM, Becket Qin
ikely happen even without the byte limit or if we disable READ
from the sockets. The only difference is that the broker won't have OOM if
we have the bytes limit.
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 8, 2016 at 10:04 AM, Jun Rao wrote:
> Radai,
>
> Thanks for the proposal. A co
g requests from the
sockets when the RequestChannel is full also seems to hurt the memory usage
control effort.
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 8, 2016 at 4:46 PM, radai wrote:
> I agree that filling up the request queue can cause clients to time out
> (and presumably retry?).
value
to down convert the message if the consumer version is old, right?
Thanks.
Jiangjie (Becket) Qin
On Wed, Nov 2, 2016 at 1:37 AM, Michael Pearce
wrote:
> Hi Joel , et al.
>
> Any comments on the below idea to handle roll out / compatibility of this
> feature, using a configuratio
the ducktape tests, will that be an issue
if we run the tests for each update of the PR?
Thanks,
Jiangjie (Becket) Qin
On Thu, Nov 3, 2016 at 8:16 AM, Harsha Chintalapani wrote:
> Thanks, Raghav . I am +1 for having this in Kafka. It will help identify
> any potential issues, especiall
entry/apache_gains_additional_travis_ci
>
> Thanks,
> Raghav.
>
> On Thu, Nov 3, 2016 at 9:41 AM, Becket Qin wrote:
>
> > Thanks Raghav,
> >
> > +1 for the idea in general.
> >
> > One thing I am wondering is when the tests would be run? Would it be
stand-alone service independent of Kafka or
the application. It may have its own configurations as we discussed in this
KIP so the applications in that case would just talk to that service to
trim the log instead of talking to Kafka.
Thanks,
Jiangjie (Becket) Qin
On Sun, Nov 6, 2016 at 6:10 AM, 东方甲乙
essage may have already been consumed before the log compaction happens.
Thanks,
Jiangjie (Becket) Qin
On Mon, Nov 7, 2016 at 9:59 AM, Michael Pearce
wrote:
> Hi Becket,
>
> We were thinking more about having the logic that’s in the method
> shouldRetainMessage configurable
in a log compacted topic, but I
am not sure if that has any use case.
Thanks,
Jiangjie (Becket) Qin
On Wed, Nov 9, 2016 at 9:23 AM, Mayuresh Gharat
wrote:
> I think it will be a good idea. +1
>
> Thanks,
>
> Mayuresh
>
> On Wed, Nov 9, 2016 at 9:13 AM, Michael Pearce
>
hanks,
Jiangjie (Becket) Qin
On Wed, Nov 9, 2016 at 10:33 AM, Guozhang Wang wrote:
> Hello Jun,
>
> Thanks for reporting this issue. I looked through the code and I agree the
> logic you found with 0.9.0.1 also exists in 0.10.0+. However, I think the
> process is designed intentional
null value = tombstone (do not allow a key with a null value in compacted
topics)
No matter which flavor we choose, we just need to stick to that way of
interpretation, right? Why would we need a second stage?
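Concretely, under the "null value = tombstone" flavor, deleting a key from a
compacted topic is just a send with a null value (the topic and key below are
placeholders):

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// A record whose value is null is the tombstone that marks the key for
// removal at the next log compaction pass.
static void delete(Producer<String, String> producer) {
    producer.send(new ProducerRecord<>("my-compacted-topic", "user-123", null));
}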
Jiangjie (Becket) Qin
On Thu, Nov 10, 2016 at 10:37 AM, Ignacio Solis wrote:
>
Jun,
The shallow iterator actually does not check the CRC, right? The CRC is
only checked when the log.append() is called. That is why the exception was
thrown from the processPartitionData().
Thanks,
Jiangjie (Becket) Qin
On Thu, Nov 10, 2016 at 4:27 PM, Guozhang Wang wrote:
> Tha
sure when to down convert the messages to adapt to older clients. Otherwise
we will have to always scan all the messages. It would probably work but
relies on guessing or inference.
Thanks,
Jiangjie (Becket) Qin
On Fri, Nov 11, 2016 at 8:42 AM, Mayuresh Gharat wrote:
> Sounds good Michael.
>
>
Hi,
We created KIP-92 to propose adding per partition lag metrics to
KafkaConsumer.
The KIP wiki link is the following:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-92+-+Add+per+partition+lag+metrics+to+KafkaConsumer
Comments are welcome.
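Since the metrics would hang off the existing KafkaConsumer.metrics()
registry, access could look roughly like the sketch below (matching on a
"records-lag" suffix is an assumption about the naming; see the KIP for the
exact metric names):

import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

// Scan the consumer's metric registry for per-partition lag metrics.
static void printLag(KafkaConsumer<?, ?> consumer) {
    for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
        if (e.getKey().name().endsWith("records-lag")) {
            System.out.println(e.getKey().name() + " = " + e.getValue().metricValue());
        }
    }
}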
Thanks,
Jiangjie (Becket) Qin
priority topics. We have seen a few other similar use cases that require a
programmatic access to the lag. Although people can always use
offsetsForTimes() to get the LEO, it is a more expensive call involving
an RPC and it blocks.
Thanks,
Jiangjie (Becket) Qin
On Mon, Nov 14, 2016 at
some additional configuration to let people do
this, which is not a bad idea but seems better to be discussed in another
KIP.
Thanks,
Jiangjie (Becket) Qin
On Mon, Nov 14, 2016 at 8:52 AM, Michael Pearce
wrote:
> I agree with Mayuresh.
>
> I don't see how having a magic byte helps here.
>
not
be changed and should always be supported until deprecated.
Thanks,
Jiangjie (Becket) Qin
On Mon, Nov 14, 2016 at 11:28 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:
> I am not sure about "If I understand correctly, you want to let the broker
> to reject re
(Becket) Qin
On Mon, Nov 14, 2016 at 1:43 PM, Michael Pearce
wrote:
> I like the idea of up converting and then just having the logic to look
> for tombstones. It makes that quite clean in nature.
>
> It's quite late here in the UK, so I fully understand / confirm I
> underst
r not. With a
magic value bump, the broker/consumer knows for sure there is no need to
look at the value anymore if the tombstone bit is not set. So if we want to
eventually only use tombstone bit, a magic value bump is necessary.
Thanks,
Jiangjie (Becket) Qin
On Mon, Nov 14, 2016 at 11:39 PM,
.
Thanks,
Jiangjie (Becket) Qin
On Wed, Nov 16, 2016 at 1:28 PM, Mayuresh Gharat wrote:
> Hi Ismael,
>
> This is something I can think of for migration plan:
> So the migration plan can look something like this, with up conversion :
>
> 1) Currently lets say we have Broker
,
Jiangjie (Becket) Qin
On Tue, Nov 29, 2016 at 8:33 AM, radai wrote:
> +1 (non-binding)
>
> On Tue, Nov 29, 2016 at 8:08 AM, wrote:
>
> > +1 (non-binding)
> >
> > Thanks,
> >
> > Mayuresh
> >
> >
> > > On Nov 29, 2016, at 3:18 AM
would
be big because we do not allow LCO to go beyond LSO.
7.
What happens if a consumer starts up and seeks to the middle of a
transaction?
Thanks,
Jiangjie (Becket) Qin
On Sun, Dec 11, 2016 at 5:15 PM, Neha Narkhede wrote:
> Apurva and Jason -- appreciate the detai
Hey Guozhang,
Thanks for running the release.
KAFKA-4521 is just checked in. It fixes a bug in mirror maker that may
result in message loss. Can we include that in 0.10.1.1 as well?
Thanks,
Jiangjie (Becket) Qin
On Thu, Dec 15, 2016 at 9:46 AM, Guozhang Wang wrote:
> Michael,
>
Yes, that sounds good. Thanks.
Jiangjie (Becket) Qin
On Thu, Dec 15, 2016 at 1:46 PM, Guozhang Wang wrote:
> Hey Becket,
>
> I just cut the release this morning and the RC1 is out a few minutes ago so
> that we can possibly have the release out before the break. I looked
>
+1 on the idea. We have a ticket about making all the blocking calls have a
timeout in KafkaConsumer. The implementation could be a little tricky as
Ewen mentioned. But for close it is probably a simpler case because in the
worst case the consumer will just stop polling and heartbeating and
eventual
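The usage being proposed is essentially a bounded close (the long/TimeUnit
overload is the shape proposed in the KIP; the timeout value is illustrative):

import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Wait up to 5 seconds for a clean leave-group/offset-commit, then
// force-close instead of blocking indefinitely.
static void shutdown(KafkaConsumer<?, ?> consumer) {
    consumer.close(5, TimeUnit.SECONDS);
}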
Hi,
I want to start a voting thread on KIP-92 which proposes to add per
partition lag metrics to KafkaConsumer. The KIP wiki page is below:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-92+-+Add+per+partition+lag+metrics+to+KafkaConsumer
Thanks,
Jiangjie (Becket) Qin
r the transactions for the users
anymore, which is a big difference. For the latter case, the consumer may
have to buffer the incomplete transactions; otherwise we are just throwing
the burden onto the users.
Thanks,
Jiangjie (Becket) Qin
On Fri, Dec 16, 2016 at 4:56 PM, Jay Kreps wrote:
> Yea
fter a rebalance? For e.g., should it reset itself to (say) -1?
> or removed? This really applies to any per-partition metrics that we intend
> to maintain in the consumer.
>
> On Mon, Nov 14, 2016 at 9:35 AM, Becket Qin wrote:
>
> > Hey Michael,
> >
> > Thanks for the c
, for per
partition lag, the average value of the same partition at different times
seems to have some real meaning?
Thanks,
Jiangjie (Becket) Qin
On Wed, Dec 21, 2016 at 1:02 PM, Ismael Juma wrote:
> Thanks for the KIP, it's a useful improvement. Just one question, the KIP
> state
> On Wed, Dec 21, 2016 at 1:49 PM, Becket Qin wrote:
>
> > Hi Ismael,
> >
> > Thanks for the comments. Good observation. I guess for max lag of all the
> > partitions the average value is less meaningful because the lag can be
> from
> > different partitio
+1
On Tue, Jan 3, 2017 at 5:41 PM, Joel Koshy wrote:
> +1
>
> On Tue, Jan 3, 2017 at 10:54 AM, Ben Stopford wrote:
>
> > Hi All
> >
> > Please find the below KIP which proposes changing the setting
> > unclean.leader.election.enabled from true to false. The motivation for
> > this change is tha
e.
Thanks,
Jiangjie (Becket) Qin
On Wed, Feb 22, 2017 at 8:57 PM, Jay Kreps wrote:
> Hey Becket,
>
> I get the problem we want to solve with this, but I don't think this is
> something that makes sense as a user controlled knob that everyone sending
> data to kafka has to thin
many messages
need to be split at the same time. That could potentially be an issue for
some users.
What do you think about this approach?
Thanks,
Jiangjie (Becket) Qin
On Thu, Feb 23, 2017 at 1:31 PM, Jay Kreps wrote:
> Hey Becket,
>
> Yeah that makes sense.
>
> I agree th
de on offset commit)? They are
probably in your proof of concept code. Could you add them to the wiki as
well?
Thanks,
Jiangjie (Becket) Qin
On Fri, Feb 24, 2017 at 1:19 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:
> Thanks Jorge for addressing my question/suggestion.
>
IP is already
complicated, I would rather leave this out of scope and address that
later when needed, e.g. after having batch level interceptors.
Thanks,
Jiangjie (Becket) Qin
On Fri, Feb 24, 2017 at 3:56 PM, Michael Pearce
wrote:
> KIP updated in response to the below comments:
>
>
it
the batch.
Thanks,
Jiangjie (Becket) Qin
On Mon, Feb 27, 2017 at 10:30 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:
> Hi Becket,
>
> Seems like an interesting idea.
> I had couple of questions :
> 1) How do we decide when the batch should be split?
>
+1
On Mon, Feb 27, 2017 at 6:38 PM, Ismael Juma wrote:
> Thanks to everyone who voted and provided feedback. +1 (binding) from me
> too.
>
> The vote has passed with 4 binding votes (Grant, Jason, Guozhang, Ismael)
> and 11 non-binding votes (Bill, Damian, Eno, Edoardo, Mickael, Bharat,
> Onur,
Hi Ismael,
Thanks for volunteering on the new release.
I think 0.11.0.0 makes a lot of sense given the new big features we are
intended to include.
Thanks,
Jiangjie (Becket) Qin
On Mon, Feb 27, 2017 at 7:47 PM, Ismael Juma wrote:
> Hi all,
>
> With 0.10.2.0 out of the way, I woul
it twice, which is probably
a more common case.
Thanks,
Jiangjie (Becket) Qin
On Tue, Feb 28, 2017 at 12:43 PM, radai wrote:
> I will settle for any API really, but just wanted to point out that as it
> stands right now the API targets the most "advanced" (hence obscure and
>
Hi Ismael,
Thanks for the reply. Please see the comments inline.
On Wed, Mar 1, 2017 at 6:47 AM, Ismael Juma wrote:
> Hi Becket,
>
> Thanks for sharing your thoughts. More inline.
>
> On Wed, Mar 1, 2017 at 2:54 AM, Becket Qin wrote:
>
> > As you can imagine if the Pr
release
X+1. It seems reasonable to do the same for the Scala version here. So should
we consider making Scala 2.11 the default in Kafka 0.11.0 and
drop support for Scala 2.10 in Kafka 0.11.1?
Thanks,
Jiangjie (Becket) Qin
On Wed, Mar 1, 2017 at 4:42 PM, Apurva Mehta wrote:
> +1 (
Thanks for the update. The changes sound reasonable.
On Wed, Mar 1, 2017 at 1:57 PM, Dong Lin wrote:
> Hi all,
>
> I have updated the KIP to include a script that allows user to purge data
> by providing a map from partition to offset. I think this script may be
> convenient and useful, e.g., i
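For reference, purging by a partition-to-offset map is the operation that
this work fed into on the AdminClient as deleteRecords(); a sketch using that
API (topic name and offset are placeholders):

import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

// Delete everything before offset 12345 in partition 0 of "my-topic".
static void purge(Admin admin) throws Exception {
    admin.deleteRecords(Map.of(
            new TopicPartition("my-topic", 0),
            RecordsToDelete.beforeOffset(12345L)
    )).all().get();
}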
Thanks for the clarification, Ismael. In that case, it is reasonable to
drop support for Scala 2.10. LinkedIn is probably fine with this change.
I did not notice we have recommended Scala version on the download page.
+1 on the KIP
On Thu, Mar 2, 2017 at 10:46 AM, Grant Henke wrote:
> +1
>
> O
update it accordingly.
Thanks,
Jiangjie (Becket) Qin
On Mon, Feb 27, 2017 at 3:50 PM, Joel Koshy wrote:
> >
> > Lets say we sent the batch over the wire and received a
> > RecordTooLargeException, how do we split it as once we add the message to
> > the batch we lose the
Just to clarify, the implementation is basically what I mentioned above
(split/resend + adjusted estimation evolving algorithm) and changing the
compression ratio estimation to be per topic.
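A sketch of what a per-topic evolving estimate can look like (the constants
and names here are hypothetical; the actual improving/deteriorating steps are
in the wiki and the patch):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public final class CompressionRatioEstimator {
    private static final float DETERIORATE_STEP = 0.05f; // after a batch split
    private static final float IMPROVE_STEP = 0.005f;    // after a successful send
    // ratio = estimated compressed/uncompressed size per topic, starting at 1.0
    private final ConcurrentMap<String, Float> ratioByTopic = new ConcurrentHashMap<>();

    public float estimate(String topic) {
        return ratioByTopic.getOrDefault(topic, 1.0f);
    }

    // A split means the estimate was too optimistic: move it up toward 1.0.
    public void onSplit(String topic) {
        ratioByTopic.merge(topic, 1.0f,
                (old, unused) -> Math.min(1.0f, old + DETERIORATE_STEP));
    }

    // A successful send lets the estimate drift back down slowly, bounded
    // below by the ratio actually observed on the wire.
    public void onSuccess(String topic, float observedRatio) {
        ratioByTopic.merge(topic, 1.0f,
                (old, unused) -> Math.max(observedRatio, old - IMPROVE_STEP));
    }
}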
Thanks,
Jiangjie (Becket) Qin
On Fri, Mar 3, 2017 at 6:36 PM, Becket Qin wrote:
> I went ahead
Hi Dong,
Yes, there is a sensor in the patch about the split occurrence.
Currently it is a count instead of rate. In practice, it seems count is
easier to use in this case. But I am open to change.
Thanks,
Jiangjie (Becket) Qin
On Fri, Mar 3, 2017 at 7:43 PM, Dong Lin wrote:
> Hey Bec
ecket) Qin
On Sat, Mar 4, 2017 at 10:27 AM, Becket Qin wrote:
> Hi Dong,
>
> Yes, there is a sensor in the patch about the split occurrence.
>
> Currently it is a count instead of rate. In practice, it seems count is
> easier to use in this case. But I am open to change.
>
>
I have updated the KIP based on the latest discussion. Please check and let
me know if there is any further concern.
Thanks,
Jiangjie (Becket) Qin
On Sat, Mar 4, 2017 at 10:56 AM, Becket Qin wrote:
> Actually second thought on this, rate might be better for two reasons:
> 1. Most
ratio improving/deteriorating
steps are determined in the wiki.
Thanks,
Jiangjie (Becket) Qin
On Mon, Mar 6, 2017 at 4:42 PM, Dong Lin wrote:
> Hey Becket,
>
> I am wondering if we should first vote for the KIP before reviewing the
> patch. I have two comments below:
>
> - Sho
somewhat overlapping with the bytes rate quota?
Thanks,
Jiangjie (Becket) Qin
On Tue, Mar 7, 2017 at 11:04 AM, Rajini Sivaram
wrote:
> Jun,
>
> Thank you for the explanation, I hadn't realized you meant percentage of
> the total thread pool. If everyone is OK with Jun
ls the new controller will still see the notification.
2. In the notification znode we have an Event field as an integer. Can we
document what is the value of LogDirFailure? And also are there any other
possible values?
Thanks,
Jiangjie (Becket) Qin
On Tue, Mar 7, 2017 at 11:30 AM, Dong Lin wrote:
>
I see. Good point about SSL.
I just asked Todd to take a look.
Thanks,
Jiangjie (Becket) Qin
On Tue, Mar 7, 2017 at 2:17 PM, Jun Rao wrote:
> Hi, Jiangjie,
>
> Yes, I agree that byte rate already protects the network threads
> indirectly. I am not sure if byte rate fully capt
Hi Ismael,
Yes, it makes sense to do benchmark. My concern was based on the
observation in KAFKA-3994 where we saw a GC problem when creating new lists
in the purgatory.
Thanks,
Jiangjie (Becket) Qin
On Fri, Mar 10, 2017 at 8:54 AM, Ismael Juma wrote:
> Hi Becket,
>
> Sorry for the
Bumping this thread for further comments. If there are no more comments on
the KIP, I will start the voting thread on Wed.
Thanks,
Jiangjie (Becket) Qin
On Tue, Mar 7, 2017 at 9:48 AM, Becket Qin wrote:
> Hi Dong,
>
> Thanks for the comments.
>
> The patch is mostly for proof of
. Maybe we can do the same
here by just removing all the non-batching interfaces.
2. NewTopic.setConfigs() is a little weird, can it just be part of the
constructor? Any specific reason to change the configs after the creation
of a NewTopic instance?
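That is, something along these lines (a sketch comparing the two styles; the
chained configs() setter is roughly where the API later landed, and the topic
settings are placeholders):

import java.util.Map;
import org.apache.kafka.clients.admin.NewTopic;

public final class NewTopicSketch {
    // Instead of new NewTopic(...) followed by a separate setConfigs() call,
    // supply the configs at construction time (or via a chained setter).
    static NewTopic compactedTopic() {
        return new NewTopic("my-topic", 3, (short) 2)
                .configs(Map.of("cleanup.policy", "compact"));
    }
}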
Thanks,
Jiangjie (Becket) Qin
On Tue, Mar 14, 2017 at
Hi Colin,
Thanks for the reply. Please see comments inline.
On Tue, Mar 14, 2017 at 5:30 PM, Colin McCabe wrote:
> On Tue, Mar 14, 2017, at 13:36, Becket Qin wrote:
> > The interface looks good overall. Thanks for the much needed work Colin.
>
> Thanks, Becket.
>
>
configuration (as well as some other configurations
such as timestamp type) from the broker side and use that to decide
whether a batch should be split or not. I probably should add this to the
KIP wiki.
Thanks,
Jiangjie (Becket) Qin
On Wed, Mar 15, 2017 at 9:47 AM, Jason Gustafson wrote
Hi Ismael,
KIP-4 is also the one that I was thinking about. We have introduced a
DescribeConfigRequest there so the producer can easily get the
configurations. By "another KIP" do you mean a new (or maybe extended)
protocol or using that protocol in clients?
Thanks,
Jiangjie (Becke
IP-
> 4+-+Command+line+and+centralized+administrative+operations#KIP-4-
> Commandlineandcentralizedadministrativeoperations-DescribeConfigsRequest
>
> We have only voted on KIP-4 Metadata, KIP-4 Create Topics, KIP-4 Delete
> Topics so far.
>
> Ismael
>
> On Wed, Mar 15,
+1
Thanks for driving through this, Rajini :)
On Tue, Mar 21, 2017 at 9:49 AM, Roger Hoover
wrote:
> Rajini,
>
> This is great. Thank you. +1 (non-binding)
>
> Roger
>
> On Tue, Mar 21, 2017 at 8:55 AM, Ismael Juma wrote:
>
> > Rajini,
> >
> > Thanks for the proposal and for addressing the
+1
Thanks for the KIP. The tool is very useful.
On Tue, Mar 21, 2017 at 4:46 PM, Jason Gustafson wrote:
> +1 This looks super useful! Might be worth mentioning somewhere
> compatibility with the old consumer. It looks like offsets in zk are not
> covered, which seems fine, but probably should b
ms, I would prefer adding the
configuration to the broker so that we can address both problems.
Thanks,
Jiangjie (Becket) Qin
On Fri, Mar 24, 2017 at 5:30 AM, Damian Guy wrote:
> Thanks for the feedback.
>
> Ewen: I'm happy to make it a client side config. Other than the protocol
Hi Matthias,
Yes, that was what I was thinking. We will keep delaying it until either
the rebalance timeout is reached or no new consumer joins within that small
delay, which is configured on the broker side.
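For reference, the broker-side knob this turned into is along these lines
(the name is from KIP-134; the value shown is just the proposed default):

# server.properties: delay the first rebalance of a newly created group so
# that the initial members have a chance to join before assignment happens.
group.initial.rebalance.delay.ms=3000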
Thanks,
Jiangjie (Becket) Qin
On Fri, Mar 24, 2017 at 1:39 PM, Matthias J. Sax
wrote
how many times the delay was extended, at T+10
the rebalance will kick off even if at T+9 a new consumer joined the group.
I also agree that we should set the default delay to some meaningful value
instead of setting it to 0.
Thanks,
Jiangjie (Becket) Qin
On Tue, Mar 28, 2017 at 12:32 PM, Jason
+1 Thanks for the KIP!
On Thu, Mar 30, 2017 at 12:55 PM, Jason Gustafson
wrote:
> +1 Thanks for the KIP!
>
> On Thu, Mar 30, 2017 at 12:51 PM, Guozhang Wang
> wrote:
>
> > +1
> >
> > Sorry about the previous email, Gmail seems be collapsing them into a
> > single thread on my inbox.
> >
> > Guo
+1. Thanks for the KIP.
On Mon, Apr 3, 2017 at 4:29 AM, Rajini Sivaram
wrote:
> +1 (non-binding)
>
> On Fri, Mar 31, 2017 at 5:36 PM, radai wrote:
>
> > possible priorities:
> >
> > 1. keepalives/coordination
> > 2. inter-broker-traffic
> > 3. produce traffic
> > 4. consume traffic
> >
> > (don
members to guess the state of a group. Can you elaborate a little bit
on your idea?
Thanks,
Jiangjie (Becket) Qin
On Mon, Apr 3, 2017 at 8:16 AM, Onur Karaman
wrote:
> Hi Damian.
>
> After reading the discussion thread again, it still doesn't seem like the
> thread discussed the o
+1
Thanks for the KIP. Made a pass and had some minor changes.
On Mon, Apr 3, 2017 at 3:16 PM, radai wrote:
> +1, LGTM
>
> On Mon, Apr 3, 2017 at 9:49 AM, Dong Lin wrote:
>
> > Hi all,
> >
> > It seems that there is no further concern with the KIP-112. We would like
> > to start the voting proc
this case.
Thanks,
Jiangjie (Becket) Qin
On Wed, Mar 22, 2017 at 10:54 AM, Dong Lin wrote:
> Never mind about my second comment. I misunderstood the semantics of
> producer's batch.size.
>
> On Wed, Mar 22, 2017 at 10:20 AM, Dong Lin wrote:
>
> > Hey Becket,
> >
>
+1
Thanks for the proposal.
On Fri, Jan 6, 2017 at 11:37 AM, Roger Hoover
wrote:
> +1 (non-binding)
>
> On Fri, Jan 6, 2017 at 11:16 AM, Tom Crayford
> wrote:
>
> > +1 (non-binding)
> >
> > On Fri, Jan 6, 2017 at 6:58 PM, Colin McCabe wrote:
> >
> > > Looks good. +1 (non-binding).
> > >
> >
Congrats Grant!
On Wed, Jan 11, 2017 at 2:17 PM, Kaufman Ng wrote:
> Congrats Grant!
>
> On Wed, Jan 11, 2017 at 4:28 PM, Jay Kreps wrote:
>
> > Congrats Grant!
> >
> > -Jay
> >
> > On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira
> wrote:
> >
> > > The PMC for Apache Kafka has invited Grant Hen
+1. Thanks for the KIP.
On Thu, Jan 12, 2017 at 10:33 AM, Joel Koshy wrote:
> +1
>
> (for the record, I favor the rejected alternative of not awaiting low
> watermarks to go past the purge offset. I realize it offers a weaker
> guarantee but it is still very useful, easier to implement, slightly