nother
> operation is occurring automatically get batched. This can sometimes be
> nicer and more automatic than another tunable parameter...
>
> -Jay
>
> On Fri, Aug 7, 2015 at 9:29 AM, Jiangjie Qin
> wrote:
>
> > Hi,
> >
> > I just created KIP-29 to ad
Hi Gwen,
Completely agree with you. I originally just hard-coded it to be 10
seconds. Ashish raised this requirement in KAFKA-2406 because people might
want ISR changes to get propagated more quickly.
I don't have a good use case myself. Personally I think hard-coding it is
fine although I don't object t
i, Aug 7, 2015 at 4:06 PM, Gwen Shapira wrote:
>
> > Maybe Ashish can supply the use-case and tuning advice then :)
> > I'm a -1 on adding new configurations that we can't quite explain.
> >
> > On Fri, Aug 7, 2015 at 3:57 PM, Jiangjie Qin
> > wr
desired. Having a config
> to
> > > expose the delay time provides admin a way to control it, and it will
> > come
> > > with a default value so if someone does not want to play with it they
> can
> > > choose not to.
> > >
> > > @Gwen, does th
Guozhang,
By interleaved groups of messages, I meant something like this: say we have
messages 0, 1, 2, 3, where messages 0 and 2 together complete one piece of
business logic, and messages 1 and 3 together complete another. In that case,
after the user has processed message 2, they cannot commit offsets because if they c
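A minimal sketch of the problem described above, assuming the new Java consumer's manual commit API (the broker address, group id, and topic name are made up for illustration): committing after processing message 2 advances the committed position past message 1, whose business-logic group is not finished yet.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class InterleavedGroupsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        props.put("group.id", "interleaved-demo");          // hypothetical group
        props.put("enable.auto.commit", "false");           // commit manually
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            TopicPartition tp = new TopicPartition("demo-topic", 0);
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                // Messages 0 and 2 complete one unit of work; 1 and 3 another.
                if (record.offset() == 2) {
                    // Committing offset 3 here implicitly marks message 1 as
                    // consumed even though its group (1 and 3) is incomplete,
                    // so a crash after this point would lose message 1.
                    consumer.commitSync(Collections.singletonMap(
                            tp, new OffsetAndMetadata(record.offset() + 1)));
                }
            }
        }
    }
}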
Hi Jason,
Thanks for writing this up. It would be useful to generalize the group
concept. I have a few questions below.
1. In the old consumer, the partition assignment is actually done by the consumers
themselves. We used ZooKeeper to guarantee that a partition will only be
consumed by one consumer thre
> On Aug. 11, 2015, 10:08 p.m., Gwen Shapira wrote:
> > Ship It!
>
> Gwen Shapira wrote:
> Jiangjie, I commited despite your concerns since this patch fixes a huge
> potential issue.
>
> If you have an idea for an improved fix, we can tackle this in a follow
> up.
Thanks Gwen. I
Hey Guozhang,
Will it be a little bit hard to keep the volunteer list up to date?
Personally I would prefer to have a summary e-mail automatically sent to the
kafka-dev list every day for tickets with patches submitted in the last 7
days. The email can also include the reviewer for the ticket. And peopl
ship problem in abnormal case either,
which seems to be a little bit vulnerable.
Thanks,
Jiangjie (Becket) Qin
On Tue, Aug 11, 2015 at 11:06 PM, Ewen Cheslack-Postava
wrote:
> On Tue, Aug 11, 2015 at 10:15 PM, Jiangjie Qin
> wrote:
>
> > Hi Jason,
> >
> > Thanks
urrent new consumer also has this problem and I believe
we need to fix it.
Thanks,
Jiangjie (Becket) Qin
On Tue, Aug 11, 2015 at 11:43 PM, Ewen Cheslack-Postava
wrote:
> On Tue, Aug 11, 2015 at 11:29 PM, Jiangjie Qin
> wrote:
>
> > Ewen,
> >
> > Thanks for the ex
Good annotations. I can see a few future usages :)
Jiangjie (Becket) Qin
On Wed, Aug 12, 2015 at 3:05 PM, Gwen Shapira wrote:
> Hi Team Kafka,
>
> Ewen just added stability annotations to Apache Kafka (KAFKA-2429).
>
> In the same PR, we marked the new Consumer API as "unstable" since we are
>
nsumer) detects the change and forces the group to rebalance.
> >
> > What do you think?
> >
> > (Also I think adding groupId/generationId to fetch and produce requests
> > seems like an interesting line of thought.)
> >
> > -Jason
> >
> >
> &
> would be no need for the coordinator to return the subscriptions for each
> member. However, this would prevent graceful upgrades. We might be able to
> fix that problem by allowing the consumer to provide two subscriptions to
> allowing rolling updates, but that starts to sound
,
> but making it separate seems perfectly reasonable to me. Jason, any issue
> with splitting the version out into a separate field like this?
>
> >
> > > That can simplify the metadata format to the following:
> > >
> > > GroupType => "consumer"
&g
tial extra round of metadata
fetch and will only occur when the consumer sees a metadata change, which is
infrequent.
Any thoughts?
Thanks,
Jiangjie (Becket) Qin
On Fri, Aug 14, 2015 at 12:57 PM, Ewen Cheslack-Postava
wrote:
> On Fri, Aug 14, 2015 at 10:59 AM, Jiangjie Qin
> wrote:
>
imes on this, and I'm generally not opposed. It might help
> if
> > we try to quantify the impact of the metadata churn in practice. I think
> > the rate of change would have to be in the low seconds for this to be a
> > real problem. It does seem nice though that we ha
the members in a group is inconsistent and hence there
> are
> > > typically several rebalance attempts. Note that this is fixed in the
> new
> > > design where the group membership is always consistently communicated
> to
> > > all members.
> > > 3. The r
Jun,
Yes, I agree. If the metadata can be synced quickly there should not be an
issue. It just occurred to me that there is a proposal to allow consuming
from followers in the ISR, which could potentially cause more frequent metadata
changes for consumers. Would that be an issue?
Thanks,
Jiangjie (Bec
I am thinking, can we put some notes in the commit message when committing a patch
that introduces an API change or a change affecting backward compatibility?
It mainly serves two purposes:
1. Easier for people to track the changes they need to make to run a new
version
2. Easier for us to write the release
kicked out of the group. We wouldn't want it to
> be able to effect a rebalance, for example, if it would just be kicked out
> again. That would probably complicate the group management logic on the
> coordinator.
>
>
> Thanks,
> Jason
>
>
> On Tue, Aug 18, 2015 a
lly if it does not complicate the protocol. The proposed
> >> >> changes do not complicate the protocol IMO - i.e., there is no
> further
> >> >> modification to the request/response formats beyond the current
> >> >> client-side proposal. It only involves a trivial rein
, never affect membership, etc. Re: actual sync, I'd
> be curious if defaulting to the coordinator to ensure consistency has any
> problems that you can think of? It's not strictly a guarantee of
> consistency since the metadata can change between requests from different
> client
e previous two comments. Maybe one thing
>> to address again is that metadata hashes can be based only on topics
>> matching subscriptions, so any changes in, e.g., test topics that do not
>> affect a consumer groups subscriptions should have 0 effect. They never
>> trig
on't push in changes to the consumer in a hurry for 0.8.3
>> without due diligence.
>>
>> Joel
>>
>> On Sun, Aug 30, 2015 at 12:00 AM, Jiangjie Qin wrote:
>> > Hi Joel,
>> >
>> > I was trying to calculate the number but found it might be better
I kind of think letting ProducerPerformance send uncompressed bytes is
not a bad idea. The reason is that when you send compressed bytes, it
is not easy to determine how much data you are actually sending. Arguably,
sending uncompressed bytes does not take the compression cost into the performance
be
Hi,
We just created KIP-31 to propose a message format change in Kafka.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-31+-+Message+format+change+proposal
As a summary, the motivations are:
1. Avoid server side message re-compression
2. Honor time-based log roll and retention
3. Enable of
e-up Jiangjie.
>
> One comment about migration plan: "For old consumers, if they see the new
> protocol the CRC check will fail"..
>
> Do you mean this bug in the old consumer cannot be fixed in a
> backward-compatible way?
>
> Guozhang
>
>
> On Thu, Sep 3, 2015
n Thu, Sep 3, 2015 at 12:48 PM, Jiangjie Qin wrote:
> Hi, Guozhang,
>
> Thanks for reading the KIP. By "old consumer", I meant the
> ZookeeperConsumerConnector in trunk now, i.e. without this bug fixed. If we
> fix the ZookeeperConsumerConnector then it will thro
ncluding it in the protocol can have; non-Java clients are
> likely to expose it if it is available, whether it's actually a good idea
> to or not.)
>
Yes, searching by timestamp will be a client API. Actually, it is currently
a client API; OffsetRequest will search for the offset by tim
2015 at 4:22 PM, Jay Kreps wrote:
> > >
> > > > The magic byte is used to version message format so we'll need to
> make
> > > > sure that check is in place--I actually don't see it in the current
> > > > consumer code which I think is
Based on the new features in the next release, 0.9 looks reasonable.
There might be some other things worth thinking about. Although we have a
lot of new features added, many of them are actually either still in
development or not well tested yet. For example, of the security features,
only SSL is done and
> records are <= X?
> >
> > For retention, I agree with the problem you point out, but I think what
> you
> > are saying in that case is that you want a size limit too. If you use
> > system time you actually hit the same problem: say you do a full dump of
&
Hi folks,
This proposal was previously part of KIP-31 and we separated it into KIP-32 per
Neha and Jay's suggestion.
The proposal is to add the following two timestamps to Kafka messages:
- CreateTime
- LogAppendTime
The CreateTime will be set by the producer and will not change after that. The
LogAppendTim
the data coming in. So the guarantee
of using the server-side timestamp is that "after being appended to the log, all
messages will be available on the broker for the retention time", which is not
changeable by clients.
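A purely illustrative sketch of the difference being discussed (no Kafka APIs involved; the retention period and timestamps are made up): retention based on LogAppendTime cannot be skewed by the producer's clock, whereas CreateTime-based retention could expire replayed or bootstrapped data the moment it arrives.

public class RetentionClockSketch {
    // Hypothetical helper: is a timestamp older than the retention window?
    static boolean pastRetention(long timestampMs, long retentionMs) {
        return System.currentTimeMillis() - timestampMs > retentionMs;
    }

    public static void main(String[] args) {
        long retentionMs = 7L * 24 * 60 * 60 * 1000;  // e.g. 7 days

        // CreateTime is whatever the producer says it is: a bootstrap job
        // replaying 8-day-old data produces messages that are already expired.
        long createTime = System.currentTimeMillis() - 8L * 24 * 60 * 60 * 1000;

        // LogAppendTime is assigned by the broker at append time, so the
        // message gets the full retention window after being appended.
        long logAppendTime = System.currentTimeMillis();

        System.out.println("retain by CreateTime?    " + !pastRetention(createTime, retentionMs));
        System.out.println("retain by LogAppendTime? " + !pastRetention(logAppendTime, retentionMs));
    }
}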
>
> -Jay
On Thu, Sep 10, 2015 at 12:55 PM, Jiangjie Qin wrote:
> Hi folks,
; -Jay
>
> On Thu, Sep 10, 2015 at 12:42 PM, Jiangjie Qin
> wrote:
>
> > Neha and Jay,
> >
> > Thanks a lot for the feedback. Good point about splitting the
> discussion. I
> > have split the proposal to three KIPs and it does make each discussion
> more
&g
but I’m not sure it
> is
> > > >> very useful and as Jay noted, a major change from the old proposal
> > > >> linked from the KIP is the sparse time-based index which we felt was
> > > >> essential to bound memory usage (and having timestamps on each l
ntries. Without knowing the exact diff
> between the previous clock and new clock we cannot adjust the times
> exactly, but we can at least ensure increasing timestamps.
>
> On Fri, Sep 11, 2015 at 10:52 AM, Jiangjie Qin
> wrote:
> > Ewen and Jay,
> >
> > They way
LogAppendTime respectively and put some more concrete use cases as well.
Thanks,
Jiangjie (Becket) Qin
On Mon, Sep 14, 2015 at 9:40 AM, Jiangjie Qin wrote:
> Hi Joel,
>
> Good point about rebuilding index. I agree that having a per message
> LogAppendTime might be necessary. About tim
Hi Jun,
Can we also include KAFKA-2448 in 0.9? We have seen this issue a few
times before and it causes the replica fetcher threads not to start up.
Thanks,
Jiangjie (Becket) Qin
On Sat, Sep 12, 2015 at 9:40 AM, Jun Rao wrote:
> The following is a candidate list of jiras that we want to complete i
message in terms of position. Exposing LogAppendTime means we
expose another internal concept of the message in terms of time.
Considering the above reasons, personally I think it is worth adding
LogAppendTime to each message.
Any thoughts?
Thanks,
Jiangjie (Becket) Qin
On Mon, Sep 14, 2015 at 11
> https://issues.apache.org/jira/browse/KAFKA-1
>
>
> Thanks,
>
> Mayuresh
>
> On Mon, Sep 14, 2015 at 5:13 PM, Jiangjie Qin
> wrote:
>
> > I just updated the KIP-33 to explain the indexing on CreateTime and
> > LogAppendTime respectively. I also used some use case to compa
about the KIP.
Thanks,
Jiangjie (Becket) Qin
On Mon, Sep 14, 2015 at 5:13 PM, Jiangjie Qin wrote:
> I just updated the KIP-33 to explain the indexing on CreateTime and
> LogAppendTime respectively. I also used some use case to compare the two
> solutions.
> Although this is for KIP-33,
Hey Bhavesh,
I think it is useful to notify the user about partition changes.
The problem with having a listener in the producer is that it is hard to
guarantee the synchronization (a polling-based alternative is sketched below).
For example, consider the following sequence:
1. The producer sender thread refreshes the metadata with a partition change.
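The sketch below shows the polling alternative mentioned above: instead of a listener inside the producer, the application can poll partitionsFor() itself. The broker address, topic name, and polling interval are made-up examples, not a recommended implementation.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.PartitionInfo;

public class PartitionChangeWatcher {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        String topic = "demo-topic";  // hypothetical topic
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            int lastCount = producer.partitionsFor(topic).size();
            while (true) {
                List<PartitionInfo> partitions = producer.partitionsFor(topic);
                if (partitions.size() != lastCount) {
                    // React to the change here, e.g. switch the partitioning strategy.
                    System.out.println("Partition count changed: "
                            + lastCount + " -> " + partitions.size());
                    lastCount = partitions.size();
                }
                Thread.sleep(60_000);  // check once a minute
            }
        }
    }
}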
change partitioning strategy etc).
>
> This gives the ability to share the diff code so that not every implementation
> has to implement the diff logic, which is the main concern.
>
>
> Thanks,
>
> Bhavesh
>
>
> On Fri, Sep 18, 2015 at 3:47 PM, Jiangjie Qin
> wrote:
> > Hey Bhaves
Congrats, Harsha!
On Mon, Sep 21, 2015 at 10:31 PM, Prabhjot Bharaj
wrote:
> Congratulations. It's inspiring for newbies like me
>
> Regards,
> Prabhjot
> On Sep 22, 2015 10:30 AM, "Ashish Singh" wrote:
>
> > Congrats Harsha!
> >
> > On Monday, September 21, 2015, Manikumar Reddy
> > wrote:
>
update the inner
> message's relative offset values."
>
This is assuming that when we compact log segments, we might compact
multiple message sets into one message set. If so, the relative offsets in
the original message sets are likely different from the relative offsets in the
compacted me
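A minimal sketch (illustrative numbers only, not the actual wire-format code) of how relative offsets in a compressed message set resolve to absolute offsets, and why compaction has to rewrite them.

import java.util.Arrays;
import java.util.List;

public class RelativeOffsetSketch {
    public static void main(String[] args) {
        // One possible layout, as in KIP-31: the wrapper message carries the
        // absolute offset of the last inner message, and each inner message
        // carries a small offset relative to the first inner message.
        long wrapperOffset = 104;                          // absolute offset of the set
        List<Integer> relativeOffsets = Arrays.asList(0, 1, 2, 3, 4);

        long baseOffset = wrapperOffset - (relativeOffsets.size() - 1);
        for (int rel : relativeOffsets) {
            long absolute = baseOffset + rel;              // 100 .. 104
            System.out.println("relative=" + rel + " absolute=" + absolute);
        }

        // After compaction merges inner messages from several sets into one,
        // (base + relative) must still equal each message's original absolute
        // offset, so the broker has to rewrite the inner relative offsets.
    }
}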
2015 at 12:54 PM, Jiangjie Qin >
> > wrote:
> >
> > > Hi folks,
> > >
> > > Thanks a lot for the feedback on KIP-31 - move to use relative offset.
> > (Not
> > > including timestamp and index discussion).
> > >
> > >
Hi,
Thanks a lot for the reviews and feedback on KIP-31. It looks like all the
concerns about the KIP have been addressed. I would like to start the voting
process.
The short summary of the KIP:
We are going to use relative offsets in the message format to avoid
server-side recompression.
In case you
not sure if the prolonged upgrade process is viable in every
> scenario. I think it should work at LinkedIn for e.g., but may not for
> other environments.
>
> Joel
>
>
> On Tue, Sep 22, 2015 at 12:55 AM, Jiangjie Qin
> wrote:
> > Thanks for the explanation, Ja
Thanks for the writeup. I also think having a specific protocol for
client-broker version negotiation is better.
I'm wondering, is it better to let the broker decide the version to use?
It might have some value if brokers have a preference for a particular
version.
Using a global version is a good
be able to handle messages that
> are in original as well as new (relative offset) format.
>
> Thanks,
>
> Joel
>
>
> On Thu, Sep 24, 2015 at 7:56 PM, Jiangjie Qin
> wrote:
> > Hi Joel,
> >
> > That is a valid concern. And that is actually why we had the
>
; want to have such a long deployment plan but at least it is an option
> for those who want to tread very carefully given that it is backwards
> incompatible.
>
> Joel
>
> On Tue, Sep 29, 2015 at 4:50 PM, Jiangjie Qin
> wrote:
> > Hi Joel and other folks.
> >
> &
Gwen,
It looks like there are two requirements here:
1. Know which message failed.
2. Know which record batch failed, i.e. which messages fail together.
The current callback interface can easily support (1) - the user can simply
construct their own callback which stores the message in it (see the sketch
below). Do you mean we wan
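A minimal sketch of that point, assuming the Java producer's Callback interface (broker address, topic, and key/value are made up): the user-supplied callback holds a reference to the record it was registered for, so a failure can be traced back to the exact message.

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class PerMessageFailureTracking {
    // A callback that remembers which record it belongs to, so a failure can
    // be reported per message (requirement 1 above).
    static final class RecordAwareCallback implements Callback {
        private final ProducerRecord<String, String> record;

        RecordAwareCallback(ProducerRecord<String, String> record) {
            this.record = record;
        }

        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception != null) {
                System.err.println("Failed to send key=" + record.key()
                        + " value=" + record.value() + ": " + exception);
            }
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("demo-topic", "key-1", "value-1");
            producer.send(record, new RecordAwareCallback(record));
        }
    }
}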
during
tomorrow's KIP hangout.
Thanks,
Jiangjie (Becket) Qin
On Thu, Sep 10, 2015 at 2:47 PM, Jiangjie Qin wrote:
> Hi Jay,
>
> I just copy/pastes here your feedback on the timestamp proposal that was
> in the discussion thread of KIP-31. Please see the replies inline.
> Th
Hi Ismael,
Thanks for bringing this up. Completely agree the exploding amount of
email is a little annoying, regardless of whether it is sent to the dev list or
to personal addresses.
Not sure whether it is doable or not, but here is what I am thinking:
1. Batch the comment emails and send them periodically to the dev lis
>
> > > >> > +1
> > > >> >
> > > >> > On Wed, Sep 23, 2015 at 8:03 PM, Neha Narkhede >
> > > >> wrote:
> > > >> >
> > > >> > > +1
> > > >> > >
> > > >> > > On Wed, Sep 23, 2015 at 6:21 PM, Tod
rmat.version=1
> and intra.cluster.protocol = 0.9.0.
>
> Thanks,
>
> Jun
>
> On Tue, Oct 6, 2015 at 2:58 PM, Jiangjie Qin
> wrote:
>
> > Hi folks,
> >
> > Sorry for this prolonged voting session and thanks for the votes.
> >
> > There is an add
I am thinking that instead of returning an empty response, it would be better to
return an explicit UnsupportedVersionException error code.
Today KafkaApis handles the error in the following way:
1. For requests/responses using old Scala classes, KafkaApis uses
RequestOrResponse.handleError() to return an er
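A hypothetical sketch, not the real KafkaApis code path, just to illustrate the suggestion of answering an unsupported request version with an explicit error code rather than an empty response (the error-code values here are invented):

public class VersionCheckSketch {
    static final short NONE = 0;                   // invented error-code values
    static final short UNSUPPORTED_VERSION = 35;

    static short checkVersion(short requestVersion, short maxSupportedVersion) {
        // Reject explicitly instead of sending back an empty response, so the
        // client can tell "version not supported" apart from "no data".
        return requestVersion <= maxSupportedVersion ? NONE : UNSUPPORTED_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(checkVersion((short) 2, (short) 1));  // 35: rejected
        System.out.println(checkVersion((short) 1, (short) 1));  // 0: accepted
    }
}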
ce. The solution would
> > either
> > >be to reject the message or to override it with the server time.
> > >
> > > So in LI's environment you would configure the clusters used for
> direct,
> > > unbuffered, message production (e.g. tra
Hey Jay,
If we allow consumers to subscribe to /*/my-event, does that mean we allow
a consumer to consume across namespaces? In that case it seems not
"hierarchical" but more like name-field filtering, i.e. the user can choose
to consume from topics where datacenter={x,y},
topic_name={my-topic1,mytopic2}.
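A minimal sketch of what such name-field filtering could look like with the new consumer's pattern subscription, assuming a made-up "<datacenter>.<topic>" naming scheme (this is only an illustration, not the hierarchical-topics proposal itself):

import java.util.Collection;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class NameFieldFilteringSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        props.put("group.id", "cross-dc-demo");             // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Select datacenter in {x, y} and topic in {my-topic1, my-topic2}:
        // effectively filtering on name fields rather than walking a hierarchy.
        Pattern pattern = Pattern.compile("(dc-x|dc-y)\\.(my-topic1|my-topic2)");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(pattern, new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
            });
            consumer.poll(1000);
        }
    }
}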
+1 (non-binding)
On Wed, Oct 21, 2015 at 3:40 PM, Joel Koshy wrote:
> +1 binding
>
> On Wed, Oct 21, 2015 at 8:17 AM, Flavio Junqueira wrote:
>
> > Thanks everyone for the feedback so far. At this point, I'd like to start
> > a vote for KIP-38.
> >
> > Summary: Add support for ZooKeeper authent
Hi Cliff,
If auto.offset.commit is set to true, the offset will be committed in the
following cases in addition to the periodic offset commit:
1. During a consumer rebalance, before releasing partition ownership (see the
sketch below).
If consumer A owns partition P before the rebalance, it will commit the offset for
partition P duri
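For comparison, a minimal sketch of the equivalent pattern with the new Java consumer when auto commit is off: commit in the rebalance listener before partition ownership is released (broker, group, and topic names are made up).

import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class CommitOnRebalanceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        props.put("group.id", "rebalance-demo");             // hypothetical group
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("demo-topic"),
                new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                        // Commit current positions before ownership is released,
                        // mirroring what the old consumer does automatically
                        // during a rebalance when auto commit is enabled.
                        consumer.commitSync();
                    }

                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
                });

        consumer.poll(1000);  // process records, then commit as appropriate
        consumer.close();
    }
}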
elete the older log segment.
>
>
> The index timestamps would always be a lower bound (i.e. the maximum at
> that time) so I don't think that is possible.
>
> 4. In bootstrap case, if we reload the data to a Kafka cluster, we have to
> > make sure we configure the topic co
+1 (non-binding)
On Thu, Oct 29, 2015 at 1:00 PM, Guozhang Wang wrote:
> +1 (binding)
>
> On Thu, Oct 29, 2015 at 12:04 PM, Ashish Singh
> wrote:
>
> > +1 (non-binding)
> >
> > On Thu, Oct 29, 2015 at 12:02 PM, Jason Gustafson
> > wrote:
> >
> > > Since we're crunching a little on the 0.9 rele
the message
meaningless to the user - the user doesn't know if the timestamp has been overwritten
by the broker.
Any thoughts?
Thanks,
Jiangjie (Becket) Qin
On Mon, Oct 26, 2015 at 1:23 PM, Jiangjie Qin wrote:
> Hi Jay,
>
> Thanks for such detailed explanation. I think we both are
3b15254f32252cf824d7a292889ac7662d73ada1
gradle.properties 4827769a3f8e34f0fe7e783eb58e44d4db04859b
Diff: https://reviews.apache.org/r/24196/diff/
Testing
---
Thanks,
Jiangjie Qin
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24196/#review49401
---
On Aug. 1, 2014, 9:26 p.m., Jiangjie Qin wrote:
>
> -
scala
e1610d3c602fb0f5f4cc237cb8b4e0d168a41530
core/src/main/scala/kafka/producer/ProducerTopicStats.scala
ed209f4773dedb09e9a34005e6849730229aa6e9
Diff: https://reviews.apache.org/r/24196/diff/
Testing
---
Thanks,
Jiangjie Qin
41530
core/src/main/scala/kafka/producer/ProducerTopicStats.scala
ed209f4773dedb09e9a34005e6849730229aa6e9
Diff: https://reviews.apache.org/r/24196/diff/
Testing
---
Thanks,
Jiangjie Qin
ducer/ProducerStats.scala
e1610d3c602fb0f5f4cc237cb8b4e0d168a41530
core/src/main/scala/kafka/producer/ProducerTopicStats.scala
ed209f4773dedb09e9a34005e6849730229aa6e9
Diff: https://reviews.apache.org/r/24196/diff/
Testing
---
Thanks,
Jiangjie Qin
erSec" will match
".*"+clientId+".*"+"DroppedMessagesPerSec".
- Jiangjie
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24196/#review50328
e1610d3c602fb0f5f4cc237cb8b4e0d168a41530
core/src/main/scala/kafka/producer/ProducerTopicStats.scala
ed209f4773dedb09e9a34005e6849730229aa6e9
Diff: https://reviews.apache.org/r/24196/diff/
Testing
---
Thanks,
Jiangjie Qin
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24196/#review50401
-------
On Aug. 12, 2014, 10:20 p.m., Jiangjie Qin wrote:
>
> --
/producer/ProducerStats.scala
e1610d3c602fb0f5f4cc237cb8b4e0d168a41530
core/src/main/scala/kafka/producer/ProducerTopicStats.scala
ed209f4773dedb09e9a34005e6849730229aa6e9
Diff: https://reviews.apache.org/r/24196/diff/
Testing
---
Thanks,
Jiangjie Qin
ange the val name to "numRegisteredMetricsBeforeRemoval" and
"numRegisteredMetricsAfterRemoval".
- Jiangjie
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24196/#review
e1610d3c602fb0f5f4cc237cb8b4e0d168a41530
core/src/main/scala/kafka/producer/ProducerTopicStats.scala
ed209f4773dedb09e9a34005e6849730229aa6e9
Diff: https://reviews.apache.org/r/24196/diff/
Testing
---
Thanks,
Jiangjie Qin
---
Thanks,
Jiangjie Qin
b8698ee1469c8fbc92ccc176d916eb3e28b87867
core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala PRE-CREATION
Diff: https://reviews.apache.org/r/25995/diff/
Testing
---
Thanks,
Jiangjie Qin
.get() call is missing.
I put it inside the put().
- Jiangjie
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25995/#review54620
---
/browse/KAFKA-1647
Repository: kafka
Description
---
Fix for KAFKA-1647.
Diffs
-
core/src/main/scala/kafka/server/ReplicaManager.scala
78b7514cc109547c562e635824684fad581af653
Diff: https://reviews.apache.org/r/26373/diff/
Testing
---
Thanks,
Jiangjie Qin
iff: https://reviews.apache.org/r/25995/diff/
Testing
---
Thanks,
Jiangjie Qin
scala
b8698ee1469c8fbc92ccc176d916eb3e28b87867
core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala PRE-CREATION
Diff: https://reviews.apache.org/r/25995/diff/
Testing
---
Thanks,
Jiangjie Qin
pache.org/r/26373/#review55615
---
On Oct. 6, 2014, 5:06 p.m., Jiangjie Qin wrote:
>
> ---
> This is an automatically generated e-mail. To reply, visi
org/r/26373/diff/
Testing
---
Thanks,
Jiangjie Qin
/browse/KAFKA-1706
Repository: kafka
Description
---
Adding ByteBoundedBlockingQueue to utils.
Diffs
-
core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala PRE-CREATION
Diff: https://reviews.apache.org/r/26755/diff/
Testing
---
Thanks,
Jiangjie Qin
---
Thanks,
Jiangjie Qin
/
Testing
---
Thanks,
Jiangjie Qin
e/src/main/scala/kafka/server/ReplicaManager.scala
78b7514cc109547c562e635824684fad581af653
Diff: https://reviews.apache.org/r/26373/diff/
Testing
---
Thanks,
Jiangjie Qin
s.apache.org/r/26373/#review56989
---
On Oct. 18, 2014, 7:26 a.m., Jiangjie Qin wrote:
>
> ---
> This is an automatically generated e-mail. To reply, v
is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26373/#review57490
---
On Oct. 18, 2014, 7:26 a.m., Jiangjie Qin wrote:
>
> ---
> This is an automatically generated e-mail. To reply, visit:
&g
---
Thanks,
Jiangjie Qin
mments. Will do tests to verify if it works.
Diffs (updated)
-
core/src/main/scala/kafka/server/ReplicaManager.scala
78b7514cc109547c562e635824684fad581af653
Diff: https://reviews.apache.org/r/26373/diff/
Testing
---
Thanks,
Jiangjie Qin
s.apache.org/r/26994/#review57680
---
On Oct. 21, 2014, 8:37 p.m., Jiangjie Qin wrote:
>
> ---
> This is an automatically generated e-mail. To reply, v
org/r/26994/diff/
Testing
---
Thanks,
Jiangjie Qin
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26994/#review57907
---
On Oct. 22, 2014, 10:04 p.m., Jiangjie Qin wrote:
>
> --
> On Oct. 21, 2014, 10:21 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/tools/MirrorMaker.scala, line 323
> > <https://reviews.apache.org/r/26994/diff/1/?file=727975#file727975line323>
> >
> > Is this change intended?
>
> Jiangjie Qin wrote
ally generated e-mail. To reply, visit:
https://reviews.apache.org/r/26373/#review57947
-------
On Oct. 22, 2014, 6:08 a.m., Jiangjie Qin wrote:
>
> ---
> This is an automatically generated
3e28b87867
Diff: https://reviews.apache.org/r/26994/diff/
Testing
---
Thanks,
Jiangjie Qin
potential race where
cleanShutdown could execute multiple times if several threads exit abnormally
at the same time.
Diffs (updated)
-
core/src/main/scala/kafka/tools/MirrorMaker.scala
b8698ee1469c8fbc92ccc176d916eb3e28b87867
Diff: https://reviews.apache.org/r/26994/diff/
Testing
---
Thanks,
Jiangjie Qin
le722245line109>
> >
> > getAndDecrement(sizeFunction.get(e))
It seems getAndDecrement() does not take an argument and will always decrement by
1.
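A minimal sketch of decrementing by an element's size with getAndAdd() and a negative delta, since getAndDecrement() takes no argument (the element type and size function here are made-up stand-ins, not the actual ByteBoundedBlockingQueue code):

import java.util.concurrent.atomic.AtomicLong;
import java.util.function.ToIntFunction;

public class ByteCountSketch {
    private final AtomicLong currentByteSize = new AtomicLong(0);
    private final ToIntFunction<String> sizeFunction = String::length;  // stand-in size function

    void onPut(String element) {
        // Grow the byte count by the element's size on put().
        currentByteSize.getAndAdd(sizeFunction.applyAsInt(element));
    }

    void onTake(String element) {
        // getAndDecrement() only ever subtracts 1; to subtract the element's
        // size, add a negative delta instead.
        currentByteSize.getAndAdd(-sizeFunction.applyAsInt(element));
    }

    public static void main(String[] args) {
        ByteCountSketch q = new ByteCountSketch();
        q.onPut("hello");
        q.onTake("hello");
        System.out.println(q.currentByteSize.get());  // 0
    }
}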
- Jiangjie
---
This is an automatically generated e-mail. To re