Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Kowshik Prakasam
Congrats, Boyang! :)


Cheers,
Kowshik

On Tue, Jun 23, 2020 at 8:43 AM Aparnesh Gaurav 
wrote:

> Congrats Boyang.
>
> On Tue, 23 Jun, 2020, 9:07 PM Vahid Hashemian, 
> wrote:
>
> > Congrats Boyang!
> >
> > --Vahid
> >
> > On Tue, Jun 23, 2020 at 6:41 AM Wang (Leonard) Ge 
> > wrote:
> >
> > > Congrats Boyang! This is a great achievement.
> > >
> > > > On Tue, Jun 23, 2020 at 10:33 AM Mickael Maison <mickael.mai...@gmail.com> wrote:
> > >
> > > > Congrats Boyang! Well deserved
> > > >
> > > > On Tue, Jun 23, 2020 at 8:20 AM Tom Bentley wrote:
> > > > >
> > > > > Congratulations Boyang!
> > > > >
> > > > > On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna wrote:
> > > > >
> > > > > > Congrats, Boyang!
> > > > > >
> > > > > > Best,
> > > > > > Bruno
> > > > > >
> > > > > > On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
> > > > > >  wrote:
> > > > > > >
> > > > > > > Congrats, Boyang!
> > > > > > >
> > > > > > > -Konstantine
> > > > > > >
> > > > > > > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> > > > > > >  wrote:
> > > > > > >
> > > > > > > > Many Congratulations Boyang. Very well deserved.
> > > > > > > >
> > > > > > > > Regards,Navinder
> > > > > > > >
> > > > > > > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <wang...@163.com> wrote:
> > > > > > > >
> > > > > > > >  Congratulations, Boyang!
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Matt Wang
> > > > > > > >
> > > > > > > >
> > > > > > > > On 06/23/2020 07:59, Boyang Chen wrote:
> > > > > > > > Thanks a lot everyone, I really appreciate the recognition,
> and
> > > > hope to
> > > > > > > > make more solid contributions to the community in the future!
> > > > > > > >
> > > > > > > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax <mj...@apache.org> wrote:
> > > > > > > >
> > > > > > > > Congrats! Well deserved!
> > > > > > > >
> > > > > > > > -Matthias
> > > > > > > >
> > > > > > > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > > > > > > Congratulations Boyang! Well deserved.
> > > > > > > >
> > > > > > > > -Bill
> > > > > > > >
> > > > > > > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe <cmcc...@apache.org> wrote:
> > > > > > > >
> > > > > > > > Congratulations, Boyang!
> > > > > > > >
> > > > > > > > cheers,
> > > > > > > > Colin
> > > > > > > >
> > > > > > > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > > > > > > The PMC for Apache Kafka has invited Boyang Chen as a committer
> > > > > > > > and we are pleased to announce that he has accepted!
> > > > > > > >
> > > > > > > > Boyang has been active in the Kafka community for more than two
> > > > > > > > years. Since then he has presented his experience operating
> > > > > > > > Kafka Streams at Pinterest, as well as several feature
> > > > > > > > developments including rebalance improvements (KIP-345) and
> > > > > > > > exactly-once scalability improvements (KIP-447), at various
> > > > > > > > Kafka Summits and Kafka Meetups. More recently he's also been
> > > > > > > > participating in Kafka broker development, including the
> > > > > > > > post-ZooKeeper controller design (KIP-500). Besides all the code
> > > > > > > > contributions, Boyang has also helped review even more PRs and
> > > > > > > > KIPs than his own.
> > > > > > > >
> > > > > > > > Thanks for all the contributions, Boyang! We look forward to
> > > > > > > > more collaborations with you on Apache Kafka.
> > > > > > > >
> > > > > > > >
> > > > > > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > >
> > > > > >
> > > >
> > >
> > >
> > > --
> > > Leonard Ge
> > > Software Engineer Intern - Confluent
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-08-13 Thread Kowshik Prakasam
Hi Harsha/Satish,

Thanks for the great KIP. Below is the first set of questions/suggestions
I had after making a pass on the KIP.

5001. Under the section "Follower fetch protocol in detail", the
next-local-offset is the offset up to which the segments are copied to
remote storage. Instead, would last-tiered-offset be a better name than
next-local-offset? last-tiered-offset seems to naturally align well with
the definition provided in the KIP.

5002. After leadership is established for a partition, the leader would
begin uploading a segment to remote storage. If successful, the leader
would write the updated RemoteLogSegmentMetadata to the metadata topic (via
RLMM.putRemoteLogSegmentData). However, for defensive reasons, it seems
useful that before the first time the segment is uploaded by the leader for
a partition, the leader should ensure it has caught up to all the metadata
events written so far in the metadata topic for that partition (ex: by
previous leader). To achieve this, the leader could start a lease (using an
establish_leader metadata event) before commencing tiering, and wait until
the event is read back. For example, this seems useful to avoid cases where
zombie leaders can be active for the same partition. This can also prove
useful to help avoid making decisions on which segments are to be uploaded for
a partition, until the current leader has caught up to a complete view of
all segments uploaded for the partition so far (otherwise this may cause the
same segment to be uploaded twice -- once by the previous leader and then
by the new leader).
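
For illustration only, here is a minimal Java sketch of the lease/catch-up
idea (MetadataTopic and the establish_leader marker are placeholder names I
made up, not proposals for the KIP's API):

import java.util.UUID;

// Hypothetical sketch: before tiering, the new leader publishes an
// establish_leader marker to the metadata topic and consumes until it reads
// that marker back, guaranteeing it has seen every metadata event written by
// previous leaders.
public class LeaderCatchUpSketch {

    interface MetadataTopic {
        void publish(String event);  // append an event for this partition
        String pollNext();           // next event, or null if none available yet
    }

    // Blocks until our own establish_leader marker is read back.
    static void establishLeadership(MetadataTopic topic) throws InterruptedException {
        String marker = "establish_leader:" + UUID.randomUUID();
        topic.publish(marker);
        while (true) {
            String event = topic.pollNext();
            if (event == null) {
                Thread.sleep(100);  // nothing new yet; keep consuming
            } else if (marker.equals(event)) {
                return;             // caught up: safe to begin tiering
            }
            // any other event is a metadata record from a previous leader;
            // applying it here builds the complete view before uploading.
        }
    }
}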

5003. There is a natural interleaving between uploading a segment to remote
store, and, writing a metadata event for the same (via
RLMM.putRemoteLogSegmentData). There can be cases where a remote segment is
uploaded, then the leader fails and a corresponding metadata event never
gets written. In such cases, the orphaned remote segment has to be
eventually deleted (since there is no confirmation of the upload). To
handle this, we could use 2 separate metadata events viz. copy_initiated
and copy_completed, so that copy_initiated events that don't have a
corresponding copy_completed event can be treated as garbage and deleted
from the remote object store by the broker.
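
As a rough sketch of that garbage detection (event and field names below are
illustrative, not the KIP's actual schema), a broker-side cleaner could scan
the metadata stream like this:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: detects copy_initiated events with no matching copy_completed
// event; such segments are orphans that the cleaner may delete from remote
// storage.
public class OrphanSegmentSketch {

    static class MetadataEvent {
        final String segmentId;
        final String type; // "copy_initiated" or "copy_completed"
        MetadataEvent(String segmentId, String type) {
            this.segmentId = segmentId;
            this.type = type;
        }
    }

    static List<String> findOrphans(List<MetadataEvent> events) {
        Map<String, Boolean> completed = new HashMap<>();
        for (MetadataEvent e : events) {
            if ("copy_initiated".equals(e.type)) completed.putIfAbsent(e.segmentId, false);
            else if ("copy_completed".equals(e.type)) completed.put(e.segmentId, true);
        }
        List<String> orphans = new ArrayList<>();
        for (Map.Entry<String, Boolean> entry : completed.entrySet()) {
            if (!entry.getValue()) orphans.add(entry.getKey()); // initiated, never completed
        }
        return orphans;
    }
}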

5004. In the default implementation of RLMM (using the internal topic
__remote_log_metadata), a separate topic called
__remote_segments_to_be_deleted is going to be used just to track failures
in removing remote log segments. A separate topic (effectively another
metadata stream) introduces some maintenance overhead and design
complexity. It seems to me that the same can be achieved just by using
the __remote_log_metadata topic with the following steps: 1) the leader
writes a delete_initiated metadata event, 2) the leader deletes the segment
and 3) the leader writes a delete_completed metadata event. Tiered segments
that have a delete_initiated message but no delete_completed message can be
considered a failure and retried.
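
A small sketch of that three-step sequence (MetadataLog and RemoteStore are
stand-in interfaces, not KIP APIs):

// Sketch of the three-step delete using only the __remote_log_metadata stream.
public class TieredDeleteSketch {

    interface MetadataLog { void append(String segmentId, String event); }
    interface RemoteStore { void deleteSegment(String segmentId); }

    static void deleteSegment(MetadataLog log, RemoteStore store, String segmentId) {
        log.append(segmentId, "delete_initiated");   // step 1: record intent
        store.deleteSegment(segmentId);              // step 2: remove remote data
        log.append(segmentId, "delete_completed");   // step 3: confirm
        // A segment with delete_initiated but no delete_completed is treated
        // as a failed attempt, and the whole sequence is retried.
    }
}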

5005. When a Kafka cluster is provisioned for the first time with KIP-405
tiered storage enabled, could you explain in the KIP about how the
bootstrap for the __remote_log_metadata topic will be performed in the
default RLMM implementation?

5006. I currently do not see details on the KIP on why RocksDB was chosen
as the default cache implementation, and how it is going to be used. Were
alternatives compared/considered? For example, it would be useful to
explain/evaluate the following: 1) debuggability of the RocksDB JNI
interface, 2) performance, 3) portability across platforms and 4) interface
parity of RocksDB’s JNI API with its underlying C/C++ API.

5007. For the RocksDB cache (the default implementation of RLMM), what is
the relationship/mapping between the following: 1) # of tiered partitions,
2) # of partitions of metadata topic __remote_log_metadata and 3) # of
RocksDB instances? i.e. is the plan to have a RocksDB instance per tiered
partition, or per metadata topic partition, or just one per broker?

5008. The system-wide configuration 'remote.log.storage.enable' is used to
enable tiered storage. Can this be made a topic-level configuration, so
that the user can enable/disable tiered storage at a topic level rather
than a system-wide default for an entire Kafka cluster?

5009. Whenever a topic with tiered storage enabled is deleted, the
underlying actions require the topic data to be deleted in local store as
well as remote store, and eventually the topic metadata needs to be deleted
too. What is the role of the controller in deleting a topic and it's
its contents, while the topic has tiered storage enabled?

5010. RLMM APIs are currently synchronous, for example
RLMM.putRemoteLogSegmentData waits until the put operation is completed in
the remote metadata store. It may also block until the leader has caught up
to the metadata (not sure). Could we make these APIs asynchronous (ex:
based on java.util.concurrent.Future) to provide room for tapping
performance i
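
For illustration, an asynchronous variant could be shaped like the following
sketch (RemoteLogSegmentMetadata here is a placeholder type, and the final
signatures could differ):

import java.util.concurrent.CompletableFuture;

// Sketch of the asynchronous shape suggested above; the method name mirrors
// the KIP's synchronous API but this is not a proposal for final signatures.
public interface AsyncRlmmSketch {

    class RemoteLogSegmentMetadata { /* segment id, offset range, etc. */ }

    // Returns immediately; the future completes once the metadata write is
    // durable in the remote metadata store, letting callers pipeline uploads:
    //   rlmm.putRemoteLogSegmentData(metadata).thenRun(nextUpload);
    CompletableFuture<Void> putRemoteLogSegmentData(RemoteLogSegmentMetadata metadata);
}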

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-08-24 Thread Kowshik Prakasam
> >>>> is uploaded, then the leader fails and a corresponding metadata event
> >>>> never gets written. In such cases, the orphaned remote segment has to be
> >>>> eventually deleted (since there is no confirmation of the upload). To
> >>>> handle this, we could use 2 separate metadata events viz. copy_initiated
> >>>> and copy_completed, so that copy_initiated events that don't have a
> >>>> corresponding copy_completed event can be treated as garbage and deleted
> >>>> from the remote object store by the broker.
> >>>>
> >>>> We are already updating RLMM with RemoteLogSegmentMetadata pre and post
> >>>> copying of log segments. We had a flag in RemoteLogSegmentMetadata
> >>>> indicating whether it is copied or not. But we are making changes in
> >>>> RemoteLogSegmentMetadata to introduce a state field which will have the
> >>>> respective started and finished states. This includes other operations
> >>>> like delete too.
> >>>>
> >>>> 5004. In the default implementation of RLMM (using the internal topic
> >>>> __remote_log_metadata), a separate topic called
> >>>> __remote_segments_to_be_deleted is going to be used just to track
> >>>> failures in removing remote log segments. A separate topic (effectively
> >>>> another metadata stream) introduces some maintenance overhead and design
> >>>> complexity. It seems to me that the same can be achieved just by using
> >>>> the __remote_log_metadata topic with the following steps: 1) the leader
> >>>> writes a delete_initiated metadata event, 2) the leader deletes the
> >>>> segment and 3) the leader writes a delete_completed metadata event.
> >>>> Tiered segments that have a delete_initiated message but no
> >>>> delete_completed message can be considered a failure and retried.
> >>>>
> >>>> Jun suggested in an earlier mail to keep this simple. We decided not to
> >>>> have this topic, as mentioned in our earlier replies, and updated the
> >>>> KIP. As I mentioned in an earlier comment, we are adding state entries
> >>>> for delete operations too.
> >>>>
> >>>> 5005. When a Kafka cluster is provisioned for the first time with
> >>>> KIP-405
> >>>> ti
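
For what it's worth, the single state field described in the reply above might
look roughly like the following enum; the exact names are guesses shown only to
make the idea concrete:

// Illustrative only: one state field on RemoteLogSegmentMetadata covering
// both copy and delete lifecycles, replacing the earlier boolean flag.
public enum RemoteLogSegmentState {
    COPY_SEGMENT_STARTED,   // copy to remote storage has begun
    COPY_SEGMENT_FINISHED,  // segment fully copied; safe to serve from remote
    DELETE_SEGMENT_STARTED, // deletion recorded; remote data may still exist
    DELETE_SEGMENT_FINISHED // remote data removed
}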

Re: [VOTE] KIP-584: Versioning scheme for features

2020-09-22 Thread Kowshik Prakasam
Hi all,

I wanted to let you know that I have made the following changes to the
KIP-584 write up. The purpose is to ensure the design is correct for a few
things which came up during implementation:

1. Per FeatureUpdate error code: The UPDATE_FEATURES controller API is no
longer transactional. Going forward, we allow for individual FeatureUpdate
to succeed/fail in the request. As a result, the response schema now
contains an error code per FeatureUpdate as well as a top-level error code.
Overall this is a better design because it more faithfully represents the nature of
the API: each FeatureUpdate in the request is independent of the other
updates, and the controller can process/apply these independently to ZK.
When an UPDATE_FEATURES request fails, this new design provides better
clarity to the caller on which FeatureUpdate could not be applied (via the
individual error codes; see the sketch after point 4 below). In the previous
design, we were unable to achieve such clarity in communicating the error codes.

2. Due to #1, there were some minor changes required to the proposed Admin
APIs (describeFeatures and updateFeatures). A few unnecessary public APIs
have been removed, and a couple of essential ones have been added. The KIP
now reflects the latest design.

3. The timeoutMs field has been removed from the UPDATE_FEATURES API
request, since it was not found to be required during implementation.

4. Previously we handled the incompatible broker lifetime race condition in
the controller by skipping sending of UpdateMetadataRequest to the
incompatible broker. But this had a few edge cases. Instead, now we handle
it by refusing to register the incompatible broker in the controller. This
is a better design because if we already acted on an incompatible broker
registration, then some damage may already be done to the cluster: the
UpdateMetadataRequest would still be sent to other brokers and the
incompatible broker's metadata would be available to clients. With the new
design, the controller never keeps track of an incompatible broker, and the
broker will eventually shut down automatically (when reacting to the
incompatibility).
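
To make point 1 concrete, here is a sketch of how a caller might consume the
per-FeatureUpdate error codes, assuming an Admin API shaped roughly like the
proposal (one future per feature); the exact signatures may differ from what
finally ships:

import java.util.Map;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.FeatureUpdate;
import org.apache.kafka.clients.admin.UpdateFeaturesOptions;
import org.apache.kafka.clients.admin.UpdateFeaturesResult;
import org.apache.kafka.common.KafkaFuture;

// Sketch only: each feature's future succeeds or fails independently,
// mirroring the non-transactional semantics described in point 1.
public class PerUpdateErrorsSketch {
    static void apply(Admin admin, Map<String, FeatureUpdate> updates) {
        UpdateFeaturesResult result = admin.updateFeatures(updates, new UpdateFeaturesOptions());
        for (Map.Entry<String, KafkaFuture<Void>> entry : result.values().entrySet()) {
            try {
                entry.getValue().get();  // per-update outcome
                System.out.println(entry.getKey() + ": applied");
            } catch (InterruptedException | ExecutionException e) {
                System.out.println(entry.getKey() + ": failed: " + e.getCause());
            }
        }
    }
}

And for point 4, a sketch of the registration-time compatibility check
(VersionRange and the validation rule are illustrative stand-ins for the
controller-side logic, not the actual implementation):

import java.util.Map;

public class BrokerRegistrationSketch {

    static class VersionRange {
        final short min;
        final short max;
        VersionRange(short min, short max) { this.min = min; this.max = max; }
    }

    // Throws if the broker cannot support some cluster-wide finalized feature;
    // the controller would then refuse to register the broker, and the broker
    // shuts itself down on noticing the incompatibility.
    static void validateRegistration(Map<String, VersionRange> brokerSupported,
                                     Map<String, Short> finalizedMaxVersions) {
        for (Map.Entry<String, Short> finalized : finalizedMaxVersions.entrySet()) {
            VersionRange supported = brokerSupported.get(finalized.getKey());
            if (supported == null
                    || finalized.getValue() < supported.min
                    || finalized.getValue() > supported.max) {
                throw new IllegalStateException(
                        "Broker is incompatible with finalized feature " + finalized.getKey());
            }
        }
    }
}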

Please let me know if you have any questions.


Cheers,
Kowshik


On Mon, Jun 8, 2020 at 3:32 AM Kowshik Prakasam 
wrote:

> Hi all,
>
> I wanted to let you know that I have made the following minor changes to
> the KIP-584 write-up. The
> purpose is to ensure the design is correct for a few things which came up
> during implementation:
>
> 1. Feature version data type has been made to be int16 (instead of int64).
> The reason is two fold:
> a. Usage of int64 felt overkill. Feature version bumps are infrequent
> (since these bumps represent breaking changes that are generally
> infrequent). Therefore int16 is big enough to support version bumps of a
> particular feature.
> b. The int16 data type aligns well with existing API versions data
> type. Please see the file
> '/clients/src/main/resources/common/message/ApiVersionsResponse.json'.
>
> 2. Finalized feature version epoch data type has been made to be int32
> (instead of int64). The reason is that the epoch value is the value of ZK
> node version, whose data type is int32.
>
> 3. Introduced a new 'status' field in the '/features' ZK node schema. The
> purpose is to implement Colin's earlier point for the strategy for
> transitioning from not having a /features znode to having one. An
> explanation has been provided in the following section of the KIP detailing
> the different cases:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-FeatureZKnodestatus
> .
>
> Please let me know if you have any questions or concerns.
>
>
> Cheers,
> Kowshik
>
>
>
> Cheers,
> Kowshik
>
> On Tue, Apr 28, 2020 at 11:24 PM Kowshik Prakasam 
> wrote:
>
>> Hi all,
>>
>> This KIP vote has been open for ~12 days. The summary of the votes is
>> that we have 3 binding votes (Colin, Guozhang, Jun), and 3 non-binding
>> votes (David, Dhruvil, Boyang). Therefore, the KIP vote passes. I'll mark
>> KIP as accepted and start working on the implementation.
>>
>> Thanks a lot!
>>
>>
>> Cheers,
>> Kowshik
>>
>> On Mon, Apr 27, 2020 at 12:15 PM Colin McCabe  wrote:
>>
>>> Thanks, Kowshik.  +1 (binding)
>>>
>>> best,
>>> Colin
>>>
>>> On Sat, Apr 25, 2020, at 13:20, Kowshik Prakasam wrote:
>>> > Hi Colin,
>>> >
>>> > Thanks for the explanation! I agree with you, and I have updated the
>>> > KIP.
>>> > Here is a link t

Re: [VOTE] KIP-584: Versioning scheme for features

2020-09-25 Thread Kowshik Prakasam
Hi Jun,

Thanks for the feedback. It's a very good point. I have now modified the
KIP-584 write-up "goals" section a bit. It now mentions one of the goals as
enabling rolling upgrades using a single restart (instead of 2). Also I
have removed the text explicitly aiming for deprecation of IBP. Note that
previously under "Potential features in Kafka" the IBP was mentioned under
point (4) as a possible coarse-grained feature. Hopefully, now the 2
sections of the KIP align with each other well.


Cheers,
Kowshik


On Fri, Sep 25, 2020 at 2:03 PM Colin McCabe  wrote:

> On Tue, Sep 22, 2020, at 00:43, Kowshik Prakasam wrote:
> > Hi all,
> >
> > I wanted to let you know that I have made the following changes to the
> > KIP-584 write up. The purpose is to ensure the design is correct for a
> few
> > things which came up during implementation:
> >
>
> Hi Kowshik,
>
> Thanks for the updates.
>
> >
> > 1. Per FeatureUpdate error code: The UPDATE_FEATURES controller API is no
> > longer transactional. Going forward, we allow for individual
> FeatureUpdate
> > to succeed/fail in the request. As a result, the response schema now
> > contains an error code per FeatureUpdate as well as a top-level error
> code.
> > Overall this is a better design because it better represents the nature
> of
> > the API: each FeatureUpdate in the request is independent of the other
> > updates, and the controller can process/apply these independently to ZK.
> > When an UPDATE_FEATURES request fails, this new design provides better
> > clarity to the caller on which FeatureUpdate could not be applied (via
> the
> > individual error codes). In the previous design, we were unable to
> achieve
> > such an increased level of clarity in communicating the error codes.
> >
>
> OK
>
> >
> > 2. Due to #1, there were some minor changes required to the proposed
> Admin
> > APIs (describeFeatures and updateFeatures). A few unnecessary public APIs
> > have been removed, and couple essential ones have been added. The latest
> > changes now represent the latest design.
> >
> > 3. The timeoutMs field has been removed from the UPDATE_FEATURES API
> > request, since it was not found to be required during implementation.
> >
>
> Please don't get rid of timeoutMs.  timeoutMs is required if you want to
> implement the ability to timeout the call if the controller can't get to it
> in time.  This is important for avoiding congestion collapse where the
> controller collapses under the weight of lots of retries of the same set of
> calls.
>
> We may not be able to do it in the initial implementation, but we will
> eventually implement this for all the controller-bound RPCs.
>
> > >
> > > 2. Finalized feature version epoch data type has been made to be int32
> > > (instead of int64). The reason is that the epoch value is the value of
> ZK
> > > node version, whose data type is int32.
> > >
>
> Sorry, I missed this earlier.  Using 16 bit feature levels seems fine.
> However, please don't use a 32-bit epoch here.  We deliberately made the
> epoch 64 bits to avoid overflow problems in the future once ZK is gone.
>
> best,
> Colin
>
> > > 3. Introduced a new 'status' field in the '/features' ZK node schema.
> The
> > > purpose is to implement Colin's earlier point for the strategy for
> > > transitioning from not having a /features znode to having one. An
> > > explanation has been provided in the following section of the KIP
> detailing
> > > the different cases:
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-FeatureZKnodestatus
> > > .
> > >
> > > Please let me know if you have any questions or concerns.
> > >
> > >
> > > Cheers,
> > > Kowshik
> > >
> > >
> > >
> > > Cheers,
> > > Kowshik
> > >
> > > On Tue, Apr 28, 2020 at 11:24 PM Kowshik Prakasam <
> kpraka...@confluent.io>
> > > wrote:
> > >
> > >> Hi all,
> > >>
> > >> This KIP vote has been open for ~12 days. The summary of the votes is
> > >> that we have 3 binding votes (Colin, Guozhang, Jun), and 3 non-binding
> > >> votes (David, Dhruvil, Boyang). Therefore, the KIP vote passes. I'll
> mark
> > >> KIP as accepted and start working on the implementation.
> > >>
> > >> Thanks a lot!
> > >>
> > >>
>

Re: [VOTE] KIP-584: Versioning scheme for features

2020-09-25 Thread Kowshik Prakasam
Hi Colin,

Thanks for the feedback. Those are very good points. I have made the
following changes to the KIP as you had suggested:
1. Included the `timeoutMs` field in the `UpdateFeaturesRequest` schema.
The initial implementation won't be making use of the field, but we can
always use it in the future as the need arises.
2. Modified the `FinalizedFeaturesEpoch` field in `ApiVersionsResponse` to
use int64. This is to avoid overflow problems in the future once ZK is gone.

I have also incorporated these changes into the versioning write path PR
that is currently under review: https://github.com/apache/kafka/pull/9001.


Cheers,
Kowshik



On Fri, Sep 25, 2020 at 4:57 PM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> Thanks for the feedback. It's a very good point. I have now modified the
> KIP-584 write-up "goals" section a bit. It now mentions one of the goals as
> enabling rolling upgrades using a single restart (instead of 2). Also I
> have removed the text explicitly aiming for deprecation of IBP. Note that
> previously under "Potential features in Kafka" the IBP was mentioned under
> point (4) as a possible coarse-grained feature. Hopefully, now the 2
> sections of the KIP align with each other well.
>
>
> Cheers,
> Kowshik
>
>
> On Fri, Sep 25, 2020 at 2:03 PM Colin McCabe  wrote:
>
>> On Tue, Sep 22, 2020, at 00:43, Kowshik Prakasam wrote:
>> > Hi all,
>> >
>> > I wanted to let you know that I have made the following changes to the
>> > KIP-584 write up. The purpose is to ensure the design is correct for a
>> few
>> > things which came up during implementation:
>> >
>>
>> Hi Kowshik,
>>
>> Thanks for the updates.
>>
>> >
>> > 1. Per FeatureUpdate error code: The UPDATE_FEATURES controller API is
>> no
>> > longer transactional. Going forward, we allow for individual
>> FeatureUpdate
>> > to succeed/fail in the request. As a result, the response schema now
>> > contains an error code per FeatureUpdate as well as a top-level error
>> code.
>> > Overall this is a better design because it better represents the nature
>> of
>> > the API: each FeatureUpdate in the request is independent of the other
>> > updates, and the controller can process/apply these independently to ZK.
>> > When an UPDATE_FEATURES request fails, this new design provides better
>> > clarity to the caller on which FeatureUpdate could not be applied (via
>> the
>> > individual error codes). In the previous design, we were unable to
>> achieve
>> > such an increased level of clarity in communicating the error codes.
>> >
>>
>> OK
>>
>> >
>> > 2. Due to #1, there were some minor changes required to the proposed
>> Admin
>> > APIs (describeFeatures and updateFeatures). A few unnecessary public
>> APIs
>> > have been removed, and couple essential ones have been added. The latest
>> > changes now represent the latest design.
>> >
>> > 3. The timeoutMs field has been removed from the UPDATE_FEATURES API
>> > request, since it was not found to be required during implementation.
>> >
>>
>> Please don't get rid of timeoutMs.  timeoutMs is required if you want to
>> implement the ability to timeout the call if the controller can't get to it
>> in time.  This is important for avoiding congestion collapse where the
>> controller collapses under the weight of lots of retries of the same set of
>> calls.
>>
>> We may not be able to do it in the initial implementation, but we will
>> eventually implement this for all the controller-bound RPCs.
>>
>> > >
>> > > 2. Finalized feature version epoch data type has been made to be int32
>> > > (instead of int64). The reason is that the epoch value is the value
>> of ZK
>> > > node version, whose data type is int32.
>> > >
>>
>> Sorry, I missed this earlier.  Using 16 bit feature levels seems fine.
>> However, please don't use a 32-bit epoch here.  We deliberately made the
>> epoch 64 bits to avoid overflow problems in the future once ZK is gone.
>>
>> best,
>> Colin
>>
>> > > 3. Introduced a new 'status' field in the '/features' ZK node schema.
>> The
>> > > purpose is to implement Colin's earlier point for the strategy for
>> > > transitioning from not having a /features znode to having one. An
>> > > explanation has been provided in the following section of the KIP
>> detailing
>> > > the diffe

[DISCUSS] KIP-584: Versioning scheme for features

2020-03-24 Thread Kowshik Prakasam
Hi all,

I've opened KIP-584, which
is intended to provide a versioning scheme for features. I'd like to use
this thread to discuss the same. I'd appreciate any feedback on this. Here
is a link to KIP-584:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
 .

Thank you!


Cheers,
Kowshik


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-03-24 Thread Kowshik Prakasam
Hi all,

Apologies, please ignore the reference to
https://issues.apache.org/jira/browse/KIP-584. It was included
accidentally. But the link to the KIP is the right one.


Cheers,
Kowshik


On Tue, Mar 24, 2020 at 5:08 PM Kowshik Prakasam 
wrote:

> Hi all,
>
> I've opened KIP-584, which
> is intended to provide a versioning scheme for features. I'd like to use
> this thread to discuss the same. I'd appreciate any feedback on this. Here
> is a link to KIP-584:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
>  .
>
> Thank you!
>
>
> Cheers,
> Kowshik
>


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-03-25 Thread Kowshik Prakasam
Hi Boyang,

Great catch, thanks! I have fixed this now. Please have a look, and let me
know if you have any questions.


Cheers,
Kowshik

On Tue, Mar 24, 2020 at 11:06 PM Boyang Chen 
wrote:

> Nice KIP Kowshik! This is a long-overdue feature for the ease of both client
> side and server side upgrade in general.
>
> One meta comment: it seems the KIP's font is a bit weirdly rendered:
> [image: image.png]
> Could you try to remove all the rectangle blocks? It looks inconsistent
> with most KIPs.
>
> Thanks,
> Boyang
>
> On Tue, Mar 24, 2020 at 5:14 PM Kowshik Prakasam 
> wrote:
>
>> Hi all,
>>
>> Apologies, please ignore the reference to
>> https://issues.apache.org/jira/browse/KIP-584. It was included
>> accidentally. But the link to the KIP is the right one.
>>
>>
>> Cheers,
>> Kowshik
>>
>>
>> On Tue, Mar 24, 2020 at 5:08 PM Kowshik Prakasam 
>> wrote:
>>
>> > Hi all,
>> >
>> > I've opened KIP-584, which
>> > is intended to provide a versioning scheme for features. I'd like to use
>> > this thread to discuss the same. I'd appreciate any feedback on this.
>> Here
>> > is a link to KIP-584:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
>> >  .
>> >
>> > Thank you!
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>>
>


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-03-26 Thread Kowshik Prakasam
Hi Colin,

Thanks for the feedback! I've changed the KIP to address your suggestions.
Please find below my explanation. Here is a link to KIP 584:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
.

1. '__data_version__' is the version of the finalized feature metadata
(i.e. actual ZK node contents), while the '__schema_version__' is the
version of the schema of the data persisted in ZK. These serve different
purposes. '__data_version__' is useful mainly to clients during reads,
to differentiate between the 2 versions of eventually consistent 'finalized
features' metadata (i.e. larger metadata version is more recent).
'__schema_version__' provides an additional degree of flexibility, where if
we decide to change the schema for '/features' node in ZK (in the future),
then we can manage broker roll outs suitably (i.e.
serialization/deserialization of the ZK data can be handled safely).

2. Regarding admin client needing min and max information - you are right!
I've changed the KIP such that the Admin API also allows the user to read
'supported features' from a specific broker. Please look at the section
"Admin API changes".

3. Regarding the use of `long` vs `Long` - it was not deliberate. I've
improved the KIP to just use `long` at all places.

4. Regarding kafka.admin.FeatureCommand tool - you are right! I've updated
the KIP sketching the functionality provided by this tool, with some
examples. Please look at the section "Tooling support examples".

Thank you!


Cheers,
Kowshik

On Wed, Mar 25, 2020 at 11:31 PM Colin McCabe  wrote:

> Thanks, Kowshik, this looks good.
>
> In the "Schema" section, do we really need both __schema_version__ and
> __data_version__?  Can we just have a single version field here?
>
> Shouldn't the Admin(Client) function have some way to get the min and max
> information that we're exposing as well?  I guess we could have min, max,
> and current.  Unrelated: is the use of Long rather than long deliberate
> here?
>
> It would be good to describe how the command line tool
> kafka.admin.FeatureCommand will work.  For example the flags that it will
> take and the output that it will generate to STDOUT.
>
> cheers,
> Colin
>
>
> On Tue, Mar 24, 2020, at 17:08, Kowshik Prakasam wrote:
> > Hi all,
> >
> > I've opened KIP-584, which
> > is intended to provide a versioning scheme for features. I'd like to use
> > this thread to discuss the same. I'd appreciate any feedback on this.
> > Here
> > is a link to KIP-584:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
> >  .
> >
> > Thank you!
> >
> >
> > Cheers,
> > Kowshik
> >
>


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-03-30 Thread Kowshik Prakasam
Hi Colin,

Once again, thanks a lot for the feedback! Regarding the first point about
using a single version field, you are correct. Recently I understood the
idea and realized that both the '__data_version__' and '__schema_version__'
can be folded into a single field. This can be bumped anytime when there is
a change to data or schema. I have updated the KIP now using a single field
called '__version__' within the '/features' ZK node.

Please refer to this section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Persistenceoffinalizedfeatureversions


Cheers,
Kowshik


On Thu, Mar 26, 2020 at 7:24 PM Kowshik Prakasam 
wrote:

> Hi Colin,
>
> Thanks for the feedback! I've changed the KIP to address your suggestions.
> Please find below my explanation. Here is a link to KIP 584:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
> .
>
> 1. '__data_version__' is the version of the finalized feature metadata
> (i.e. actual ZK node contents), while the '__schema_version__' is the
> version of the schema of the data persisted in ZK. These serve different
> purposes. '__data_version__' is useful mainly to clients during reads,
> to differentiate between the 2 versions of eventually consistent 'finalized
> features' metadata (i.e. larger metadata version is more recent).
> '__schema_version__' provides an additional degree of flexibility, where if
> we decide to change the schema for '/features' node in ZK (in the future),
> then we can manage broker roll outs suitably (i.e.
> serialization/deserialization of the ZK data can be handled safely).
>
> 2. Regarding admin client needing min and max information - you are right!
> I've changed the KIP such that the Admin API also allows the user to read
> 'supported features' from a specific broker. Please look at the section
> "Admin API changes".
>
> 3. Regarding the use of `long` vs `Long` - it was not deliberate. I've
> improved the KIP to just use `long` at all places.
>
> 4. Regarding kafka.admin.FeatureCommand tool - you are right! I've updated
> the KIP sketching the functionality provided by this tool, with some
> examples. Please look at the section "Tooling support examples".
>
> Thank you!
>
>
> Cheers,
> Kowshik
>
> On Wed, Mar 25, 2020 at 11:31 PM Colin McCabe  wrote:
>
>> Thanks, Kowshik, this looks good.
>>
>> In the "Schema" section, do we really need both __schema_version__ and
>> __data_version__?  Can we just have a single version field here?
>>
>> Shouldn't the Admin(Client) function have some way to get the min and max
>> information that we're exposing as well?  I guess we could have min, max,
>> and current.  Unrelated: is the use of Long rather than long deliberate
>> here?
>>
>> It would be good to describe how the command line tool
>> kafka.admin.FeatureCommand will work.  For example the flags that it will
>> take and the output that it will generate to STDOUT.
>>
>> cheers,
>> Colin
>>
>>
>> On Tue, Mar 24, 2020, at 17:08, Kowshik Prakasam wrote:
>> > Hi all,
>> >
>> > I've opened KIP-584, which
>> > is intended to provide a versioning scheme for features. I'd like to use
>> > this thread to discuss the same. I'd appreciate any feedback on this.
>> > Here
>> > is a link to KIP-584:
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
>> >  .
>> >
>> > Thank you!
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>>
>


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-03-31 Thread Kowshik Prakasam
Hi Colin,

Thanks for the suggestions. It is a good idea to refer to the
'__data_version__' as just 'epoch', to avoid any confusion. However note
that this is not the same as broker epoch. The main distinction is that
this epoch is bumped by the controller whenever a modification made to the
finalized feature versions is persisted into ZK.

I have updated the KIP to use the new schema for the ‘/features’ ZK node:

   - We use 2 separate fields ‘epoch’ and ‘version’. The latter describes
   changes to the overall schema of the data that is written to ZooKeeper in
   the '/features' node.
   - We don’t have separate header and data sections; I have combined these
   into just 1 dictionary containing both.


Here is a link to the updated section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Persistenceoffinalizedfeatureversions
.

Please feel free to let me know if you have any questions or concerns.


Cheers,
Kowshik


On Mon, Mar 30, 2020 at 4:53 PM Colin McCabe  wrote:

> On Thu, Mar 26, 2020, at 19:24, Kowshik Prakasam wrote:
> > Hi Colin,
> >
> > Thanks for the feedback! I've changed the KIP to address your
> > suggestions.
> > Please find below my explanation. Here is a link to KIP 584:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
> > .
> >
> > 1. '__data_version__' is the version of the finalized feature metadata
> > (i.e. actual ZK node contents), while the '__schema_version__' is the
> > version of the schema of the data persisted in ZK. These serve different
> > purposes. '__data_version__' is useful mainly to clients during reads,
> > to differentiate between the 2 versions of eventually consistent
> 'finalized
> > features' metadata (i.e. larger metadata version is more recent).
> > '__schema_version__' provides an additional degree of flexibility, where
> if
> > we decide to change the schema for '/features' node in ZK (in the
> future),
> > then we can manage broker roll outs suitably (i.e.
> > serialization/deserialization of the ZK data can be handled safely).
>
> Hi Kowshik,
>
> If you're talking about a number that lets you know if data is more or
> less recent, we would typically call that an epoch, and not a version.  For
> the ZK data structures, the word "version" is typically reserved for
> describing changes to the overall schema of the data that is written to
> ZooKeeper.  We don't even really change the "version" of those schemas that
> much, since most changes are backwards-compatible.  But we do include that
> version field just in case.
>
> I don't think we really need an epoch here, though, since we can just look
> at the broker epoch.  Whenever the broker registers, its epoch will be
> greater than the previous broker epoch.  And the newly registered data will
> take priority.  This will be a lot simpler than adding a separate epoch
> system, I think.
>
> >
> > 2. Regarding admin client needing min and max information - you are
> right!
> > I've changed the KIP such that the Admin API also allows the user to read
> > 'supported features' from a specific broker. Please look at the section
> > "Admin API changes".
>
> Thanks.
>
> >
> > 3. Regarding the use of `long` vs `Long` - it was not deliberate. I've
> > improved the KIP to just use `long` at all places.
>
> Sounds good.
>
> >
> > 4. Regarding kafka.admin.FeatureCommand tool - you are right! I've
> updated
> > the KIP sketching the functionality provided by this tool, with some
> > examples. Please look at the section "Tooling support examples".
> >
> > Thank you!
>
>
> Thanks, Kowshik.
>
> cheers,
> Colin
>
> >
> >
> > Cheers,
> > Kowshik
> >
> > On Wed, Mar 25, 2020 at 11:31 PM Colin McCabe 
> wrote:
> >
> > > Thanks, Kowshik, this looks good.
> > >
> > > In the "Schema" section, do we really need both __schema_version__ and
> > > __data_version__?  Can we just have a single version field here?
> > >
> > > Shouldn't the Admin(Client) function have some way to get the min and
> max
> > > information that we're exposing as well?  I guess we could have min,
> max,
> > > and current.  Unrelated: is the use of Long rather than long deliberate
> > > h

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-03-31 Thread Kowshik Prakasam
(Kowshik): Nice question! The broker reads finalized feature info stored in
ZK,
only during startup when it does a validation. When serving
`ApiVersionsRequest`, the
broker does not read this info from ZK directly. I'd imagine the risk is
that it can increase
the ZK read QPS which can be a bottleneck for the system. Today, in Kafka
we use the
controller to fan out ZK updates to brokers and we want to stick to that
pattern to avoid
the ZK read bottleneck when serving `ApiVersionsRequest`.

> 8. I was under the impression that user could configure a range of
> supported versions, what's the trade-off for allowing single finalized
> version only?

(Kowshik): Great question! The finalized version of a feature basically
refers to
the cluster-wide finalized feature "maximum" version. For example, if the
'group_coordinator' feature
has the finalized version set to 10, then it means that cluster-wide all
versions up to v10 are
supported for this feature. However, note that if some version (ex: v0)
gets deprecated
for this feature, then we don’t convey that using this scheme (also
supporting deprecation is a non-goal).
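
As a tiny illustration of that semantic (names are made up for the sketch):

public class FeatureGateSketch {
    // finalizedMaxVersion == 10 means v1..v10 of the feature are usable
    // cluster-wide; feature logic checks this before enabling new behavior.
    static boolean isVersionEnabled(short candidateVersion, short finalizedMaxVersion) {
        return candidateVersion <= finalizedMaxVersion;
    }
}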

(Kowshik): I’ve now modified the KIP at all points, referring to finalized
feature "maximum" versions.

> 9. One minor syntax fix: Note that here the "client" here may be a
producer

(Kowshik): Great point! Done.


Cheers,
Kowshik


On Tue, Mar 31, 2020 at 1:17 PM Boyang Chen 
wrote:

> Hey Kowshik,
>
> thanks for the revised KIP. Got a couple of questions:
>
> 1. "When is it safe for the brokers to begin handling EOS traffic" could be
> converted as "When is it safe for the brokers to start serving new
> Exactly-Once(EOS) semantics" since EOS is not explained earlier in the
> context.
>
> 2. In the *Explanation *section, the metadata version number part seems a
> bit blurred. Could you point a reference to later section that we going to
> store it in Zookeeper and update it every time when there is a feature
> change?
>
> 3. For the feature downgrade, although it's a Non-goal of the KIP, for
> features such as group coordinator semantics, there is no legal scenario to
> perform a downgrade at all. So having downgrade door open is pretty
> error-prone as human faults happen all the time. I'm assuming as new
> features are implemented, it's not very hard to add a flag during feature
> creation to indicate whether this feature is "downgradable". Could you
> explain a bit more on the extra engineering effort for shipping this KIP
> with downgrade protection in place?
>
> 4. "Each broker’s supported dictionary of feature versions will be defined
> in the broker code." So this means in order to restrict a certain feature,
> we need to start the broker first and then send a feature gating request
> immediately, which introduces a time gap and the intended-to-close feature
> could actually serve request during this phase. Do you think we should also
> support configurations as well so that admin user could freely roll up a
> cluster with all nodes complying the same feature gating, without worrying
> about the turnaround time to propagate the message only after the cluster
> starts up?
>
> 5. "adding a new Feature, updating or deleting an existing Feature", may be
> I misunderstood something, I thought the features are defined in broker
> code, so admin could not really create a new feature?
>
> 6. I think we need a separate error code like FEATURE_UPDATE_IN_PROGRESS to
> reject a concurrent feature update request.
>
> 7. I think we haven't discussed the alternative solution to pass the
> feature information through Zookeeper. Is that mentioned in the KIP to
> justify why using UpdateMetadata is more favorable?
>
> 8. I was under the impression that user could configure a range of
> supported versions, what's the trade-off for allowing single finalized
> version only?
>
> 9. One minor syntax fix: Note that here the "client" here may be a producer
>
> Boyang
>
> On Mon, Mar 30, 2020 at 4:53 PM Colin McCabe  wrote:
>
> > On Thu, Mar 26, 2020, at 19:24, Kowshik Prakasam wrote:
> > > Hi Colin,
> > >
> > > Thanks for the feedback! I've changed the KIP to address your
> > > suggestions.
> > > Please find below my explanation. Here is a link to KIP 584:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
> > > .
> > >
> > > 1. '__data_version__' is the version of the finalized feature metadata
> > > (i.e. actual ZK node contents), while the '__schema_version__' is the
> > > version of the schema of the data persisted in ZK. T

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-01 Thread Kowshik Prakasam
pe":  "string", "versions":  "3+",
> >   "about": "The name of the feature."},
> > {"name":  "Version", "type":  "int64", "versions":  "3+",
> >   "about": "The finalized version for the feature."}
> >   ]
> >
> > 105. kafka-features.sh: Instead of using update/delete, perhaps it's
> better
> > to use enable/disable?
> >
> > Jun
> >
> > On Tue, Mar 31, 2020 at 5:29 PM Kowshik Prakasam  >
> > wrote:
> >
> > > Hey Boyang,
> > >
> > > Thanks for the great feedback! I have updated the KIP based on your
> > > feedback.
> > > Please find my response below for your comments, look for sentences
> > > starting
> > > with "(Kowshik)" below.
> > >
> > >
> > > > 1. "When is it safe for the brokers to begin handling EOS traffic"
> > could
> > > be
> > > > converted as "When is it safe for the brokers to start serving new
> > > > Exactly-Once(EOS) semantics" since EOS is not explained earlier in
> the
> > > > context.
> > >
> > > (Kowshik): Great point! Done.
> > >
>
> > > 2. In the *Explanation *section, the metadata version number part seems
> > a
> > > > bit blurred. Could you point a reference to later section that we
> going
> > > to
> > > > store it in Zookeeper and update it every time when there is a
> feature
> > > > change?
> > >
> > > (Kowshik): Great point! Done. I've added a reference in the KIP.
> > >
> > >
> > > > 3. For the feature downgrade, although it's a Non-goal of the KIP,
> for
> > > > features such as group coordinator semantics, there is no legal
> > scenario
> > > to
> > > > perform a downgrade at all. So having downgrade door open is pretty
> > > > error-prone as human faults happen all the time. I'm assuming as new
> > > > features are implemented, it's not very hard to add a flag during
> > feature
> > > > creation to indicate whether this feature is "downgradable". Could
> you
> > > > explain a bit more on the extra engineering effort for shipping this
> > KIP
> > > > with downgrade protection in place?
> > >
> > > (Kowshik): Great point! I'd agree and disagree here. While I agree that
> > > accidental
> > > downgrades can cause problems, I also think sometimes downgrades should
> > > be allowed for emergency reasons (not all downgrades cause issues).
> > > It just depends on the feature being downgraded.
> > >
> > > To be more strict about feature version downgrades, I have modified the
> > KIP
> > > proposing that we mandate a `--force-downgrade` flag be used in the
> > > UPDATE_FEATURES api
> > > and the tooling, whenever the human is downgrading a finalized feature
> > > version.
> > > Hopefully this should cover the requirement, until we find the need for
> > > advanced downgrade support.
> > >
> >
> +1 for adding this flag.
>
> > > > 4. "Each broker’s supported dictionary of feature versions will be
> > > defined
> > > > in the broker code." So this means in order to restrict a certain
> > > feature,
> > > > we need to start the broker first and then send a feature gating
> > request
> > > > immediately, which introduces a time gap and the intended-to-close
> > > feature
> > > > could actually serve request during this phase. Do you think we
> should
> > > also
> > > > support configurations as well so that admin user could freely roll
> up
> > a
> > > > cluster with all nodes complying the same feature gating, without
> > > worrying
> > > > about the turnaround time to propagate the message only after the
> > cluster
> > > > starts up?
> > >
> > > (Kowshik): This is a great point/question. One of the expectations out
> of
> > > this KIP, which is
> > > already followed in the broker, is the following.
> > >  - Imagine at time T1 the broker starts up and registers its presence
> > > in ZK,
> > >    along with advertising its supported features.
> > >  - Imagine at a future time T2 the broker receives the
> > > UpdateMetadataRequest
> > 

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-03 Thread Kowshik Prakasam
Hey Boyang,

Great point! You are right, thanks for the suggestion!
Yes, we can just use ZK watches to propagate finalized features
information. I have updated the KIP write up with this change.
As a result, I feel the design is simpler as we have also eliminated
the changes to UpdateMetadataRequest.
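
As a rough sketch of that propagation path (error handling and session
management elided; this is illustrative, not the KIP's implementation), each
broker could watch the '/features' znode like so:

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class FeaturesWatchSketch {
    // Reads '/features', registers a watch, and re-arms the watch on change.
    static void watchFeatures(ZooKeeper zk) throws KeeperException, InterruptedException {
        Stat stat = new Stat();
        byte[] data = zk.getData("/features", event -> {
            try {
                watchFeatures(zk);  // znode changed: re-read and re-watch
            } catch (Exception e) {
                // reconnect/retry elided in this sketch
            }
        }, stat);
        // stat.getVersion() is the znode version, used as the features epoch
        applyFinalizedFeatures(data, stat.getVersion());
    }

    static void applyFinalizedFeatures(byte[] data, int epoch) {
        // update the broker's local cache of finalized features
    }
}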

You are right, after exploring/discussing KIP-500 further, we have now
realized that taking a ZK dependency here in this KIP just for reads is OK.
The future migration path off ZK (in post ZK world) will simply involve
reading the finalized features from the controller quorum via the new
MetadataFetch API that's proposed in KIP-500.

Also note that in the latest KIP write-up, the features metadata epoch
is just the ZK node version (as suggested by Jun).
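
Since the watches make this metadata eventually consistent across brokers, a
client-side sketch of a "latest epoch trumps older" rule for consumers of the
metadata could look like this (FinalizedFeaturesMetadata is a placeholder
type, not a KIP class):

public class EpochRuleSketch {

    static final class FinalizedFeaturesMetadata {
        final long epoch;  // e.g. the '/features' znode version
        FinalizedFeaturesMetadata(long epoch) { this.epoch = epoch; }
    }

    private FinalizedFeaturesMetadata current;

    synchronized void maybeUpdate(FinalizedFeaturesMetadata incoming) {
        if (current == null || incoming.epoch > current.epoch) {
            current = incoming;  // newer epoch wins
        }
        // stale (lower-epoch) responses from lagging brokers are ignored
    }
}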

Hey Colin,

Please feel free to let us know if you have any questions or concerns
on the above.


Cheers,
Kowshik

On Thu, Apr 2, 2020 at 10:39 AM Boyang Chen 
wrote:

> Thanks for the reply. The only remaining question is the propagation path.
> KIP-500 only restricts `write access` to the controller, in a sense that
> brokers in the pre-KIP-500 world could still listen to Zookeeper
> notifications. Thus, we are open to discussing the engineering effort to go
> through Zookeeper vs UpdateMetadata routing. What's your opinion on this
> matter? Will either path be significantly simpler than the other?
>
> Boyang
>
> On Wed, Apr 1, 2020 at 12:10 AM Kowshik Prakasam 
> wrote:
>
> > Hey Boyang,
> >
> > Thanks for the feedback! Please find below my response to your latest
> > comments.
> > I have modified the KIP wherever possible to address the comments.
> >
> > > My point is that during a bootstrapping stage of a cluster, we could
> not
> > > pick the desired feature version as no controller is actively handling
> > our
> > > request.
> >
> > (Kowshik): Note that just deploying the latest broker binary does not
> > always mean that the
> > new version of a certain feature will be automatically activated.
> Enabling
> > the effects of the
> > actual feature version is still left to the discretion of the
> > implementation logic for  the feature.
> > For example, for safety reasons, the feature can still be gated behind a
> > dynamic config
> > and later activated when the time comes.
> >
> > > Feature changes should be roughly the same frequency as config changes.
> > > Today, the dynamic configuration changes are propagated via Zookeeper.
> > > So I guess propagating through UpdateMetadata doesn't get us more
> > benefits,
> > > while going through ZK notification should be a simpler solution.
> >
> > (Kowshik): Maybe I'm missing something, but were you suggesting we should
> > have these
> > notifications delivered to the brokers directly via ZK? Note that with
> > KIP-500 (where we are replacing ZK),
> > for the bridge release we prefer that we will perform all access to ZK in
> > the controller,
> > rather than in other brokers, clients, or tools. Therefore, although ZK
> > will still be
> > required for the bridge release, it will be a well-isolated dependency.
> > Please read
> > this section of KIP-500:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum#KIP-500:ReplaceZooKeeperwithaSelf-ManagedMetadataQuorum-BridgeRelease
> > .
> >
> > Therefore, the existing approach in the KIP is future-proof with regard
> > to the above requirement.
> > We deliver the ZK notification only via the controller's
> > `UpdateMetadataRequest` to the brokers.
> > We also access ZK only always via the controller.
> >
> > > Understood, I don't feel strong about deprecation, but does the current
> > KIP
> > > keep the door open for future improvements if
> > > someone has a need for feature deprecation? Could we briefly discuss
> > about
> > > it in the future work section?
> >
> > (Kowshik): Done. Please refer to the 'Future work' section:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Futurework
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> > On Tue, Mar 31, 2020 at 9:12 PM Boyang Chen 
> > wrote:
> >
> > > Thanks Kowshik, the answers are making sense. Some follow-ups:
&

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-03 Thread Kowshik Prakasam
> 104.2 The new fields have the following versions. Why are the versions 3+
> when the top version is bumped to 6?
>   "fields":  [
> {"name": "Name", "type":  "string", "versions":  "3+",
>   "about": "The name of the feature."},
> {"name":  "Version", "type":  "int64", "versions":  "3+",
>   "about": "The finalized version for the feature."}
>   ]

(Kowshik): With the new improved design, we have completely eliminated the
need to
use UpdateMetadataRequest. This is because we now rely on ZK to deliver the
notifications for changes to the '/features' ZK node.

> 105. kafka-features.sh: Instead of using update/delete, perhaps it's
better
> to use enable/disable?

(Kowshik): For delete, yes, I have changed it so that we instead call it
'disable'.
However for 'update', it can now also refer to either an upgrade or a
forced downgrade.
Therefore, I have left it the way it is, calling it just 'update'.


Cheers,
Kowshik

On Tue, Mar 31, 2020 at 6:51 PM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the KIP. Looks good overall. A few comments below.
>
> 100. UpdateFeaturesRequest/UpdateFeaturesResponse
> 100.1 Since this request waits for responses from brokers, should we add a
> timeout in the request (like createTopicRequest)?
> 100.2 The response schema is a bit weird. Typically, the response just
> shows an error code and an error message, instead of echoing the request.
> 100.3 Should we add a separate request to list/describe the existing
> features?
> 100.4 We are mixing ADD_OR_UPDATE and DELETE in a single request. For
> DELETE, the version field doesn't make sense. So, I guess the broker just
> ignores this? An alternative way is to have a separate
> DeleteFeaturesRequest
> 100.5 In UpdateFeaturesResponse, we have "The monotonically increasing
> version of the metadata for finalized features." I am wondering why the
> ordering is important?
> 100.6 Could you specify the required ACL for this new request?
>
> 101. For the broker registration ZK node, should we bump up the version in
> the json?
>
> 102. For the /features ZK node, not sure if we need the epoch field. Each
> ZK node has an internal version field that is incremented on every update.
>
> 103. "Enabling the actual semantics of a feature version cluster-wide is
> left to the discretion of the logic implementing the feature (ex: can be
> done via dynamic broker config)." Does that mean the broker registration ZK
> node will be updated dynamically when this happens?
>
> 104. UpdateMetadataRequest
> 104.1 It would be useful to describe when the feature metadata is included
> in the request. My understanding is that it's only included if (1) there is
> a change to the finalized feature; (2) broker restart; (3) controller
> failover.
> 104.2 The new fields have the following versions. Why are the versions 3+
> when the top version is bumped to 6?
>   "fields":  [
> {"name": "Name", "type":  "string", "versions":  "3+",
>   "about": "The name of the feature."},
> {"name":  "Version", "type":  "int64", "versions":  "3+",
>   "about": "The finalized version for the feature."}
>   ]
>
> 105. kafka-features.sh: Instead of using update/delete, perhaps it's better
> to use enable/disable?
>
> Jun
>
> On Tue, Mar 31, 2020 at 5:29 PM Kowshik Prakasam 
> wrote:
>
> > Hey Boyang,
> >
> > Thanks for the great feedback! I have updated the KIP based on your
> > feedback.
> > Please find my response below for your comments, look for sentences
> > starting
> > with "(Kowshik)" below.
> >
> >
> > > 1. "When is it safe for the brokers to begin handling EOS traffic"
> could
> > be
> > > converted as "When is it safe for the brokers to start serving new
> > > Exactly-Once(EOS) semantics" since EOS is not explained earlier in the
> > > context.
> >
> > (Kowshik): Great point! Done.
> >
> > > 2. In the *Explanation *section, the metadata version number part
> seems a
> > > bit blurred. Could you point a reference to later section that we going
> > to
> > > store it in Zookeeper and update it every time when there is a feature
> > > change?
> >
> > (Kowshik): Great point! Done. I've

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-03 Thread Kowshik Prakasam
Hi all,

Any other feedback on this KIP before we start the vote?


Cheers,
Kowshik

On Fri, Apr 3, 2020 at 1:27 AM Kowshik Prakasam 
wrote:

> Hey Jun,
>
> Thanks a lot for the great feedback! Please note that the design
> has changed a little bit on the KIP, and we now propagate the finalized
> features metadata only via ZK watches (instead of UpdateMetadataRequest
> from the controller).
>
> Please find below my response to your questions/feedback, with the prefix
> "(Kowshik):".
>
> > 100. UpdateFeaturesRequest/UpdateFeaturesResponse
> > 100.1 Since this request waits for responses from brokers, should we add
> > a timeout in the request (like createTopicRequest)?
>
> (Kowshik): Great point! Done. I have added a timeout field. Note: we no
> longer
> wait for responses from brokers, since the design has been changed so that
> the
> features information is propagated via ZK. Nevertheless, it is right to
> have a timeout
> for the request.
>
> > 100.2 The response schema is a bit weird. Typically, the response just
> > shows an error code and an error message, instead of echoing the request.
>
> (Kowshik): Great point! Yeah, I have modified it to just return an error
> code and a message.
> Previously it was not echoing the "request", rather it was returning the
> latest set of
> cluster-wide finalized features (after applying the updates). But you are
> right,
> the additional info is not required, so I have removed it from the
> response schema.
>
> > 100.3 Should we add a separate request to list/describe the existing
> > features?
>
> (Kowshik): This is already present in the KIP via the 'DescribeFeatures'
> Admin API,
> which, under the covers, uses the ApiVersionsRequest to list/describe the
> existing features. Please read the 'Tooling support' section.
>
> > 100.4 We are mixing ADD_OR_UPDATE and DELETE in a single request. For
> > DELETE, the version field doesn't make sense. So, I guess the broker just
> > ignores this? An alternative way is to have a separate
> > DeleteFeaturesRequest
>
> (Kowshik): Great point! I have modified the KIP now to have 2 separate
> controller APIs
> serving these different purposes:
> 1. updateFeatures
> 2. deleteFeatures
>
> > 100.5 In UpdateFeaturesResponse, we have "The monotonically increasing
> > version of the metadata for finalized features." I am wondering why the
> > ordering is important?
>
> (Kowshik): In the latest KIP write-up, it is called epoch (instead of
> version), and
> it is just the ZK node version. Basically, this is the epoch for the
> cluster-wide
> finalized feature version metadata. This metadata is served to clients via
> the
> ApiVersionsResponse (for reads). We propagate updates from the '/features'
> ZK node
> to all brokers, via ZK watches set up by each broker on the '/features'
> node.
>
> Now here is why the ordering is important:
> ZK watches don't propagate at the same time. As a result, the
> ApiVersionsResponse is eventually consistent across brokers. This can
> introduce cases where clients see an older, lower epoch of the features
> metadata after a more recent, higher epoch was returned at a previous
> point in time. We expect clients to always employ the rule that a higher
> received epoch of metadata trumps a lower one. Clients that are external
> to Kafka should strongly consider discovering the latest metadata once
> during startup from the brokers, and, if required, refreshing the
> metadata periodically.
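>
> For illustration, here is a minimal client-side sketch of that rule (the
> class and field names are hypothetical, not from the KIP):
>
>   import java.util.Map;
>
>   // Keeps only the freshest finalized-features metadata; stale epochs are dropped.
>   final class FinalizedFeaturesCache {
>       private long epoch = -1;
>       private Map<String, Long> finalizedFeatures = Map.of();
>
>       synchronized void maybeUpdate(long newEpoch, Map<String, Long> newFeatures) {
>           if (newEpoch > epoch) {   // a higher received epoch trumps a lower one
>               epoch = newEpoch;
>               finalizedFeatures = newFeatures;
>           }                         // else: stale metadata from a lagging broker
>       }
>   }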
>
> > 100.6 Could you specify the required ACL for this new request?
>
> (Kowshik): What is an ACL, and how can I find out which one to specify?
> Could you please provide some pointers? I'll be glad to update the
> KIP once I know the next steps.
>
> > 101. For the broker registration ZK node, should we bump up the version
> > in the json?
>
> (Kowshik): Great point! Done. I've increased the version in the broker
> json by 1.
>
> > 102. For the /features ZK node, not sure if we need the epoch field. Each
> > ZK node has an internal version field that is incremented on every update.
>
> (Kowshik): Great point! Done. I'm using the ZK node version now, instead
> of an explicitly incremented epoch.
>
> > 103. "Enabling the actual semantics of a feature version cluster-wide is
> > left to the discretion of the logic implementing the feature (ex: can be
> > done via dynamic broker config)." Does that mean the broker registration
> ZK
> > node will be updated dynamically wh

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-03 Thread Kowshik Prakasam
Hey Boyang,

Thanks for the feedback! I've updated the KIP. Please find below my
response.

> 1. Do you mind updating the non-goal section as we are introducing a
> --feature-force-downgrade to address downgrade concern?

(Kowshik): This was already mentioned. Look for non-goal: 1-b.

> 2. For the flags, `--feature` seems to be a redundant prefix, as the script
> is already called `kafka-features.sh`. They could just be called
> `--upgrade` and `--force-downgrade`.

(Kowshik): Great point! Done.

> 3. I don't feel strongly about requiring a confirmation for a normal
> feature upgrade, unless other existing scripts do so.

(Kowshik): Done. Removed now. We now ask for a confirmation only for
downgrades.

> 4. How could we know the existing feature versions when users are only
> executing upgrades? Does the `kafka-features.sh` script always send a
> DescribeFeatureRequest to the broker first?

(Kowshik): For deletes, yes, the tool will make an ApiVersionsRequest call
to show the versions of the features. Perhaps the ApiVersionsRequest can be
sent to just the controller to avoid questions on consistency, but that's
an implementation detail.
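
For instance, a client could surface those versions via the proposed Admin
API along these lines (a rough sketch; describeFeatures() is the KIP's
proposed API and its final shape may differ):

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class ShowFeatures {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // Under the covers this maps to an ApiVersionsRequest-style read
                // of the supported and finalized feature versions.
                System.out.println(admin.describeFeatures().featureMetadata().get());
            }
        }
    }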

> 5. I'm not 100% sure, but a script usually uses the same flag only once,
> so maybe we should also do that for `--upgrade-feature`? Instead of
> flagging twice for different features, a comma-separated list of
> (feature:max_version) would be expected, or something like that.

(Kowshik): Done. I'm using a comma-separated list now.

> 6. "The node data shall be readable via existing ZK tooling" Just trying
to
> clarify, we are not introducing ZK direct read tool in this KIP correct?
As
> for KIP-500 <https://issues.apache.org/jira/browse/KIP-500> we are
eventually going to deprecate all direct ZK access tools.

(Kowshik): Done. Yes, we are not intending to add such a tool. I was just
saying that if we ever want to read it from ZK, then it's readable via the
ZK CLI (in the interim). I have modified the text to convey the intent to
support reads via ApiVersionsRequest only (optionally this request can be
directed at the controller to avoid questions on consistency, but that's
an implementation detail).

> 7. Could we have a centralized section called `Public Interfaces` to
> summarize all the public API changes? This is a required section in a KIP.
> And we should also write down the new error codes we will be introducing
in
> this KIP, and include both new and old error codes in the Response schema
> comment if possible. For example, UpdateFeatureResponse could expect a
> `NOT_CONTROLLER` error code.

(Kowshik): Done. The error codes have been documented in the response
schemas now.
Added a new section titled "New or Changed Public Interfaces" summarizing
only the
changes made to the public interfaces.


Cheers,
Kowshik


On Fri, Apr 3, 2020 at 9:39 AM Boyang Chen 
wrote:

> Hey Kowshik,
>
> thanks for getting the KIP updated. The Zookeeper routing approach makes
> sense and simplifies the changes.
> Some follow-ups:
>
> 1. Do you mind updating the non-goal section as we are introducing a
> --feature-force-downgrade to address downgrade concern?
>
> 2. For the flags, `--feature` seems to be a redundant prefix, as the script
> is already called `kafka-features.sh`. They could just be called
> `--upgrade` and `--force-downgrade`.
>
> 3. I don't feel strongly about requiring a confirmation for a normal
> feature upgrade, unless other existing scripts do so.
>
> 4. How could we know the existing feature versions when users are only
> executing upgrades? Does the `kafka-features.sh` script always send a
> DescribeFeatureRequest to the broker first?
>
> 5. I'm not 100% sure, but a script usually uses the same flag only once,
> so maybe we should also do that for `--upgrade-feature`? Instead of
> flagging twice for different features, a comma-separated list of
> (feature:max_version) would be expected, or something like that.
>
> 6. "The node data shall be readable via existing ZK tooling" Just trying to
> clarify, we are not introducing ZK direct read tool in this KIP correct? As
> for KIP-500 <https://issues.apache.org/jira/browse/KIP-500> we are
> eventually going to deprecate all direct ZK access tools.
>
> 7. Could we have a centralized section called `Public Interfaces` to
> summarize all the public API changes? This is a required section in a KIP.
> And we should also write down the new error codes we will be introducing in
> this KIP, and include both new and old error codes in the Response schema
> comment if possible. For example, UpdateFeatureResponse could expect a
> `NOT_CONTROLLER` error code.
>
>
> Boyang
>
> On Fri, Apr 3, 2020 at 3:15 AM Kowshik Prakasam 
> wrote:
>
> > Hi all,
> >
> > Any other feedback on this KIP before we start the vote?

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-03 Thread Kowshik Prakasam
> 110. The minVersion/maxVersion for features use int64. Currently, our
> release version schema is major.minor.bugfix (e.g. 2.5.0). It's possible
> for new features to be included in minor releases too. Should we make the
> feature versioning match the release versioning?
>
> 111. "During regular operations, the data in the ZK node can be mutated
> only via a specific admin API served only by the controller." I am
> wondering why can't the controller auto finalize a feature version after
> all brokers are upgraded? For new users who download the latest version to
> build a new cluster, it's inconvenient for them to have to manually enable
> each feature.
>
> 112. DeleteFeaturesResponse: It seems the apiKey should be 49 instead of
> 48.
>
> Jun
>
>
> On Fri, Apr 3, 2020 at 1:27 AM Kowshik Prakasam 
> wrote:
>
> > Hey Jun,
> >
> > Thanks a lot for the great feedback! Please note that the design
> > has changed a little bit on the KIP, and we now propagate the finalized
> > features metadata only via ZK watches (instead of UpdateMetadataRequest
> > from the controller).
> >
> > Please find below my response to your questions/feedback, with the prefix
> > "(Kowshik):".
> >
> > > 100. UpdateFeaturesRequest/UpdateFeaturesResponse
> > > 100.1 Since this request waits for responses from brokers, should we
> add
> > a
> > > timeout in the request (like createTopicRequest)?
> >
> > (Kowshik): Great point! Done. I have added a timeout field. Note: we no
> > longer
> > wait for responses from brokers, since the design has been changed so
> that
> > the
> > features information is propagated via ZK. Nevertheless, it is right to
> > have a timeout
> > for the request.
> >
> > > 100.2 The response schema is a bit weird. Typically, the response just
> > > shows an error code and an error message, instead of echoing the
> request.
> >
> > (Kowshik): Great point! Yeah, I have modified it to just return an error
> > code and a message.
> > Previously it was not echoing the "request", rather it was returning the
> > latest set of
> > cluster-wide finalized features (after applying the updates). But you are
> > right,
> > the additional info is not required, so I have removed it from the
> response
> > schema.
> >
> > > 100.3 Should we add a separate request to list/describe the existing
> > > features?
> >
> > (Kowshik): This is already present in the KIP via the 'DescribeFeatures'
> > Admin API,
> > which, under the covers, uses the ApiVersionsRequest to list/describe the
> > existing features. Please read the 'Tooling support' section.
> >
> > > 100.4 We are mixing ADD_OR_UPDATE and DELETE in a single request. For
> > > DELETE, the version field doesn't make sense. So, I guess the broker
> just
> > > ignores this? An alternative way is to have a separate
> > DeleteFeaturesRequest
> >
> > (Kowshik): Great point! I have modified the KIP now to have 2 separate
> > controller APIs
> > serving these different purposes:
> > 1. updateFeatures
> > 2. deleteFeatures
> >
> > > 100.5 In UpdateFeaturesResponse, we have "The monotonically increasing
> > > version of the metadata for finalized features." I am wondering why the
> > > ordering is important?
> >
> > (Kowshik): In the latest KIP write-up, it is called epoch (instead of
> > version), and
> > it is just the ZK node version. Basically, this is the epoch for the
> > cluster-wide
> > finalized feature version metadata. This metadata is served to clients
> via
> > the
> > ApiVersionsResponse (for reads). We propagate updates from the
> '/features'
> > ZK node
> > to all brokers, via ZK watches set up by each broker on the '/features'
> > node.
> >
> > Now here is why the ordering is important:
> > ZK watches don't propagate at the same time. As a result, the
> > ApiVersionsResponse is eventually consistent across brokers. This can
> > introduce cases where clients see an older, lower epoch of the features
> > metadata after a more recent, higher epoch was returned at a previous
> > point in time. We expect clients to always employ the rule that a higher
> > received epoch of metadata trumps a lower one. Clients that are external
> > to Kafka should strongly consider discovering the latest metadata once
> > during startup from the brokers, and, if required, refreshing the
> > metadata periodically.
> >
> > > 100.6 Could you specify the required ACL for this new request?

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-03 Thread Kowshik Prakasam
Hi Colin,

Thanks for the feedback! I have updated the KIP based on your feedback.
Please find my response below.

> The discussion on ZooKeeper reads versus writes makes sense to me. The
> important thing to keep in mind here is that in the bridge release, all
> brokers can read from ZooKeeper, but only the controller writes.

(Kowshik): Great, thanks!

> Why do we need both UpdateFeaturesRequest and DeleteFeaturesRequest? It
> seems awkward to have "deleting" be a special case here when the general
> idea is that we have an RPC to change the supported feature flags.
> Changing the feature level from 2 to 1 doesn't seem that different from
> changing it from 1 to not present.

(Kowshik): Done, makes sense. I have updated the KIP to just use 1 API, and
that's the UpdateFeaturesRequest. For the deletion case, we can just ignore
the version number passed in the API (which is indicative of 'not present').

> It would be simpler to just say that a feature flag which doesn't appear
> in the znode is considered to be at version level 0. This will also
> simplify the code a lot, I think, since you won't have to keep track of
> tricky distinctions between "disabled" and "enabled at version 0." Then
> you would be able to just use an int in most places.

(Kowshik): I'm not sure I understood why we want to do it this way. If an
entry for some finalized feature is absent in the '/features' node,
alternatively we can just treat this as a feature whose version was never
finalized/enabled, or that was deleted at some point. Then, we can even
allow for "enabled at version 0", as the {minVersion, maxVersion} range
can be any valid range, not necessarily with minVersion > 0.

> (By the way, I would propose the term "version level" for this number,
> since it avoids confusion with all the other meanings of the word
> "version" that we have in the code.)

(Kowshik): Good idea! I have updated the KIP to refer to "version level"
instead of version.

> Another thing to keep in mind is that if a request RPC is batch, the
> corresponding response RPC also needs to be batch. In other words, you
> need multiple error codes, one for each feature flag whose level you are
> trying to change. Unless the idea is that the whole change is a
> transaction that all either happens or doesn't?

(Kowshik): Yes, the whole change is a transaction. Either all provided
FeatureUpdates are carried out in ZK, or none is. That's why we just
allow for a single error code field, as it is easier that way. This
transactional guarantee is mentioned under 'Proposed changes > New
controller API'.
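
For illustration, one way such an all-or-nothing update can be realized on
the controller is a compare-and-set on the '/features' node version (a
sketch using the raw ZooKeeper client; the actual controller code path may
differ):

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    class FeaturesZNode {
        // Applies a whole batch of feature updates as a single conditional
        // write: either every FeatureUpdate lands in ZK, or none of them does.
        static void applyBatch(ZooKeeper zk, byte[] newFeaturesJson) throws Exception {
            Stat stat = new Stat();
            zk.getData("/features", false, stat);  // read current content + version
            try {
                zk.setData("/features", newFeaturesJson, stat.getVersion());
            } catch (KeeperException.BadVersionException e) {
                // A concurrent update slipped in; the whole batch fails as a unit.
                throw new IllegalStateException("Concurrent update; retry the batch", e);
            }
        }
    }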

> Rather than FeatureUpdateType, I would just go with a boolean like
> "force." I'm not sure what other values we'd want to add to this later
> on, if it were an enum. I think the boolean is clearer.

(Kowshik): Since we have decided to go with just one API (i.e.
UpdateFeaturesRequest), it is better for FeatureUpdateType to be an enum
with multiple values. A FeatureUpdateType is tied to a feature, and the
possible values are: ADD_OR_UPDATE, ADD_OR_UPDATE_ALLOW_DOWNGRADE, DELETE.
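
To make the shape concrete, here is a sketch of that enum (mirroring the
names above; note this was simplified later in the thread):

    // Per-feature update type, as discussed at this point in the thread.
    enum FeatureUpdateType {
        ADD_OR_UPDATE,                  // raise (or set) the max version level
        ADD_OR_UPDATE_ALLOW_DOWNGRADE,  // same, but explicitly permit a downgrade
        DELETE                          // remove the finalized feature entirely
    }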

> This ties in with my comment earlier, but for the result classes, we need
> methods other than just "all". Batch operations aren't usable if you
> can't get the result per operation, unless the semantics are transactional
> and it really is just everything succeeded or everything failed.

(Kowshik): The semantics are transactional, as I explained above.

> There are a bunch of Java interfaces described like FinalizedFeature,
> FeatureUpdate, UpdateFeaturesResult, and so on that should just be
> regular concrete Java classes. In general we'd only use an interface if
> we wanted the caller to implement some kind of callback function. We
> don't make classes that are just designed to hold data into interfaces,
> since that just imposes extra work on callers (they have to define their
> own concrete class for each interface just to use the API.) There's also
> probably no reason to have these classes inherit from each other or have
> complex type relationships. One more nitpick is that Kafka generally
> doesn't use "get" in the function names of accessors.

(Kowshik): Done, I have changed the KIP. By 'interface', I just meant an
interface in the pseudocode sense (i.e. an abstraction providing at least
the specified behavior). Since that was a bit confusing, I have now renamed
it to a class. I have also eliminated the type relationships.
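
For example, the FeatureUpdate holder might end up looking roughly like the
following (field names are illustrative, not final):

    // A plain, concrete data class: no interface, no inheritance, and
    // accessors without the "get" prefix, per Kafka convention.
    public final class FeatureUpdate {
        private final String featureName;
        private final long maxVersionLevel;

        public FeatureUpdate(String featureName, long maxVersionLevel) {
            this.featureName = featureName;
            this.maxVersionLevel = maxVersionLevel;
        }

        public String featureName() { return featureName; }
        public long maxVersionLevel() { return maxVersionLevel; }
    }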


Cheers,
Kowshik

On Fri, Apr 3, 2020 at 5:54 PM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> Thanks for the feedback and suggestions. Please find my response below.
>
> > 100.6 For every new request, the admin needs to control who is allowed to
> > issue that request if security is enabled.

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-05 Thread Kowshik Prakasam
Hi Colin,

Thanks a lot for the explanation! I've updated the KIP based on your
suggestions. Please find my response to your comments below.

> If you can just treat "not present" as version level 0, you can have just
> checks like the second one. This should lead to simpler code.

(Kowshik): Good idea! Done. I've updated the KIP to eliminate
FeatureUpdateType.DELETE, and to instead use a version level value < 1 to
indicate feature deletion or, more generally, the absence of a feature.
Thanks for the idea!
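
With that convention, a broker-side check collapses into a single
comparison, along these lines (a sketch; the helper name is hypothetical):

    import java.util.Map;

    class FeatureChecks {
        // "Deleted" and "never finalized" are both just version level 0 (or below).
        static boolean isEnabled(Map<String, Long> finalizedFeatures, String feature) {
            return finalizedFeatures.getOrDefault(feature, 0L) >= 1;
        }
    }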

> I guess this ties in with the discussion above-- I would rather not have
> a "deleted" state. It doesn't seem to make anything more expressive, and
> it complicates the code.

(Kowshik): As mentioned above, I've eliminated FeatureUpdateType.DELETE
now. Thanks for the idea!


Cheers,
Kowshik

On Sat, Apr 4, 2020 at 8:32 PM Colin McCabe  wrote:

> On Fri, Apr 3, 2020, at 20:32, Kowshik Prakasam wrote:
> > > Colin wrote:
> > > It would be simpler to just say that a feature flag which doesn't
> appear
> > > in the znode is considered to be at version level 0.  This will also
> > > simplify the code a lot, I think, since you won't have to keep track of
> > > tricky distinctions between "disabled" and "enabled at version 0."
> > > Then you would be able to just use an int in most places.
> >
> > (Kowshik): I'm not sure I understood why we want to do it this way. If an
> > entry for some finalized feature is absent in the '/features' node,
> > alternatively we can just treat this as a feature whose version was never
> > finalized/enabled, or that was deleted at some point. Then, we can even
> > allow for "enabled at version 0", as the {minVersion, maxVersion} range
> > can be any valid range, not necessarily with minVersion > 0.
>
> Think about the following pseudocode.  Which is simpler:
>
> > if (feature is not present) || (feature level < 1) {
> > ... something ...
> > } else {
> > ... something ...
> > }
>
> or
>
> > if (feature level < 1) {
> > ... something ...
> > } else {
> > ... something ...
> > }
>
> If you can just treat "not present" as version level 0, you can have just
> checks like the second one.  This should lead to simpler code.
>
> > (Kowshik): Yes, the whole change is a transaction. Either all provided
> > FeatureUpdates are carried out in ZK, or none is. That's why we just
> > allow for a single error code field, as it is easier that way. This
> > transactional guarantee is mentioned under 'Proposed changes > New
> > controller API'.
>
> That makes sense, thanks.
>
> > > Rather than FeatureUpdateType, I would just go with a boolean like
> > > "force."  I'm not sure what other values we'd want to add to this
> later on,
> > > if it were an enum.  I think the boolean is clearer.
> >
> > (Kowshik): Since we have decided to go with just one API (i.e.
> > UpdateFeaturesRequest), it is better for FeatureUpdateType to be an enum
> > with multiple values. A FeatureUpdateType is tied to a feature, and the
> > possible values are: ADD_OR_UPDATE, ADD_OR_UPDATE_ALLOW_DOWNGRADE,
> > DELETE.
>
> I guess this ties in with the discussion above-- I would rather not have a
> "deleted" state.  It doesn't seem to make anything more expressive, and it
> complicates the code.
>
> >
> > > This ties in with my comment earlier, but for the result classes, we
> need
> > > methods other than just "all".  Batch operations aren't usable if
> > > you can't get the result per operation unless the semantics are
> > > transactional and it really is just everything succeeded or everything
> > > failed.
> >
> > (Kowshik): The semantics are transactional, as I explained above.
>
> Thanks for the clarification.
>
> >
> > > There are a bunch of Java interfaces described like FinalizedFeature,
> > > FeatureUpdate, UpdateFeaturesResult, and so on that should just be
> > > regular concrete Java classes.  In general we'd only use an interface
> if
> > > we wanted the caller to implement some kind of callback function. We
> > > don't make classes that are just designed to hold data into interfaces,
> > > since that just imposes extra work on callers (they have to define
> > > their own concrete class for each interface just to use the API.)
> > >  There's also probably no reason to have these classes inherit from
> each
> > > other or have complex type relationships.

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-06 Thread Kowshik Prakasam
> the validation procedure is actually executed at
> both the broker side as well as the controller:
>
> 3.a) the broker reads the cluster-level feature vectors from ZK directly
> and compare with its own supported versions; if the validation fails, it
> will shutdown itself, otherwise, proceed normally.
> 3.b) upon being notified through ZK watchers of the newly registered
> broker, the controller will ALSO execute the validation comparing its
> registry's supported feature versions with the cluster-level feature
> vectors; if the validation fails, the controller will stop the remainder of
> the new-broker-startup procedure like potentially adding it to some
> partition's replica list or moving leaders to it.
>
> The key here is that 3.b) on the controller side is serially executed with
> all other controller operations, including the add/update/delete-feature
> request handling. So if the broker-startup registration is executed first, then
> the later update-feature request which would make the broker incompatible
> would be rejected; if the update-feature request is handled first, then the
> broker-startup logic would abort since the validation fails. In that sense,
> there would be no race condition windows -- of course that's based on my
> understanding that validation is also executed on the controller side.
> Please let me know if that makes sense?
>
>
> Guozhang
>
>
> On Sat, Apr 4, 2020 at 8:36 PM Colin McCabe  wrote:
>
> > On Fri, Apr 3, 2020, at 11:24, Jun Rao wrote:
> > > Hi, Kowshik,
> > >
> > > Thanks for the reply. A few more comments below.
> > >
> > > 100.6 For every new request, the admin needs to control who is allowed
> > > to issue that request if security is enabled. So, we need to assign the
> > new
> > > request a ResourceType and possible AclOperations. See
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
> > > as an example.
> > >
> >
> > Yeah, agreed.  To be more specific, the permissions required for this
> > should be Alter on Cluster, right?  It's certainly something only system
> > administrators should be doing (KIP-455 also specifies ALTER on CLUSTER)
> >
> > best,
> > Colin
> >
> >
> > > 105. If we change delete to disable, it's better to do this
> > > consistently in the request protocol and admin API as well.
> > >
> > > 110. The minVersion/maxVersion for features use int64. Currently, our
> > > release version schema is major.minor.bugfix (e.g. 2.5.0). It's
> possible
> > > for new features to be included in minor releases too. Should we make
> the
> > > feature versioning match the release versioning?
> > >
> > > 111. "During regular operations, the data in the ZK node can be mutated
> > > only via a specific admin API served only by the controller." I am
> > > wondering why can't the controller auto finalize a feature version
> after
> > > all brokers are upgraded? For new users who download the latest version
> > to
> > > build a new cluster, it's inconvenient for them to have to manually
> > enable
> > > each feature.
> > >
> > > 112. DeleteFeaturesResponse: It seems the apiKey should be 49 instead
> of
> > 48.
> > >
> > > Jun
> > >
> > >
> > > On Fri, Apr 3, 2020 at 1:27 AM Kowshik Prakasam <
> kpraka...@confluent.io>
> > > wrote:
> > >
> > > > Hey Jun,
> > > >
> > > > Thanks a lot for the great feedback! Please note that the design
> > > > has changed a little bit on the KIP, and we now propagate the
> finalized
> > > > features metadata only via ZK watches (instead of
> UpdateMetadataRequest
> > > > from the controller).
> > > >
> > > > Please find below my response to your questions/feedback, with the
> > prefix
> > > > "(Kowshik):".
> > > >
> > > > > 100. UpdateFeaturesRequest/UpdateFeaturesResponse
> > > > > 100.1 Since this request waits for responses from brokers, should
> we
> > add
> > > > a
> > > > > timeout in the request (like createTopicRequest)?
> > > >
> > > > (Kowshik): Great point! Done. I have added a timeout field. Note: we
> no
> > > > longer
> > > > wait for responses from brokers, since the design has been changed so
> > that
> > > > the
> > > > features 

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-07 Thread Kowshik Prakasam
Hi Colin,

Thanks for the feedback and suggestions! It is a great idea to provide a
`--finalize-latest` flag. I agree it's a burden to ask the user to manually
upgrade each feature to the latest version after a release.

I have now updated the KIP adding this idea.

> What about a simple solution to this problem where we add a flag to the
> command-line tool like --enable-latest? The command-line tool could query
> what the highest possible versions for each feature were (using the API)
> and then make another RPC to enable the latest features.

(Kowshik): I've updated the KIP with the above idea, please look at this
section (point #3 and the tooling example later):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Toolingsupport
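
A rough sketch of what the tool could do under this flag, assuming the
KIP's proposed APIs (the FeaturesAdmin facade below is hypothetical, for
illustration only):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical facade over the KIP's proposed Admin APIs.
    interface FeaturesAdmin {
        Map<String, Long> describeSupportedFeatures();  // feature -> max supported version
        void updateFeatures(Map<String, Long> targetVersionLevels);
    }

    class FinalizeLatest {
        // Sketch of --finalize-latest: query the highest supported version of
        // each feature, then make one update call finalizing all of them.
        static void run(FeaturesAdmin admin) {
            Map<String, Long> latest = new HashMap<>(admin.describeSupportedFeatures());
            admin.updateFeatures(latest);
        }
    }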


> I think this is actually much easier than the version number solution.
> The version string solution requires us to maintain a complicated mapping
> table between version strings and features. In practice, we also have
> "internal versions" in ApiVersion.scala like 2.4IV0, 2.4IV1, and so on.
> This isn't simple for users to understand or use.

> It's also hard to know what the difference is between different version
> strings. For example, there's actually no difference between 2.5IV0 and
> 2.4IV1, but you wouldn't know that unless you read the comments in
> ApiVersion.scala. A system administrator who didn't know this might end up
> doing a cluster roll to upgrade the IBP that turned out to be unnecessary.

(Kowshik): Yes, I can see the disadvantages!


Cheers,
Kowshik



On Mon, Apr 6, 2020 at 3:46 PM Colin McCabe  wrote:

> Hi Jun,
>
> I agree that asking the user to manually upgrade all features to the
> latest version is a burden.  Then the user has to know what the latest
> version of every feature is when upgrading.
>
> What about a simple solution to this problem where we add a flag to the
> command-line tool like --enable-latest?  The command-line tool could query
> what the highest possible versions for each feature were (using the API)
> and then make another RPC to enable the latest features.
>
> I think this is actually much easier than the version number solution.
> The version string solution requires us to maintain a complicated mapping
> table between version strings and features.  In practice, we also have
> "internal versions" in ApiVersion.scala like 2.4IV0, 2.4IV1, and so on.
> This isn't simple for users to understand or use.
>
> It's also hard to know what the difference is between different version
> strings.  For example, there's actually no difference between 2.5IV0 and
> 2.4IV1, but you wouldn't know that unless you read the comments in
> ApiVersion.scala.  A system administrator who didn't know this might end up
> doing a cluster roll to upgrade the IBP that turned out to be unnecessary.
>
> best,
> Colin
>
>
> On Mon, Apr 6, 2020, at 12:06, Jun Rao wrote:
> > Hi, Kowshik,
> >
> > Thanks for the reply. A few more replies below.
> >
> > 100.6 You can look for the sentence "This operation requires ALTER on
> > CLUSTER." in KIP-455. Also, you can check its usage in
> > KafkaApis.authorize().
> >
> > 110. From the external client/tooling perspective, it's more natural to
> use
> > the release version for features. If we can use the same release version
> > for internal representation, it seems simpler (easier to understand, no
> > mapping overhead, etc). Is there a benefit with separate external and
> > internal versioning schemes?
> >
> > 111. To put this in context, when we had IBP, the default value is the
> > current released version. So, if you are a brand new user, you don't need
> > to configure IBP and all new features will be immediately available in
> the
> > new cluster. If you are upgrading from an old version, you do need to
> > understand and configure IBP. I see a similar pattern here for
> > features. From the ease of use perspective, ideally, we shouldn't
> require a
> > new user to have an extra step such as running a bootstrap script unless
> > it's truly necessary. If someone has a special need (all the cases you
> > mentioned seem special cases?), they can configure a mode such that
> > features are enabled/disabled manually.
> >
> > Jun
> >
> > On Fri, Apr 3, 2020 at 5:54 PM Kowshik Prakasam 
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the feedback and suggestions. Please find my response below.
> > >
> > > > 100.6 For every new request, the admin needs to control who is
> > > > allowed to issue that request if security is enabled.

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-07 Thread Kowshik Prakasam
Hi Jun,

I have updated the KIP for item 111.
I'm in the process of addressing 100.6, and will provide an update soon.
I think item 110 is still under discussion given we are now providing a way
to finalize
all features to their latest version levels. In any case, please let us know
how you feel in response to Colin's comments on this topic.

> 111. To put this in context, when we had IBP, the default value is the
> current released version. So, if you are a brand new user, you don't need
> to configure IBP and all new features will be immediately available in the
> new cluster. If you are upgrading from an old version, you do need to
> understand and configure IBP. I see a similar pattern here for
> features. From the ease of use perspective, ideally, we shouldn't require
> a new user to have an extra step such as running a bootstrap script unless
> it's truly necessary. If someone has a special need (all the cases you
> mentioned seem special cases?), they can configure a mode such that
> features are enabled/disabled manually.

(Kowshik): That makes sense, thanks for the idea! Sorry if I didn't
understand
this need earlier. I have updated the KIP with the approach that whenever
the '/features' node is absent, the controller by default will bootstrap
the node
to contain the latest feature levels. Here is the new section in the KIP
describing
the same:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Controller:ZKnodebootstrapwithdefaultvalues
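
In controller pseudocode, the bootstrap could behave roughly as follows (a
sketch; the ZK facade and its method names are hypothetical):

    class FeaturesBootstrap {
        // Hypothetical minimal ZK facade, for illustration only.
        interface Zk {
            boolean pathExists(String path);
            void createPath(String path, byte[] data);
        }

        // On controller election: seed '/features' with the latest supported
        // version levels, but only if the node does not exist yet (new cluster).
        static void maybeBootstrap(Zk zk, byte[] latestFeatureLevelsJson) {
            if (!zk.pathExists("/features")) {
                zk.createPath("/features", latestFeatureLevelsJson);
            }
            // Existing clusters keep their node; operators change it via the API.
        }
    }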

Next, as I explained in my response to Colin's suggestions, we are now
providing a `--finalize-latest-features` flag with the tooling. This lets
the sysadmin finalize all features known to the controller to their latest
version
levels. Please look at this section (point #3 and the tooling example
later):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Toolingsupport


Do you feel this addresses your comment/concern?


Cheers,
Kowshik

On Mon, Apr 6, 2020 at 12:06 PM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the reply. A few more replies below.
>
> 100.6 You can look for the sentence "This operation requires ALTER on
> CLUSTER." in KIP-455. Also, you can check its usage in
> KafkaApis.authorize().
>
> 110. From the external client/tooling perspective, it's more natural to use
> the release version for features. If we can use the same release version
> for internal representation, it seems simpler (easier to understand, no
> mapping overhead, etc). Is there a benefit with separate external and
> internal versioning schemes?
>
> 111. To put this in context, when we had IBP, the default value is the
> current released version. So, if you are a brand new user, you don't need
> to configure IBP and all new features will be immediately available in the
> new cluster. If you are upgrading from an old version, you do need to
> understand and configure IBP. I see a similar pattern here for
> features. From the ease of use perspective, ideally, we shouldn't require a
> new user to have an extra step such as running a bootstrap script unless
> it's truly necessary. If someone has a special need (all the cases you
> mentioned seem special cases?), they can configure a mode such that
> features are enabled/disabled manually.
>
> Jun
>
> On Fri, Apr 3, 2020 at 5:54 PM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Thanks for the feedback and suggestions. Please find my response below.
> >
> > > 100.6 For every new request, the admin needs to control who is allowed
> to
> > > issue that request if security is enabled. So, we need to assign the
> new
> > > request a ResourceType and possible AclOperations. See
> > >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
> > > as an example.
> >
> > (Kowshik): I don't see any reference to the words ResourceType or
> > AclOperations in the KIP. Please let me know how I can use the KIP that
> > you linked to figure out how to set up the appropriate ResourceType
> > and/or AclOperations?
> >
> > > 105. If we change delete to disable, it's better to do this
> > > consistently in the request protocol and admin API as well.
> >
> > (Kowshik): The API shouldn't be called 'disable' when it is deleting a
> > feature.
> > I've just changed the KIP to use 'delete'. I don't have a strong
> > preference.
> >
> > > 110. The minVersion/maxVersion for features use int64. Currently, our
> > > release version schema is major.minor.bugfix (e.g. 2.5.0).

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-13 Thread Kowshik Prakasam
Hi Guozhang,

Thanks for the explanation! This is a very good point. I have updated the
KIP incorporating the proposed idea. We now maintain/serve the MAX as well
as the MIN version levels of finalized features, so the client will get to
know both values in the ApiVersionsResponse. This serves as a solution to
the problem that you explained earlier.

It is important to note the following. We only allow the finalized feature
MAX version level to be increased/decreased dynamically via the controller
API. In contrast, the MIN version level cannot be mutated via the
controller API. This is because the MIN version level is usually increased
only to indicate the intent to stop support for a certain feature version.
We would usually deprecate features during broker releases, after prior
announcements. Therefore, the facility to mutate the MIN version level need
not be made available through the controller API to the cluster operator.

Instead, it is sufficient if such changes can be made directly by the
controller, i.e., during a certain Kafka release we would change the
controller code to mutate the '/features' ZK node, increasing the MIN
version level of one or more finalized features (a planned change, as
determined by Kafka developers). Then, as this broker release gets rolled
out to a cluster, the affected feature versions will become permanently
deprecated.
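
To illustrate how a client can use both bounds, here is a small sketch
(names are illustrative):

    class FeatureVersionPicker {
        // With a finalized [min, max] range served in the ApiVersionsResponse,
        // a client may safely use any version it supports within that range,
        // e.g. some Y < max, as long as Y >= min.
        static long pick(long clusterMin, long clusterMax, long clientMaxSupported) {
            long chosen = Math.min(clusterMax, clientMaxSupported);
            if (chosen < clusterMin) {
                throw new UnsupportedOperationException("no overlapping feature version");
            }
            return chosen;  // supported by all brokers, per the finalized range
        }
    }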

Here are links to the specific sub-sections with the changes including
MIN/MAX version levels:

Goals:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Goals

Non-goals (see point #2):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Non-goals

Feature version deprecation:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Featureversiondeprecation

Admin API changes:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-AdminAPIchanges


Cheers,
Kowshik



On Mon, Apr 6, 2020 at 3:28 PM Guozhang Wang  wrote:

> Hello Kowshik,
>
> For 2) above, my motivation is more about flexibility on the client side
> than about version deprecation: let's say a client talks to the cluster
> learned that the cluster-wide version for feature is X, while the client
> itself only knows how to execute the feature up to version Y ( < X), then
> at the moment the client has to give up leveraging that since it is not
> sure if all brokers actually supports version Y or not. This is because the
> version X is only guaranteed to be within the common overlapping range of
> all [low, high] across brokers where "low" is not always 0, so the client
> cannot safely assume that any versions smaller than X are also supported on
> the cluster.
>
> If we assume that when the cluster-wide version is X, all versions smaller
> than X are guaranteed to be supported, then it means every broker's
> supported version range is like [0, high], which I think is not realistic?
>
>
> Guozhang
>
>
>
> On Mon, Apr 6, 2020 at 12:06 PM Jun Rao  wrote:
>
> > Hi, Kowshik,
> >
> > Thanks for the reply. A few more replies below.
> >
> > 100.6 You can look for the sentence "This operation requires ALTER on
> > CLUSTER." in KIP-455. Also, you can check its usage in
> > KafkaApis.authorize().
> >
> > 110. From the external client/tooling perspective, it's more natural to
> use
> > the release version for features. If we can use the same release version
> > for internal representation, it seems simpler (easier to understand, no
> > mapping overhead, etc). Is there a benefit with separate external and
> > internal versioning schemes?
> >
> > 111. To put this in context, when we had IBP, the default value is the
> > current released version. So, if you are a brand new user, you don't need
> > to configure IBP and all new features will be immediately available in
> the
> > new cluster. If you are upgrading from an old version, you do need to
> > understand and configure IBP. I see a similar pattern here for
> > features. From the ease of use perspective, ideally, we shouldn't
> require a
> > new user to have an extra step such as running a bootstrap script unless
> > it's truly necessary. If someone has a special need (all the cases you
> > mentioned seem special cases?), they can configure a mode such that
> > features are enabled/disabled manually.
> >
> > Jun
> >
> > On Fri, Apr 3, 2020 at 5:54 PM Kowshik Prakasam 
> > wrote:

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-13 Thread Kowshik Prakasam
> For example, if a user wants to downgrade
> to a release 2.5, it's easier for the user to use the tool like "tool
> --downgrade 2.5" instead of "tool --downgrade --feature X --version 6".
> Similarly, if the client library finds a feature mismatch with the broker,
> the client likely needs to log some error message for the user to take some
> actions. It's much more actionable if the error message is "upgrade the
> broker to release version 2.6" than just "upgrade the broker to feature
> version 7".
>
> 111. Sounds good.
>
> 120. When should a developer bump up the version of a feature?
>
> Jun
>
> On Tue, Apr 7, 2020 at 12:26 AM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > I have updated the KIP for the item 111.
> > I'm in the process of addressing 100.6, and will provide an update soon.
> > I think item 110 is still under discussion given we are now providing a
> way
> > to finalize
> > all features to their latest version levels. In any case, please let us
> > know
> > how you feel in response to Colin's comments on this topic.
> >
> > > 111. To put this in context, when we had IBP, the default value is the
> > > current released version. So, if you are a brand new user, you don't
> need
> > > to configure IBP and all new features will be immediately available in
> > the
> > > new cluster. If you are upgrading from an old version, you do need to
> > > understand and configure IBP. I see a similar pattern here for
> > > features. From the ease of use perspective, ideally, we shouldn't
> require
> > a
> > > new user to have an extra step such as running a bootstrap script
> unless
> > > it's truly necessary. If someone has a special need (all the cases you
> > > mentioned seem special cases?), they can configure a mode such that
> > > features are enabled/disabled manually.
> >
> > (Kowshik): That makes sense, thanks for the idea! Sorry if I didn't
> > understand
> > this need earlier. I have updated the KIP with the approach that whenever
> > the '/features' node is absent, the controller by default will bootstrap
> > the node
> > to contain the latest feature levels. Here is the new section in the KIP
> > describing
> > the same:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Controller:ZKnodebootstrapwithdefaultvalues
> >
> > Next, as I explained in my response to Colin's suggestions, we are now
> > providing a `--finalize-latest-features` flag with the tooling. This lets
> > the sysadmin finalize all features known to the controller to their
> latest
> > version
> > levels. Please look at this section (point #3 and the tooling example
> > later):
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Toolingsupport
> >
> >
> > Do you feel this addresses your comment/concern?
> >
> >
> > Cheers,
> > Kowshik
> >
> > On Mon, Apr 6, 2020 at 12:06 PM Jun Rao  wrote:
> >
> > > Hi, Kowshik,
> > >
> > > Thanks for the reply. A few more replies below.
> > >
> > > 100.6 You can look for the sentence "This operation requires ALTER on
> > > CLUSTER." in KIP-455. Also, you can check its usage in
> > > KafkaApis.authorize().
> > >
> > > 110. From the external client/tooling perspective, it's more natural to
> > use
> > > the release version for features. If we can use the same release
> version
> > > for internal representation, it seems simpler (easier to understand, no
> > > mapping overhead, etc). Is there a benefit with separate external and
> > > internal versioning schemes?
> > >
> > > 111. To put this in context, when we had IBP, the default value is the
> > > current released version. So, if you are a brand new user, you don't
> need
> > > to configure IBP and all new features will be immediately available in
> > the
> > > new cluster. If you are upgrading from an old version, you do need to
> > > understand and configure IBP. I see a similar pattern here for
> > > features. From the ease of use perspective, ideally, we shouldn't
> > require a
> > > new user to have an extra step such as running a bootstrap script
> unless
> > > it's truly necessary. If someone has a special need (all the cases

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-14 Thread Kowshik Prakasam
Hi Jun,

Thanks a lot for the feedback and the questions!
Please find my response below.

> 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It seems
> that field needs to be persisted somewhere in ZK?

(Kowshik): Great question! Below is my explanation. Please help me
understand if you feel there are cases where we would still need to persist
it in ZK.

Firstly, I have updated the KIP with my thoughts, under the 'guidelines'
section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows

The allowDowngrade boolean field is just to capture the user intent, and to
remind them to double-check that intent before proceeding. It should be set
to true in a request only when the user intends to forcefully "attempt" a
downgrade of a specific feature's max version level to the value provided
in the request.

We can extend this safeguard. The controller (on its end) can maintain
rules in the code that, for safety reasons, would outright reject certain
downgrades from a specific max_version_level for a specific feature. Such
rejections may happen depending on the feature being downgraded, and from
what version level.

The CLI tool only allows a downgrade attempt in conjunction with specific
flags and sub-commands. For example, in the CLI tool, only if the user uses
the 'downgrade-all' command, or passes the '--allow-downgrade' flag when
updating a specific feature, will the tool translate this ask into setting
the 'allowDowngrade' field in the request to the server.
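
Putting this together, the controller-side validation might look roughly
like the following (a sketch; the safety-rule check is a hypothetical
placeholder):

    class DowngradeValidation {
        // allowDowngrade is per-request intent only and is never persisted;
        // the controller can still veto specific downgrades via its own rules.
        static void validate(String feature, long currentLevel, long requestedLevel,
                             boolean allowDowngrade) {
            if (requestedLevel < currentLevel) {
                if (!allowDowngrade) {
                    throw new IllegalArgumentException(
                        "downgrade of " + feature + " requires explicit intent");
                }
                if (isKnownUnsafeDowngrade(feature, currentLevel, requestedLevel)) {
                    throw new IllegalStateException(
                        "downgrade of " + feature + " rejected for safety");
                }
            }
        }

        // Hypothetical placeholder for feature-specific safety rules.
        static boolean isKnownUnsafeDowngrade(String feature, long from, long to) {
            return false;  // illustration only; real rules depend on the feature
        }
    }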

> 201. UpdateFeaturesResponse has the following top level fields. Should
> those fields be per feature?
>
>   "fields": [
> { "name": "ErrorCode", "type": "int16", "versions": "0+",
>   "about": "The error code, or 0 if there was no error." },
> { "name": "ErrorMessage", "type": "string", "versions": "0+",
>   "about": "The error message, or null if there was no error." }
>   ]

(Kowshik): Great question!
As such, the API is transactional, as explained in the sections linked
below. Either all provided FeatureUpdates are applied, or none is.
That's the reason I felt we can have just one error code + message.
Happy to extend this if you feel otherwise. Please let me know.

Link to sections:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-ChangestoKafkaController

https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guarantees

> 202. The /features path in ZK has a field min_version_level. Which API and
> tool can change that value?

(Kowshik): Great question! Currently this cannot be modified by using the
API or the tool.
Feature version deprecation (by raising min_version_level) can be done only
by the Controller directly. The rationale is explained in this section:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Featureversiondeprecation


Cheers,
Kowshik

On Tue, Apr 14, 2020 at 5:33 PM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for addressing those comments. Just a few more minor comments.
>
> 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It seems
> that field needs to be persisted somewhere in ZK?
>
> 201. UpdateFeaturesResponse has the following top level fields. Should
> those fields be per feature?
>
>   "fields": [
> { "name": "ErrorCode", "type": "int16", "versions": "0+",
>   "about": "The error code, or 0 if there was no error." },
> { "name": "ErrorMessage", "type": "string", "versions": "0+",
>   "about": "The error message, or null if there was no error." }
>   ]
>
> 202. The /features path in ZK has a field min_version_level. Which API and
> tool can change that value?
>
> Jun
>
> On Mon, Apr 13, 2020 at 5:12 PM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Thanks for the feedback! I have updated the KIP-584 addressing your
> > comments.
> > Please find my response below.
> >
> > > 100.6 You can look for the sentence "This operation requires ALTER on
> > > CLUSTER." in KIP-455. Also, you can check its usage in
> > > KafkaApis.authorize().
> >
> > (Kowshik): Done. Great point! For the newly introduced UPDATE_FEATURES
>

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi Jun,

Great question! Please find my response below.

> 200. My understanding is that if the CLI tool passes the
> '--allow-downgrade' flag when updating a specific feature, then a future
> downgrade is possible. Otherwise, the feature is not downgradable. If so,
> I was wondering how the controller remembers this since it can be
> restarted over time?

(Kowshik): The purpose of the flag is just to capture the user intent for a
specific request. It seems to me that, to avoid confusion, I could call the
flag `--try-downgrade` instead. That makes it clear that the controller
just has to treat the ask from the user as an explicit request to attempt a
downgrade.

The flag does not override the controller's decision making on whether a
feature is downgradable (these decisions on whether to allow a feature to
be downgraded from a specific version level can be embedded in the
controller code).
Please let me know what you think.
Sorry if I misunderstood the original question.


Cheers,
Kowshik


On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the reply. Makes sense. Just one more question.
>
> 200. My understanding is that if the CLI tool passes the
> '--allow-downgrade' flag when updating a specific feature, then a future
> downgrade is possible. Otherwise, the feature is not downgradable. If so, I
> was wondering how the controller remembers this since it can be restarted
> over time?
>
> Jun
>
>
> On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Thanks a lot for the feedback and the questions!
> > Please find my response below.
> >
> > > 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It
> seems
> > > that field needs to be persisted somewhere in ZK?
> >
> > (Kowshik): Great question! Below is my explanation. Please help me
> > understand,
> > if you feel there are cases where we would need to still persist it in
> ZK.
> >
> > Firstly I have updated my thoughts into the KIP now, under the
> 'guidelines'
> > section:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows
> >
> > The allowDowngrade boolean field is just to capture the user intent, and
> > to remind them to double-check that intent before proceeding. It should
> > be set to true in a request only when the user intends to forcefully
> > "attempt" a downgrade of a specific feature's max version level to the
> > value provided in the request.
> >
> > We can extend this safeguard. The controller (on its end) can maintain
> > rules in the code that, for safety reasons, would outright reject certain
> > downgrades from a specific max_version_level for a specific feature. Such
> > rejections may happen depending on the feature being downgraded, and from
> > what version level.
> >
> > The CLI tool only allows a downgrade attempt in conjunction with specific
> > flags and sub-commands. For example, in the CLI tool, only if the user
> > uses the 'downgrade-all' command, or passes the '--allow-downgrade' flag
> > when updating a specific feature, will the tool translate this ask into
> > setting the 'allowDowngrade' field in the request to the server.
> >
> > > 201. UpdateFeaturesResponse has the following top level fields. Should
> > > those fields be per feature?
> > >
> > >   "fields": [
> > > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> > >   "about": "The error code, or 0 if there was no error." },
> > > { "name": "ErrorMessage", "type": "string", "versions": "0+",
> > >   "about": "The error message, or null if there was no error." }
> > >   ]
> >
> > (Kowshik): Great question!
> > As such, the API is transactional, as explained in the sections linked
> > below.
> > Either all provided FeatureUpdates are applied, or none is.
> > That's the reason I felt we can have just one error code + message.
> > Happy to extend this if you feel otherwise. Please let me know.
> >
> > Link to sections:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versi

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi Jun,

Thank you for the suggestion! I have updated the KIP, please find my
response below.

> 200. I guess you are saying only when the allowDowngrade field is set, the
> finalized feature version can go backward. Otherwise, it can only go up.
> That makes sense. It would be useful to make that clear when explaining
> the usage of the allowDowngrade field. In the validation section, we have
> "/features' from {"max_version_level": X} to {"max_version_level": X’}", it
> seems that we need to mention Y there.

(Kowshik): Great point! Yes, that is correct. Done, I have updated the
validations
section explaining the above. Here is a link to this section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations


Cheers,
Kowshik




On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:

> Hi, Kowshik,
>
> 200. I guess you are saying only when the allowDowngrade field is set, the
> finalized feature version can go backward. Otherwise, it can only go up.
> That makes sense. It would be useful to make that clear when explaining
> the usage of the allowDowngrade field. In the validation section, we have
> "/features' from {"max_version_level": X} to {"max_version_level": X’}", it
> seems that we need to mention Y there.
>
> Thanks,
>
> Jun
>
> On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Great question! Please find my response below.
> >
> > > 200. My understanding is that if the CLI tool passes the
> > > '--allow-downgrade' flag when updating a specific feature, then a
> > > future downgrade is possible. Otherwise, the feature is not
> > > downgradable. If so, I was wondering how the controller remembers this
> > > since it can be restarted over time?
> >
> > (Kowshik): The purpose of the flag was to just restrict the user intent
> for
> > a specific request.
> > It seems to me that to avoid confusion, I could call the flag as
> > `--try-downgrade` instead.
> > Then this makes it clear, that, the controller just has to consider the
> ask
> > from
> > the user as an explicit request to attempt a downgrade.
> >
> > The flag does not act as an override on controller's decision making that
> > decides whether
> > a flag is downgradable (these decisions on whether to allow a flag to be
> > downgraded
> > from a specific version level, can be embedded in the controller code).
> >
> > Please let me know what you think.
> > Sorry if I misunderstood the original question.
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> > On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:
> >
> > > Hi, Kowshik,
> > >
> > > Thanks for the reply. Makes sense. Just one more question.
> > >
> > > 200. My understanding is that if the CLI tool passes the
> > > '--allow-downgrade' flag when updating a specific feature, then a
> > > future downgrade is possible. Otherwise, the feature is not
> > > downgradable. If so, I was wondering how the controller remembers this
> > > since it can be restarted over time?
> > >
> > > Jun
> > >
> > >
> > > On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam <
> kpraka...@confluent.io
> > >
> > > wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Thanks a lot for the feedback and the questions!
> > > > Please find my response below.
> > > >
> > > > > 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It
> > > seems
> > > > > that field needs to be persisted somewhere in ZK?
> > > >
> > > > (Kowshik): Great question! Below is my explanation. Please help me
> > > > understand if you feel there are cases where we would still need to
> > > > persist it in ZK.
> > > >
> > > > Firstly I have updated my thoughts into the KIP now, under the
> > > 'guidelines'
> > > > section:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows
> > > >
> > > > The allowDowngrade boolean field is just to restrict the user intent,
> > and
> > > > to remind
> > > > them to double check
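
To make the flag semantics above concrete, here is a hypothetical sketch of
a single update inside an UpdateFeaturesRequest (JSON-style and illustrative
only; the feature name and version values are made up, and the authoritative
schema lives in the KIP):

    {
      "Feature": "group_coordinator",  // hypothetical feature name
      "MaxVersionLevel": 1,            // int16 target finalized version level
      "AllowDowngrade": true           // user intent: attempt a downgrade
    }

Without AllowDowngrade, a request that lowers MaxVersionLevel is rejected
outright; with it set, the controller may still refuse if its own logic
deems the feature not safely downgradable from the current level.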

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi all,

Thank you very much for all the insightful feedback!
How do you feel about the KIP?
Do the scope and the write-up look OK to you, and is it time to call a
vote?


Cheers,
Kowshik

On Wed, Apr 15, 2020 at 1:08 PM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> Thank you for the suggestion! I have updated the KIP, please find my
> response below.
>
> > 200. I guess you are saying only when the allowDowngrade field is set,
> the
> > finalized feature version can go backward. Otherwise, it can only go up.
> > That makes sense. It would be useful to make that clear when explaining
> > the usage of the allowDowngrade field. In the validation section, we
> have  "
> > /features' from {"max_version_level": X} to {"max_version_level": X’}",
> it
> > seems that we need to mention Y there.
>
> (Kowshik): Great point! Yes, that is correct. Done, I have updated the
> validations
> section explaining the above. Here is a link to this section:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
>
>
> Cheers,
> Kowshik
>
>
>
>
> On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:
>
>> Hi, Kowshik,
>>
>> 200. I guess you are saying only when the allowDowngrade field is set, the
>> finalized feature version can go backward. Otherwise, it can only go up.
>> That makes sense. It would be useful to make that clear when explaining
>> the usage of the allowDowngrade field. In the validation section, we
>> have  "
>> /features' from {"max_version_level": X} to {"max_version_level": X’}", it
>> seems that we need to mention Y there.
>>
>> Thanks,
>>
>> Jun
>>
>> On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam > >
>> wrote:
>>
>> > Hi Jun,
>> >
>> > Great question! Please find my response below.
>> >
>> > > 200. My understanding is that if the CLI tool passes the
>> > > '--allow-downgrade' flag when updating a specific feature, then a
>> future
>> > > downgrade is possible. Otherwise, the feature is not downgradable. If
>> so,
>> > I
>> > > was wondering how the controller remembers this since it can be
>> restarted
>> > > over time?
>> >
>> > (Kowshik): The purpose of the flag is just to capture the user's intent
>> > for a specific request.
>> > It seems to me that, to avoid confusion, I could rename the flag to
>> > `--try-downgrade` instead.
>> > That makes it clear that the controller merely treats the user's ask
>> > as an explicit request to attempt a downgrade.
>> >
>> > The flag does not override the controller's decision on whether
>> > a feature is downgradable (the decisions on whether to allow a feature
>> > to be downgraded from a specific version level can be embedded in the
>> > controller code).
>> >
>> > Please let me know what you think.
>> > Sorry if I misunderstood the original question.
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>> >
>> > On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:
>> >
>> > > Hi, Kowshik,
>> > >
>> > > Thanks for the reply. Makes sense. Just one more question.
>> > >
>> > > 200. My understanding is that if the CLI tool passes the
>> > > '--allow-downgrade' flag when updating a specific feature, then a
>> future
>> > > downgrade is possible. Otherwise, the feature is not downgradable. If
>> > so, I
>> > > was wondering how the controller remembers this since it can be
>> restarted
>> > > over time?
>> > >
>> > > Jun
>> > >
>> > >
>> > > On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam <
>> kpraka...@confluent.io
>> > >
>> > > wrote:
>> > >
>> > > > Hi Jun,
>> > > >
>> > > > Thanks a lot for the feedback and the questions!
>> > > > Please find my response below.
>> > > >
>> > > > > 200. The UpdateFeaturesRequest includes an AllowDowngrade field.
>> It
>> > > seems
>> > > > > that field needs to be persisted somewhere in ZK?
>> > > >
>> > > > (Kowshik): Great question! Below is my explana

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi Jun,

Sorry, the links were broken in my last response; here are the right links:

200.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
110.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Whentouseversionedfeatureflags?


Cheers,
Kowshik

On Wed, Apr 15, 2020 at 6:24 PM Kowshik Prakasam 
wrote:

>
> Hi Jun,
>
> Thanks for the feedback! I have addressed the comments in the KIP.
>
> > 200. In the validation section, there is still the text  "*from*
> > {"max_version_level":
> > X} *to* {"max_version_level": X’}". It seems that it should say "from X
> to
> > Y"?
>
> (Kowshik): Done. I have reworded it a bit to make it clearer now in this
> section:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
>
> > 110. Could we add that we need to document the bumped version of each
> > feature in the upgrade section of a release?
>
> (Kowshik): Great point! Done, I have mentioned it in #3 of this section:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Whentouseversionedfeatureflags?
>
>
> Cheers,
> Kowshik
>
> On Wed, Apr 15, 2020 at 4:00 PM Jun Rao  wrote:
>
>> Hi, Kowshik,
>>
>> Looks good to me now. Just a couple of minor things below.
>>
>> 200. In the validation section, there is still the text  "*from*
>> {"max_version_level":
>> X} *to* {"max_version_level": X’}". It seems that it should say "from X to
>> Y"?
>>
>> 110. Could we add that we need to document the bumped version of each
>> feature in the upgrade section of a release?
>>
>> Thanks,
>>
>> Jun
>>
>> On Wed, Apr 15, 2020 at 1:08 PM Kowshik Prakasam 
>> wrote:
>>
>> > Hi Jun,
>> >
>> > Thank you for the suggestion! I have updated the KIP, please find my
>> > response below.
>> >
>> > > 200. I guess you are saying only when the allowDowngrade field is set,
>> > the
>> > > finalized feature version can go backward. Otherwise, it can only go
>> up.
>> > > That makes sense. It would be useful to make that clear when
>> explaining
>> > > the usage of the allowDowngrade field. In the validation section, we
>> > have  "
>> > > /features' from {"max_version_level": X} to {"max_version_level":
>> X’}",
>> > it
>> > > seems that we need to mention Y there.
>> >
>> > (Kowshik): Great point! Yes, that is correct. Done, I have updated the
>> > validations
>> > section explaining the above. Here is a link to this section:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>> >
>> >
>> >
>> > On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:
>> >
>> > > Hi, Kowshik,
>> > >
>> > > 200. I guess you are saying only when the allowDowngrade field is set,
>> > the
>> > > finalized feature version can go backward. Otherwise, it can only go
>> up.
>> > > That makes sense. It would be useful to make that clear when
>> explaining
>> > > the usage of the allowDowngrade field. In the validation section, we
>> have
>> > > "
>> > > /features' from {"max_version_level": X} to {"max_version_level":
>> X’}",
>> > it
>> > > seems that we need to mention Y there.
>> > >
>> > > Thanks,
>> > >
>> > > Jun
>> > >
>> > > On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam <
>> > kpraka...@confluent.io>
>> > > wrote:
>> > >
>>

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi Jun,

Thanks for the feedback! I have addressed the comments in the KIP.

> 200. In the validation section, there is still the text  "*from*
> {"max_version_level":
> X} *to* {"max_version_level": X’}". It seems that it should say "from X to
> Y"?

(Kowshik): Done. I have reworded it a bit to make it clearer now in this
section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations

> 110. Could we add that we need to document the bumped version of each
> feature in the upgrade section of a release?

(Kowshik): Great point! Done, I have mentioned it in #3 of this section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Whentouseversionedfeatureflags?


Cheers,
Kowshik

On Wed, Apr 15, 2020 at 4:00 PM Jun Rao  wrote:

> Hi, Kowshik,
>
> Looks good to me now. Just a couple of minor things below.
>
> 200. In the validation section, there is still the text  "*from*
> {"max_version_level":
> X} *to* {"max_version_level": X’}". It seems that it should say "from X to
> Y"?
>
> 110. Could we add that we need to document the bumped version of each
> feature in the upgrade section of a release?
>
> Thanks,
>
> Jun
>
> On Wed, Apr 15, 2020 at 1:08 PM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Thank you for the suggestion! I have updated the KIP, please find my
> > response below.
> >
> > > 200. I guess you are saying only when the allowDowngrade field is set,
> > the
> > > finalized feature version can go backward. Otherwise, it can only go
> up.
> > > That makes sense. It would be useful to make that clear when explaining
> > > the usage of the allowDowngrade field. In the validation section, we
> > have  "
> > > /features' from {"max_version_level": X} to {"max_version_level": X’}",
> > it
> > > seems that we need to mention Y there.
> >
> > (Kowshik): Great point! Yes, that is correct. Done, I have updated the
> > validations
> > section explaining the above. Here is a link to this section:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> >
> >
> > On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:
> >
> > > Hi, Kowshik,
> > >
> > > 200. I guess you are saying only when the allowDowngrade field is set,
> > the
> > > finalized feature version can go backward. Otherwise, it can only go
> up.
> > > That makes sense. It would be useful to make that clear when explaining
> > > the usage of the allowDowngrade field. In the validation section, we
> have
> > > "
> > > /features' from {"max_version_level": X} to {"max_version_level": X’}",
> > it
> > > seems that we need to mention Y there.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam <
> > kpraka...@confluent.io>
> > > wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Great question! Please find my response below.
> > > >
> > > > > 200. My understanding is that if the CLI tool passes the
> > > > > '--allow-downgrade' flag when updating a specific feature, then a
> > > future
> > > > > downgrade is possible. Otherwise, the feature is not downgradable.
> If
> > > so,
> > > > I
> > > > > was wondering how the controller remembers this since it can be
> > > restarted
> > > > > over time?
> > > >
> > > > (Kowshik): The purpose of the flag is just to capture the user's
> > > > intent for a specific request.
> > > > It seems to me that, to avoid confusion, I could rename the flag to
> > > > `--try-downgrade` instead.
> > > > That makes it clear that the controller merely treats the user's ask
> > > > as an explicit request to attempt a downgrade.
> > > >
> > > > The flag does not override the controller's decision on whether
>

Re: [DISCUSS] KIP-594 Safely abort Producer transactions during application shutdown

2020-04-15 Thread Kowshik Prakasam
Hi,

It appears "KIP-594" is already taken. Please see this existing link:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver
 .
To avoid a duplicate, please renumber your KIP to the next available
number, as listed here:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals#KafkaImprovementProposals-KIPround-up

Once you are done, I'd suggest that you please start a separate discussion
thread with the new KIP number.


Cheers,
Kowshik


On Wed, Apr 15, 2020 at 6:42 PM 张祥  wrote:

> Hi everyone,
>
> I have opened a small KIP about safely aborting transaction during
> shutdown. I'd like to use this thread to discuss about it and any feedback
> is appreciated. Here is a link to KIP-594 :
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Safely+abort+Producer+transactions+during+application+shutdown
>
> Thank you!
>


[VOTE] KIP-584: Versioning scheme for features

2020-04-16 Thread Kowshik Prakasam
Hi all,

I'd like to start a vote for KIP-584. The link to the KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
.

Thanks!


Cheers,
Kowshik


Re: [VOTE] KIP-584: Versioning scheme for features

2020-04-24 Thread Kowshik Prakasam
OGRESS error code.  The
> controller is basically single-threaded and will only do one of these
> operations at once.  Even if it weren't, though, we could simply block the
> second operation behind the first one.
>
> ===
>
> For updateFeatures, it would be good to specify that if a single feature
> version update in the batch can't be done, none of them are done.  I think
> this was the intention, but I wasn't able to find it spelled out (maybe I
> missed it).
>
> ===
>
> And now, something a little bit bigger (sorry).  For finalized features,
> why do we need both min_version_level and max_version_level?  Assuming that
> we want all the brokers to be on the same feature version level, we really
> only care about three numbers for each feature, right?  The minimum
> supported version level, the maximum supported version level, and the
> current active version level.
>
> We don't actually want different brokers to be on different versions of
> the same feature, right?  So we can just have one number for current
> version level, rather than two.  At least that's what I was thinking -- let
> me know if I missed something.
>
> best,
> Colin
>
>
> On Tue, Apr 21, 2020, at 13:01, Dhruvil Shah wrote:
> > Thanks for the KIP! +1 (non-binding)
> >
> > On Tue, Apr 21, 2020 at 6:09 AM David Jacot  wrote:
> >
> > > Great KIP, thanks! +1 (non-binding)
> > >
> > > On Fri, Apr 17, 2020 at 8:56 PM Guozhang Wang 
> wrote:
> > >
> > > > Thanks for the great KIP Kowshik, +1 (binding).
> > > >
> > > > On Fri, Apr 17, 2020 at 11:22 AM Jun Rao  wrote:
> > > >
> > > > > Hi, Kowshik,
> > > > >
> > > > > Thanks for the KIP. +1
> > > > >
> > > > > Jun
> > > > >
> > > > > On Thu, Apr 16, 2020 at 11:14 AM Kowshik Prakasam <
> > > > kpraka...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to start a vote for KIP-584. The link to the KIP can be
> > > > > > found here:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
> > > > > > .
> > > > > >
> > > > > > Thanks!
> > > > > >
> > > > > >
> > > > > > Cheers,
> > > > > > Kowshik
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
>
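
As a reader's aid for the min/max question above, here is an illustrative
sketch (the feature name and numbers are hypothetical) of the distinction
between what a broker advertises and what is finalized cluster-wide:

    // Advertised by each broker: the range of versions its own code supports.
    "supported": { "group_coordinator": { "min_version": 1, "max_version": 3 } }

    // Finalized by the controller: the range guaranteed across all brokers.
    "finalized": { "group_coordinator": { "min_version_level": 1,
                                          "max_version_level": 2 } }

As the reply in the next message explains, "max_version_level" is the
finalized highest common version and "min_version_level" the finalized
lowest, which is why the finalized entry carries two numbers.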


Re: [VOTE] KIP-584: Versioning scheme for features

2020-04-25 Thread Kowshik Prakasam
Hi Colin,

Thanks for the explanation! I agree with you, and I have updated the KIP.
Here is a link to the relevant section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Controller:ZKnodebootstrapwithdefaultvalues


Cheers,
Kowshik

On Fri, Apr 24, 2020 at 8:50 PM Colin McCabe  wrote:

> On Fri, Apr 24, 2020, at 00:01, Kowshik Prakasam wrote:
> > (Kowshik): Great point! However for case #1, I'm not sure why we need to
> > create a '/features' ZK node with disabled features. Instead, do you see
> > any drawback if we just do not create it? i.e. if IBP is less than 2.6,
> the
> > controller treats the case as though the versioning system is completely
> > disabled, and would not create a non-existing '/features' node.
>
> Hi Kowshik,
>
> When the IBP is less than 2.6, but the software has been upgraded to a
> state where it supports this KIP, that
>  means the user is upgrading from an earlier version of the software.  In
> this case, we want to start with all the features disabled and allow the
> user to enable them when they are ready.
>
> Enabling all the possible features immediately after an upgrade could be
> harmful to the cluster.  On the other hand, for a new cluster, we do want
> to enable all the possible features immediately. I was proposing this as a
> way to distinguish the two cases (since the new cluster will never be
> started with an old IBP).
>
> > Colin McCabe wrote:
> > > And now, something a little bit bigger (sorry).  For finalized
> features,
> > > why do we need both min_version_level and max_version_level?  Assuming
> that
> > > we want all the brokers to be on the same feature version level, we
> really only care
> > > about three numbers for each feature, right?  The minimum supported
> version
> > > level, the maximum supported version level, and the current active
> version level.
> >
> > > We don't actually want different brokers to be on different versions of
> > > the same feature, right?  So we can just have one number for current
> > > version level, rather than two.  At least that's what I was thinking
> -- let
> > > me know if I missed something.
> >
> > (Kowshik): It is my understanding that the "current active version level"
> > that you have mentioned, is the "max_version_level". But we still
> > maintain/publish both min and max version levels, because the detail
> about
> > min level is useful to external clients. This is described below.
> >
> > For any feature F, think of the closed range: [min_version_level,
> > max_version_level] as the range of finalized versions, that's guaranteed
> to
> > be supported by all brokers in the cluster.
> >  - "max_version_level" is the finalized highest common version among all
> > brokers,
> >  - "min_version_level" is the finalized lowest common version among all
> > brokers.
> >
> > Next, think of "client" here as the "user of the new feature versions
> > system". Imagine that such a client learns about finalized feature
> > versions, and exercises some logic based on the version. These clients
> can
> > be of 2 types:
> > 1. Some part of the broker code itself could behave like a client trying
> to
> > use some feature that's "internal" to the broker cluster. Such a client
> > would learn the latest finalized features via ZK.
> > 2. An external system (ex: Streams) could behave like a client, trying to
> > use some "external" facing feature. Such a client would learn latest
> > finalized features via ApiVersionsRequest. Ex: group_coordinator feature
> > described in the KIP.
> >
> > Next, imagine that for F, the max_version_level is successfully bumped by
> > +1 (via Controller API). Now it is guaranteed that all brokers (i.e.
> > internal clients) understand max_version_level + 1. However, it is still
> > not guaranteed that all external clients have support for (or have
> > activated) the logic for the newer version. Why? Because, this is
> > subjective as explained next:
> >
> > 1. On one hand, imagine F as an internal feature only relevant to
> Brokers.
> > The binary for the internal client logic is controlled by Broker cluster
> > deployments. When shipping a new Broker release, we wouldn't bump max
> > "supported" feature version for F by 1, unless we have introduced some
> new
> > logic (with a potentially breaking change) in the Broker. Furthermore,
> such
> &

Re: [VOTE] KIP-584: Versioning scheme for features

2020-04-28 Thread Kowshik Prakasam
Hi all,

This KIP vote has been open for ~12 days. The summary of the votes is that
we have 3 binding votes (Colin, Guozhang, Jun), and 3 non-binding votes
(David, Dhruvil, Boyang). Therefore, the KIP vote passes. I'll mark KIP as
accepted and start working on the implementation.

Thanks a lot!


Cheers,
Kowshik

On Mon, Apr 27, 2020 at 12:15 PM Colin McCabe  wrote:

> Thanks, Kowshik.  +1 (binding)
>
> best,
> Colin
>
> On Sat, Apr 25, 2020, at 13:20, Kowshik Prakasam wrote:
> > Hi Colin,
> >
> > Thanks for the explanation! I agree with you, and I have updated the
> > KIP.
> > Here is a link to the relevant section:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Controller:ZKnodebootstrapwithdefaultvalues
> >
> >
> > Cheers,
> > Kowshik
> >
> > On Fri, Apr 24, 2020 at 8:50 PM Colin McCabe  wrote:
> >
> > > On Fri, Apr 24, 2020, at 00:01, Kowshik Prakasam wrote:
> > > > (Kowshik): Great point! However for case #1, I'm not sure why we
> need to
> > > > create a '/features' ZK node with disabled features. Instead, do you
> see
> > > > any drawback if we just do not create it? i.e. if IBP is less than
> 2.6,
> > > the
> > > > controller treats the case as though the versioning system is
> completely
> > > > disabled, and would not create a non-existing '/features' node.
> > >
> > > Hi Kowshik,
> > >
> > > When the IBP is less than 2.6, but the software has been upgraded to a
> > > state where it supports this KIP, that
> > >  means the user is upgrading from an earlier version of the software.
> In
> > > this case, we want to start with all the features disabled and allow
> the
> > > user to enable them when they are ready.
> > >
> > > Enabling all the possible features immediately after an upgrade could
> be
> > > harmful to the cluster.  On the other hand, for a new cluster, we do
> want
> > > to enable all the possible features immediately . I was proposing this
> as a
> > > way to distinguish the two cases (since the new cluster will never be
> > > started with an old IBP).
> > >
> > > > Colin McCabe wrote:
> > > > > And now, something a little bit bigger (sorry).  For finalized
> > > features,
> > > > > why do we need both min_version_level and max_version_level?
> Assuming
> > > that
> > > > > we want all the brokers to be on the same feature version level, we
> > > really only care
> > > > > about three numbers for each feature, right?  The minimum supported
> > > version
> > > > > level, the maximum supported version level, and the current active
> > > version level.
> > > >
> > > > > We don't actually want different brokers to be on different
> versions of
> > > > > the same feature, right?  So we can just have one number for
> current
> > > > > version level, rather than two.  At least that's what I was
> thinking
> > > -- let
> > > > > me know if I missed something.
> > > >
> > > > (Kowshik): It is my understanding that the "current active version
> level"
> > > > that you have mentioned, is the "max_version_level". But we still
> > > > maintain/publish both min and max version levels, because the detail
> > > about
> > > > min level is useful to external clients. This is described below.
> > > >
> > > > For any feature F, think of the closed range: [min_version_level,
> > > > max_version_level] as the range of finalized versions, that's
> guaranteed
> > > to
> > > > be supported by all brokers in the cluster.
> > > >  - "max_version_level" is the finalized highest common version among
> all
> > > > brokers,
> > > >  - "min_version_level" is the finalized lowest common version among
> all
> > > > brokers.
> > > >
> > > > Next, think of "client" here as the "user of the new feature versions
> > > > system". Imagine that such a client learns about finalized feature
> > > > versions, and exercises some logic based on the version. These
> clients
> > > can
> > > > be of 2 types:
> > > > 1. Some part of the broker code itself could behave like a client
> trying
> > > to
> > 

Help! Can't add reviewers for Github Kafka PR

2020-05-17 Thread Kowshik Prakasam
Hi all,

My intent is to create a PR for review in https://github.com/apache/kafka .
However, I find that I'm unable to add reviewers to my PR. Does this need
any specific permissions? If so, please could someone grant me access or
help me understand what I need to do to get permissions to add reviewers?


Cheers,
Kowshik


Re: Help! Can't add reviewers for Github Kafka PR

2020-05-17 Thread Kowshik Prakasam
Thanks, John!


Cheers,
Kowshik

On Sun, May 17, 2020 at 8:12 AM John Roesler  wrote:

> Hi Kowshik,
>
> You just have to “@“ mention the username of the person you want in a gh
> comment. I think you have to be a committer to add labels, reviewers, etc.
>
> Hope this helps!
> -John
>
> On Sun, May 17, 2020, at 04:11, Kowshik Prakasam wrote:
> > Hi all,
> >
> > My intent is to create a PR for review in
> https://github.com/apache/kafka .
> > However, I find that I'm unable to add reviewers to my PR. Does this need
> > any specific permissions? If so, please could someone grant me access or
> > help me understand what I need to do to get permissions to add reviewers?
> >
> >
> > Cheers,
> > Kowshik
> >
>


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-29 Thread Kowshik Prakasam
Hi Randall,

We have to remove KIP-584 from the release plan, as this item will not be
completed for the 2.6 release (although the KIP is accepted). We plan to
include it in a future release.


Cheers,
Kowshik


On Fri, May 29, 2020 at 11:43 AM Maulin Vasavada 
wrote:

> Hi Randall Hauch
>
> Can we add KIP-519 to 2.6? It was merged to Trunk already in April -
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
> .
>
> Thanks
> Maulin
>
> On Fri, May 29, 2020 at 11:01 AM Randall Hauch  wrote:
>
> > Here's an update on the AK 2.6.0 release.
> >
> > Code freeze was Wednesday, and the release plan [1] has been updated to
> > reflect all of the KIPs that made the release. We've also cut the `2.6`
> > branch that we'll use for the release; see separate email announcing the
> > new branch.
> >
> > The next important date for the 2.6.0 release is CODE FREEZE on JUNE 10,
> > and until that date all bug fixes are still welcome on the release
> branch.
> > But after that, only blocker bugs can be merged to the release branch.
> >
> > If you have any questions or concerns, please contact me or (better yet)
> > reply to this thread.
> >
> > Thanks, and best regards!
> >
> > Randall
> >
> > [1] AK 2.6.0 Release Plan:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> >
> >
> > On Wed, May 27, 2020 at 5:53 PM Matthias J. Sax 
> wrote:
> >
> > > Thanks Randall!
> > >
> > > I added missing KIP-594.
> > >
> > >
> > > For the postponed KIP section: I removed KIP-441 and KIP-444 as both
> are
> > > completed.
> > >
> > >
> > > -Matthias
> > >
> > > On 5/27/20 2:31 PM, Randall Hauch wrote:
> > > > Hey everyone, just a quick update on the 2.6.0 release.
> > > >
> > > > Based on the release plan (
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > ),
> > > > today (May 27) is feature freeze. Any major feature work that is not
> > > > already complete will need to push out to the next release (either
> 2.7
> > or
> > > > 3.0). There are a few PRs for KIPs that are nearing completion, and
> > we're
> > > > having some Jenkins build issues. I will send another email later
> today
> > > or
> > > > early tomorrow with an update, and I plan to cut the release branch
> > > shortly
> > > > thereafter.
> > > >
> > > > I have also updated the list of planned KIPs on the release plan
> page (
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > ),
> > > > and I've moved to the "Postponed" table any KIP that looks like it is
> > not
> > > > going to be complete today. If any KIP is in the wrong table, please
> > let
> > > me
> > > > know.
> > > >
> > > > If you have any questions or concerns, please feel free to reply to
> > this
> > > > thread.
> > > >
> > > > Thanks, and best regards!
> > > >
> > > > Randall
> > > >
> > > > On Wed, May 20, 2020 at 2:16 PM Sophie Blee-Goldman <
> > sop...@confluent.io
> > > >
> > > > wrote:
> > > >
> > > >> Hey Randall,
> > > >>
> > > >> Can you also add KIP-613 which was accepted yesterday?
> > > >>
> > > >> Thanks!
> > > >> Sophie
> > > >>
> > > >> On Wed, May 20, 2020 at 6:47 AM Randall Hauch 
> > wrote:
> > > >>
> > > >>> Hi, Tom. I saw last night that the KIP had enough votes before
> > today’s
> > > >>> deadline and I will add it to the roadmap today. Thanks for driving
> > > this!
> > > >>>
> > > >>> On Wed, May 20, 2020 at 6:18 AM Tom Bentley 
> > > wrote:
> > > >>>
> > >  Hi Randall,
> > > 
> > >  Can we add KIP-585? (I'm not quite sure of the protocol here, but
> > > >> thought
> > >  it better to ask than to just add it myself).
> > > 
> > >  Thanks,
> > > 
> > >  Tom
> > > 
> > >  On Tue, May 5, 2020 at 6:54 PM Randall Hauch 
> > > >> wrote:
> > > 
> > > > Greetings!
> > > >
> > > > I'd like to volunteer to be release manager for the next
> time-based
> > >  feature
> > > > release which will be 2.6.0. I've published a release plan at
> > > >
> > > 
> > > >>>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > > ,
> > > > and have included all of the KIPs that are currently approved or
> > > >>> actively
> > > > in discussion (though I'm happy to adjust as necessary).
> > > >
> > > > To stay on our time-based cadence, the KIP freeze is on May 20
> > with a
> > > > target release date of June 24.
> > > >
> > > > Let me know if there are any objections.
> > > >
> > > > Thanks,
> > > > Randall Hauch
> > > >
> > > 
> > > >>>
> > > >>
> > > >
> > >
> > >
> >
>


Re: [VOTE] KIP-584: Versioning scheme for features

2020-06-08 Thread Kowshik Prakasam
Hi all,

I wanted to let you know that I have made the following minor changes to
the KIP-584 write-up. The purpose is to ensure the design is correct for a
few things that came up during implementation:

1. The feature version data type has been changed to int16 (instead of
int64). The reason is twofold:
a. int64 felt like overkill: feature version bumps represent breaking
changes and are therefore infrequent, so int16 is big enough for the
version bumps of any particular feature.
b. The int16 data type aligns well with the existing API versions data
type. Please see the file
'/clients/src/main/resources/common/message/ApiVersionsResponse.json'.

2. The finalized feature version epoch data type has been changed to int32
(instead of int64), because the epoch value is the ZK node version, whose
data type is int32.

3. Introduced a new 'status' field in the '/features' ZK node schema (see
the sketch below). The purpose is to implement Colin's earlier point about
the strategy for transitioning from not having a '/features' znode to
having one. An explanation of the different cases is provided in the
following section of the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-FeatureZKnodestatus
.
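
For readers, here is a hypothetical sketch of the '/features' ZK node
payload after the three changes above (the feature name and values are made
up; the authoritative schema is in the section linked under change #3):

    {
      "version": 1,   // schema version of the node layout (an assumption here)
      "status": 1,    // the new 'status' field from change #3
      "features": {
        "group_coordinator": {       // hypothetical feature name
          "min_version_level": 1,    // int16, per change #1
          "max_version_level": 2     // int16, per change #1
        }
      }
    }

Note that the finalized-features epoch is not stored in the payload: per
change #2, it is read off the ZK node's own version, which is an int32.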

Please let me know if you have any questions or concerns.


Cheers,
Kowshik




On Tue, Apr 28, 2020 at 11:24 PM Kowshik Prakasam 
wrote:

> Hi all,
>
> This KIP vote has been open for ~12 days. The summary of the votes is that
> we have 3 binding votes (Colin, Guozhang, Jun), and 3 non-binding votes
> (David, Dhruvil, Boyang). Therefore, the KIP vote passes. I'll mark KIP as
> accepted and start working on the implementation.
>
> Thanks a lot!
>
>
> Cheers,
> Kowshik
>
> On Mon, Apr 27, 2020 at 12:15 PM Colin McCabe  wrote:
>
>> Thanks, Kowshik.  +1 (binding)
>>
>> best,
>> Colin
>>
>> On Sat, Apr 25, 2020, at 13:20, Kowshik Prakasam wrote:
>> > Hi Colin,
>> >
>> > Thanks for the explanation! I agree with you, and I have updated the
>> > KIP.
>> > Here is a link to the relevant section:
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Controller:ZKnodebootstrapwithdefaultvalues
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>> > On Fri, Apr 24, 2020 at 8:50 PM Colin McCabe 
>> wrote:
>> >
>> > > On Fri, Apr 24, 2020, at 00:01, Kowshik Prakasam wrote:
>> > > > (Kowshik): Great point! However for case #1, I'm not sure why we
>> need to
>> > > > create a '/features' ZK node with disabled features. Instead, do
>> you see
>> > > > any drawback if we just do not create it? i.e. if IBP is less than
>> 2.6,
>> > > the
>> > > > controller treats the case as though the versioning system is
>> completely
>> > > > disabled, and would not create a non-existing '/features' node.
>> > >
>> > > Hi Kowshik,
>> > >
>> > > When the IBP is less than 2.6, but the software has been upgraded to a
>> > > state where it supports this KIP, that
>> > >  means the user is upgrading from an earlier version of the
>> software.  In
>> > > this case, we want to start with all the features disabled and allow
>> the
>> > > user to enable them when they are ready.
>> > >
>> > > Enabling all the possible features immediately after an upgrade could
>> be
>> > > harmful to the cluster.  On the other hand, for a new cluster, we do
>> want
>> > > to enable all the possible features immediately. I was proposing
>> this as a
>> > > way to distinguish the two cases (since the new cluster will never be
>> > > started with an old IBP).
>> > >
>> > > > Colin McCabe wrote:
>> > > > > And now, something a little bit bigger (sorry).  For finalized
>> > > features,
>> > > > > why do we need both min_version_level and max_version_level?
>> Assuming
>> > > that
>> > > > > we want all the brokers to be on the same feature version level,
>> we
>> > > really only care
>> > > > > about three numbers for each feature, right?  The minimum
>> supported
>> > > version
>> > > > > level, the maximum supported version level, and the current active
>> > > version level.
>

Re: [ANNOUNCE] New committer: Stanislav Kozlovski

2023-01-17 Thread Kowshik Prakasam
Congratulations Stan!


Cheers,
Kowshik

On Tue, Jan 17, 2023, 5:11 PM John Roesler  wrote:

> Congrats, Stanislav!
> -John
>
> On Tue, Jan 17, 2023, at 18:56, Ismael Juma wrote:
> > Congratulations Stanislav!
> >
> > Ismael
> >
> > On Tue, Jan 17, 2023 at 7:51 AM Jun Rao 
> wrote:
> >
> >> Hi, Everyone,
> >>
> >> The PMC of Apache Kafka is pleased to announce a new Kafka committer
> >> Stanislav Kozlovski.
> >>
> >> Stan has been contributing to Apache Kafka since June 2018. He made
> various
> >> contributions including the following KIPs.
> >>
> >> KIP-455: Create an Administrative API for Replica Reassignment
> >> KIP-412: Extend Admin API to support dynamic application log levels
> >>
> >> Congratulations, Stan!
> >>
> >> Thanks,
> >>
> >> Jun (on behalf of the Apache Kafka PMC)
> >>
>


Re: [ANNOUNCE] New committer: Justine Olshan

2023-01-17 Thread Kowshik Prakasam
Congrats, Justine!


Cheers,
Kowshik

On Tue, Jan 17, 2023, 4:53 PM Guozhang Wang 
wrote:

> Congratulations, Justine (I'm also late)!
>
> On Wed, Jan 11, 2023 at 12:17 AM Bruno Cadonna  wrote:
>
> > Hi Justine,
> >
> > Re-reading my message I realized that my message might be
> > misinterpreted. I meant that I am late with congratulating you due to
> > the holidays, NOT that it took you long becoming a committer!
> >
> > Sorry for the potential confusion!
> >
> > Best,
> > Bruno
> >
> > On 11.01.23 08:57, Bruno Cadonna wrote:
> > > Better late than never!
> > >
> > > Congrats!
> > >
> > > Best,
> > > Bruno
> > >
> > > On 04.01.23 20:25, Kirk True wrote:
> > >> Congratulations!
> > >>
> > >> On Tue, Jan 3, 2023, at 7:34 PM, John Roesler wrote:
> > >>> Congrats, Justine!
> > >>> -John
> > >>>
> > >>> On Tue, Jan 3, 2023, at 13:03, Matthias J. Sax wrote:
> >  Congrats!
> > 
> >  On 12/29/22 6:47 PM, ziming deng wrote:
> > > Congratulations Justine!
> > > —
> > > Best,
> > > Ziming
> > >
> > >> On Dec 30, 2022, at 10:06, Luke Chen  wrote:
> > >>
> > >> Congratulations, Justine!
> > >> Well deserved!
> > >>
> > >> Luke
> > >>
> > >> On Fri, Dec 30, 2022 at 9:15 AM Ron Dagostino 
> > >> wrote:
> > >>
> > >>> Congratulations, Justine!Well-deserved., and I’m very happy
> > >>> for you.
> > >>>
> > >>> Ron
> > >>>
> >  On Dec 29, 2022, at 6:13 PM, Israel Ekpo 
> >  wrote:
> > 
> >  Congratulations Justine!
> > 
> > 
> > > On Thu, Dec 29, 2022 at 5:05 PM Greg Harris
> > >>> 
> > > wrote:
> > >
> > > Congratulations Justine!
> > >
> > >> On Thu, Dec 29, 2022 at 1:37 PM Bill Bejeck
> > >>  wrote:
> > >>
> > >> Congratulations Justine!
> > >>
> > >>
> > >> -Bill
> > >>
> > >>> On Thu, Dec 29, 2022 at 4:36 PM Philip Nee <
> > philip...@gmail.com>
> > >>> wrote:
> > >>
> > >>> wow congrats!
> > >>>
> > >>> On Thu, Dec 29, 2022 at 1:05 PM Chris Egerton <
> > >>> fearthecel...@gmail.com
> > >>
> > >>> wrote:
> > >>>
> >  Congrats, Justine!
> > 
> >  On Thu, Dec 29, 2022, 15:58 David Jacot 
> >  wrote:
> > 
> > > Hi all,
> > >
> > > The PMC of Apache Kafka is pleased to announce a new Kafka
> > > committer
> > > Justine
> > > Olshan.
> > >
> > > Justine has been contributing to Kafka since June 2019. She
> > >> contributed
> >  53
> > > PRs including the following KIPs.
> > >
> > > KIP-480: Sticky Partitioner
> > > KIP-516: Topic Identifiers & Topic Deletion State
> > Improvements
> > > KIP-854: Separate configuration for producer ID expiry
> > > KIP-890: Transactions Server-Side Defense (in progress)
> > >
> > > Congratulations, Justine!
> > >
> > > Thanks,
> > >
> > > David (on behalf of the Apache Kafka PMC)
> > >
> > 
> > >>>
> > >>
> > >
> > >>>
> > >
> > >
> > >>>
> > >>
> >
>


Re: [ANNOUNCE] New committer: Satish Duggana

2023-01-17 Thread Kowshik Prakasam
Congrats, Satish!


Cheers,
Kowshik

On Tue, Jan 17, 2023, 5:17 PM John Roesler  wrote:

> Ay, sorry about my autocorrect, Satish.
>
> On Tue, Jan 17, 2023, at 19:13, John Roesler wrote:
> > Congratulations, Salish! I missed the announcement before.
> >
> > -John
> >
> > On Tue, Jan 17, 2023, at 18:53, Guozhang Wang wrote:
> >> Congratulations, Satish!
> >>
> >> On Tue, Jan 10, 2023 at 11:31 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> >> wrote:
> >>
> >>> Congratulations, Satish!
> >>>
> >>> Regards,
> >>>
> >>> Rajini
> >>>
> >>> On Tue, Jan 10, 2023 at 5:12 PM Bruno Cadonna 
> wrote:
> >>>
> >>> > Congrats!
> >>> >
> >>> > Best,
> >>> > Bruno
> >>> >
> >>> > On 24.12.22 12:44, Manikumar wrote:
> >>> > > Congrats, Satish!  Well deserved.
> >>> > >
> >>> > > On Sat, Dec 24, 2022, 5:10 PM Tom Bentley 
> wrote:
> >>> > >
> >>> > >> Congratulations!
> >>> > >>
> >>> > >> On Sat, 24 Dec 2022 at 05:05, Luke Chen 
> wrote:
> >>> > >>
> >>> > >>> Congratulations, Satish!
> >>> > >>>
> >>> > >>> On Sat, Dec 24, 2022 at 4:12 AM Federico Valeri <
> >>> fedeval...@gmail.com>
> >>> > >>> wrote:
> >>> > >>>
> >>> >  Hi Satish, congrats!
> >>> > 
> >>> >  On Fri, Dec 23, 2022, 8:46 PM Viktor Somogyi-Vass
> >>> >   wrote:
> >>> > 
> >>> > > Congrats Satish!
> >>> > >
> >>> > > On Fri, Dec 23, 2022, 19:38 Mickael Maison <
> >>> mickael.mai...@gmail.com
> >>> > >>>
> >>> > > wrote:
> >>> > >
> >>> > >> Congratulations Satish!
> >>> > >>
> >>> > >> On Fri, Dec 23, 2022 at 7:36 PM Divij Vaidya <
> >>> > >>> divijvaidy...@gmail.com>
> >>> > >> wrote:
> >>> > >>>
> >>> > >>> Congratulations Satish! 🎉
> >>> > >>>
> >>> > >>> On Fri 23. Dec 2022 at 19:32, Josep Prat
> >>> > >>>  >>> > >
> >>> > >>> wrote:
> >>> > >>>
> >>> >  Congrats Satish!
> >>> > 
> >>> >  ———
> >>> >  Josep Prat
> >>> > 
> >>> >  Aiven Deutschland GmbH
> >>> > 
> >>> >  Immanuelkirchstraße 26, 10405 Berlin
> >>> > 
> >>> >  Amtsgericht Charlottenburg, HRB 209739 B
> >>> > 
> >>> >  Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> >>> > 
> >>> >  m: +491715557497
> >>> > 
> >>> >  w: aiven.io
> >>> > 
> >>> >  e: josep.p...@aiven.io
> >>> > 
> >>> >  On Fri, Dec 23, 2022, 19:23 Chris Egerton <
> >>> > >>> fearthecel...@gmail.com
> >>> > >
> >>> > >> wrote:
> >>> > 
> >>> > > Congrats, Satish!
> >>> > >
> >>> > > On Fri, Dec 23, 2022, 13:19 Arun Raju <
> arungav...@gmail.com>
> >>> > > wrote:
> >>> > >
> >>> > >> Congratulations 👏
> >>> > >>
> >>> > >> On Fri, Dec 23, 2022, 1:08 PM Jun Rao
> >>> > >>>  >>> > >
> >>> >  wrote:
> >>> > >>
> >>> > >>> Hi, Everyone,
> >>> > >>>
> >>> > >>> The PMC of Apache Kafka is pleased to announce a new
> >>> > >> Kafka
> >>> > >> committer
> >>> > >> Satish
> >>> > >>> Duggana.
> >>> > >>>
> >>> > >>> Satish has been a long time Kafka contributor since 2017.
> >>> > >>> He
> >>> >  is
> >>> > >> the
> >>> > > main
> >>> > >>> driver behind KIP-405 that integrates Kafka with remote
> >>> > > storage,
> >>> > >> a
> >>> > >>> significant and much anticipated feature in Kafka.
> >>> > >>>
> >>> > >>> Congratulations, Satish!
> >>> > >>>
> >>> > >>> Thanks,
> >>> > >>>
> >>> > >>> Jun (on behalf of the Apache Kafka PMC)
> >>> > >>>
> >>> > >>
> >>> > >
> >>> > 
> >>> > >>> --
> >>> > >>> Divij Vaidya
> >>> > >>
> >>> > >
> >>> > 
> >>> > >>>
> >>> > >>
> >>> > >
> >>> >
> >>>
>


Re: [ANNOUNCE] New committer: Josep Prat

2023-01-17 Thread Kowshik Prakasam
Congrats, Josep!


Cheers,
Kowshik

On Tue, Jan 17, 2023, 4:57 PM Guozhang Wang 
wrote:

> Congratulations, Josep!
>
> On Tue, Jan 3, 2023 at 11:23 AM Josep Prat 
> wrote:
>
> > Thanks all again! :)
> >
> > On Tue, Jan 3, 2023 at 6:19 PM Bill Bejeck 
> > wrote:
> >
> > > Congratulations, Josep!
> > >
> > > -Bill
> > >
> > > On Tue, Dec 20, 2022 at 9:03 PM Luke Chen  wrote:
> > >
> > > > Congratulations, Josep!
> > > >
> > > > Luke
> > > >
> > > > On Wed, Dec 21, 2022 at 6:26 AM Viktor Somogyi-Vass
> > > >  wrote:
> > > >
> > > > > Congrats Josep!
> > > > >
> > > > > On Tue, Dec 20, 2022, 21:56 Matthias J. Sax 
> > wrote:
> > > > >
> > > > > > Congrats!
> > > > > >
> > > > > > On 12/20/22 12:01 PM, Josep Prat wrote:
> > > > > > > Thank you all!
> > > > > > >
> > > > > > > ———
> > > > > > > Josep Prat
> > > > > > >
> > > > > > > Aiven Deutschland GmbH
> > > > > > >
> > > > > > > Immanuelkirchstraße 26, 10405 Berlin
> > > > > > >
> > > > > > > Amtsgericht Charlottenburg, HRB 209739 B
> > > > > > >
> > > > > > > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > > > > > >
> > > > > > > m: +491715557497
> > > > > > >
> > > > > > > w: aiven.io
> > > > > > >
> > > > > > > e: josep.p...@aiven.io
> > > > > > >
> > > > > > > On Tue, Dec 20, 2022, 20:42 Bill Bejeck 
> > wrote:
> > > > > > >
> > > > > > >> Congratulations Josep!
> > > > > > >>
> > > > > > >> -Bill
> > > > > > >>
> > > > > > >> On Tue, Dec 20, 2022 at 1:11 PM Mickael Maison <
> > > > > > mickael.mai...@gmail.com>
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >>> Congratulations Josep!
> > > > > > >>>
> > > > > > >>> On Tue, Dec 20, 2022 at 6:55 PM Bruno Cadonna <
> > > cado...@apache.org>
> > > > > > >> wrote:
> > > > > > 
> > > > > >  Congrats, Josep!
> > > > > > 
> > > > > >  Well deserved!
> > > > > > 
> > > > > >  Best,
> > > > > >  Bruno
> > > > > > 
> > > > > >  On 20.12.22 18:40, Kirk True wrote:
> > > > > > > Congrats Josep!
> > > > > > >
> > > > > > > On Tue, Dec 20, 2022, at 9:33 AM, Jorge Esteban Quilcate
> > Otoya
> > > > > wrote:
> > > > > > >> Congrats Josep!!
> > > > > > >>
> > > > > > >> On Tue, 20 Dec 2022, 17:31 Greg Harris,
> > > > > > >>  > > > > > 
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >>> Congratulations Josep!
> > > > > > >>>
> > > > > > >>> On Tue, Dec 20, 2022 at 9:29 AM Chris Egerton <
> > > > > > >>> fearthecel...@gmail.com>
> > > > > > >>> wrote:
> > > > > > >>>
> > > > > >  Congrats Josep! Well-earned.
> > > > > > 
> > > > > >  On Tue, Dec 20, 2022, 12:26 Jun Rao
> > >  > > > >
> > > > > > >>> wrote:
> > > > > > 
> > > > > > > Hi, Everyone,
> > > > > > >
> > > > > > > The PMC of Apache Kafka is pleased to announce a new
> > Kafka
> > > > > > >>> committer
> > > > > >  Josep
> > > > > > >Prat.
> > > > > > >
> > > > > > > Josep has been contributing to Kafka since May 2021. He
> > > > > > >>> contributed 20
> > > > > >  PRs
> > > > > > > including the following 2 KIPs.
> > > > > > >
> > > > > > > KIP-773 Differentiate metric latency measured in ms and
> > ns
> > > > > > > KIP-744: Migrate TaskMetadata and ThreadMetadata to an
> > > > > interface
> > > > > > >>> with
> > > > > > > internal implementation
> > > > > > >
> > > > > > > Congratulations, Josep!
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Jun (on behalf of the Apache Kafka PMC)
> > > > > > >
> > > > > > 
> > > > > > >>>
> > > > > > >>
> > > > > > >
> > > > > > >>>
> > > > > > >>
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> >
> > *Josep Prat*
> > Open Source Engineering Director, *Aiven*
> > josep.p...@aiven.io   |   +491715557497
> > aiven.io
> > *Aiven Deutschland GmbH*
> > Immanuelkirchstraße 26, 10405 Berlin
> > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > Amtsgericht Charlottenburg, HRB 209739 B
> >
>


Re: [ANNOUNCE] New committer: Edoardo Comar

2023-01-17 Thread Kowshik Prakasam
Congrats, Edoardo!


Cheers,
Kowshik

On Tue, Jan 17, 2023, 4:54 PM Guozhang Wang 
wrote:

> Congratulations, Edoardo!
>
> On Tue, Jan 10, 2023 at 9:11 AM Bruno Cadonna  wrote:
>
> > Congrats!
> >
> > Best,
> > Bruno
> >
> > On 10.01.23 11:00, Edoardo Comar wrote:
> > > Many thanks everyone!
> > >
> > > On Mon, 9 Jan 2023 at 19:40, Rajini Sivaram 
> > wrote:
> > >
> > >> Congratulations, Edo!
> > >>
> > >> Regards,
> > >>
> > >> Rajini
> > >>
> > >> On Mon, Jan 9, 2023 at 10:16 AM Tom Bentley 
> > wrote:
> > >>
> > >>> Congratulations!
> > >>>
> > >>> On Sun, 8 Jan 2023 at 01:14, Satish Duggana <
> satish.dugg...@gmail.com>
> > >>> wrote:
> > >>>
> >  Congratulations, Edorado!
> > 
> >  On Sun, 8 Jan 2023 at 00:15, Viktor Somogyi-Vass
> >   wrote:
> > >
> > > Congrats Edoardo!
> > >
> > > On Sat, Jan 7, 2023, 18:15 Bill Bejeck  wrote:
> > >
> > >> Congratulations, Edoardo!
> > >>
> > >> -Bill
> > >>
> > >> On Sat, Jan 7, 2023 at 12:11 PM John Roesler  >
> >  wrote:
> > >>
> > >>> Congrats, Edoardo!
> > >>> -John
> > >>>
> > >>> On Fri, Jan 6, 2023, at 20:47, Matthias J. Sax wrote:
> >  Congrats!
> > 
> >  On 1/6/23 5:15 PM, Luke Chen wrote:
> > > Congratulations, Edoardo!
> > >
> > > Luke
> > >
> > > On Sat, Jan 7, 2023 at 7:58 AM Mickael Maison <
> > >> mickael.mai...@gmail.com
> > 
> > > wrote:
> > >
> > >> Congratulations Edo!
> > >>
> > >>
> > >> On Sat, Jan 7, 2023 at 12:05 AM Jun Rao
> > >>>  > >
> > >>> wrote:
> > >>>
> > >>> Hi, Everyone,
> > >>>
> > >>> The PMC of Apache Kafka is pleased to announce a new Kafka
> >  committer
> > >> Edoardo
> > >>> Comar.
> > >>>
> > >>> Edoardo has been a long time Kafka contributor since 2016.
> > >> His
> >  major
> > >>> contributions are the following.
> > >>>
> > >>> KIP-302: Enable Kafka clients to use all DNS resolved IP
> >  addresses
> > >>> KIP-277: Fine Grained ACL for CreateTopics API
> > >>> KIP-136: Add Listener name to SelectorMetrics tags
> > >>>
> > >>> Congratulations, Edoardo!
> > >>>
> > >>> Thanks,
> > >>>
> > >>> Jun (on behalf of the Apache Kafka PMC)
> > >>
> > >
> > >>>
> > >>
> > 
> > 
> > >>>
> > >>
> > >
> >
>


Hi / Requesting permission to create KIP

2020-03-20 Thread Kowshik Prakasam
Hey everyone,

I'm looking for permission to create a KIP under
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals .
My username is 'kprakasam'. If you are an admin, and you are reading this
email, could you please grant me access? Thank you.


Cheers,
Kowshik


Re: Hi / Requesting permission to create KIP

2020-03-20 Thread Kowshik Prakasam
Thanks a lot, Matthias!


Cheers,
Kowshik


On Fri, Mar 20, 2020 at 3:55 PM Matthias J. Sax  wrote:

> Done.
>
> On 3/20/20 3:52 PM, Kowshik Prakasam wrote:
> > Hey everyone,
> >
> > I'm looking for permission to create a KIP under
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> .
> > My username is 'kprakasam'. If you are an admin, and you are reading this
> > email, could you please grant me access? Thank you.
> >
> >
> > Cheers,
> > Kowshik
> >
>
>


Hi / Requesting permission to create KIP

2020-03-20 Thread Kowshik Prakasam
Hi,

I'm looking for permission to create a KIP under
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals .
My username is 'kprakasam'. Could you please grant me access?


Cheers,
Kowshik


Re: Hi / Requesting permission to create KIP

2020-03-20 Thread Kowshik Prakasam
Hi Guozhang,

Yes, it was granted earlier today. Thank you!


Cheers,
Kowshik


On Fri, Mar 20, 2020 at 8:59 PM Guozhang Wang  wrote:

> Hello Kowshik,
>
> I saw your username is granted the permission already on wiki.
>
> Cheers,
> Guozhang
>
>
> On Fri, Mar 20, 2020 at 5:13 PM Kowshik Prakasam 
> wrote:
>
> > Hi,
> >
> > I'm looking for permission to create a KIP under
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > .
> > My username is 'kprakasam'. Could you please grant me access?
> >
> >
> > Cheers,
> > Kowshik
> >
>
>
> --
> -- Guozhang
>


Re: [ANNOUNCE] New committer: Lucas Bradstreet

2023-02-16 Thread Kowshik Prakasam
Congratulations, Lucas!


Cheers,
Kowshik

On Thu, Feb 16, 2023, 2:07 PM Justine Olshan 
wrote:

> Congratulations Lucas!
>
> Thanks for your mentorship on some of my KIPs as well :)
>
> On Thu, Feb 16, 2023 at 1:56 PM Jun Rao  wrote:
>
> > Hi, Everyone,
> >
> > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> Lucas
> > Bradstreet.
> >
> > Lucas has been a long time Kafka contributor since Oct. 2018. He has been
> > extremely valuable for Kafka on both performance and correctness
> > improvements.
> >
> > The following are his performance related contributions.
> >
> > KAFKA-9820: validateMessagesAndAssignOffsetsCompressed allocates batch
> > iterator which is not used
> > KAFKA-9685: Solve Set concatenation perf issue in AclAuthorizer
> > KAFKA-9729: avoid readLock in authorizer ACL lookups
> > KAFKA-9039: Optimize ReplicaFetcher fetch path
> > KAFKA-8841: Reduce overhead of ReplicaManager.updateFollowerFetchState
> >
> > The following are his correctness related contributions.
> >
> > KAFKA-13194: LogCleaner may clean past highwatermark
> > KAFKA-10432: LeaderEpochCache is incorrectly recovered on segment
> recovery
> > for epoch 0
> > KAFKA-9137: Fix incorrect FetchSessionCache eviction logic
> >
> > Congratulations, Lucas!
> >
> > Thanks,
> >
> > Jun (on behalf of the Apache Kafka PMC)
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: David Arthur

2023-03-09 Thread Kowshik Prakasam
Congrats David!

On Thu, Mar 9, 2023 at 12:09 PM Lucas Brutschy
 wrote:

> Congratulations!
>
> On Thu, Mar 9, 2023 at 8:37 PM Manikumar 
> wrote:
> >
> > Congrats David!
> >
> >
> > On Fri, Mar 10, 2023 at 12:24 AM Josep Prat  >
> > wrote:
> > >
> > > Congrats David!
> > >
> > > ———
> > > Josep Prat
> > >
> > > Aiven Deutschland GmbH
> > >
> > > Alexanderufer 3-7, 10117 Berlin
> > >
> > > Amtsgericht Charlottenburg, HRB 209739 B
> > >
> > > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > >
> > > m: +491715557497
> > >
> > > w: aiven.io
> > >
> > > e: josep.p...@aiven.io
> > >
> > > On Thu, Mar 9, 2023, 19:22 Mickael Maison 
> > wrote:
> > >
> > > > Congratulations David!
> > > >
> > > > On Thu, Mar 9, 2023 at 7:20 PM Chris Egerton  >
> > > > wrote:
> > > > >
> > > > > Congrats David!
> > > > >
> > > > > On Thu, Mar 9, 2023 at 1:17 PM Bill Bejeck 
> wrote:
> > > > >
> > > > > > Congratulations David!
> > > > > >
> > > > > > On Thu, Mar 9, 2023 at 1:12 PM Jun Rao  >
> > > > wrote:
> > > > > >
> > > > > > > Hi, Everyone,
> > > > > > >
> > > > > > > David Arthur has been a Kafka committer since 2013. He has been
> > very
> > > > > > > instrumental to the community since becoming a committer. It's
> my
> > > > > > pleasure
> > > > > > > to announce that David is now a member of Kafka PMC.
> > > > > > >
> > > > > > > Congratulations David!
> > > > > > >
> > > > > > > Jun
> > > > > > > on behalf of Apache Kafka PMC
> > > > > > >
> > > > > >
> > > >
>


Re: [ANNOUNCE] New Kafka PMC Member: Chris Egerton

2023-03-09 Thread Kowshik Prakasam
Congrats Chris!

On Thu, Mar 9, 2023 at 1:33 PM Divij Vaidya  wrote:

> Congratulations Chris! I am in awe with the amount of effort you put in
> code reviews and helping out the community members. Very well deserved.
>
> --
> Divij Vaidya
>
>
>
> On Thu, Mar 9, 2023 at 9:49 PM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > So well deserved! Congratulations Chris!!!
> >
> > On Thu, 9 Mar 2023 at 22:09, Lucas Brutschy  > .invalid>
> > wrote:
> >
> > > Congratulations!
> > >
> > > On Thu, Mar 9, 2023 at 8:48 PM Roman Schmitz 
> > > wrote:
> > > >
> > > > Congratulations Chris!
> > > >
> > > > On Thu, 9 March 2023 at 20:33, Chia-Ping Tsai <
> > > chia7...@gmail.com> wrote:
> > > >
> > > > > Congratulations Chris!
> > > > >
> > > > > > Mickael Maison wrote on Mar 10, 2023 at 2:21 AM:
> > > > > >
> > > > > > Congratulations Chris!
> > > > > >
> > > > > >> On Thu, Mar 9, 2023 at 7:17 PM Bill Bejeck 
> > > wrote:
> > > > > >>
> > > > > >> Congratulations Chris!
> > > > > >>
> > > > > >>> On Thu, Mar 9, 2023 at 1:12 PM Jun Rao
>  > >
> > > > > wrote:
> > > > > >>>
> > > > > >>> Hi, Everyone,
> > > > > >>>
> > > > > >>> Chris Egerton has been a Kafka committer since July 2022. He
> has
> > > been
> > > > > very
> > > > > >>> instrumental to the community since becoming a committer. It's
> my
> > > > > pleasure
> > > > > >>> to announce that Chris is now a member of Kafka PMC.
> > > > > >>>
> > > > > >>> Congratulations Chris!
> > > > > >>>
> > > > > >>> Jun
> > > > > >>> on behalf of Apache Kafka PMC
> > > > > >>>
> > > > >
> > >
> >
>


Re: [ANNOUNCE] New PMC chair: Mickael Maison

2023-04-22 Thread Kowshik Prakasam
Thanks a lot Jun for your hard work and contributions over the years.
Congrats Mickael on your new role, well deserved! Wishing both of you, and
the rest of the community, the very best!


Cheers,
Kowshik

On Sat, Apr 22, 2023, 6:38 PM Satish Duggana 
wrote:

> Thanks a lot Jun for your contributions as PMC chair for all these years.
>
> Congratulations Mickael on your new role.
>
> On Sat, 22 Apr 2023 at 17:42, Manyanda Chitimbo
>  wrote:
> >
> > Congratulations Mickael.
> > And thanks Jun for the work over the years.
> >
> > On Fri, Apr 21, 2023 at 5:10 PM Jun Rao 
> wrote:
> >
> > > Hi, everyone,
> > >
> > > After more than 10 years, I am stepping down as the PMC chair of Apache
> > > Kafka. We now have a new chair Mickael Maison, who has been a PMC
> member
> > > since 2020. I plan to continue to contribute to Apache Kafka myself.
> > >
> > > Congratulations, Mickael!
> > >
> > > Jun
> > >
> >
> >
> > --
> > Manyanda Chitimbo.
>


Re: [DISCUSS] KIP-937 Improve Message Timestamp Validation

2023-06-09 Thread Kowshik Prakasam
Hi all,

Please ignore this message. I'm just bumping this thread so that it shows
up in Gaurav's inbox. He wanted to send out review comments for this KIP.


Cheers,
Kowshik


On Wed, Jun 7, 2023 at 1:39 PM Beyene, Mehari 
wrote:

> > Although it's more verbose, splitting the configuration into explicit
> ‘past’ and ‘future’ would provide the appropriate tradeoff between
> constraint and flexibility, right?
>
> +1
>
>
>
>
>


Re: [VOTE] KIP - 405: Kafka Tiered Storage.

2021-02-15 Thread Kowshik Prakasam
+1 (non-binding). Thanks for the excellent KIP!


Cheers,
Kowshik





On Mon, Feb 15, 2021 at 2:50 AM Manikumar  wrote:

> Hi Satish,
>
> Thanks for driving this KIP. I’m sure there will be a few tweaks as we
> implement the KIP, but I
> think KIP is in good shape.
>
> I'm  +1 (binding).
>
> Thanks,
> Manikumar
>
> On Thu, Feb 11, 2021 at 10:57 PM Harsha Chintalapani 
> wrote:
>
> > +1 (binding).
> >
> > Thanks,
> > Harsha
> >
> > On Thu, Feb 11, 2021 at 6:21 AM Satish Duggana 
> wrote:
> >
> > > Hi All,
> > > We would like to start voting on “KIP-405: Kafka Tiered Storage”.
> > >
> > > For reference here is the KIP:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage
> > >
> > > Thanks,
> > > Satish.
> > >
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Chia-Ping Tsai

2021-03-17 Thread Kowshik Prakasam
Congrats Chia-Ping!

On Tue, Mar 16, 2021, 6:16 PM Dongjin Lee  wrote:

> 
>
> Best,
> Dongjin
>
> On Tue, Mar 16, 2021 at 2:20 PM Konstantine Karantasis
>  wrote:
>
> > Congratulations Chia-Ping!
> >
> > Konstantine
> >
> > On Mon, Mar 15, 2021 at 4:31 AM Rajini Sivaram 
> > wrote:
> >
> > > Congratulations, Chia-Ping, well deserved!
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Mon, Mar 15, 2021 at 9:59 AM Bruno Cadonna
>  > >
> > > wrote:
> > >
> > > > Congrats, Chia-Ping!
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > > On 15.03.21 09:22, David Jacot wrote:
> > > > > Congrats Chia-Ping! Well deserved.
> > > > >
> > > > > On Mon, Mar 15, 2021 at 5:39 AM Satish Duggana <
> > > satish.dugg...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > >> Congrats Chia-Ping!
> > > > >>
> > > > >> On Sat, 13 Mar 2021 at 13:34, Tom Bentley 
> > > wrote:
> > > > >>
> > > > >>> Congratulations Chia-Ping!
> > > > >>>
> > > > >>> On Sat, Mar 13, 2021 at 7:31 AM Kamal Chandraprakash <
> > > > >>> kamal.chandraprak...@gmail.com> wrote:
> > > > >>>
> > > >  Congratulations, Chia-Ping!!
> > > > 
> > > >  On Sat, Mar 13, 2021 at 11:38 AM Ismael Juma  >
> > > > >> wrote:
> > > > 
> > > > > Congratulations Chia-Ping! Well deserved.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Fri, Mar 12, 2021, 11:14 AM Jun Rao
>  > >
> > > > >>> wrote:
> > > > >
> > > > >> Hi, Everyone,
> > > > >>
> > > > >> Chia-Ping Tsai has been a Kafka committer since Oct. 15,
> 2020.
> > He
> > > > >>> has
> > > > > been
> > > > >> very instrumental to the community since becoming a committer.
> > > It's
> > > > >>> my
> > > > >> pleasure to announce that Chia-Ping  is now a member of Kafka
> > PMC.
> > > > >>
> > > > >> Congratulations Chia-Ping!
> > > > >>
> > > > >> Jun
> > > > >> on behalf of Apache Kafka PMC
> > > > >>
> > > > >
> > > > 
> > > > >>>
> > > > >>
> > > > >
> > > >
> > >
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck:
> speakerdeck.com/dongjin
> *
>


Re: [ANNOUNCE] New Kafka PMC Member: Randall Hauch

2021-04-17 Thread Kowshik Prakasam
Congrats Randall!


Cheers,
Kowshik

On Sat, Apr 17, 2021, 5:28 AM Rankesh Kumar 
wrote:

> Congratulations, Randall!
> Best regards,
> Rankesh Kumar
> Partner Solutions Engineer
> +91 (701)913-0147
> Follow us:  Blog • Slack • Twitter • YouTube
>
> > On 17-Apr-2021, at 1:41 PM, Tom Bentley  wrote:
> >
> > Congratulations Randall!
> >
> >
> >
> > On Sat, Apr 17, 2021 at 7:36 AM feyman2009  >
> > wrote:
> >
> >> Congratulations Randall!
> >>
> >> Haoran
> >> --
> >> From: Luke Chen 
> >> Sent: Saturday, April 17, 2021 12:05
> >> To: Kafka Users 
> >> Cc: dev 
> >> Subject: Re: [ANNOUNCE] New Kafka PMC Member: Randall Hauch
> >>
> >> Congratulations Randall!
> >>
> >> Luke
> >>
> >> Bill Bejeck  wrote on Sat, Apr 17, 2021 at 11:33 AM:
> >>
> >>> Congratulations Randall!
> >>>
> >>> -Bill
> >>>
> >>> On Fri, Apr 16, 2021 at 11:10 PM lobo xu 
> wrote:
> >>>
>  Congrats Randall
> 
> >>>
> >>
> >>
>
>


Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-17 Thread Kowshik Prakasam
Congrats Bill!


Cheers,
Kowshik

On Mon, Apr 12, 2021, 11:15 AM Randall Hauch  wrote:

> Congratulations, Bill!
>
> On Mon, Apr 12, 2021 at 11:02 AM Guozhang Wang  wrote:
>
> > Congratulations Bill !
> >
> > Guozhang
> >
> > On Wed, Apr 7, 2021 at 6:16 PM Matthias J. Sax  wrote:
> >
> > > Hi,
> > >
> > > It's my pleasure to announce that Bill Bejeck in now a member of the
> > > Kafka PMC.
> > >
> > > Bill has been a Kafka committer since Feb 2019. He has remained
> > > active in the community since becoming a committer.
> > >
> > >
> > >
> > > Congratulations Bill!
> > >
> > >  -Matthias, on behalf of Apache Kafka PMC
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-10-13 Thread Kowshik Prakasam
Hi David,

Thanks for the great KIP! It's good to see KIP-584 is starting to get used.

A few comments below.

7001. In the UpdateFeaturesRequest definition, there is a newly introduced
ForceDowngrade parameter. There is also an existing AllowDowngrade
parameter. My understanding is that the AllowDowngrade flag could be set in
a FeatureUpdate whenever a downgrade is attempted via the
Admin.updateFeatures API. Then, the ForceDowngrade flag could be set if we
would like to ask the controller to proceed with the downgrade even if it
was deemed unsafe. Could we document the distinction between the two flags
in the KIP? If we can just keep one of the flags, that would simplify
things. But, as it stands, it seems like we will need both flags. To avoid
confusion, does it make sense to deprecate the dual boolean flags and
instead change the updateFeatures API to use an enum parameter called
DOWNGRADE_REQUEST_TYPE with 3 values: NONE (default value meaning no
downgrade is allowed), DOWNGRADE_SAFE (ask the controller to downgrade if
the operation is safe) and DOWNGRADE_UNSAFE (ask the controller to
downgrade disregarding safety)?
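To make the proposal concrete, here is a minimal Java sketch of the
hypothetical enum (the name, values and helper methods are illustrative
only, not an agreed-upon API):

public enum DowngradeRequestType {
    NONE,             // default: no downgrade is allowed
    DOWNGRADE_SAFE,   // ask the controller to downgrade only if it is safe
    DOWNGRADE_UNSAFE; // ask the controller to downgrade disregarding safety

    // Illustrative helper: may the controller attempt a downgrade at all?
    public boolean downgradeAllowed() {
        return this != NONE;
    }

    // Illustrative helper: may the controller bypass its safety checks?
    public boolean bypassSafetyChecks() {
        return this == DOWNGRADE_UNSAFE;
    }
}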

7002. The KIP introduces a ForceDowngrade flag into the
UpdateFeaturesRequest definition. But the KIP also introduces a generic
--force CLI flag in the kafka-features.sh tool. Should we instead call the
CLI flag --force-downgrade so that its intent is more specific?

7003. The kafka-features.sh tool does not yet implement the
"Advanced CLI usage" explained in KIP-584. The associated Jira
is KAFKA-10621. For the needs of this KIP, do you need the advanced CLI or
would the basic version work?

7004. A KIP-584 feature flag's max version is typically incremented to
indicate a breaking change. It usually means that version-level downgrades
would break something. Could this KIP explain why it is useful to support a
lossy downgrade for the metadata.version feature flag? i.e., what are some
situations in which a lossy downgrade is useful?

7005. Regarding downgrade, I read the `enum MetadataVersions` type
introduced in this KIP that captures the rules for backwards compatibility.
For my understanding, is this enum an implementation detail on the
controller, specific to this feature flag? i.e., when processing the
UpdateFeaturesRequest, would the controller introduce flag-specific logic
(based on the enum values) to decide whether a downgrade is allowed?
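For illustration, here is a rough Java sketch of what such flag-specific
controller logic might look like (entirely hypothetical; the actual shape
of the enum is up to the KIP):

public enum MetadataVersions {
    V1((short) 1, true),
    V2((short) 2, true),
    V3((short) 3, false); // e.g. introduces records older code cannot parse

    private final short version;
    private final boolean isBackwardsCompatible;

    MetadataVersions(short version, boolean isBackwardsCompatible) {
        this.version = version;
        this.isBackwardsCompatible = isBackwardsCompatible;
    }

    // Illustrative rule: a downgrade is safe only if every version being
    // stepped back over is backwards compatible.
    public boolean isSafeDowngradeTo(MetadataVersions target) {
        for (MetadataVersions v : values()) {
            if (v.version > target.version && v.version <= this.version
                    && !v.isBackwardsCompatible) {
                return false;
            }
        }
        return true;
    }
}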


Cheers,
Kowshik



On Tue, Oct 12, 2021 at 2:13 PM David Arthur  wrote:

> Jun and Colin, thanks very much for the comments. See below
>
> 10. Colin, I agree that ApiVersionsRequest|Response is the most
> straightforward approach here
>
> 11. This brings up an interesting point. Once a UpdateFeature request is
> finished processing, a subsequent "describe features" (ApiVersionsRequest)
> made by the admin client will go to a random broker, and so might not
> reflect the feature update. We could add blocking behavior to the admin
> client so it would polls ApiVersions for some time until the expected
> version is reflected on that broker. However, this does not mean _every_
> broker has finished upgrading/downgrading -- just the one handling that
> request. Maybe we could have the admin client poll all the brokers until
> the expected version is seen.
>
> If at a later time a broker comes online that needs to process an
> upgrade/downgrade, I don't think there will be a problem since the broker
> will be fenced until it catches up on latest metadata (which will include
> the feature level).
>
> 12. Yes, we will need changes to the admin client for "updateFeatures".
> I'll update the KIP to reflect this.
>
> 13. I'll expand the paragraph on the initial "metadata.version" into its
> own section and add some detail.
>
> 14/15. As mentioned, I think we can avoid this and rely on
> brokers/controllers generating their own snapshots. We will probably want
> some kind of manual recovery mode where we can force load a snapshot, but
> that is out of scope here (I think..)
>
> 16. Automatic upgrades should be feasible, but I think we will want to
> start with manual upgrades (while we work out the design and fix bugs).
> Following the design detailed in this KIP, we could have a controller
> component that (as you suggest) automatically finalizes the feature to the
> max of all broker supported versions. I can include a section on this or we
> could defer to a future KIP. WDYT?
>
> -David
>
>
> On Tue, Oct 12, 2021 at 1:57 PM Colin McCabe  wrote:
>
> > On Thu, Oct 7, 2021, at 17:19, Jun Rao wrote:
> > > Hi, David,
> > >
> > > Thanks for the KIP. A few comments below.
> > >
> > > 10. It would be useful to describe how the controller node determines
> the
> > > RPC version used to communicate to other controller nodes. There seems
> to
> > > be a bootstrap problem. A controller node can't read the log and
> > > therefore the feature level until a quorum leader is elected. But
> leader
> > > election requires an RPC.
> > >
> >
> > Hi Jun,
> >
> > I agree t

Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-10-21 Thread Kowshik Prakasam
Hi David,

Thanks for the explanations. A few comments below.

7001. Sounds good.

7002. Sounds good. The --force-downgrade-all option can be used for the
basic CLI while the --force-downgrade option can be used for the advanced
CLI.

7003. I like your suggestion on separate sub-commands, I agree it's more
convenient to use.

7004/7005. Your explanation sounds good to me. Regarding the min finalized
version level, this becomes useful for feature version deprecation as
explained here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
. This is not implemented yet, and the work item is tracked in KAFKA-10622.
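As a purely illustrative sketch (again, this is not implemented yet; the
class and method names below are made up), a broker-side guard built on the
min finalized version level could look roughly like:

public class FeatureVersionGuard {
    // Reject use of a feature version below the finalized minimum version
    // level, which is how KIP-584 would express deprecation.
    static void checkFeatureVersion(String feature, short usedVersion,
                                    short minVersionLevel) {
        if (usedVersion < minVersionLevel) {
            throw new IllegalArgumentException(
                "Version " + usedVersion + " of feature '" + feature +
                "' is deprecated; the minimum finalized version level is " +
                minVersionLevel + ".");
        }
    }

    public static void main(String[] args) {
        checkFeatureVersion("group_coordinator", (short) 2, (short) 1); // passes
        checkFeatureVersion("group_coordinator", (short) 0, (short) 1); // throws
    }
}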


Cheers,
Kowshik



On Fri, Oct 15, 2021 at 11:38 AM David Arthur  wrote:

> >
> > How does the active controller know what is a valid `metadata.version`
> > to persist? Could the active controller learn this from the
> > ApiVersions response from all of the inactive controllers?
>
>
> The active controller should probably validate whatever value is read from
> meta.properties against its own range of supported versions (statically
> defined in code). If the operator sets a version unsupported by the active
> controller, that sounds like a configuration error and we should shutdown.
> I'm not sure what other validation we could do here without introducing
> ordering dependencies (e.g., must have quorum before initializing the
> version)
>
> For example, let's say that we have a cluster that only has remote
> > controllers, what are the valid metadata.version in that case?
>
>
> I believe it would be the intersection of supported versions across all
> brokers and controllers. This does raise a concern with upgrading the
> metadata.version in general. Currently, the active controller only
> validates the target version based on the brokers' support versions. We
> will need to include controllers supported versions here as well (using
> ApiVersions, probably).
>
> On Fri, Oct 15, 2021 at 1:44 PM José Armando García Sancio
>  wrote:
>
> > On Fri, Oct 15, 2021 at 7:24 AM David Arthur  wrote:
> > > Hmm. So I think you are proposing the following flow:
> > > > 1. Cluster metadata partition replicas establish a quorum using
> > > > ApiVersions and the KRaft protocol.
> > > > 2. Inactive controllers send a registration RPC to the active
> > controller.
> > > > 3. The active controller persists this information to the metadata
> log.
> > >
> > >
> > > What happens if the inactive controllers send a metadata.version range
> > > > that is not compatible with the metadata.version set for the cluster?
> > >
> > >
> > > As we discussed offline, we don't need the explicit registration step.
> > Once
> > > a controller has joined the quorum, it will learn about the finalized
> > > "metadata.version" level once it reads that record.
> >
> > How does the active controller know what is a valid `metadata.version`
> > to persist? Could the active controller learn this from the
> > ApiVersions response from all of the inactive controllers? For
> > example, let's say that we have a cluster that only has remote
> > controllers, what are the valid metadata.version in that case?
> >
> > > If it encounters a
> > > version it can't support it should probably shutdown since it might not
> > be
> > > able to process any more records.
> >
> > I think that makes sense. If a controller cannot replay the metadata
> > log, it might as well not be part of the quorum. If the cluster
> > continues in this state it won't guarantee availability based on the
> > replication factor.
> >
> > Thanks
> > --
> > -Jose
> >
>
>
> --
> David Arthur
>


Re: [VOTE] KIP-778 KRaft upgrades

2021-12-15 Thread Kowshik Prakasam
Hi David,

Excellent work. Looking forward to this KIP.
+1 (non-binding).


Cheers,
Kowshik


On Wed, Dec 15, 2021 at 10:40 AM David Arthur
 wrote:

> Hey all, I realized I omitted a small change to the public APIs regarding
> finalized feature versions. I've updated the KIP with these changes. This
> does not conceptually change anything in the KIP, so I think we can just
> continue with the vote.
>
> Thanks!
> David
>
> On Sun, Dec 12, 2021 at 1:34 AM Guozhang Wang  wrote:
>
> > Thanks David! +1.
> >
> > Guozhang
> >
> > On Fri, Dec 10, 2021 at 7:12 PM deng ziming 
> > wrote:
> >
> > > Hi, David
> > >
> > > Looking forwarding to this feature
> > >
> > > +1 (non-binding)
> > >
> > > Thanks!
> > >
> > > Ziming Deng
> > >
> > > > On Dec 11, 2021, at 4:49 AM, David Arthur 
> > > wrote:
> > > >
> > > > Hey everyone, I'd like to start a vote for KIP-778 which adds support
> > for
> > > > KRaft to KRaft upgrades.
> > > >
> > > > Notably in this KIP is the first use case of KIP-584 feature flags.
> As
> > > > such, there are some addendums to KIP-584 included.
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-778%3A+KRaft+Upgrades
> > > >
> > > > Thanks!
> > > > David
> > >
> > >
> >
> > --
> > -- Guozhang
> >
>
>
> --
> -David
>


Re: [ANNOUNCE] New Kafka PMC member: David Jacot

2021-12-18 Thread Kowshik Prakasam
Congrats David! Very well deserved.


Cheers,
Kowshik

On Fri, Dec 17, 2021 at 9:06 PM Kirk True  wrote:

> Congrats!
>
> On Fri, Dec 17, 2021, at 8:55 PM, Luke Chen wrote:
> > Congrats, David!
> > Well deserved.
> >
> > Luke
> >
> > > deng ziming  wrote on Sat, Dec 18, 2021 at 7:47 AM:
> >
> > > Congrats David!
> > >
> > > --
> > > Ziming Deng
> > >
> > > > On Dec 18, 2021, at 7:08 AM, Gwen Shapira  wrote:
> > > >
> > > > Hi everyone,
> > > >
> > > > David Jacot has been an Apache Kafka committer since Oct 2020 and has
> > > been contributing to the community consistently this entire time -
> > > especially notable the fact that he reviewed around 150 PRs in the last
> > > year. It is my pleasure to announce that David agreed to join the
> Kafka PMC.
> > > >
> > > > Congratulations, David!
> > > >
> > > > Gwen Shapira, on behalf of Apache Kafka PMC
> > >
> > >
> >
>


Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-06 Thread Kowshik Prakasam
Hey Bill,

For KIP-584, we are in the
process of reviewing/merging the write path PR into AK trunk:
https://github.com/apache/kafka/pull/9001. As far as the KIP goes, this PR
is a major milestone. The PR merge will hopefully be done before EOD
tomorrow, in time for the feature freeze. Beyond this PR, a couple of
things are left to be completed for this KIP: (1) tooling support and (2)
implementing support for feature version deprecation in the broker. In
particular, (1) is important for this KIP and the code changes are external
to the broker (since it is a separate tool we intend to build). As of now,
we won't be able to merge the tooling changes before the feature freeze
date. Would it be OK to merge the tooling changes before the code freeze on
10/22? The tooling requirements are explained here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport

I would love to hear thoughts from Boyang and Jun as well.


Thanks,
Kowshik



On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck  wrote:

> Hi John,
>
> I've updated the list of expected KIPs for 2.7.0 with KIP-478.
>
> Thanks,
> Bill
>
> On Mon, Oct 5, 2020 at 11:26 AM John Roesler  wrote:
>
> > Hi Bill,
> >
> > Sorry about this, but I've just noticed that KIP-478 is
> > missing from the list. The url is:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API
> >
> > The KIP was accepted a long time ago, and the implementation
> > has been trickling in since 2.6 branch cut. However, most of
> > the public API implementation is done now, so I think at
> > this point, we can call it "released in 2.7.0". I'll make
> > sure it's done by feature freeze.
> >
> > Thanks,
> > -John
> >
> > On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
> > > All,
> > >
> > > With the KIP acceptance deadline passing yesterday, I've updated the
> > > planned KIP content section of the 2.7.0 release plan
> > > <
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > >
> > > .
> > >
> > > Removed proposed KIPs for 2.7.0 not getting approval
> > >
> > >1. KIP-653
> > ><
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> > >
> > >2. KIP-608
> > ><
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer
> > >
> > >3. KIP-508
> > ><
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
> > >
> > >
> > > KIPs added
> > >
> > >1. KIP-671
> > ><
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler
> > >
> > >
> > >
> > > Please let me know if I've missed anything.
> > >
> > > Thanks,
> > > Bill
> > >
> > > On Thu, Sep 24, 2020 at 1:47 PM Bill Bejeck  wrote:
> > >
> > > > Hi All,
> > > >
> > > > Just a reminder that the KIP freeze is next Wednesday, September
> 30th.
> > > > Any KIP aiming to go in the 2.7.0 release needs to be accepted by
> this
> > date.
> > > >
> > > > Thanks,
> > > > BIll
> > > >
> > > > On Tue, Sep 22, 2020 at 12:11 PM Bill Bejeck 
> > wrote:
> > > >
> > > > > Boyan,
> > > > >
> > > > > Done. Thanks for the heads up.
> > > > >
> > > > > -Bill
> > > > >
> > > > > On Mon, Sep 21, 2020 at 6:36 PM Boyang Chen <
> > reluctanthero...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hey Bill,
> > > > > >
> > > > > > unfortunately KIP-590 will not be in 2.7 release, could you move
> > it to
> > > > > > postponed KIPs?
> > > > > >
> > > > > > Best,
> > > > > > Boyang
> > > > > >
> > > > > > On Thu, Sep 10, 2020 at 2:41 PM Bill Bejeck 
> > wrote:
> > > > > >
> > > > > > > Hi Gary,
> > > > > > >
> > > > > > > It's been added.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Bill
> > > > > > >
> > > > > > > On Thu, Sep 10, 2020 at 4:14 PM Gary Russell <
> > gruss...@vmware.com>
> > > > > > wrote:
> > > > > > > > Can someone add a link to the release plan page [1] to the
> > Future
> > > > > > > Releases
> > > > > > > > page [2]?
> > > > > > > >
> > > > > > > > I have the latter bookmarked.
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > [1]:
> > > > > > > >
> > > > > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > > > > > > > [2]:
> > > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> > > > > > > > 
> > > > > > > > From: Bill Bejeck 
> > > > > > > > Sent: Wednesday, September 9, 2020 4:35 PM
> > > > > > > > To: dev 
> > > > > > > > Subject: Re: [DISCUSS] Apache Kafka 2.7.0 release
> > > > > > > >
> > > > > > > > Hi Dongjin,
> > > > > > > >
> > > > > > > > I've moved both KIPs to the release plan.
> > > > > > > >
> > > > > > > > Keep in mind the cutoff for KIP acceptance is Se

Re: [VOTE] KIP-584: Versioning scheme for features

2020-10-06 Thread Kowshik Prakasam
Hi Jun,

I have added the following details to the KIP-584 write-up:

1. Deployment, IBP deprecation and avoidance of double rolls. This section
talks about the various phases of work that would be required to use this
KIP to eventually avoid Broker double rolls in the cluster (whenever IBP
values are advanced). Link to section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Deployment,IBPdeprecationandavoidanceofdoublerolls
.

2. Feature version deprecation. This section explains the idea for feature
version deprecation (using highest supported feature min version) which you
had proposed during code review:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
.

Please let me know if you have any questions.


Cheers,
Kowshik


On Tue, Sep 29, 2020 at 11:07 AM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the update. Regarding enabling a single rolling restart in the
> future, could we sketch out a bit how this will work by treating IBP as a
> feature? For example, IBP currently uses the release version and this KIP
> uses an integer for versions. How do we bridge the gap between the two?
> Does min.version still make sense for IBP as a feature?
>
> Thanks,
>
> Jun
>
> On Fri, Sep 25, 2020 at 5:57 PM Kowshik Prakasam 
> wrote:
>
> > Hi Colin,
> >
> > Thanks for the feedback. Those are very good points. I have made the
> > following changes to the KIP as you had suggested:
> > 1. Included the `timeoutMs` field in the `UpdateFeaturesRequest` schema.
> > The initial implementation won't be making use of the field, but we can
> > always use it in the future as the need arises.
> > 2. Modified the `FinalizedFeaturesEpoch` field in `ApiVersionsResponse`
> to
> > use int64. This is to avoid overflow problems in the future once ZK is
> > gone.
> >
> > I have also incorporated these changes into the versioning write path PR
> > that is currently under review:
> https://github.com/apache/kafka/pull/9001.
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> >
> > On Fri, Sep 25, 2020 at 4:57 PM Kowshik Prakasam  >
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the feedback. It's a very good point. I have now modified
> the
> > > KIP-584 write-up "goals" section a bit. It now mentions one of the
> goals
> > as
> > > enabling rolling upgrades using a single restart (instead of 2). Also I
> > > have removed the text explicitly aiming for deprecation of IBP. Note
> that
> > > previously under "Potential features in Kafka" the IBP was mentioned
> > under
> > > point (4) as a possible coarse-grained feature. Hopefully, now the 2
> > > sections of the KIP align with each other well.
> > >
> > >
> > > Cheers,
> > > Kowshik
> > >
> > >
> > > On Fri, Sep 25, 2020 at 2:03 PM Colin McCabe 
> wrote:
> > >
> > >> On Tue, Sep 22, 2020, at 00:43, Kowshik Prakasam wrote:
> > >> > Hi all,
> > >> >
> > >> > I wanted to let you know that I have made the following changes to
> the
> > >> > KIP-584 write up. The purpose is to ensure the design is correct
> for a
> > >> few
> > >> > things which came up during implementation:
> > >> >
> > >>
> > >> Hi Kowshik,
> > >>
> > >> Thanks for the updates.
> > >>
> > >> >
> > >> > 1. Per FeatureUpdate error code: The UPDATE_FEATURES controller API
> is
> > >> no
> > >> > longer transactional. Going forward, we allow for individual
> > >> FeatureUpdate
> > >> > to succeed/fail in the request. As a result, the response schema now
> > >> > contains an error code per FeatureUpdate as well as a top-level
> error
> > >> code.
> > >> > Overall this is a better design because it better represents the
> > nature
> > >> of
> > >> > the API: each FeatureUpdate in the request is independent of the
> other
> > >> > updates, and the controller can process/apply these independently to
> > ZK.
> > >> > When an UPDATE_FEATURES request fails, this new design provides
> better
> > >> > clarity to the caller on which FeatureUpdate could not be applied
> (via
> > >> the
> > >> > individual error codes). In the previous design,

Re: [VOTE] KIP-584: Versioning scheme for features

2020-10-08 Thread Kowshik Prakasam
Hi Jun,

This is a very good point. I have updated the feature version deprecation
section mentioning the same:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
.

Thank you for the suggestion.


Cheers,
Kowshik


On Tue, Oct 6, 2020 at 5:30 PM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the follow up. Both look good to me.
>
> For 2, it would be useful to also add that an admin should make sure that
> no clients are using a deprecated feature version (e.g. using the client
> version metric) before deploying a release that deprecates it.
>
> Thanks,
>
> Jun
>
> On Tue, Oct 6, 2020 at 3:46 PM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > I have added the following details in the KIP-584 write up:
> >
> > 1. Deployment, IBP deprecation and avoidance of double rolls. This
> section
> > talks about the various phases of work that would be required to use this
> > KIP to eventually avoid Broker double rolls in the cluster (whenever IBP
> > values are advanced). Link to section:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Deployment,IBPdeprecationandavoidanceofdoublerolls
> > .
> >
> > 2. Feature version deprecation. This section explains the idea for
> feature
> > version deprecation (using highest supported feature min version) which
> you
> > had proposed during code review:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
> > .
> >
> > Please let me know if you have any questions.
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> > On Tue, Sep 29, 2020 at 11:07 AM Jun Rao  wrote:
> >
> > > Hi, Kowshik,
> > >
> > > Thanks for the update. Regarding enabling a single rolling restart in
> the
> > > future, could we sketch out a bit how this will work by treating IBP
> as a
> > > feature? For example, IBP currently uses the release version and this
> KIP
> > > uses an integer for versions. How do we bridge the gap between the two?
> > > Does min.version still make sense for IBP as a feature?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Fri, Sep 25, 2020 at 5:57 PM Kowshik Prakasam <
> kpraka...@confluent.io
> > >
> > > wrote:
> > >
> > > > Hi Colin,
> > > >
> > > > Thanks for the feedback. Those are very good points. I have made the
> > > > following changes to the KIP as you had suggested:
> > > > 1. Included the `timeoutMs` field in the `UpdateFeaturesRequest`
> > schema.
> > > > The initial implementation won't be making use of the field, but we
> can
> > > > always use it in the future as the need arises.
> > > > 2. Modified the `FinalizedFeaturesEpoch` field in
> `ApiVersionsResponse`
> > > to
> > > > use int64. This is to avoid overflow problems in the future once ZK
> is
> > > > gone.
> > > >
> > > > I have also incorporated these changes into the versioning write path
> > PR
> > > > that is currently under review:
> > > https://github.com/apache/kafka/pull/9001.
> > > >
> > > >
> > > > Cheers,
> > > > Kowshik
> > > >
> > > >
> > > >
> > > > On Fri, Sep 25, 2020 at 4:57 PM Kowshik Prakasam <
> > kpraka...@confluent.io
> > > >
> > > > wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > Thanks for the feedback. It's a very good point. I have now
> modified
> > > the
> > > > > KIP-584 write-up "goals" section a bit. It now mentions one of the
> > > goals
> > > > as
> > > > > enabling rolling upgrades using a single restart (instead of 2).
> > Also I
> > > > > have removed the text explicitly aiming for deprecation of IBP.
> Note
> > > that
> > > > > previously under "Potential features in Kafka" the IBP was
> mentioned
> > > > under
> > > > > point (4) as a possible coarse-grained feature. Hopefully, now the
> 2
> > > > > sections of the KIP align with each other well.
> > > > >
> > > > >
> > > > > Cheers,
> &

Re: [VOTE] KIP-584: Versioning scheme for features

2020-10-13 Thread Kowshik Prakasam
Hi all,

I wanted to let you know that I have made the following minor changes to
the `kafka-features` CLI tool description in the KIP-584 write-up. The
purpose is to ensure the design is correct for a few things which came up
during implementation:

1. The CLI tool now produces tab-formatted output instead of JSON (see the
sketch after this list). This aligns with the format produced by other
admin CLI tools of Kafka, e.g. `kafka-topics`.
2. Whenever feature updates are performed, the output of the CLI tool shows
the result of each feature update that was applied.
3. The CLI tool accepts an optional argument `--dry-run` which lets the
user preview the feature updates before applying them.
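To give a feel for the describe side, here is a minimal Java sketch that
prints a similar tab-formatted view using the Admin API from this KIP. It
assumes the API shape described in the KIP (describeFeatures returning the
finalized feature version ranges); the column layout is illustrative and
not the tool's exact output:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.FeatureMetadata;
import java.util.Properties;

public class DescribeFeaturesExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Fetch the cluster's finalized features from any broker.
            FeatureMetadata metadata =
                admin.describeFeatures().featureMetadata().get();
            System.out.println("Feature\tMinVersionLevel\tMaxVersionLevel");
            metadata.finalizedFeatures().forEach((feature, range) ->
                System.out.println(feature + "\t" + range.minVersionLevel()
                    + "\t" + range.maxVersionLevel()));
        }
    }
}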

The following section of the KIP has been updated with the above changes:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport

Please let me know if you have any questions.


Cheers,
Kowshik


On Thu, Oct 8, 2020 at 1:12 AM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> This is a very good point. I have updated the feature version deprecation
> section mentioning the same:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
> .
>
> Thank you for the suggestion.
>
>
> Cheers,
> Kowshik
>
>
> On Tue, Oct 6, 2020 at 5:30 PM Jun Rao  wrote:
>
>> Hi, Kowshik,
>>
>> Thanks for the follow up. Both look good to me.
>>
>> For 2, it would be useful to also add that an admin should make sure that
>> no clients are using a deprecated feature version (e.g. using the client
>> version metric) before deploying a release that deprecates it.
>>
>> Thanks,
>>
>> Jun
>>
>> On Tue, Oct 6, 2020 at 3:46 PM Kowshik Prakasam 
>> wrote:
>>
>> > Hi Jun,
>> >
>> > I have added the following details in the KIP-584 write up:
>> >
>> > 1. Deployment, IBP deprecation and avoidance of double rolls. This
>> section
>> > talks about the various phases of work that would be required to use
>> this
>> > KIP to eventually avoid Broker double rolls in the cluster (whenever IBP
>> > values are advanced). Link to section:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Deployment,IBPdeprecationandavoidanceofdoublerolls
>> > .
>> >
>> > 2. Feature version deprecation. This section explains the idea for
>> feature
>> > version deprecation (using highest supported feature min version) which
>> you
>> > had proposed during code review:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
>> > .
>> >
>> > Please let me know if you have any questions.
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>> >
>> > On Tue, Sep 29, 2020 at 11:07 AM Jun Rao  wrote:
>> >
>> > > Hi, Kowshik,
>> > >
>> > > Thanks for the update. Regarding enabling a single rolling restart in
>> the
>> > > future, could we sketch out a bit how this will work by treating IBP
>> as a
>> > > feature? For example, IBP currently uses the release version and this
>> KIP
>> > > uses an integer for versions. How do we bridge the gap between the
>> two?
>> > > Does min.version still make sense for IBP as a feature?
>> > >
>> > > Thanks,
>> > >
>> > > Jun
>> > >
>> > > On Fri, Sep 25, 2020 at 5:57 PM Kowshik Prakasam <
>> kpraka...@confluent.io
>> > >
>> > > wrote:
>> > >
>> > > > Hi Colin,
>> > > >
>> > > > Thanks for the feedback. Those are very good points. I have made the
>> > > > following changes to the KIP as you had suggested:
>> > > > 1. Included the `timeoutMs` field in the `UpdateFeaturesRequest`
>> > schema.
>> > > > The initial implementation won't be making use of the field, but we
>> can
>> > > > always use it in the future as the need arises.
>> > > > 2. Modified the `FinalizedFeaturesEpoch` field in
>> `ApiVersionsResponse`
>> > > to
>> > > > use int64. This is to avoid overflow problems in the future once ZK
>> is
>> > > > gone.
>> > > >
>> > > > I have also incorporated 

Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Kowshik Prakasam
Congrats David!


Cheers,
Kowshik

On Fri, Oct 16, 2020, 9:21 AM Mickael Maison 
wrote:

> Congratulations David!
>
> On Fri, Oct 16, 2020 at 6:05 PM Bill Bejeck  wrote:
> >
> > Congrats David! Well deserved.
> >
> > -Bill
> >
> > On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira  wrote:
> >
> > > The PMC for Apache Kafka has invited David Jacot as a committer, and
> > > we are excited to say that he accepted!
> > >
> > > David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > > and has been very active since August 2019. He contributed several
> > > notable KIPs:
> > >
> > > KIP-511: Collect and Expose Client Name and Version in Brokers
> > > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies:
> > > KIP-570: Add leader epoch in StopReplicaReques
> > > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > > Operations
> > > KIP-496 Added an API for the deletion of consumer offsets
> > >
> > > In addition, David Jacot reviewed many community contributions and
> > > showed great technical and architectural taste. Great reviews are hard
> > > and often thankless work - but this is what makes Kafka a great
> > > product and helps us grow our community.
> > >
> > > Thanks for all the contributions, David! Looking forward to more
> > > collaboration in the Apache Kafka community.
> > >
> > > --
> > > Gwen Shapira
> > >
>


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Kowshik Prakasam
Congrats!


Cheers,
Kowshik


On Mon, Oct 19, 2020 at 10:40 AM Bruno Cadonna  wrote:

> Congrats!
>
> Best,
> Bruno
>
> On 19.10.20 19:39, Sophie Blee-Goldman wrote:
> > Congrats!
> >
> > On Mon, Oct 19, 2020 at 10:32 AM Bill Bejeck  wrote:
> >
> >> Congratulations Chia-Ping!
> >>
> >> -Bill
> >>
> >> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax 
> wrote:
> >>
> >>> Congrats Chia-Ping!
> >>>
> >>> On 10/19/20 10:24 AM, Guozhang Wang wrote:
>  Hello all,
> 
>  I'm happy to announce that Chia-Ping Tsai has accepted his invitation
> >> to
>  become an Apache Kafka committer.
> 
>  Chia-Ping has been contributing to Kafka since March 2018 and has made
> >> 74
>  commits:
> 
>  https://github.com/apache/kafka/commits?author=chia7712
> 
>  He's also authored several major improvements, participated in the KIP
>  discussion and PR reviews as well. His major feature development
> >>> includes:
> 
>  * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
>  * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
>  * KIP-331: Add default implementation to close() and configure() for
> >>> serde
>  * KIP-367: Introduce close(Duration) to Producer and AdminClients
>  * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> >>> command
> 
>  In addition, Chia-Ping has demonstrated his great diligence fixing
> test
>  failures, his impressive engineering attitude and taste in fixing
> >> tricky
>  bugs while keeping simple designs.
> 
>  Please join me to congratulate Chia-Ping for all the contributions!
> 
> 
>  -- Guozhang
> 
> >>>
> >>
> >
>


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Kowshik Prakasam
Congrats Sophie!


Cheers,
Kowshik


On Mon, Oct 19, 2020 at 10:31 AM Bill Bejeck  wrote:

> Congratulations Sophie!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 12:49 PM Leah Thomas  wrote:
>
> > Congrats Sophie!
> >
> > On Mon, Oct 19, 2020 at 11:41 AM Matthias J. Sax 
> wrote:
> >
> > > Hi all,
> > >
> > > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > > invitation to become an Apache Kafka committer.
> > >
> > > Sophie is actively contributing to Kafka since Feb 2019 and has
> > > accumulated 140 commits. She authored 4 KIPs in the lead
> > >
> > >  - KIP-453: Add close() method to RocksDBConfigSetter
> > >  - KIP-445: In-memory Session Store
> > >  - KIP-428: Add in-memory window store
> > >  - KIP-613: Add end-to-end latency metrics to Streams
> > >
> > > and helped to implement two critical KIPs, 429 (incremental
> rebalancing)
> > > and 441 (smooth auto-scaling; not just implementation but also design).
> > >
> > > In addition, she participates in basically every Kafka Streams related
> > > KIP discussion, reviewed 142 PRs, and is active on the user mailing
> list.
> > >
> > > Thanks for all the contributions, Sophie!
> > >
> > >
> > > Please join me to congratulate her!
> > >  -Matthias
> > >
> > >
> >
>


Looking for PR review for small clean up in TopicCommand

2020-10-20 Thread Kowshik Prakasam
Hi all,

I'm looking for a PR review for a small clean-up in the TopicCommand class.
Here is a link to the PR: https://github.com/apache/kafka/pull/9465. I'd
appreciate it if any of you could help review/merge this PR.


Cheers,
Kowshik


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-10-27 Thread Kowshik Prakasam
ons are listed. Which one is
> chosen?
> > > > 605.2
> > > > >> In option 2, it says "Build the local leader epoch cache by
> cutting the
> > > > >> leader epoch sequence received from remote storage to [LSO, ELO].
> (LSO
> > > > >> = log start offset)." We need to do the same thing for the
> producer
> > > > >> snapshot. However, it's hard to cut the producer snapshot to an
> earlier
> > > > >> offset. Another option is to simply take the lastOffset from the
> remote
> > > > >> segment and use that as the starting fetch offset in the
> follower. This
> > > > >> avoids the need for cutting.
> > > > >>
> > > > >>
> > > > >>
> > > > >> 606. ListOffsets: Since we need a version bump, could you
> document it
> > > > >> under a protocol change section?
> > > > >>
> > > > >>
> > > > >>
> > > > >> 607. "LogStartOffset of a topic can point to either of local
> segment or
> > > > >> remote segment but it is initialised and maintained in the Log
> class
> > > > like
> > > > >> now. This is already maintained in `Log` class while loading the
> logs
> > > > and
> > > > >> it can also be fetched from RemoteLogMetadataManager." What will
> happen
> > > > to
> > > > >> the existing logic (e.g. log recovery) that currently depends on
> > > > >> logStartOffset but assumes it's local?
> > > > >>
> > > > >>
> > > > >>
> > > > >> 608. Handle expired remote segment: How does it pick up new
> > > > logStartOffset
> > > > >> from deleteRecords?
> > > > >>
> > > > >>
> > > > >>
> > > > >> 609. RLMM message format:
> > > > >> 609.1 It includes both MaxTimestamp and EventTimestamp. Where
> does it
> > > > get
> > > > >> both since the message in the log only contains one timestamp?
> 609.2 If
> > > > we
> > > > >> change just the state (e.g. to DELETE_STARTED), it seems it's
> wasteful
> > > > to
> > > > >> have to include all other fields not changed. 609.3 Could you
> document
> > > > >> which process makes the following transitions DELETE_MARKED,
> > > > >> DELETE_STARTED, DELETE_FINISHED?
> > > > >>
> > > > >>
> > > > >>
> > > > >> 610. remote.log.reader.max.pending.tasks: "Maximum remote log
> reader
> > > > >> thread pool task queue size. If the task queue is full, broker
> will stop
> > > > >> reading remote log segments." What does the broker do if the
> queue is
> > > > >> full?
> > > > >>
> > > > >>
> > > > >>
> > > > >> 611. What do we return if the request offset/epoch doesn't exist
> in the
> > > > >> following API?
> > > > >> RemoteLogSegmentMetadata remoteLogSegmentMetadata(TopicPartition
> > > > >> topicPartition, long offset, int epochForOffset)
> > > > >>
> > > > >>
> > > > >>
> > > > >> Jun
> > > > >>
> > > > >>
> > > > >>
> > > > >> On Mon, Aug 31, 2020 at 11:19 AM Satish Duggana < satish. duggana@
> > > > gmail. com
> > > > >> ( satish.dugg...@gmail.com ) > wrote:
> > > > >>
> > > > >>
> > > > >>>
> > > > >>>
> > > > >>> KIP is updated with
> > > > >>> - Remote log segment metadata topic message format/schema.
> > > > >>> - Added remote log segment metadata state transitions and
> explained how
> > > > >>> the deletion of segments is handled, including the case of
> partition
> > > > >>> deletions.
> > > > >>> - Added a few more limitations in the "Non goals" section.
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> Thanks,
> > > > >>> Satish.
> > > > >>>
> 

Re: [VOTE] KIP-584: Versioning scheme for features

2020-10-30 Thread Kowshik Prakasam
Hi all,

I wanted to let you know that I have made the following small change to the
`kafka-features` CLI tool description in the KIP-584 write-up. The purpose
is to ensure the design is compatible with the post-KIP-500 world. I have
eliminated the facility in the Admin#describeFeatures API to optionally
send a describeFeatures request to the controller. This facility was
originally seen as useful for (1) debuggability and (2) slightly better
consistency guarantees in the CLI tool that reads features before updating
them. But in hindsight it poses a hindrance in the post-KIP-500 world,
where no client would be able to access the controller directly. So,
weighing cost versus benefit, this facility does not feel useful enough,
and therefore I've removed it. We can discuss it again if it becomes
necessary in the future, and implement a suitable solution.

The corresponding PR containing this change is
https://github.com/apache/kafka/pull/9536 .

Please let me know if you have any questions or concerns.


Cheers,
Kowshik


On Thu, Oct 15, 2020 at 10:17 AM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the update. Those changes look good to me.
>
> Jun
>
> On Tue, Oct 13, 2020 at 4:50 PM Kowshik Prakasam 
> wrote:
>
> > Hi all,
> >
> > I wanted to let you know that I have made the following minor changes to
> > the `kafka-features` CLI tool description in the KIP-584 write up. The
> > purpose is to ensure the design is correct for a few things which came up
> > during implementation:
> >
> > 1. The CLI tool now produces a tab-formatted output instead of JSON. This
> > aligns with the type of format produced by other admin CLI tools of
> Kafka,
> > ex: `kafka-topics`.
> > 2. Whenever feature updates are performed, the output of the CLI tool
> shows
> > the result of each feature update that was applied.
> > 3. The CLI tool accepts an optional argument `--dry-run` which lets the
> > user preview the feature updates before applying them.
> >
> > The following section of the KIP has been updated with the above changes:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport
> >
> > Please let me know if you have any questions.
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> > On Thu, Oct 8, 2020 at 1:12 AM Kowshik Prakasam 
> > wrote:
> >
> > > Hi Jun,
> > >
> > > This is a very good point. I have updated the feature version
> deprecation
> > > section mentioning the same:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
> > > .
> > >
> > > Thank you for the suggestion.
> > >
> > >
> > > Cheers,
> > > Kowshik
> > >
> > >
> > > On Tue, Oct 6, 2020 at 5:30 PM Jun Rao  wrote:
> > >
> > >> Hi, Kowshik,
> > >>
> > >> Thanks for the follow up. Both look good to me.
> > >>
> > >> For 2, it would be useful to also add that an admin should make sure
> > that
> > >> no clients are using a deprecated feature version (e.g. using the
> client
> > >> version metric) before deploying a release that deprecates it.
> > >>
> > >> Thanks,
> > >>
> > >> Jun
> > >>
> > >> On Tue, Oct 6, 2020 at 3:46 PM Kowshik Prakasam <
> kpraka...@confluent.io
> > >
> > >> wrote:
> > >>
> > >> > Hi Jun,
> > >> >
> > >> > I have added the following details in the KIP-584 write up:
> > >> >
> > >> > 1. Deployment, IBP deprecation and avoidance of double rolls. This
> > >> section
> > >> > talks about the various phases of work that would be required to use
> > >> this
> > >> > KIP to eventually avoid Broker double rolls in the cluster (whenever
> > IBP
> > >> > values are advanced). Link to section:
> > >> >
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Deployment,IBPdeprecationandavoidanceofdoublerolls
> > >> > .
> > >> >
> > >> > 2. Feature version deprecation. This section explains the idea for
> > >> feature
> > >> > version deprecation (using highest supported feature min version)
> > which
> > >> yo

KAFKA-10624: Looking for PR review

2020-11-04 Thread Kowshik Prakasam
Hi all,

I'm looking for a PR review for a small PR to address KAFKA-10624:
https://github.com/apache/kafka/pull/9561. Could you please help review it?


Cheers,
Kowshik


Looking for PR review for small doc update

2020-11-04 Thread Kowshik Prakasam
Hi all,

I'm looking for a PR review for a small doc update
in FinalizedFeatureChangeListener. Could you please help review
it? Link to PR: https://github.com/apache/kafka/pull/9562.


Cheers,
Kowshik


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-06 Thread Kowshik Prakasam
face
> is exposed to other components (such as LogCleaner, ReplicaManager etc.)
> and not the underlying Log object. This approach keeps the user of the Log
> layer agnostic of the whereabouts of the data. Underneath the interface,
> the implementing classes can completely separate local log capabilities
> from the remote log. For example, the Log class can be simplified to only
> manage logic surrounding local log segments and metadata. Additionally, a
> wrapper class can be provided (implementing the higher level Log interface)
> which will contain any/all logic surrounding tiered data. The wrapper
> class will wrap around an instance of the Log class delegating the local
> log logic to it. Finally, a handle to the wrapper class can be exposed to
> the other components wherever they need a handle to the higher level Log
> interface.
>
> It is still a draft version and we can discuss code level changes in
> the PR after it is made ready for review.
>
> On Wed, Oct 28, 2020 at 6:27 AM Kowshik Prakasam 
> wrote:
> >
> > Hi Satish,
> >
> > Thanks for the updates to the KIP. Here are my first batch of
> > comments/suggestions on the latest version of the KIP.
> >
> > 5012. In the RemoteStorageManager interface, there is an API defined for
> > each file type. For example, fetchOffsetIndex, fetchTimestampIndex etc.
> To
> > avoid the duplication, I'd suggest we can instead have a FileType enum
> and
> > a common get API based on the FileType.
> >
> > 5013. There are some references to the Google doc in the KIP. I wasn't
> sure
> > if the Google doc is expected to be in sync with the contents of the
> wiki.
> > Going forward, it seems easier if just the KIP is maintained as the
> source
> > of truth. In this regard, could you please move all the references to the
> > Google doc, maybe to a separate References section at the bottom of the
> KIP?
> >
> > 5014. There are some TODO sections in the KIP. Would these be filled up
> in
> > future iterations?
> >
> > 5015. Under "Topic deletion lifecycle", I'm trying to understand why do
> we
> > need delete_partition_marked as well as the delete_partition_started
> > messages. I couldn't spot a drawback if supposing we simplified the
> design
> > such that the controller would only write delete_partition_started
> message,
> > and RemoteLogCleaner (RLC) instance picks it up for processing. What am I
> > missing?
> >
> > 5016. Under "Topic deletion lifecycle", step (4) is mentioned as "RLC
> gets
> > all the remote log segments for the partition and each of these remote
> log
> > segments is deleted with the next steps.". Since the RLC instance runs on
> > each tier topic partition leader, how does the RLC then get the list of
> > remote log segments to be deleted? It will be useful to add that detail
> to
> > the KIP.
> >
> > 5017. Under "Public Interfaces -> Configs", there is a line mentioning
> "We
> > will support flipping remote.log.storage.enable in next versions." It
> will
> > be useful to mention this in the "Future Work" section of the KIP too.
> >
> > 5018. The KIP introduces a number of configuration parameters. It will be
> > useful to mention in the KIP if the user should assume these as static
> > configuration in the server.properties file, or dynamic configuration
> which
> > can be modified without restarting the broker.
> >
> > 5019.  Maybe this is planned as a future update to the KIP, but I thought
> > I'd mention it here. Could you please add details to the KIP on why
> RocksDB
> > was chosen as the default cache implementation of RLMM, and how it is
> going
> > to be used? Were alternatives compared/considered? For example, it would
> be
> > useful to explain/evaluate the following: 1) debuggability of the RocksDB
> > JNI interface, 2) performance, 3) portability across platforms and 4)
> > interface parity of RocksDB’s JNI api with it's underlying C/C++ api.
> >
> > 5020. Following up on (5019), for the RocksDB cache, it will be useful to
> > explain the relationship/mapping between the following in the KIP: 1) #
> of
> > tiered partitions, 2) # of partitions of metadata topic
> > __remote_log_metadata and 3) # of RocksDB instances. i.e. is the plan to
> > have a RocksDB instance per tiered partition, or per metadata topic
> > partition, or just 1 for per broker?
> >
> > 5021. I was looking at the implementation prototype (PR link:
> > https://github.com/apache/kafka/pull/7561). It seems tha

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-10 Thread Kowshik Prakasam
 from remote
> > storage?
> >
> > 5113. "Committed offsets can be stored in a local file to avoid reading
> the
> > messages again when a broker is restarted." Could you describe the format
> > and the location of the file? Also, could the same message be processed
> by
> > RLMM again after broker restart? If so, how do we handle that?
> >
> > 5114. Message format
> > 5114.1 There are two records named RemoteLogSegmentMetadataRecord with
> > apiKey 0 and 1.
> > 5114.2 RemoteLogSegmentMetadataRecord: Could we document whether
> endOffset
> > is inclusive/exclusive?
> > 5114.3 RemoteLogSegmentMetadataRecord: Could you explain LeaderEpoch a
> bit
> > more? Is that the epoch of the leader when it copies the segment to
> remote
> > storage? Also, how will this field be used?
> > 5114.4 EventTimestamp: Could you explain this a bit more? Each record in
> > Kafka already has a timestamp field. Could we just use that?
> > 5114.5 SegmentSizeInBytes: Could this just be int32?
> >
> > 5115. RemoteLogCleaner(RLC): This could be confused with the log cleaner
> > for compaction. Perhaps it can be renamed to sth like
> > RemotePartitionRemover.
> >
> > 5116. "RLC receives the delete_partition_marked and processes it if it is
> > not yet processed earlier." How does it know whether
> > delete_partition_marked has been processed earlier?
> >
> > 5117. Should we add a new MessageFormatter to read the tier metadata
> topic?
> >
> > 5118. "Maximum remote log reader thread pool task queue size. If the task
> > queue is full, broker will stop reading remote log segments." What do we
> > return to the fetch request in this case?
> >
> > 5119. It would be useful to list all things not supported in the first
> > version in a Future work or Limitations section. For example, compacted
> > topic, JBOD, changing remote.log.storage.enable from true to false, etc.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Oct 27, 2020 at 5:57 PM Kowshik Prakasam  >
> > wrote:
> >
> > > Hi Satish,
> > >
> > > Thanks for the updates to the KIP. Here are my first batch of
> > > comments/suggestions on the latest version of the KIP.
> > >
> > > 5012. In the RemoteStorageManager interface, there is an API defined
> for
> > > each file type. For example, fetchOffsetIndex, fetchTimestampIndex
> etc. To
> > > avoid the duplication, I'd suggest we can instead have a FileType enum
> and
> > > a common get API based on the FileType.
> > >
> > > 5013. There are some references to the Google doc in the KIP. I wasn't
> sure
> > > if the Google doc is expected to be in sync with the contents of the
> wiki.
> > > Going forward, it seems easier if just the KIP is maintained as the
> source
> > > of truth. In this regard, could you please move all the references to
> the
> > > Google doc, maybe to a separate References section at the bottom of the
> > > KIP?
> > >
> > > 5014. There are some TODO sections in the KIP. Would these be filled
> up in
> > > future iterations?
> > >
> > > 5015. Under "Topic deletion lifecycle", I'm trying to understand why
> do we
> > > need delete_partition_marked as well as the delete_partition_started
> > > messages. I couldn't spot a drawback if supposing we simplified the
> design
> > > such that the controller would only write delete_partition_started
> message,
> > > and RemoteLogCleaner (RLC) instance picks it up for processing. What
> am I
> > > missing?
> > >
> > > 5016. Under "Topic deletion lifecycle", step (4) is mentioned as "RLC
> gets
> > > all the remote log segments for the partition and each of these remote
> log
> > > segments is deleted with the next steps.". Since the RLC instance runs
> on
> > > each tier topic partition leader, how does the RLC then get the list of
> > > remote log segments to be deleted? It will be useful to add that
> detail to
> > > the KIP.
> > >
> > > 5017. Under "Public Interfaces -> Configs", there is a line mentioning
> "We
> > > will support flipping remote.log.storage.enable in next versions." It
> will
> > > be useful to mention this in the "Future Work" section of the KIP too.
> > >
> > > 5018. The KIP introduces a number of configuration parameters. It will
> be
> > > useful to mention in the 

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-10 Thread Kowshik Prakasam
Hi Harsha,

The goal we discussed is to aim for preview in AK 3.0. In order to get us
there, it will be useful to think about the order in which the code changes
will be implemented, reviewed and merged. Since you are driving the
development, do you want to lay out the order of things? For example, do you
eventually want to break up the PR into multiple smaller ones? If so, you
could list the milestones there. This would also help with budgeting time
suitably and understanding the progress.
Let us know how we can help.


Cheers,
Kowshik

On Tue, Nov 10, 2020 at 3:26 PM Harsha Chintalapani  wrote:

> Thanks Kowshik for the link. Seems reasonable,  as we discussed on the
> call, code and completion of this KIP will be taken up by us.
> Regarding Milestone 2, what do you think needs to be clarified there?
> I believe what we are promising in the KIP, along with unit tests and
> system tests, will be delivered, and we can call that a preview. We will
> be running this in production and continue to provide the data and
> metrics to push this feature to GA.
>
>
>
> On Tue, Nov 10, 2020 at 10:07 AM, Kowshik Prakasam  >
> wrote:
>
> > Hi Harsha/Satish,
> >
> > Thanks for the discussion today. Here is a link to the KIP-405
> > development milestones google doc we discussed in the meeting today:
> > https://docs.google.com/document/d/1B5_jaZvWWb2DUpgbgImq0k_IPZ4DWrR8Ru7YpuJrXdc/edit
> > . I have shared it with you. Please have a look and share your
> > feedback/improvements. As we discussed, things are clear until milestone 1.
> > Beyond that, we can discuss it again (perhaps in the next sync or later),
> > once you have thought through the implementation plan/milestones and the
> > release into preview in 3.0.
> >
> > Cheers,
> > Kowshik
> >
> > On Tue, Nov 10, 2020 at 6:56 AM Satish Duggana  >
> > wrote:
> >
> > Hi Jun,
> > Thanks for your comments. Please find the inline replies below.
> >
> > 605.2 "Build the local leader epoch cache by cutting the leader epoch
> > sequence received from remote storage to [LSO, ELO]." I mentioned an
> issue
> > earlier. Suppose the leader's local start offset is 100. The follower
> finds
> > a remote segment covering offset range [80, 120). The producerState with
> > this remote segment is up to offset 120. To trim the producerState to
> > offset 100 requires more work since one needs to download the previous
> > producerState up to offset 80 and then replay the messages from 80 to
> 100.
> > It seems that it's simpler in this case for the follower just to take the
> > remote segment as it is and start fetching from offset 120.
> >
> > We chose that approach to avoid any edge cases here. It is possible
> > that the received remote log segment does not have the same leader
> > epoch sequence for 100-120 as the leader contains (this can happen due
> > to an unclean leader election). It is safe to start from what the
> > leader returns here. Another way is to find the remote log segment
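> >
> > For context, "cutting" the leader epoch sequence to [LSO, ELO] amounts
> > to clamping the cache entries to an offset range. A rough sketch,
> > assuming entries are (epoch, startOffset) pairs sorted by startOffset:
> >
> >     // Sketch only: trim a leader-epoch sequence to the range [lso, elo).
> >     case class EpochEntry(epoch: Int, startOffset: Long)
> >
> >     def cut(entries: Seq[EpochEntry], lso: Long, elo: Long): Seq[EpochEntry] = {
> >       val inRange = entries.filter(_.startOffset < elo)
> >       // Keep the latest entry starting at or before LSO (clamped to LSO),
> >       // plus every entry strictly inside (LSO, ELO).
> >       val (before, after) = inRange.partition(_.startOffset <= lso)
> >       before.lastOption.map(_.copy(startOffset = lso)).toSeq ++ after
> >     }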
> >
> > 5016. Just to echo what Kowshik was saying. It seems that
> > RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
> > partition, not on the replicas for the __remote_log_segment_metadata
> > partition. It's not clear how the leader of __remote_log_segment_metadata
> > obtains the metadata for remote segments for deletion.
> >
> > RLMM will always receive the callback for the remote log metadata topic
> > partitions hosted on the local broker, and it will subscribe to them. I
> > will make this clear in the KIP.
> >
> > 5100. KIP-516 has been accepted and is being implemented now. Could you
> > update the KIP based on topicID?
> >
> > We mentioned KIP-516 and how it helps. We will update this KIP with all
> > the changes that KIP-516 brings.
> >
> > 5101. RLMM: It would be useful to clarify how the following two APIs are
> > used. According to the wiki, the former is used for topic deletion and
> the
> > latter is used for retention. It seems that retention should use the
> former
> > since remote segments without a matching epoch in the leader (potentially
> > due to unclean leader election) also need to be garbage collected. The
> > latter seems to be used for the new leader to determine the last tiered
> > segment.
> > default Iterator
> > listRemoteLogSe

KAFKA-10723: LogManager thread pool activity post shutdown

2020-11-14 Thread Kowshik Prakasam
Hey everyone,

While looking at broker error logs, I noticed that the LogManager leaks
internal thread pool activity during the KafkaServer shutdown sequence
whenever it encounters an internal error. I have explained the issue in
this jira: https://issues.apache.org/jira/browse/KAFKA-10723 , and have
proposed a couple of ways to fix it. If you are familiar with this code,
could you please have a look and share your thoughts on which of the
proposed fixes is the right way to go? If you feel there are other ways to
fix the issue, your thoughts are welcome.
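
As a general illustration of the failure mode (and not either of the
concrete proposals in the jira), the fix boils down to tearing down any
helper pools in a finally block, so that an internal error cannot leak
live threads:

    // Sketch with hypothetical names: ensure pools started during shutdown
    // are themselves shut down even when shutdownDir() throws.
    import java.util.concurrent.{ExecutorService, Executors, TimeUnit}

    def shutdownLogDirs(logDirs: Seq[String])(shutdownDir: String => Unit): Unit = {
      val pools = scala.collection.mutable.Buffer[ExecutorService]()
      try {
        logDirs.foreach { dir =>
          val pool = Executors.newFixedThreadPool(2)
          pools += pool
          pool.submit(new Runnable { def run(): Unit = shutdownDir(dir) })
        }
        pools.foreach { p => p.shutdown(); p.awaitTermination(30, TimeUnit.SECONDS) }
      } finally {
        // Guarantees no thread pool activity survives the shutdown sequence.
        pools.foreach(_.shutdownNow())
      }
    }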

Thank you!


Cheers,
Kowshik


Re: KAFKA-10723: LogManager thread pool activity post shutdown

2020-11-14 Thread Kowshik Prakasam
In case you are interested in the fix, I've uploaded a PR:
https://github.com/apache/kafka/pull/9596 containing fix #2 mentioned in
the jira description.


Cheers,
Kowshik


On Sat, Nov 14, 2020 at 12:40 PM Kowshik Prakasam 
wrote:

> Hey everyone,
>
> While looking at broker error logs, I noticed that the LogManager leaks
> internal thread pool activity during the KafkaServer shutdown sequence
> whenever it encounters an internal error. I have explained the issue in
> this jira: https://issues.apache.org/jira/browse/KAFKA-10723 , and have
> proposed a couple of ways to fix it. If you are familiar with this code,
> could you please have a look and share your thoughts on which of the
> proposed fixes is the right way to go? If you feel there are other ways to
> fix the issue, your thoughts are welcome.
>
> Thank you!
>
>
> Cheers,
> Kowshik
>
>


Looking for a PR review for FinalizedFeatureCache cleanup

2020-11-16 Thread Kowshik Prakasam
Hi,

I'm looking for a PR review for a small cleanup/refactor to use string
interpolation in the FinalizedFeatureCache class. Could one of you please
help review this change? Here is a link to the PR:
https://github.com/apache/kafka/pull/9602
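
For anyone curious, the change is of this general shape (illustrative
values, not the exact diff in the PR):

    val oldFeatures = "Features(epoch=1)"  // illustrative values only
    val newFeatures = "Features(epoch=2)"
    // Before: string concatenation
    println("Updated cache from " + oldFeatures + " to " + newFeatures)
    // After: Scala string interpolation
    println(s"Updated cache from $oldFeatures to $newFeatures")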


Cheers,
Kowshik


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-20 Thread Kowshik Prakasam
Hi Harsha/Satish,

Hope you are doing well. Could you please update the meeting notes section
for the two most recent meetings (from 10/13 and 11/10)? It will be useful
to share the context with the community.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-MeetingNotes


Cheers,
Kowshik


On Tue, Nov 10, 2020 at 11:39 PM Kowshik Prakasam 
wrote:

> Hi Harsha,
>
> The goal we discussed is to aim for a preview in AK 3.0. In order to get us
> there, it will be useful to think about the order in which the code changes
> will be implemented, reviewed and merged. Since you are driving the
> development, do you want to lay out the order of things? For example, do you
> eventually want to break up the PR into multiple smaller ones? If so, you
> could list the milestones there. This will also help us budget time suitably
> and understand the progress.
> Let us know how we can help.
>
>
> Cheers,
> Kowshik
>
> On Tue, Nov 10, 2020 at 3:26 PM Harsha Chintalapani 
> wrote:
>
>> Thanks Kowshik for the link. Seems reasonable; as we discussed on the
>> call, the code and completion of this KIP will be taken up by us.
>> Regarding Milestone 2, what do you think needs to be clarified there?
>> I believe what we are promising in the KIP, along with unit tests and
>> system tests, will be delivered, and we can call that the preview. We will
>> be running this in our production and will continue to provide the data
>> and metrics needed to push this feature to GA.
>>
>>
>>
>> On Tue, Nov 10, 2020 at 10:07 AM, Kowshik Prakasam <
>> kpraka...@confluent.io>
>> wrote:
>>
>> > Hi Harsha/Satish,
>> >
>> > Thanks for the discussion today. Here is a link to the KIP-405
>> > development milestones google doc we discussed in the meeting today:
>> > https://docs.google.com/document/d/1B5_jaZvWWb2DUpgbgImq0k_IPZ4DWrR8Ru7YpuJrXdc/edit
>> > . I have shared it with you. Please have a look and share your
>> > feedback/improvements. As we discussed, things are clear until
>> > milestone 1.
>> > Beyond that, we can discuss it again (perhaps in the next sync or later),
>> > once you have thought through the implementation plan/milestones and the
>> > release into preview in 3.0.
>> >
>> > Cheers,
>> > Kowshik
>> >
>> > On Tue, Nov 10, 2020 at 6:56 AM Satish Duggana <
>> satish.dugg...@gmail.com>
>> > wrote:
>> >
>> > Hi Jun,
>> > Thanks for your comments. Please find the inline replies below.
>> >
>> > 605.2 "Build the local leader epoch cache by cutting the leader epoch
>> > sequence received from remote storage to [LSO, ELO]." I mentioned an
>> issue
>> > earlier. Suppose the leader's local start offset is 100. The follower
>> finds
>> > a remote segment covering offset range [80, 120). The producerState with
>> > this remote segment is up to offset 120. To trim the producerState to
>> > offset 100 requires more work since one needs to download the previous
>> > producerState up to offset 80 and then replay the messages from 80 to
>> 100.
>> > It seems that it's simpler in this case for the follower just to take
>> the
>> > remote segment as it is and start fetching from offset 120.
>> >
>> > We chose that approach to avoid any edge cases here. It is possible
>> > that the received remote log segment does not have the same leader
>> > epoch sequence for 100-120 as the leader contains (this can happen due
>> > to an unclean leader election). It is safe to start from what the
>> > leader returns here. Another way is to find the remote log segment
>> >
>> > 5016. Just to echo what Kowshik was saying. It seems that
>> > RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
>> > partition, not on the replicas for the __remote_log_segment_metadata
>> > partition. It's not clear how the leader of
>> __remote_log_segment_metadata
>> > obtains the metadata for remote segments for deletion.
>> >
>> > RLMM will always receive the callback for the remote log metadata topic
>> > partitions hosted on the local broker, and it will subscribe to them. I
>> > will make this clear in the KIP.
>> >
>> > 5100. KIP-516 has been accepted and is being implemented now. Could you

Unable to sign into VPN

2020-12-01 Thread Kowshik Prakasam
Hi,

I'm unable to sign in to the VPN. When I go to Okta and click on "Pulse
Secure VPN", I keep getting the attached error message. Could you please
help resolve this?


Thanks,
Kowshik


Re: Unable to sign into VPN

2020-12-01 Thread Kowshik Prakasam
Please ignore this email, it was sent to the wrong email address.

On Tue, Dec 1, 2020 at 3:19 PM Kowshik Prakasam 
wrote:

> Hi,
>
> I'm unable to sign in to the VPN. When I go to Okta and click on "Pulse
> Secure VPN", I keep getting the attached error message. Could you please
> help resolve this?
>
>
> Thanks,
> Kowshik
>
>


Re: Unable to sign into VPN

2020-12-01 Thread Kowshik Prakasam
Sorry!

On Tue, Dec 1, 2020 at 3:19 PM Kowshik Prakasam 
wrote:

> Please ignore this email, it was sent to the wrong email address.
>
> On Tue, Dec 1, 2020 at 3:19 PM Kowshik Prakasam 
> wrote:
>
>> Hi,
>>
>> I'm unable to sign in to the VPN. When I go to Okta and click on "Pulse
>> Secure VPN", I keep getting the attached error message. Could you please
>> help resolve this?
>>
>>
>> Thanks,
>> Kowshik
>>
>>


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-12-15 Thread Kowshik Prakasam
> > >
> > > > > > 5119. It would be useful to list all things not supported in the
> > > first
> > > > > > version in a Future work or Limitations section. For example,
> > > compacted
> > > > > > topic, JBOD, changing remote.log.storage.enable from true to
> false,
> > > etc.
> > > > > >
> > > > > > We already have a non-goals section which is filled with some of
> > > these
> > > > > > details. Do we need another limitations section?
> > > > > >
> > > > > > Thanks,
> > > > > > Satish.
> > > > > >
> > > > > > On Wed, Nov 4, 2020 at 11:27 PM Jun Rao 
> wrote:
> > > > > > >
> > > > > > > Hi, Satish,
> > > > > > >
> > > > > > > Thanks for the updated KIP. A few more comments below.
> > > > > > >
> > > > > > > 605.2 "Build the local leader epoch cache by cutting the leader
> > > epoch
> > > > > > > sequence received from remote storage to [LSO, ELO]." I
> mentioned
> > > an
> > > > > issue
> > > > > > > earlier. Suppose the leader's local start offset is 100. The
> > > follower
> > > > > finds
> > > > > > > a remote segment covering offset range [80, 120). The
> producerState
> > > > > with
> > > > > > > this remote segment is up to offset 120. To trim the
> producerState
> > > to
> > > > > > > offset 100 requires more work since one needs to download the
> > > previous
> > > > > > > producerState up to offset 80 and then replay the messages
> from 80
> > > to
> > > > > 100.
> > > > > > > It seems that it's simpler in this case for the follower just
> to
> > > take
> > > > > the
> > > > > > > remote segment as it is and start fetching from offset 120.
> > > > > > >
> > > > > > > 5016. Just to echo what Kowshik was saying. It seems that
> > > > > > > RLMM.onPartitionLeadershipChanges() is only called on the
> replicas
> > > for
> > > > > a
> > > > > > > partition, not on the replicas for the
> > > __remote_log_segment_metadata
> > > > > > > partition. It's not clear how the leader of
> > > > > __remote_log_segment_metadata
> > > > > > > obtains the metadata for remote segments for deletion.
> > > > > > >
> > > > > > > 5100. KIP-516 has been accepted and is being implemented now.
> > > Could you
> > > > > > > update the KIP based on topicID?
> > > > > > >
> > > > > > > 5101. RLMM: It would be useful to clarify how the following two
> > > APIs
> > > > > are
> > > > > > > used. According to the wiki, the former is used for topic
> deletion
> > > and
> > > > > the
> > > > > > > latter is used for retention. It seems that retention should
> use
> > > the
> > > > > former
> > > > > > > since remote segments without a matching epoch in the leader
> > > > > (potentially
> > > > > > > due to unclean leader election) also need to be garbage
> collected.
> > > The
> > > > > > > latter seems to be used for the new leader to determine the
> last
> > > tiered
> > > > > > > segment.
> > > > > > > default Iterator<RemoteLogSegmentMetadata>
> > > > > > > listRemoteLogSegments(TopicPartition topicPartition)
> > > > > > > Iterator<RemoteLogSegmentMetadata>
> > > > > > > listRemoteLogSegments(TopicPartition topicPartition, long leaderEpoch);
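> > > > > > >
> > > > > > > To make the intended split concrete, a rough sketch of how the
> > > > > > > two overloads would be consumed (the method names are from the
> > > > > > > KIP; everything else is stubbed for illustration):
> > > > > > >
> > > > > > >     import org.apache.kafka.common.TopicPartition
> > > > > > >
> > > > > > >     trait RemoteLogSegmentMetadata  // stub
> > > > > > >     trait RLMM {
> > > > > > >       def listRemoteLogSegments(tp: TopicPartition): java.util.Iterator[RemoteLogSegmentMetadata]
> > > > > > >       def listRemoteLogSegments(tp: TopicPartition, leaderEpoch: Long): java.util.Iterator[RemoteLogSegmentMetadata]
> > > > > > >     }
> > > > > > >
> > > > > > >     def collectGarbage(rlmm: RLMM, tp: TopicPartition): Unit = {
> > > > > > >       // Retention: the epoch-agnostic overload, so segments
> > > > > > >       // written under an unclean leader are also collected.
> > > > > > >       val it = rlmm.listRemoteLogSegments(tp)
> > > > > > >       while (it.hasNext) { val seg = it.next() /* delete if expired */ }
> > > > > > >     }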
> > > > > > >
> > > > > > > 5102. RSM:
> > > > > > > 5102.1 For methods like fetchLogSegmentData(), it seems that
> they
> > > can
> > > > > > > use RemoteLogSegmentId instead of RemoteLogSegmentMetadata.
> > > > > > > 5102.2 In fetchLogSegmentData(), should we use long instead of
> > > Long?
> > > > > > > 5102.3 Why do only some of the methods have a default
> > > > > > > implementation and others don

Re: [VOTE] 2.7.0 RC5

2020-12-15 Thread Kowshik Prakasam
Hi Bill,

Just a heads up that KAFKA-9393, which was previously marked as fixed in
2.7.0, has now been changed to 2.8.0. The corresponding PR is not present in
the 2.7.0 commit history. Could you please regenerate the release notes to
pick up this change?


Cheers,
Kowshik


On Tue, Dec 15, 2020 at 9:20 AM Bill Bejeck  wrote:

> Thanks for voting John, I'm cc'ing the dev list as well.
>
> On Mon, Dec 14, 2020 at 8:35 PM John Roesler  wrote:
>
> > Thanks for this release, Bill,
> >
> > I ran through the quickstart (just the zk, broker, and
> > console clients part), verified the signatures, and also
> > built and ran the tests.
> >
> > I'm +1 (binding).
> >
> > Thanks,
> > -John
> >
> > On Mon, 2020-12-14 at 14:58 -0800, Guozhang Wang wrote:
> > > I checked the docs and ran unit tests, no red flags found. +1.
> > >
> > > On Fri, Dec 11, 2020 at 5:45 AM Bill Bejeck  wrote:
> > >
> > > > Updated with link to successful Jenkins build.
> > > >
> > > > * Successful Jenkins builds for the 2.7 branch:
> > > >  Unit/integration tests:
> > > >
> > > >
> >
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/detail/kafka-2.7-jdk8/78/
> > > >
> > > > On Thu, Dec 10, 2020 at 5:17 PM Bill Bejeck 
> wrote:
> > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > > This is the sixth candidate for release of Apache Kafka 2.7.0.
> > > > >
> > > > > * Configurable TCP connection timeout and improve the initial
> > metadata
> > > > > fetch
> > > > > * Enforce broker-wide and per-listener connection creation rate
> > (KIP-612,
> > > > > part 1)
> > > > > * Throttle Create Topic, Create Partition and Delete Topic
> Operations
> > > > > * Add TRACE-level end-to-end latency metrics to Streams
> > > > > * Add Broker-side SCRAM Config API
> > > > > * Support PEM format for SSL certificates and private key
> > > > > * Add RocksDB Memory Consumption to RocksDB Metrics
> > > > > * Add Sliding-Window support for Aggregations
> > > > >
> > > > > This release also includes a few other features, 53 improvements,
> > and 84
> > > > > bug fixes.
> > > > >
> > > > > Release notes for the 2.7.0 release:
> > > > >
> https://home.apache.org/~bbejeck/kafka-2.7.0-rc5/RELEASE_NOTES.html
> > > > >
> > > > > *** Please download, test and vote by Friday, December 18, 12 PM ET
> > ***
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > https://kafka.apache.org/KEYS
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > https://home.apache.org/~bbejeck/kafka-2.7.0-rc5/
> > > > >
> > > > > * Maven artifacts to be voted upon:
> > > > >
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > >
> > > > > * Javadoc:
> > > > > https://home.apache.org/~bbejeck/kafka-2.7.0-rc5/javadoc/
> > > > >
> > > > > * Tag to be voted upon (off 2.7 branch) is the 2.7.0 tag:
> > > > > https://github.com/apache/kafka/releases/tag/2.7.0-rc5
> > > > >
> > > > > * Documentation:
> > > > > https://kafka.apache.org/27/documentation.html
> > > > >
> > > > > * Protocol:
> > > > > https://kafka.apache.org/27/protocol.html
> > > > >
> > > > > * Successful Jenkins builds for the 2.7 branch:
> > > > > Unit/integration tests: Link to follow
> > > > >
> > > > > Thanks,
> > > > > Bill
> > > > >
> > > >
> > >
> > >
> >
> >
> >
>


Re: [ANNOUNCE] Apache Kafka 2.7.0

2020-12-28 Thread Kowshik Prakasam
Thank you for running the release, Bill!
Congrats to the community!


Cheers,
Kowshik


On Mon, Dec 28, 2020 at 12:20 PM Michael Chisina  wrote:

> Hello,
>
> Is there a way to configure powerdns recursor DNS queries and Apache Kafka
> to stream to a remote postgresdb/timescaleDB server? Is there any
> documentation online which might assist?
>
> Your assistance is greatly appreciated.
>
> Regards,
>
> Michael Chisina
>
>
>
> On Mon, Dec 28, 2020, 6:55 PM Ismael Juma  wrote:
>
> > Thanks for running the release, Bill. And congratulations to the
> community
> > for another release!
> >
> > Ismael
> >
> > On Mon, Dec 21, 2020, 8:01 AM Bill Bejeck  wrote:
> >
> > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > Kafka 2.7.0
> > >
> > > * Configurable TCP connection timeout and improve the initial metadata
> > > fetch
> > > * Enforce broker-wide and per-listener connection creation rate
> (KIP-612,
> > > part 1)
> > > * Throttle Create Topic, Create Partition and Delete Topic Operations
> > > * Add TRACE-level end-to-end latency metrics to Streams
> > > * Add Broker-side SCRAM Config API
> > > * Support PEM format for SSL certificates and private key
> > > * Add RocksDB Memory Consumption to RocksDB Metrics
> > > * Add Sliding-Window support for Aggregations
> > >
> > > This release also includes a few other features, 53 improvements, and
> 91
> > > bug fixes.
> > >
> > > All of the changes in this release can be found in the release notes:
> > > https://www.apache.org/dist/kafka/2.7.0/RELEASE_NOTES.html
> > >
> > > You can read about some of the more prominent changes in the Apache
> Kafka
> > > blog:
> > > https://blogs.apache.org/kafka/entry/what-s-new-in-apache4
> > >
> > > You can download the source and binary release (Scala 2.12, 2.13) from:
> > > https://kafka.apache.org/downloads#2.7.0
> > >
> > >
> > >
> >
> ---
> > >
> > >
> > > Apache Kafka is a distributed streaming platform with four core APIs:
> > >
> > >
> > > ** The Producer API allows an application to publish a stream of
> > > records to one or more Kafka topics.
> > >
> > > ** The Consumer API allows an application to subscribe to one or more
> > > topics and process the stream of records produced to them.
> > >
> > > ** The Streams API allows an application to act as a stream processor,
> > > consuming an input stream from one or more topics and producing an
> > > output stream to one or more output topics, effectively transforming
> the
> > > input streams to output streams.
> > >
> > > ** The Connector API allows building and running reusable producers or
> > > consumers that connect Kafka topics to existing applications or data
> > > systems. For example, a connector to a relational database might
> > > capture every change to a table.
> > >
> > >
> > > With these APIs, Kafka can be used for two broad classes of
> application:
> > >
> > > ** Building real-time streaming data pipelines that reliably get data
> > > between systems or applications.
> > >
> > > ** Building real-time streaming applications that transform or react
> > > to the streams of data.
> > >
> > >
> > > Apache Kafka is in use at large and small companies worldwide,
> including
> > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> Rabobank,
> > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > >
> > > A big thank you for the following 117 contributors to this release!
> > >
> > > A. Sophie Blee-Goldman, Aakash Shah, Adam Bellemare, Adem Efe Gencer,
> > > albert02lowis, Alex Diachenko, Andras Katona, Andre Araujo, Andrew
> Choi,
> > > Andrew Egelhofer, Andy Coates, Ankit Kumar, Anna Povzner, Antony
> Stubbs,
> > > Arjun Satish, Ashish Roy, Auston, Badai Aqrandista, Benoit Maggi, bill,
> > > Bill Bejeck, Bob Barrett, Boyang Chen, Brian Byrne, Bruno Cadonna, Can
> > > Cecen, Cheng Tan, Chia-Ping Tsai, Chris Egerton, Colin Patrick McCabe,
> > > David Arthur, David Jacot, David Mao, Dhruvil Shah, Dima Reznik,
> Edoardo
> > > Comar, Ego, Evelyn Bayes, feyman2016, Gal Margalit, gnkoshelev, Gokul
> > > Sriniv

[jira] [Resolved] (KAFKA-10157) Multiple tests failed due to "Failed to process feature ZK node change event"

2020-06-12 Thread Kowshik Prakasam (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kowshik Prakasam resolved KAFKA-10157.
--
Resolution: Fixed

> Multiple tests failed due to "Failed to process feature ZK node change event"
> -
>
> Key: KAFKA-10157
> URL: https://issues.apache.org/jira/browse/KAFKA-10157
> Project: Kafka
>  Issue Type: Bug
>Reporter: Anna Povzner
>    Assignee: Kowshik Prakasam
>Priority: Major
>
> Multiple tests failed due to "Failed to process feature ZK node change 
> event". Looks like a result of merge of this PR: 
> [https://github.com/apache/kafka/pull/8680]
> Note that running tests without `--info` gives output like this one: 
> {quote}Process 'Gradle Test Executor 36' finished with non-zero exit value 1
> {quote}
> kafka.network.DynamicConnectionQuotaTest failed:
> {quote}
> kafka.network.DynamicConnectionQuotaTest > testDynamicConnectionQuota 
> STANDARD_OUT
>  [2020-06-11 20:52:42,596] ERROR [feature-zk-node-event-process-thread]: 
> Failed to process feature ZK node change event. The broker will eventually 
> exit. 
> (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread:76)
>  java.lang.InterruptedException
>  at 
> java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
>  at 
> java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2090)
>  at 
> java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
>  at 
> kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread.$anonfun$doWork$1(FinalizedFeatureChangeListener.scala:147){quote}
>  
> kafka.api.CustomQuotaCallbackTest failed:
> {quote}    [2020-06-11 21:07:36,745] ERROR 
> [feature-zk-node-event-process-thread]: Failed to process feature ZK node 
> change event. The broker will eventually exit. 
> (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread:76)
>     java.lang.InterruptedException
>         at 
> java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
>         at 
> java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2090)
>         at 
> java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
>         at 
> kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread.$anonfun$doWork$1(FinalizedFeatureChangeListener.scala:147)
>         at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
>         at scala.util.control.Exception$Catch.apply(Exception.scala:227)
>         at 
> kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread.doWork(FinalizedFeatureChangeListener.scala:147)
>         at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
>  at scala.util.control.Exception$Catch.apply(Exception.scala:227)
>  at 
> kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread.doWork(FinalizedFeatureChangeListener.scala:147)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
> {quote}
>  
> kafka.server.DynamicBrokerReconfigurationTest failed:
> {quote}    [2020-06-11 21:13:01,207] ERROR 
> [feature-zk-node-event-process-thread]: Failed to process feature ZK node 
> change event. The broker will eventually exit. 
> (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread:76)
>     java.lang.InterruptedException
>         at 
> java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
>         at 
> java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2090)
>         at 
> java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
>         at 
> kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread.$anonfun$doWork$1(FinalizedFeatureChangeListener.scala:147)
>         at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
>         at scala.util.control.Exception$Catch.apply(Exception.scala:227)
>         at 
> kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread.doWork(FinalizedFeatureChangeListener.scala:147)
>         at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
> {quote}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9755) Implement versioning scheme for features

2020-03-24 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-9755:
---

 Summary: Implement versioning scheme for features
 Key: KAFKA-9755
 URL: https://issues.apache.org/jira/browse/KAFKA-9755
 Project: Kafka
  Issue Type: Improvement
  Components: controller, core, protocol, streams
Reporter: Kowshik Prakasam


Details are in this wiki: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]
 .

This Jira is for tracking the implementation of versioning scheme for features 
to facilitate client discovery and feature gating (as explained in the above 
wiki).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10026) KIP-584: Implement read path for versioning scheme for features

2020-05-20 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-10026:


 Summary: KIP-584: Implement read path for versioning scheme for 
features
 Key: KAFKA-10026
 URL: https://issues.apache.org/jira/browse/KAFKA-10026
 Project: Kafka
  Issue Type: New Feature
Reporter: Kowshik Prakasam


Goal is to implement various classes and integration for the read path of the 
feature versioning system 
([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
 The ultimate plan is that the cluster-wide *finalized* features information is 
going to be stored in ZK under the node {{/feature}}. The read path implemented 
in this PR is centered around reading this *finalized* features information 
from ZK, and, processing it inside the Broker.

 

Here is a summary of what's needed for this Jira (a lot of it is *new* classes):
 * A facility is provided in the broker to declare its supported features, and 
advertise its supported features via its own {{BrokerIdZNode}} under a 
{{features}} key.
 * A facility is provided in the broker to listen to and propagate cluster-wide 
*finalized* feature changes from ZK.
 * When new *finalized* features are read from ZK, feature incompatibilities 
are detected by comparing against the broker's own supported features (see 
the sketch after this list).
 * {{ApiVersionsResponse}} is now served containing supported and finalized 
feature information (using the newly added tagged fields).
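
The incompatibility check mentioned above reduces to a version-range
comparison. A simplified sketch (these case classes are stand-ins for the
KIP-584 types):

    case class VersionRange(min: Short, max: Short) {
      // A finalized range is incompatible if it is not contained within
      // the broker's supported range.
      def isIncompatibleWith(finalized: VersionRange): Boolean =
        finalized.min < min || finalized.max > max
    }

    def incompatibleFeatures(finalized: Map[String, VersionRange],
                             supported: Map[String, VersionRange]): Map[String, VersionRange] =
      finalized.filter { case (feature, range) =>
        // A finalized feature the broker does not know about at all is
        // also incompatible (None.forall == true).
        supported.get(feature).forall(_.isIncompatibleWith(range))
      }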



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10027) KIP-584: Implement read path for versioning scheme for features

2020-05-20 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-10027:


 Summary: KIP-584: Implement read path for versioning scheme for 
features
 Key: KAFKA-10027
 URL: https://issues.apache.org/jira/browse/KAFKA-10027
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam


Goal is to implement various classes and integration for the read path of the 
feature versioning system 
([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
 The ultimate plan is that the cluster-wide *finalized* features information is 
going to be stored in ZK under the node {{/feature}}. The read path implemented 
in this PR is centered around reading this *finalized* features information 
from ZK, and, processing it inside the Broker.

 

Here is a summary of what's needed for this Jira (a lot of it is *new* classes):
 * A facility is provided in the broker to declare its supported features, and 
advertise its supported features via its own {{BrokerIdZNode}} under a 
{{features}} key.
 * A facility is provided in the broker to listen to and propagate cluster-wide 
*finalized* feature changes from ZK.
 * When new *finalized* features are read from ZK, feature incompatibilities 
are detected by comparing against the broker's own supported features.
 * {{ApiVersionsResponse}} is now served containing supported and finalized 
feature information (using the newly added tagged fields).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10026) KIP-584: Implement read path for versioning scheme for features

2020-05-20 Thread Kowshik Prakasam (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kowshik Prakasam resolved KAFKA-10026.
--
Resolution: Duplicate

Duplicate of KAFKA-10027

> KIP-584: Implement read path for versioning scheme for features
> ---
>
> Key: KAFKA-10026
> URL: https://issues.apache.org/jira/browse/KAFKA-10026
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Kowshik Prakasam
>Priority: Major
>
> Goal is to implement various classes and integration for the read path of the 
> feature versioning system 
> ([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
>  The ultimate plan is that the cluster-wide *finalized* features information 
> is going to be stored in ZK under the node {{/feature}}. The read path 
> implemented in this PR is centered around reading this *finalized* features 
> information from ZK, and, processing it inside the Broker.
>  
> Here is a summary of what's needed for this Jira (a lot of it is *new* 
> classes):
>  * A facility is provided in the broker to declare its supported features, 
> and advertise its supported features via its own {{BrokerIdZNode}} under a 
> {{features}} key.
>  * A facility is provided in the broker to listen to and propagate 
> cluster-wide *finalized* feature changes from ZK.
>  * When new *finalized* features are read from ZK, feature incompatibilities 
> are detected by comparing against the broker's own supported features.
>  * {{ApiVersionsResponse}} is now served containing supported and finalized 
> feature information (using the newly added tagged fields).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10028) KIP-584: Implement write path for versioning scheme for features

2020-05-21 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-10028:


 Summary: KIP-584: Implement write path for versioning scheme for 
features
 Key: KAFKA-10028
 URL: https://issues.apache.org/jira/browse/KAFKA-10028
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam


Goal is to implement various classes and integration for the write path of the 
feature versioning system 
([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
 This is preceded by the read path implementation (KAFKA-10027). The write path 
implementation involves developing the new controller API, UpdateFeatures, which 
enables transactional application of a set of cluster-wide feature updates to 
the ZK {{'/features'}} node, along with the required ACL permissions.
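
The "transactional" aspect amounts to validating the whole batch of updates
first and then persisting the entire finalized-feature map in a single znode
write. A rough sketch under that assumption (writeFeaturesZNode is a
hypothetical stand-in for the controller's ZK write):

    case class FeatureUpdate(feature: String, maxVersionLevel: Short)

    def applyUpdates(current: Map[String, Short],
                     updates: Seq[FeatureUpdate],
                     supported: Map[String, Short],
                     writeFeaturesZNode: Map[String, Short] => Unit): Unit = {
      // Validate every update before touching ZK...
      updates.foreach { u =>
        require(supported.getOrElse(u.feature, 0.toShort) >= u.maxVersionLevel,
          s"Broker does not support ${u.feature} at level ${u.maxVersionLevel}")
      }
      // ...then apply the batch all-or-nothing with one znode write.
      writeFeaturesZNode(current ++ updates.map(u => u.feature -> u.maxVersionLevel))
    }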

 

Details about the write path are explained [in this 
part|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-ChangestoKafkaController]
 of the KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9715) TransactionStateManager: Eliminate unused reference to interBrokerProtocolVersion

2020-03-12 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-9715:
---

 Summary: TransactionStateManager: Eliminate unused reference to 
interBrokerProtocolVersion
 Key: KAFKA-9715
 URL: https://issues.apache.org/jira/browse/KAFKA-9715
 Project: Kafka
  Issue Type: Improvement
Reporter: Kowshik Prakasam


In TransactionStateManager, the attribute interBrokerProtocolVersion is unused. 
It can therefore be eliminated from the code.

 

[https://github.com/apache/kafka/blob/07db26c20fcbccbf758591607864f7fd4bd8975f/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala#L78]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13701) Pin background worker threads for certain background work (ex: UnifiedLog.flush())

2022-02-28 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-13701:


 Summary: Pin background worker threads for certain background work 
(ex: UnifiedLog.flush())
 Key: KAFKA-13701
 URL: https://issues.apache.org/jira/browse/KAFKA-13701
 Project: Kafka
  Issue Type: Improvement
Reporter: Kowshik Prakasam


Certain background work, such as UnifiedLog.flush(), need not support 
concurrency. Today, the existing KafkaScheduler does not let us pin 
background work to specific threads. As a result we are unable to prevent 
concurrent UnifiedLog.flush() calls, so we have to make the UnifiedLog.flush() 
implementation thread safe by modifying the code in subtle areas (ex: [PR 
#11814|https://github.com/apache/kafka/pull/11814]). The code would be simpler 
if KafkaScheduler (or the like) could instead pin certain background work to 
specific threads, for example so that UnifiedLog.flush() operations for the 
same topic-partition always go to the same thread. This would ensure strict 
ordering of flush() calls, thereby enabling us to write simpler code 
eventually.
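
A minimal sketch of the pinning idea (illustrative only, not the
KafkaScheduler API): hash the key to a fixed single-threaded executor, so
all work for the same key runs in submission order on one thread.

    import java.util.concurrent.{ExecutorService, Executors}

    class PinnedScheduler(numThreads: Int) {
      private val executors: Array[ExecutorService] =
        Array.fill(numThreads)(Executors.newSingleThreadExecutor())

      // Work scheduled under the same key (e.g. a topic-partition) always
      // lands on the same thread, giving strict per-key ordering.
      def schedule(key: AnyRef)(work: => Unit): Unit = {
        val idx = java.lang.Math.floorMod(key.hashCode, numThreads)
        executors(idx).submit(new Runnable { def run(): Unit = work })
      }

      def shutdown(): Unit = executors.foreach(_.shutdown())
    }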



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-12240) Proposal for Log layer refactoring

2021-01-26 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12240:


 Summary: Proposal for Log layer refactoring
 Key: KAFKA-12240
 URL: https://issues.apache.org/jira/browse/KAFKA-12240
 Project: Kafka
  Issue Type: Improvement
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


The document containing the proposed idea for the Log layer refactor for 
KIP-405 can be found here: 
[https://docs.google.com/document/d/1dQJL4MCwqQJSPmZkVmVzshFZKuFy_bCPtubav4wBfHQ/edit?usp=sharing]
 .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12553) Refactor Log layer recovery logic

2021-03-25 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12553:


 Summary: Refactor Log layer recovery logic
 Key: KAFKA-12553
 URL: https://issues.apache.org/jira/browse/KAFKA-12553
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


Refactor Log layer recovery logic by extracting it out of the kafka.log.Log 
class into separate modules.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12554) Split Log layer into UnifiedLog and LocalLog

2021-03-25 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12554:


 Summary: Split Log layer into UnifiedLog and LocalLog
 Key: KAFKA-12554
 URL: https://issues.apache.org/jira/browse/KAFKA-12554
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


Split Log layer into UnifiedLog and LocalLog based on the proposal described in 
this document: 
https://docs.google.com/document/d/1dQJL4MCwqQJSPmZkVmVzshFZKuFy_bCPtubav4wBfHQ/edit#.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12552) Extract segments map out of Log class into separate class

2021-03-25 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12552:


 Summary: Extract segments map out of Log class into separate class
 Key: KAFKA-12552
 URL: https://issues.apache.org/jira/browse/KAFKA-12552
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


Extract segments map out of Log class into separate class. This will be 
particularly useful to refactor the recovery logic in Log class.
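
A minimal sketch of the extraction (the method set is illustrative, and
LogSegment is a stand-in for kafka.log.LogSegment):

    import java.util.concurrent.ConcurrentSkipListMap

    case class LogSegment(baseOffset: Long)  // stub for kafka.log.LogSegment

    class LogSegments {
      // base offset -> segment, kept sorted so floor lookups stay cheap.
      private val segments = new ConcurrentSkipListMap[java.lang.Long, LogSegment]()

      def add(segment: LogSegment): Unit = segments.put(segment.baseOffset, segment)
      def remove(baseOffset: Long): Unit = segments.remove(baseOffset)
      def floorSegment(offset: Long): Option[LogSegment] =
        Option(segments.floorEntry(offset)).map(_.getValue)
    }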



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

