Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Vahid Hashemian
Congrats Manikumar!

On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan  wrote:

> Bravo!
>
> On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma  wrote:
>
> > Congratulations Manikumar! Thanks for your continued contributions.
> >
> > Ismael
> >
> > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson 
> > wrote:
> >
> > > Hi all,
> > >
> > > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
> > we
> > > are
> > > pleased to announce that he has accepted!
> > >
> > > Manikumar has contributed 134 commits including significant work to add
> > > support for delegation tokens in Kafka:
> > >
> > > KIP-48:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > > KIP-249:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > >
> > > He has broad experience working with many of the core components in
> Kafka
> > > and he has reviewed over 80 PRs. He has also made huge progress
> > addressing
> > > some of our technical debt.
> > >
> > > We appreciate the contributions and we are looking forward to more.
> > > Congrats Manikumar!
> > >
> > > Jason, on behalf of the Apache Kafka PMC
> > >
> >
>


Re: [DISCUSS] KIP-379: Multiple Consumer Group Management

2018-10-19 Thread Vahid Hashemian
Thanks for proposing the KIP. Looks good to me overall.

I agree with Jason's suggestion that it would be best to keep the current
output format when a single '--group' is present; otherwise, there would be
an impact on users who rely on the current output format. Also, starting
with a GROUP column makes more sense to me.

Also, and for my own info, is there a valid scenario where we would want to
delete all consumer groups? It sounds to me like a potentially dangerous
feature. I would imagine that it would help more with dev/test
environments, where we normally have a few groups (for which the repeating
'--group' option should work).

Regards!
--Vahid

On Thu, Oct 18, 2018 at 11:28 PM Jason Gustafson  wrote:

> Hi Alex,
>
> Thanks for the KIP. I think it makes sense, especially since most of the
> group apis are intended for batching anyway.
>
> The only questions I have are about compatibility. For example, the csv
> format for resetting offsets is changed, so will we continue to support the
> old format? Also, if only one `--group` option is passed, do you think it's
> worth leaving it the group column out of the `--describe` output? That
> would keep the describe output consistent with the current implementation
> for the current usage. Finally, and this is just a nitpick, but do you
> think it makes sense to put the group column first in the describe output?
>
> Thanks,
> Jason
>
>
> On Wed, Oct 3, 2018 at 7:11 AM, Alex D  wrote:
>
> > Hello, friends!
> >
> > Welcome to the Multiple Consumer Group Management feature for
> > kafka-consumer-groups utility discussion thread.
> >
> > KIP is available here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-379%
> > 3A+Multiple+Consumer+Group+Management
> >
> > Pull Request: https://github.com/apache/kafka/pull/5726
> >
> > JIRA ticket: https://issues.apache.org/jira/browse/KAFKA-7471
> >
> > What do you think?
> >
> > Thanks,
> > Alexander Dunayevsky
> >
>


Re: [VOTE] 2.0.1 RC0

2018-10-30 Thread Vahid Hashemian
+1

Tested build with Java 8 and ran quick start successfully on Ubuntu.

Thanks for running the release.
--Vahid

On Thu, Oct 25, 2018 at 7:29 PM Manikumar  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 2.0.1.
>
> This is a bug fix release closing 49 tickets:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
>
> Release notes for the 2.0.1 release:
> http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by  Tuesday, October 30, end of day
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
>
> * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> https://github.com/apache/kafka/releases/tag/2.0.1-rc0
>
> * Documentation:
> http://kafka.apache.org/20/documentation.html
>
> * Protocol:
> http://kafka.apache.org/20/protocol.html
>
> * Successful Jenkins builds for the 2.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/177/
>
> /**
>
> Thanks,
> Manikumar
>


-- 

Thanks!
--Vahid


Re: [VOTE] KIP-374: Add '--help' option to all available Kafka CLI commands

2018-11-12 Thread Vahid Hashemian
+1 (non-binding)
Thanks for the KIP.

--Vahid

On Mon, Nov 12, 2018, 04:06 Mickael Maison  wrote:

> +1 (non-binding)
> Thanks for the KIP!
> On Mon, Nov 12, 2018 at 5:16 AM Becket Qin  wrote:
> >
> > Thanks for the KIP. +1 (binding).
> >
> > On Mon, Nov 12, 2018 at 9:59 AM Harsha Chintalapani 
> wrote:
> >
> > > +1 (binding)
> > >
> > > -Harsha
> > > On Nov 11, 2018, 3:49 PM -0800, Daniele Ascione ,
> > > wrote:
> > > > +1 (non-binding)
> > > >
> > > > Il ven 9 nov 2018, 02:09 Colin McCabe  ha
> scritto:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Oct 31, 2018, at 05:42, Srinivas Reddy wrote:
> > > > > > Hi All,
> > > > > >
> > > > > > I would like to call for a vote on KIP-374:
> > > > > > https://cwiki.apache.org/confluence/x/FgSQBQ
> > > > > >
> > > > > > Summary:
> > > > > > Currently, the '--help' option is recognized by some Kafka
> commands
> > > > > > but not all. To provide a consistent user experience, it would
> > > > > > be nice to> add a '--help' option to all Kafka commands.
> > > > > >
> > > > > > I'd appreciate any votes or feedback.
> > > > > >
> > > > > > --
> > > > > > Srinivas Reddy
> > > > > >
> > > > > > http://mrsrinivas.com/
> > > > > >
> > > > > >
> > > > > > (Sent via gmail web)
> > > > >
> > > > >
> > >
>


Re: [VOTE] 2.1.0 RC1

2018-11-15 Thread Vahid Hashemian
Hi Dong,

Thanks for running the release.

I built binaries from the source, ran quickstart and tests on Ubuntu (with
Java 8 & 9).

I noticed two issues:
* (minor) the documentation still mentions 2.0; I submitted a minor PR to
update it.
* the unit test
`kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable`
failed for me with the following error:

```
kafka.server.DynamicBrokerReconfigurationTest >
testUncleanLeaderElectionEnable FAILED
java.lang.AssertionError: Unclean leader not elected
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at
kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable(DynamicBrokerReconfigurationTest.scala:487)
```

--Vahid

On Fri, Nov 9, 2018 at 3:33 PM Dong Lin  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for feature release of Apache Kafka 2.1.0.
>
> This is a major version release of Apache Kafka. It includes 28 new KIPs
> and
>
> critical bug fixes. Please see the Kafka 2.1.0 release plan for more
> details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
>
> Here are a few notable highlights:
>
> - Java 11 support
> - Support for Zstandard, which achieves compression comparable to gzip with
> higher compression and especially decompression speeds(KIP-110)
> - Avoid expiring committed offsets for active consumer group (KIP-211)
> - Provide Intuitive User Timeouts in The Producer (KIP-91)
> - Kafka's replication protocol now supports improved fencing of zombies.
> Previously, under certain rare conditions, if a broker became partitioned
> from Zookeeper but not the rest of the cluster, then the logs of replicated
> partitions could diverge and cause data loss in the worst case (KIP-320)
> - Streams API improvements (KIP-319, KIP-321, KIP-330, KIP-353, KIP-356)
> - Admin script and admin client API improvements to simplify admin
> operation (KIP-231, KIP-308, KIP-322, KIP-324, KIP-338, KIP-340)
> - DNS handling improvements (KIP-235, KIP-302)
>
> Release notes for the 2.1.0 release:
> http://home.apache.org/~lindong/kafka-2.1.0-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, Nov 15, 12 pm PT ***
>
> * Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~lindong/kafka-2.1.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~lindong/kafka-2.1.0-rc1/javadoc/
>
> * Tag to be voted upon (off 2.1 branch) is the 2.1.0-rc1 tag:
> https://github.com/apache/kafka/tree/2.1.0-rc1
>
> * Documentation:
> http://kafka.apache.org/21/documentation.html
>
> * Protocol:
> http://kafka.apache.org/21/protocol.html
>
> * Successful Jenkins builds for the 2.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/50/
>
> Please test and verify the release artifacts and submit a vote for this RC,
> or report any issues so we can fix them and get a new RC out ASAP. Although
> this release vote requires PMC votes to pass, testing, votes, and bug
> reports are valuable and appreciated from everyone.
>
> Cheers,
> Dong
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] Apache Kafka 2.1.0

2018-11-21 Thread Vahid Hashemian
> Gunnar Morling, Guozhang Wang, hashangayasri, huxi, huxihx, Ismael
> Juma,
> > > Jagadesh Adireddi, Jason Gustafson, Jim Galasyn, Jimin Hsieh, Jimmy
> > Casey,
> > > Joan Goyeau, John Roesler, Jon Lee, jonathanskrzypek, Jun Rao, Kamal
> > > Chandraprakash, Kevin Lafferty, Kevin Lu, Koen De Groote, Konstantine
> > > Karantasis, lambdaliu, Lee Dongjin, Lincong Li, Liquan Pei, lucapette,
> > > Lucas Wang, Maciej Bryński, Magesh Nandakumar, Manikumar Reddy,
> Manikumar
> > > Reddy O, Mario Molina, Marko Stanković, Matthias J. Sax, Matthias
> > > Wessendorf, Max Zheng, Mayank Tankhiwale, mgharat, Michal Dziemianko,
> > > Michał Borowiecki, Mickael Maison, Mutasem Aldmour, Nikolay, nixsticks,
> > > nprad, okumin, Radai Rosenblatt, radai-rosenblatt, Rajini Sivaram,
> > Randall
> > > Hauch, Robert Yokota, Rohan, Ron Dagostino, Sam Lendle, Sandor
> Murakozi,
> > > Simon Clark, Stanislav Kozlovski, Stephane Maarek, Sébastien Launay,
> > Sönke
> > > Liebau, Ted Yu, uncleGen, Vahid Hashemian, Viktor Somogyi, wangshao,
> > > xinzhg, Xiongqi Wesley Wu, Xiongqi Wu, ying-zheng, Yishun Guan, Yu
> Yang,
> > > Zhanxiang (Patrick) Huang
> > >
> > > We welcome your help and feedback. For more information on how to
> > > report problems, and to get involved, visit the project website at
> > > https://kafka.apache.org/
> > >
> > > Thank you!
> > >
> > > Regards,
> > > Dong
> >
>


Re: Kafka Cluster Setup

2018-12-06 Thread Vahid Hashemian
Hi Abhimanyu,

I have answered your questions inline, but before that I just want to
emphasize the notion of topics and partitions that are critical to Kafka's
resiliency and scalability.

Topics in Kafka can have multiple partitions. Each partition can be stored
on one broker only. But the number of partitions can grow over time to make
topics scalable.
Each topic partition can be configured to replicate on multiple brokers,
and this is how data becomes resilient to failures and outages.
You can find more detailed information in the highly recommended Kafka
Documentation (https://kafka.apache.org/documentation/).

I hope you find the answers below helpful.

--Vahid

On Thu, Dec 6, 2018 at 10:19 PM Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:

> Hi,
>
> I have a use case I want to set up a Kafka cluster initially at the
> starting I have 1 Kafka Broker(A) and 1 Zookeeper Node. So below mentioned
> are my queries:
>
>- On adding a new Kafka Broker(B) to the cluster. Will all data present
>on broker A will be distributed automatically? If not what I need to do
>distribute the data.
>

When you add a new broker, existing data will not automatically move. In
order to have the new broker receive data, existing partitions need to be
manually moved to that broker. Kafka provides a command-line tool for
(re)assigning partitions to brokers (kafka-reassign-partitions).
As new topic partitions are added to the cluster, they will be distributed
in a way that keeps all brokers busy.


>- Not let's suppose somehow the case! is solved my data is distributed
>on both the brokers. Now due to some maintenance issue, I want to take
> down
>the server B.
>   - How to transfer the data of Broker B to the already existing broker
>   A or to a new Broker C.
>

You can use the partition reassignment tool again to achieve that. If broker
B is going to rejoin the cluster, you may not need to do anything, assuming
you have created your topics (partitions) with resiliency in mind (with
enough replicas). Kafka will take care of partition movements for you.


>- How can I increase the replication factor of my brokers at runtime
>

Again, you can use the same tool and increase the number of brokers
assigned to each partition (
https://kafka.apache.org/documentation/#basic_ops_increase_replication_factor
)


>- How can I change the zookeeper IPs present in Kafka Broker Config at
>runtime without restarting Kafka?
>

This is not a supported operation. Ideally you are supporting your Kafka
cluster with a ZooKeeper ensemble that is itself resilient to failures and
maintenance outages.


>- How can I dynamically change the Kafka Configuration at runtime
>

Thanks to KIP-226 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration)
there are some broker configurations that can be modified without a broker
restart. Details are in that document.
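
For illustration, something like the following sketch updates a dynamic
broker config through the Java AdminClient (the bootstrap server, broker id
"0", and the log.cleaner.threads value are only placeholders; kafka-configs
can do the same from the command line):

```
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DynamicBrokerConfigExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "brokerA:9092"); // placeholder
    try (AdminClient admin = AdminClient.create(props)) {
      // Target broker 0; only configs listed as dynamic in KIP-226 can be updated this way.
      ConfigResource broker0 = new ConfigResource(ConfigResource.Type.BROKER, "0");
      Config update = new Config(
          Collections.singleton(new ConfigEntry("log.cleaner.threads", "2")));
      admin.alterConfigs(Collections.singletonMap(broker0, update)).all().get();
    }
  }
}
```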


>- Regarding Kafka Client:
>   - Do I need to specify all Kafka broker IP to kafkaClient for
>   connection?
>

You need to provide enough broker IPs to guarantee the client can connect
to at least one of them. As long as the client can talk to one broker, it
can obtain all the information it needs (by polling the metadata) to
function.
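
As a minimal sketch (the host names and group id below are just
placeholders), a Java client only needs a couple of reachable brokers in
bootstrap.servers, not the full broker list:

```
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BootstrapExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Any one of these brokers is enough; the client discovers the rest from metadata.
    props.put("bootstrap.servers", "brokerA:9092,brokerB:9092");
    props.put("group.id", "example-group");
    props.put("key.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      // subscribe and poll as usual
    }
  }
}
```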


>   - And each and every time a broker is added or removed does I need to
>   add or remove my IP in Kafka Client connection String. As it will
> always
>   require to restart my producer and consumers?
>

Beyond providing a few bootstrap brokers, a more robust solution is to
refresh the list of available brokers at runtime. A basic approach is to
query ZooKeeper to compile a list of available brokers and use it to
configure the client.
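
For example, one possible sketch (not the only way, and the host name is a
placeholder) refreshes the broker list through the AdminClient instead of
going to ZooKeeper directly:

```
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;

public class BrokerListExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "brokerA:9092"); // placeholder
    try (AdminClient admin = AdminClient.create(props)) {
      // describeCluster() returns the brokers currently registered in the cluster.
      String brokers = admin.describeCluster().nodes().get().stream()
          .map(node -> node.host() + ":" + node.port())
          .collect(Collectors.joining(","));
      System.out.println("Current bootstrap list: " + brokers);
    }
  }
}
```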


>
> *Note:*
>
>- Kafka Version: 2.0.0
>- Zookeeper: 3.4.9
>- Broker Size : (2 core, 8 GB RAM) [4GB for Kafka and 4 GB for OS]
>
> Regards,
> Abhimanyu
>


-- 

Thanks!
--Vahid


Re: Kafka Cluster Setup

2018-12-07 Thread Vahid Hashemian
Hi Abhimanyu,

Suman already answered your cluster upgrade question.

To move all partitions from one broker to another, you can use the
kafka-topics command (with --describe) to list all topic partitions and
their current replicas. You can take that output, replace all instances of
broker B with broker A, and use the result as the input for the
kafka-reassign-partitions command.
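
If it helps, here is a rough sketch of the same idea using the Java
AdminClient to generate the JSON input for kafka-reassign-partitions.
Broker ids 1 and 2 stand in for brokers A and B, the bootstrap server is a
placeholder, and the sketch simply swaps one broker id for another without
trying to balance load:

```
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

public class ReassignmentPlanExample {
  static final int SOURCE_BROKER = 2;  // broker "B" being drained
  static final int TARGET_BROKER = 1;  // broker "A" taking over its replicas

  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "brokerA:9092"); // placeholder
    try (AdminClient admin = AdminClient.create(props)) {
      Set<String> topics = admin.listTopics().names().get();
      List<String> entries = new ArrayList<>();
      for (TopicDescription td : admin.describeTopics(topics).all().get().values()) {
        td.partitions().forEach(p -> {
          // Replace every occurrence of the source broker in the replica list.
          String replicas = p.replicas().stream()
              .map(n -> String.valueOf(n.id() == SOURCE_BROKER ? TARGET_BROKER : n.id()))
              .collect(Collectors.joining(","));
          entries.add(String.format(
              "{\"topic\":\"%s\",\"partition\":%d,\"replicas\":[%s]}",
              td.name(), p.partition(), replicas));
        });
      }
      // Feed this JSON to kafka-reassign-partitions --execute.
      System.out.println("{\"version\":1,\"partitions\":[" + String.join(",", entries) + "]}");
    }
  }
}
```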

Regards.
--Vahid

On Thu, Dec 6, 2018 at 11:35 PM Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:

> Hi Vahid,
>
> I missed one more point.
>
>
>- Can I have two brokers at the same time running at multiple versions
>just for Kafka version upgrade?
>
> Regards,
> Abhimanyu
>
> On Fri, Dec 7, 2018 at 1:03 PM Abhimanyu Nagrath <
> abhimanyunagr...@gmail.com> wrote:
>
>> Thanks for clarifying all the points . Just a small doubt regarding my
>> second query:
>>
>>
>>- *Not let's suppose somehow the case! is solved my data is
>>distributed on both the brokers. Now due to some maintenance issue, I want
>>to take down the server B.*
>>   - *How to transfer the data of Broker B to the already existing
>>   broker A or to a new Broker C.*
>>
>> Is there any command through which I can say that move all data from
>> broker B to broker A.
>>
>> Regards,
>> Abhimanyu
>>
>> On Fri, Dec 7, 2018 at 12:33 PM Vahid Hashemian <
>> vahid.hashem...@gmail.com> wrote:
>>
>>> Hi Abhimanyu,
>>>
>>> I have answered your questions inline, but before that I just want to
>>> emphasize the notion of topics and partitions that are critical to
>>> Kafka's
>>> resiliency and scalability.
>>>
>>> Topics in Kafka can have multiple partitions. Each partition can be
>>> stored
>>> on one broker only. But the number of partitions can grow over time to
>>> make
>>> topics scalable.
>>> Each topic partition can be configured to replicate on multiple brokers,
>>> and this is how data becomes resilient to failures and outages.
>>> You can find more detailed information in the highly recommended Kafka
>>> Documentation (https://kafka.apache.org/documentation/).
>>>
>>> I hope you find the answers below helpful.
>>>
>>> --Vahid
>>>
>>> On Thu, Dec 6, 2018 at 10:19 PM Abhimanyu Nagrath <
>>> abhimanyunagr...@gmail.com> wrote:
>>>
>>> > Hi,
>>> >
>>> > I have a use case I want to set up a Kafka cluster initially at the
>>> > starting I have 1 Kafka Broker(A) and 1 Zookeeper Node. So below
>>> mentioned
>>> > are my queries:
>>> >
>>> >- On adding a new Kafka Broker(B) to the cluster. Will all data
>>> present
>>> >on broker A will be distributed automatically? If not what I need
>>> to do
>>> >distribute the data.
>>> >
>>>
>>> When you add a new broker, existing data will not automatically move. In
>>> order to have the new broker receive data, existing partitions need to
>>> manually move to that broker. Kafka provides a command line tool for
>>> (re)assigning partitions to broker (kafka-reassign-partitions example).
>>> As new topic partitions are added to the cluster they will be distributed
>>> in a way that keeps all brokers busy.
>>>
>>>
>>> >- Not let's suppose somehow the case! is solved my data is
>>> distributed
>>> >on both the brokers. Now due to some maintenance issue, I want to
>>> take
>>> > down
>>> >the server B.
>>> >   - How to transfer the data of Broker B to the already existing
>>> broker
>>> >   A or to a new Broker C.
>>> >
>>>
>>> You can use the reassign partition tools again to achieve that. If
>>> broker B
>>> is going to join the cluster again, you may not need to do anything,
>>> assuming you have created your topics (partitions) with resiliency in
>>> mind
>>> (with enough replicas). Kafka will take care of partition movements for
>>> you.
>>>
>>>
>>> >- How can I increase the replication factor of my brokers at runtime
>>> >
>>>
>>> Again, you can use the same tool and increase the number of brokers
>>> assigned to each partition (
>>>
>>> https://kafka.apache.org/documentation/#basic_ops_increase_replication_factor
>>> )
>>>
>>>
>>> >- How can I change the zoo

Re: Finding reviewrs for a Kafka issue fix

2018-12-07 Thread Vahid Hashemian
I can take a look at the PR over the weekend.

--Vahid

On Fri, Dec 7, 2018 at 4:42 AM lk gen  wrote:

> As a  Kafka development newbe, what is the process for selecting reviewers
> ? Is there some kind of list of reviewers ? Is it possible to assign
> reviewers without checking with them ? is there some kind of bulletin board
> for finding reviewers ?
>
>
> On Fri, Dec 7, 2018 at 11:18 AM Gwen Shapira  wrote:
>
> > We normally self-select. I think in this case, the challenge is
> > finding reviewers who are comfortable with windows...
> > On Fri, Dec 7, 2018 at 10:17 PM lk gen  wrote:
> > >
> > >   I have fixed a Kafka issue over a week ago with a CI passing pull
> > > request, but there are no reviewers
> > >
> > >   How are reviewers added/chosen for Kafka issues fixes ?
> > >
> > >   https://issues.apache.org/jira/browse/KAFKA-6988
> >
> >
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


Re: [VOTE] KIP-379: Multiple Consumer Group Management

2019-01-15 Thread Vahid Hashemian
+1. Thanks for the KIP.

--Vahid

On Tue, Jan 15, 2019, 04:13 Damian Guy  wrote:

> +1 (binding) thanks
>
> On Tue, 15 Jan 2019 at 09:43, Alex D  wrote:
>
> > Hello guys,
> >
> > 2 votes from Jason, Gwen
> > Any binding votes?
> >
> >
> > Hello M Manna,
> > If migrating to JSON makes sense, you can post a new KIP for this
> purpose.
> >
> > Thanks,
> > Alex Dunayevsky
> >
> > On Tue, 15 Jan 2019, 04:24 Jason Gustafson  >
> > > +1 Thanks for the KIP!
> > >
> > > -Jason
> > >
> > > On Mon, Jan 14, 2019 at 5:15 PM Gwen Shapira 
> wrote:
> > >
> > > > I was also wondering about that. I think its for compatibility with
> > > > the existing output.
> > > >
> > > > We can have a separate KIP to add JSON output.
> > > >
> > > > On Mon, Jan 14, 2019 at 7:55 AM M. Manna  wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > Thanks for this KIP. Could you kindly clarify why CSV format is
> > useful,
> > > > but
> > > > > not anything else?
> > > > >
> > > > > CSV format is ancient and the only reason it keep existing is
> various
> > > > > application is because legacy applications aren't moving away from
> > > using
> > > > > them. Would you would consider JSON or YAML?
> > > > >
> > > > > Also, if you think about the kafka-reassign-partitions - it's also
> > > using
> > > > > JSON, not CSV. That is my only point. However, if majority feels
> that
> > > > it's
> > > > > not
> > > > > an issue I believe it's a team decision after all :).
> > > > >
> > > > > Thanks,
> > > > >
> > > > >
> > > > > On Mon, 14 Jan 2019 at 15:06, Gwen Shapira 
> > wrote:
> > > > >
> > > > > > +1. Thanks, that will be very helpful.
> > > > > >
> > > > > > On Mon, Jan 14, 2019, 4:43 AM Alex D  > > wrote:
> > > > > >
> > > > > > > Hello guys,
> > > > > > >
> > > > > > > We need your VOTES for the KIP-379: Multiple Consumer Group
> > > > Management.
> > > > > > >
> > > > > > > KIP-379:
> > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-379%3A+Multiple+Consumer+Group+Management
> > > > > > >
> > > > > > > PR: https://github.com/apache/kafka/pull/5726/
> > > > > > >
> > > > > > > Let's start the voting session.
> > > > > > >
> > > > > > > Thank you,
> > > > > > >
> > > > > > > Alex Dunayevsky
> > > > > > >
> > > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Gwen Shapira
> > > > Product Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter | blog
> > > >
> > >
> >
>


Re: [ANNOUNCE] New Committer: Vahid Hashemian

2019-01-16 Thread Vahid Hashemian
Thank you all. It's my privilege to be involved in this great community and
I look forward to continuing this collaboration.

Regards,
--Vahid

On Wed, Jan 16, 2019, 04:58 Attila Sasvári  wrote:

> Congratulations Vahid!
>
> On Tue, Jan 15, 2019 at 11:45 PM Jason Gustafson 
> wrote:
>
> > Hi All,
> >
> > The PMC for Apache Kafka has invited Vahid Hashemian as a project
> > committer and
> > we are
> > pleased to announce that he has accepted!
> >
> > Vahid has made numerous contributions to the Kafka community over the
> past
> > few years. He has authored 13 KIPs with core improvements to the consumer
> > and the tooling around it. He has also contributed nearly 100 patches
> > affecting all parts of the codebase. Additionally, Vahid puts a lot of
> > effort into community engagement, helping others on the mail lists and
> > sharing his experience at conferences and meetups.
> >
> > We appreciate the contributions and we are looking forward to more.
> > Congrats Vahid!
> >
> > Jason, on behalf of the Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Vahid Hashemian
Congratulations Dong!

--Vahid

On Mon, Aug 20, 2018 at 1:08 PM Guozhang Wang  wrote:

> Congratulations Dong!
>
> On Mon, Aug 20, 2018 at 10:59 AM, Matthias J. Sax 
> wrote:
>
> > Congrats Dong!
> >
> > -Matthias
> >
> > On 8/20/18 3:54 AM, Ismael Juma wrote:
> > > Hi everyone,
> > >
> > > Dong Lin became a committer in March 2018. Since then, he has remained
> > > active in the community and contributed a number of patches, reviewed
> > > several pull requests and participated in numerous KIP discussions. I
> am
> > > happy to announce that Dong is now a member of the
> > > Apache Kafka PMC.
> > >
> > > Congratulation Dong! Looking forward to your future contributions.
> > >
> > > Ismael, on behalf of the Apache Kafka PMC
> > >
> >
> >
>
>
> --
> -- Guozhang
>


-- 

Thanks!
--Vahid


Re: Apache Kafka Authentication

2018-09-20 Thread Vahid Hashemian
Hi Rasheed,

This article https://developer.ibm.com/code/howtos/kafka-authn-authz explains
how to enable authentication and authorization in a Kafka cluster.
Note: it does not cover encryption.
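
For reference, the client-side settings look roughly like the sketch below.
This is only an illustrative Java-client example assuming the brokers expose
a SASL/PLAIN listener without TLS; the listener address, username, and
password are placeholders, and your .NET client will need the equivalent
settings for your actual broker configuration.

```
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SaslConsumerExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9093");   // SASL listener (placeholder)
    props.put("security.protocol", "SASL_PLAINTEXT"); // use SASL_SSL if TLS is enabled
    props.put("sasl.mechanism", "PLAIN");
    props.put("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"alice\" password=\"alice-secret\";");
    props.put("group.id", "example-group");
    props.put("key.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      // subscribe and poll as usual once the SASL handshake succeeds
    }
  }
}
```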

Regards.
--Vahid

On Wed, Sep 19, 2018 at 10:33 PM Rasheed Siddiqui 
wrote:

> Dear Team,
>
>
>
> I want to know the detail document and discussion regarding the Kafka
> Authentication.
>
>
>
> We are building the Consumer on .Net Platform. We have difficulty in
> communication with Producer as we have developed Unsecure Consumer.
>
> So Please suggest to resolve this issue.
>
> Thanks in Advance!!!
>
>
>
>
>
>
>
> *Thanks & Regards,*
>
>
>
>
> *Rasheed Siddiqui *
>
> *Sr.Technical Analyst *
>
> M: 8655567060  E: rash...@ccentric.co
>
> www.ccentric.co
>
>
> *Years of *
>
> *Customer *
>
> *Excellence*
>
>
>
>
>


Re: [ANNOUNCE] New committer: Colin McCabe

2018-09-25 Thread Vahid Hashemian
Congratulations Colin!

Regards.
--Vahid

On Tue, Sep 25, 2018 at 3:43 PM Colin McCabe  wrote:

> Thanks, everyone!
>
> best,
> Colin
>
>
> On Tue, Sep 25, 2018, at 15:26, Robert Barrett wrote:
> > Congratulations Colin!
> >
> > On Tue, Sep 25, 2018 at 1:51 PM Matthias J. Sax 
> > wrote:
> >
> > > Congrats Colin! The was over due for some time :)
> > >
> > > -Matthias
> > >
> > > On 9/25/18 1:51 AM, Edoardo Comar wrote:
> > > > Congratulations Colin !
> > > > --
> > > >
> > > > Edoardo Comar
> > > >
> > > > IBM Event Streams
> > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > >
> > > >
> > > >
> > > >
> > > > From:   Ismael Juma 
> > > > To: Kafka Users , dev <
> dev@kafka.apache.org>
> > > > Date:   25/09/2018 09:40
> > > > Subject:[ANNOUNCE] New committer: Colin McCabe
> > > >
> > > >
> > > >
> > > > Hi all,
> > > >
> > > > The PMC for Apache Kafka has invited Colin McCabe as a committer and
> we
> > > > are
> > > > pleased to announce that he has accepted!
> > > >
> > > > Colin has contributed 101 commits and 8 KIPs including significant
> > > > improvements to replication, clients, code quality and testing. A few
> > > > highlights were KIP-97 (Improved Clients Compatibility Policy),
> KIP-117
> > > > (AdminClient), KIP-227 (Incremental FetchRequests to Increase
> Partition
> > > > Scalability), the introduction of findBugs and adding Trogdor (fault
> > > > injection and benchmarking tool).
> > > >
> > > > In addition, Colin has reviewed 38 pull requests and participated in
> more
> > > > than 50 KIP discussions.
> > > >
> > > > Thank you for your contributions Colin! Looking forward to many
> more. :)
> > > >
> > > > Ismael, for the Apache Kafka PMC
> > > >
> > > >
> > > >
> > > > Unless stated otherwise above:
> > > > IBM United Kingdom Limited - Registered in England and Wales with
> number
> > > > 741598.
> > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> PO6
> > > 3AU
> > > >
> > >
> > >
>


Re: [VOTE] 2.1.1 RC1

2019-02-02 Thread Vahid Hashemian
Hi Colin,

Will there be another RC for 2.1.1?

Thanks,
--Vahid

On Fri, Feb 1, 2019 at 8:28 AM Viktor Somogyi-Vass 
wrote:

> Hi,
>
> Ran the ducktapes but the streams upgrade tests failed because the dev
> version was not updated. This will be fixed in
> https://github.com/apache/kafka/pull/6217. Once it's merged I'll rerun
> them.
>
> Viktor
>
> On Wed, 30 Jan 2019, 22:19 Jonathan Santilli  wrote:
>
> > Hello,
> >
> > I have downloaded the source code for tag *2.1.1-rc1* (667980043).
> Executed
> > integration and unit tests:
> >
> > *BUILD SUCCESSFUL in 25m 34s*
> > *136 actionable tasks: 133 executed, 3 up-to-date*
> >
> >
> > Also, I have downloaded the artifact from
> > http://home.apache.org/~cmccabe/kafka-2.1.1-rc1/kafka_2.11-2.1.1.tgz and
> > ran a cluster of 3 Brokers.
> > Kept Kafka-monitor running for about 1 hour without any issues.
> >
> > +1
> >
> > Thanks for the release Colin.
> > --
> > Jonathan Santilli
> >
> >
> > On Wed, Jan 30, 2019 at 8:18 PM Eno Thereska 
> > wrote:
> >
> > > I couldn't repro locally, that was on an m3.large. And it's not
> happening
> > > anymore. Might be a transient issue.
> > >
> > > Thanks,
> > > Eno
> > >
> > > On Wed, Jan 30, 2019 at 6:46 PM Colin McCabe 
> wrote:
> > >
> > > > (+all lists)
> > > >
> > > > Hi Eno,
> > > >
> > > > Thanks for testing this.
> > > >
> > > > Those tests passed in the Jenkins build we did here:
> > > > https://builds.apache.org/job/kafka-2.1-jdk8/118/
> > > >
> > > > Perhaps there is an environment issue at play here?  Do you get the
> > same
> > > > failures running those tests on the 2.1 release?
> > > >
> > > > Best,
> > > > Colin
> > > >
> > > > On Wed, Jan 30, 2019, at 09:11, Eno Thereska wrote:
> > > > > Hi Colin,
> > > > >
> > > > > I've been running the tests and so far I get the following
> failures.
> > > Are
> > > > > they known?
> > > > >
> > > > > kafka.server.ReplicaManagerQuotasTest >
> > > > shouldGetBothMessagesIfQuotasAllow
> > > > > FAILED
> > > > > kafka.server.ReplicaManagerQuotasTest >
> > > > > testCompleteInDelayedFetchWithReplicaThrottling FAILED
> > > > > kafka.server.ReplicaManagerQuotasTest >
> > > > > shouldExcludeSubsequentThrottledPartitions FAILED
> > > > > kafka.server.ReplicaManagerQuotasTest >
> > > > > shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions FAILED
> > > > > kafka.server.ReplicaManagerQuotasTest >
> > > > > shouldIncludeInSyncThrottledReplicas FAILED
> > > > >
> > > > > Thanks
> > > > > Eno
> > > > >
> > > > > On Sun, Jan 27, 2019 at 9:46 PM Colin McCabe 
> > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > This is the second candidate for release of Apache Kafka 2.1.1.
> > This
> > > > > > release includes many bug fixes for Apache Kafka 2.1.
> > > > > >
> > > > > > Compared to rc0, this release includes the following changes:
> > > > > > * MINOR: Upgrade ducktape to 0.7.5 (#6197)
> > > > > > * KAFKA-7837: Ensure offline partitions are picked up as soon as
> > > > possible
> > > > > > when shrinking ISR
> > > > > > * tests/kafkatest/__init__.py now contains __version__ = '2.1.1'
> > > rather
> > > > > > than '2.1.1.dev0'
> > > > > > * Maven artifacts should be properly staged this time
> > > > > > * I have added my GPG key to https://kafka.apache.org/KEYS
> > > > > >
> > > > > > Check out the release notes here:
> > > > > >
> http://home.apache.org/~cmccabe/kafka-2.1.1-rc1/RELEASE_NOTES.html
> > > > > >
> > > > > > The vote will go until Friday, February 1st.
> > > > > >
> > > > > > * Release artifacts to be voted upon (source and binary):
> > > > > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc1/
> > > > > >
> > > > > > * Maven artifacts to be voted upon:
> > > > > > https://repository.apache.org/content/groups/staging/
> > > > > >
> > > > > > * Javadoc:
> > > > > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc1/javadoc/
> > > > > >
> > > > > > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
> > > > > > https://github.com/apache/kafka/releases/tag/2.1.1-rc1
> > > > > >
> > > > > > * Successful Jenkins builds for the 2.1 branch:
> > > > > > Unit/integration tests:
> > > > https://builds.apache.org/job/kafka-2.1-jdk8/118/
> > > > > >
> > > > > > thanks,
> > > > > > Colin
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> > Santilli Jonathan
> >
>


Re: Please add me to dev list

2019-02-09 Thread Vahid Hashemian
Hi Abhishek,

Thanks for your interest. What's your username?

--Vahid

On Sat, Feb 9, 2019 at 2:32 PM abhishek jain 
wrote:

> Hey, I am abhishek jain
> Please add me to Dev list so that i could assign some jiras to myself and
> work on it.
>


Re: Please add me to dev list

2019-02-09 Thread Vahid Hashemian
Added you to the list. Thanks!

--Vahid

On Sat, Feb 9, 2019 at 5:03 PM abhishek jain 
wrote:

> jainabhi1
>
> On Sun, 10 Feb 2019 at 6:23 AM, Vahid Hashemian  >
> wrote:
>
> > Hi Abhishek,
> >
> > Thanks for your interest. What's you user name?
> >
> > --Vahid
> >
> > On Sat, Feb 9, 2019 at 2:32 PM abhishek jain 
> > wrote:
> >
> > > Hey, I am abhishek jain
> > > Please add me to Dev list so that i could assign some jiras to myself
> and
> > > work on it.
> > >
> >
>


Re: [VOTE] 2.1.1 RC2

2019-02-10 Thread Vahid Hashemian
+1

I built from the source, and ran the Quickstart on Ubuntu 18.04 with Java 8.
Some unit tests fail for me but none consistently across several runs.

Thanks for running the release Colin.

--Vahid

On Sat, Feb 9, 2019 at 2:24 PM Jakub Scholz  wrote:

> +1 (non-binding). I built it from source and run my tests. Everything seems
> to be fine.
>
> On Sat, Feb 9, 2019 at 12:10 AM Magnus Edenhill 
> wrote:
>
> > +1
> >
> > Passes librdkafka test suite.
> >
> > Den fre 8 feb. 2019 kl 21:02 skrev Colin McCabe :
> >
> > > Hi all,
> > >
> > > This is the third candidate for release of Apache Kafka 2.1.1.  This
> > > release includes many bug fixes for Apache Kafka 2.1.
> > >
> > > Compared to rc1, this release includes the following changes:
> > > * MINOR: release.py: fix some compatibility problems.
> > > * KAFKA-7897; Disable leader epoch cache when older message formats are
> > > used
> > > * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
> > > login fails
> > > * MINOR: Fix more places where the version should be bumped from 2.1.0
> ->
> > > 2.1.1
> > > * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if
> the
> > > hostname of the broker changes.
> > > * KAFKA-7873; Always seek to beginning in KafkaBasedLog
> > > * MINOR: Correctly set dev version in version.py
> > >
> > > Check out the release notes here:
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
> > >
> > > The vote will go until Wednesday, February 13st.
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
> > > https://github.com/apache/kafka/releases/tag/2.1.1-rc2
> > >
> > > * Jenkins builds for the 2.1 branch:
> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
> > >
> > > Thanks to everyone who tested the earlier RCs.
> > >
> > > cheers,
> > > Colin
> > >
> >
>


Re: Contribution

2019-02-11 Thread Vahid Hashemian
Hi Valeria,

I added you as a contributor so you should now be able to assign JIRA
tickets to yourself.
Thanks for your interest.

--Vahid

On Mon, Feb 11, 2019 at 6:51 AM Kartik Kalaghatgi <
karthikkalaghatgi...@gmail.com> wrote:

> Hi Valeria,
>
> >> if you want to work on any existing open jira . Ask the team to assign
> it to you. Once done you can start working on it.
>
> >> if you have hit any issues which seems to be a bug, then you can raise a
> jira ticket and assign it to you and start working on it.
> (Don’t forget to check jira, before opening the new ticket)
>
> >> if you have any new idea, you can open a new KIP and have a discussion
> on it.
>
>
> Kafka team correct me if I am wrong.
>
> Regards,
> Kartik
>
> On Mon, 11 Feb 2019 at 6:08 PM, Valeria Vasylieva <
> valeria.vasyli...@gmail.com> wrote:
>
> > Thank you, Kartik, I have read it.
> > As I have understood, can I only start working on an issue when Kafka
> > committer or PMC member will add me as a contributor, so that I will be
> > able to assign JIRA issues to myself.
> > My JIRA ID: nimfadora (
> > https://issues.apache.org/jira/secure/ViewProfile.jspa?name=nimfadora)
> >
> >
> > пн, 11 февр. 2019 г. в 15:21, Christopher Bogan <
> > ambitiousking...@gmail.com
> > >:
> >
> > > Hello I'm a little confused about the information how do I get added
> > >
> > > On Mon, Feb 11, 2019, 7:13 AM Kartik Kalaghatgi <
> > > karthikkalaghatgi...@gmail.com wrote:
> > >
> > > > Check out :
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
> > > >
> > > > -Kartik
> > > >
> > > > On Mon, 11 Feb 2019 at 5:00 PM, Валерия Васильева <
> > > > valeria.vasyli...@gmail.com> wrote:
> > > >
> > > > > Hi!
> > > > >
> > > > > My name is Valeria Vasylieva, I am Java/Scala developer interested
> in
> > > > > contributing to Apache Kafka. I have viewed Apache Kafka JIRA
> issues
> > > and
> > > > > found interesting ones.
> > > > > Please, could you give me a quick introduction on how to contribute
> > to
> > > > the
> > > > > project, what steps should I follow?
> > > > >
> > > > > Thank you,
> > > > >
> > > > > Valeria Vasylieva
> > > > >
> > > >
> > >
> >
>


Re: Access to Contribution

2019-02-11 Thread Vahid Hashemian
Hi Gurudatt,

Your id is added to the contributors list.
Thanks for your interest.

--Vahid

On Mon, Feb 11, 2019 at 5:12 AM Gurudatt Kulkarni 
wrote:

> Hi Kafka Team,
>
> I would like to start contributing to Apache Kafka. My JIRA id is
> *gurudatt
> . *
>
> Regards,
> Gurudatt Kulkarni
>


Re: Add to Contributors list.

2019-02-11 Thread Vahid Hashemian
Hi Kartik,

I added you to the list. Thanks in advance for contributing.

--Vahid

On Mon, Feb 11, 2019 at 8:19 AM Kartik Kalaghatgi <
karthikkalaghatgi...@gmail.com> wrote:

> Hi Team.
>
> Can you add me to contributors list?
> Jira ID :
> https://issues.apache.org/jira/secure/ViewProfile.jspa?name=kartikvk1996
>
> Regards,
> Kartik
>


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-13 Thread Vahid Hashemian
Congratulations Bill!

On Wed, Feb 13, 2019 at 5:09 PM Matthias J. Sax 
wrote:

> Congrats! Well deserved!
>
> -Matthias
>
> On 2/13/19 4:56 PM, Guozhang Wang wrote:
> > Hello all,
> >
> > The PMC of Apache Kafka is happy to announce that we've added Bill Bejeck
> > as our newest project committer.
> >
> > Bill has been active in the Kafka community since 2015. He has made
> > significant contributions to the Kafka Streams project with more than 100
> > PRs and 4 authored KIPs, including the streams topology optimization
> > framework. Bill's also very keen on tightening Kafka's unit test / system
> > tests coverage, which is a great value to our project codebase.
> >
> > In addition, Bill has been very active in evangelizing Kafka for stream
> > processing in the community. He has given several Kafka meetup talks in
> the
> > past year, including a presentation at Kafka Summit SF. He's also
> authored
> > a book about Kafka Streams (
> > https://www.manning.com/books/kafka-streams-in-action), as well as
> various
> > of posts in public venues like DZone as well as his personal blog (
> > http://codingjunkie.net/).
> >
> > We really appreciate the contributions and are looking forward to see
> more
> > from him. Congratulations, Bill !
> >
> >
> > Guozhang, on behalf of the Apache Kafka PMC
> >
>
>


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-14 Thread Vahid Hashemian
Congrats Randall!

On Thu, Feb 14, 2019, 19:44 Ismael Juma  wrote:

> Congratulations Randall!
>
> On Thu, Feb 14, 2019, 6:16 PM Guozhang Wang 
> > Hello all,
> >
> > The PMC of Apache Kafka is happy to announce another new committer
> joining
> > the project today: we have invited Randall Hauch as a project committer
> and
> > he has accepted.
> >
> > Randall has been participating in the Kafka community for the past 3
> years,
> > and is well known as the founder of the Debezium project, a popular
> project
> > for database change-capture streams using Kafka (https://debezium.io).
> > More
> > recently he has become the main person keeping Kafka Connect moving
> > forward, participated in nearly all KIP discussions and QAs on the
> mailing
> > list. He's authored 6 KIPs and authored 50 pull requests and conducted
> over
> > a hundred reviews around Kafka Connect, and has also been evangelizing
> > Kafka Connect at several Kafka Summit venues.
> >
> >
> > Thank you very much for your contributions to the Connect community
> Randall
> > ! And looking forward to many more :)
> >
> >
> > Guozhang, on behalf of the Apache Kafka PMC
> >
>


Re: [DISCUSS] KIP-427: Add AtMinIsr topic partition category (new metric & TopicCommand option)

2019-03-03 Thread Vahid Hashemian
Hi Kevin,

Thanks for the great write-up and the examples in the KIP that help with
better understanding the motivation.

I also think that having such a category would help with Kafka operations
by providing a more actionable indicator.

One minor concern I have is that, even with this new category, some Kafka
SREs may still need to define their own custom alerting depending on the
situation. For example, for some, atMinIsr may be too late and they might
want to be notified when a partition is at atMinIsr + 1.

But having this new category should be beneficial with Kafka monitoring in
most cases without having to define customized alerts.
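
Just to make the "atMinIsr + 1" case concrete, below is a rough sketch of
the kind of external check an operator can run today (and would still need
for thresholds other than the one the KIP proposes). The topic name and
bootstrap server are placeholders:

```
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.config.ConfigResource;

public class IsrHeadroomCheck {
  public static void main(String[] args) throws Exception {
    String topic = "example-topic";
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092"); // placeholder
    try (AdminClient admin = AdminClient.create(props)) {
      // Look up the topic's effective min.insync.replicas.
      ConfigResource res = new ConfigResource(ConfigResource.Type.TOPIC, topic);
      Config config = admin.describeConfigs(Collections.singleton(res)).all().get().get(res);
      int minIsr = Integer.parseInt(config.get("min.insync.replicas").value());
      // Compare each partition's current ISR size against the chosen threshold.
      TopicDescription td =
          admin.describeTopics(Collections.singleton(topic)).all().get().get(topic);
      td.partitions().forEach(p -> {
        int isrSize = p.isr().size();
        // Alert one step before the partition reaches the minimum ISR size.
        if (isrSize <= minIsr + 1) {
          System.out.printf("partition %d has ISR size %d (min.insync.replicas=%d)%n",
              p.partition(), isrSize, minIsr);
        }
      });
    }
  }
}
```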

Thanks,
--Vahid


On Tue, Feb 12, 2019, 09:02 Kevin Lu  wrote:

> Hi All,
>
> Getting the discussion thread started for KIP-427 in case anyone is free
> right now.
>
> I’d like to propose a new category of topic partitions *AtMinIsr* which are
> partitions that only have the minimum number of in sync replicas left in
> the ISR set (as configured by min.insync.replicas).
>
> This would add two new metrics *ReplicaManager.AtMinIsrPartitionCount *&
> *Partition.AtMinIsr*, and a new TopicCommand option*
> --at-min-isr-partitions* to help in monitoring and alerting.
>
> KIP link: KIP-427: Add AtMinIsr topic partition category (new metric &
> TopicCommand option)
> <
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103089398
> >
>
> Please take a look and let me know what you think.
>
> Regards,
> Kevin
>


Re: [VOTE] KIP-427: Add AtMinIsr topic partition category (new metric & TopicCommand option)

2019-03-06 Thread Vahid Hashemian
Thanks for the KIP Kevin.

+1 (binding)

--Vahid

On Wed, Mar 6, 2019 at 8:39 PM Dongjin Lee  wrote:

> +1 (non-binding)
>
> On Wed, Mar 6, 2019, 3:14 AM Dong Lin  wrote:
>
> > Hey Kevin,
> >
> > Thanks for the KIP!
> >
> > +1 (binding)
> >
> > Thanks,
> > Dong
> >
> > On Tue, Mar 5, 2019 at 9:38 AM Kevin Lu  wrote:
> >
> > > Hi All,
> > >
> > > I would like to start the vote thread for KIP-427: Add AtMinIsr topic
> > > partition category (new metric & TopicCommand option).
> > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103089398
> > >
> > > Thanks!
> > >
> > > Regards,
> > > Kevin
> > >
> >
>


Re: add as contributor

2019-03-28 Thread Vahid Hashemian
Hi Brad,

What's your user id?

--Vahid

On Fri, Mar 29, 2019, 04:36 Brad Ellis  wrote:

> Hi,
>
> I'd like to contribute to the kafka project.  Can you add me as a
> contributor?
> In particular, I'm planning on picking up this JIRA as a first issue:
>
> https://issues.apache.org/jira/browse/KAFKA-8157
>
> Best regards,
>
> Brad Ellis
> https://github.com/tbradellis
>


Re: add as contributor

2019-03-30 Thread Vahid Hashemian
Hi Brad,

I just added you to the list of contributors.

Thanks!
--Vahid

On Sun, Mar 31, 2019, 00:49 Brad Ellis  wrote:

> Hey,
> Just checking in here. I've got some time this weekend and would love to
> start contributing.  But, I don't want to go too far before assigning a
> Jira to myself and end up duplicating work.
>
> Can you add me as a contributor so that I can assign a Jira to myself (or
> assign this one to me: KAFKA-8157
> )?
>
> Name: Travis Brad Ellis
> username: Brad Ellis
> https://issues.apache.org/jira/secure/ViewProfile.jspa
> https://github.com/tbradellis
>
>
> On Thu, Mar 28, 2019 at 4:41 PM Brad Ellis  wrote:
>
> > Hi,
> >
> > I'd like to contribute to the kafka project.  Can you add me as a
> > contributor?
> > In particular, I'm planning on picking up this JIRA as a first issue:
> >
> > https://issues.apache.org/jira/browse/KAFKA-8157
> >
> > Best regards,
> >
> > Brad Ellis
> > https://github.com/tbradellis
> >
>


[UPDATE] KIP-341: Update Sticky Assignor's User Data Protocol

2019-04-15 Thread Vahid Hashemian
Just a heads up to the community that the implementation of this KIP is
almost complete. I'd like to just mention that there was a slight deviation
in implementation from the approved KIP. I have updated the KIP to keep it
consistent with the final implementation.

To check what has changed, please see this version comparison.
Please let me know within the next couple of days if there is any objection
to this update. Otherwise, the corresponding PR will be merged to trunk in
its current form.

Thank you!
--Vahid


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Vahid Hashemian
Congratulations Matthias!

--Vahid

On Thu, Apr 18, 2019 at 9:39 PM Manikumar  wrote:

> Congrats Matthias!. well deserved.
>
> On Fri, Apr 19, 2019 at 7:44 AM Dong Lin  wrote:
>
> > Congratulations Matthias!
> >
> > Very well deserved!
> >
> > On Thu, Apr 18, 2019 at 2:35 PM Guozhang Wang 
> wrote:
> >
> > > Hello Everyone,
> > >
> > > I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
> > >
> > > Matthias has been a committer since Jan. 2018, and since then he
> > continued
> > > to be active in the community and made significant contributions the
> > > project.
> > >
> > >
> > > Congratulations to Matthias!
> > >
> > > -- Guozhang
> > >
> >
>


-- 

Thanks!
--Vahid


Re: Cannot create a KIP

2019-04-23 Thread Vahid Hashemian
Hi Daniyar,

I gave KIP creation permission to user "daniyar.yeralin", assuming it's you
:)

Thanks for contributing!
--Vahid

On Tue, Apr 23, 2019 at 10:50 AM Daniyar Yeralin 
wrote:

> Hello,
>
> I was trying to submit a PR: https://github.com/apache/kafka/pull/6592
> Matthias J. Sax told me that I need to go through a formal process on
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
> However, after creating an account, I do not have a permission to create a
> KIP. Furthermore, according to https://kafka.apache.org/contributing It
> simply says "Please contact us to be added the contributor list.", but it
> doesn't specifies how. So, the best way I found is to shoot an email to
> dev@kafka.apache.org
>
> I'm sorry for any inconvenience.
>
> Best,
> Daniyar Yeralin
>


-- 

Thanks!
--Vahid


[DISCUSS] 2.2.1 Bug Fix Release

2019-04-24 Thread Vahid Hashemian
Hi all,

I'd like to volunteer for the release manager of the 2.2.1 bug fix release.
Kafka 2.2.0 was released on March 22, 2019.

At this point, there are 29 resolved JIRA issues scheduled for inclusion in
2.2.1:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1

The release plan is documented here:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1

Thanks!
--Vahid


Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-02 Thread Vahid Hashemian
If there are no objections on the proposed plan, I'll start preparing the
first release candidate.

Thanks,
--Vahid

On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma  wrote:

> Thanks Vahid!
>
> On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> vahid.hashem...@gmail.com>
> wrote:
>
> > Hi all,
> >
> > I'd like to volunteer for the release manager of the 2.2.1 bug fix
> release.
> > Kafka 2.2.0 was released on March 22, 2019.
> >
> > At this point, there are 29 resolved JIRA issues scheduled for inclusion
> in
> > 2.2.1:
> >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> >
> > The release plan is documented here:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> >
> > Thanks!
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] KIP-419 Safely notify Kafka Connect SourceTask is stopped

2019-05-02 Thread Vahid Hashemian
Hi Andrew,

Thanks for the KIP. I'm not too familiar with the internals of KC so I hope
you can clarify a couple of things:

   - It seems the KIP is proposing a new interface because the existing
   "stop()" interface doesn't fully perform what it should ideally be doing.
   Is that a fair statement?
   - You mentioned the "stop()" interface can be called multiple times.
   Would the same thing be true for the proposed interface? Does it matter? Or
   is there a guard against that?
   - I also agree with Ryan that using a verb sounds more intuitive for an
   interface that's supposed to trigger some action.
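
To make my first question a bit more concrete, here is a rough sketch of the
situation I have in mind (JDBC is used only as a stand-in for any session to
a source system, and "connection.url" is a made-up config key): today the
connection can only be released in stop(), even though poll() or commit()
may still be running on another thread.

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class ExampleJdbcSourceTask extends SourceTask {
  private volatile Connection connection;  // session to the source system
  private volatile boolean stopping = false;

  @Override
  public String version() { return "0.1"; }

  @Override
  public void start(Map<String, String> props) {
    try {
      connection = DriverManager.getConnection(props.get("connection.url"));
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public List<SourceRecord> poll() throws InterruptedException {
    if (stopping) {
      return null;  // returning null is allowed when there is nothing to produce
    }
    // ... read from `connection` and build SourceRecords here ...
    return Collections.emptyList();
  }

  @Override
  public void stop() {
    // stop() is only a signal to stop polling; poll()/commit() may still be running
    // on another thread, so it is not obviously safe to close `connection` here.
    // A final "task is stopped" callback, as the KIP proposes, would be the clean
    // place to release it.
    stopping = true;
  }
}
```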

Regards,
--Vahid


On Thu, Jan 24, 2019 at 9:23 AM Ryanne Dolan  wrote:

> Ah, I'm sorta wrong -- in the current implementation, restartTask()
> stops the task and starts a *new* task instance with the same task ID.
> (I'm not certain that is clear from the documentation or interfaces,
> or if that may change in the future.)
>
> Ryanne
>
> On Thu, Jan 24, 2019 at 10:25 AM Ryanne Dolan 
> wrote:
> >
> > Andrew, I believe the task can be started again with start() during the
> stopping and stopped states in your diagram.
> >
> > Ryanne
> >
> > On Thu, Jan 24, 2019, 10:20 AM Andrew Schofield <
> andrew_schofi...@live.com wrote:
> >>
> >> I've now added a diagram to illustrate the states of a SourceTask. The
> KIP is essentially trying to give a clear signal to SourceTask when all
> work has stopped. In particular, if a SourceTask has a session to the
> source system that it uses in poll() and commit(), it now has a safe way to
> release this.
> >>
> >> Andrew Schofield
> >> IBM Event Streams
> >>
> >> On 21/01/2019, 10:13, "Andrew Schofield" 
> wrote:
> >>
> >> Ryanne,
> >> Thanks for your comments. I think my overarching point is that the
> various states of a SourceTask and the transitions between them seem a bit
> loose and that makes it difficult to figure out when the resources held by
> a SourceTask can be safely released. Your "I can't tell from the
> documentation" comment is key here __ Neither could I.
> >>
> >> The problem is that stop() is a signal to stop polling. It's
> basically a request from the framework to the task and it doesn't tell the
> task that it's actually finished. One of the purposes of the KC framework
> is to make life easy for a connector developer and a nice clean "all done
> now" method would help.
> >>
> >> I think I'll add a diagram to illustrate to the KIP.
> >>
> >> Andrew Schofield
> >> IBM Event Streams
> >>
> >> On 18/01/2019, 19:02, "Ryanne Dolan"  wrote:
> >>
> >> Andrew, do we know whether the SourceTask may be start()ed
> again? If this
> >> is the last call to a SourceTask I suggest we call it close().
> I can't tell
> >> from the documentation.
> >>
> >> Also, do we need this if a SourceTask can keep track of whether
> it was
> >> start()ed since the last stop()?
> >>
> >> Ryanne
> >>
> >>
> >> On Fri, Jan 18, 2019, 12:02 PM Andrew Schofield <
> andrew_schofi...@live.com
> >> wrote:
> >>
> >> > Hi,
> >> > I’ve created a new KIP to enhance the SourceTask interface in
> Kafka
> >> > Connect.
> >> >
> >> >
> >> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-419%3A+Safely+notify+Kafka+Connect+SourceTask+is+stopped
> >> >
> >> > Comments welcome.
> >> >
> >> > Andrew Schofield
> >> > IBM Event Streams
> >> >
> >> >
> >>
> >>
> >>
> >>
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread Vahid Hashemian
Thanks for the filter fix and the heads up John.
I'll wait for that to go through then.

--Vahid

On Fri, May 3, 2019 at 8:33 AM John Roesler  wrote:

> Thanks for volunteering, Vahid!
>
> I noticed that the "unresolved issues" filter on the plan page was still
> set to 2.1.1 (I fixed it).
>
> There's one blocker left: https://issues.apache.org/jira/browse/KAFKA-8289
> ,
> but it's merged to trunk and we're cherry-picking to 2.2 today.
>
> Thanks again!
> -John
>
> On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian  >
> wrote:
>
> > If there are no objections on the proposed plan, I'll start preparing the
> > first release candidate.
> >
> > Thanks,
> > --Vahid
> >
> > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma  wrote:
> >
> > > Thanks Vahid!
> > >
> > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > vahid.hashem...@gmail.com>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to volunteer for the release manager of the 2.2.1 bug fix
> > > release.
> > > > Kafka 2.2.0 was released on March 22, 2019.
> > > >
> > > > At this point, there are 29 resolved JIRA issues scheduled for
> > inclusion
> > > in
> > > > 2.2.1:
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > >
> > > > The release plan is documented here:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > >
> > > > Thanks!
> > > > --Vahid
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread Vahid Hashemian
Hi John,

Thanks for confirming.
I'll wait for final bug fix PR for this issue to get merged so we can
safely resolve the ticket. That makes it easier with the release script.
Hopefully, the current build passes.

--Vahid

On Fri, May 3, 2019 at 3:07 PM John Roesler  wrote:

> Hi Vahid,
>
> The fix is merged to 2.2. The ticket isn't resolved yet, because the tests
> failed on the 2.1 merge, but I think the 2.2.1 release is unblocked now.
>
> Thanks,
> -John
>
> On Fri, May 3, 2019 at 10:41 AM Vahid Hashemian  >
> wrote:
>
> > Thanks for the filter fix and the heads up John.
> > I'll wait for that to go through then.
> >
> > --Vahid
> >
> > On Fri, May 3, 2019 at 8:33 AM John Roesler  wrote:
> >
> > > Thanks for volunteering, Vahid!
> > >
> > > I noticed that the "unresolved issues" filter on the plan page was
> still
> > > set to 2.1.1 (I fixed it).
> > >
> > > There's one blocker left:
> > https://issues.apache.org/jira/browse/KAFKA-8289
> > > ,
> > > but it's merged to trunk and we're cherry-picking to 2.2 today.
> > >
> > > Thanks again!
> > > -John
> > >
> > > On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian <
> > vahid.hashem...@gmail.com
> > > >
> > > wrote:
> > >
> > > > If there are no objections on the proposed plan, I'll start preparing
> > the
> > > > first release candidate.
> > > >
> > > > Thanks,
> > > > --Vahid
> > > >
> > > > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma 
> wrote:
> > > >
> > > > > Thanks Vahid!
> > > > >
> > > > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > > > vahid.hashem...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to volunteer for the release manager of the 2.2.1 bug
> fix
> > > > > release.
> > > > > > Kafka 2.2.0 was released on March 22, 2019.
> > > > > >
> > > > > > At this point, there are 29 resolved JIRA issues scheduled for
> > > > inclusion
> > > > > in
> > > > > > 2.2.1:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > > > >
> > > > > > The release plan is documented here:
> > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > > > >
> > > > > > Thanks!
> > > > > > --Vahid
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Thanks!
> > > > --Vahid
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread Vahid Hashemian
Hi Sophie,

Thanks for the heads-up. Once the fix is confirmed, could you please create
a ticket for it and assign it to 2.2.1 release?

Thanks,
--Vahid

On Fri, May 3, 2019 at 3:24 PM Sophie Blee-Goldman 
wrote:

> Hey Vahid,
>
> We also have another minor bug fix we just uncovered and are hoping to get
> in today although I don't think there's a ticket for it atm...just waiting
> for the build to pass.
>
> Thanks for volunteering!
>
> Cheers,
> Sophie
>
> On Fri, May 3, 2019 at 3:16 PM Vahid Hashemian 
> wrote:
>
> > Hi John,
> >
> > Thanks for confirming.
> > I'll wait for final bug fix PR for this issue to get merged so we can
> > safely resolve the ticket. That makes it easier with the release script.
> > Hopefully, the current build passes.
> >
> > --Vahid
> >
> > On Fri, May 3, 2019 at 3:07 PM John Roesler  wrote:
> >
> > > Hi Vahid,
> > >
> > > The fix is merged to 2.2. The ticket isn't resolved yet, because the
> > tests
> > > failed on the 2.1 merge, but I think the 2.2.1 release is unblocked
> now.
> > >
> > > Thanks,
> > > -John
> > >
> > > On Fri, May 3, 2019 at 10:41 AM Vahid Hashemian <
> > vahid.hashem...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Thanks for the filter fix and the heads up John.
> > > > I'll wait for that to go through then.
> > > >
> > > > --Vahid
> > > >
> > > > On Fri, May 3, 2019 at 8:33 AM John Roesler 
> wrote:
> > > >
> > > > > Thanks for volunteering, Vahid!
> > > > >
> > > > > I noticed that the "unresolved issues" filter on the plan page was
> > > still
> > > > > set to 2.1.1 (I fixed it).
> > > > >
> > > > > There's one blocker left:
> > > > https://issues.apache.org/jira/browse/KAFKA-8289
> > > > > ,
> > > > > but it's merged to trunk and we're cherry-picking to 2.2 today.
> > > > >
> > > > > Thanks again!
> > > > > -John
> > > > >
> > > > > On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian <
> > > > vahid.hashem...@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > If there are no objections on the proposed plan, I'll start
> > preparing
> > > > the
> > > > > > first release candidate.
> > > > > >
> > > > > > Thanks,
> > > > > > --Vahid
> > > > > >
> > > > > > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma 
> > > wrote:
> > > > > >
> > > > > > > Thanks Vahid!
> > > > > > >
> > > > > > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > > > > > vahid.hashem...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I'd like to volunteer for the release manager of the 2.2.1
> bug
> > > fix
> > > > > > > release.
> > > > > > > > Kafka 2.2.0 was released on March 22, 2019.
> > > > > > > >
> > > > > > > > At this point, there are 29 resolved JIRA issues scheduled
> for
> > > > > > inclusion
> > > > > > > in
> > > > > > > > 2.2.1:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > > > > > >
> > > > > > > > The release plan is documented here:
> > > > > > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > > --Vahid
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Thanks!
> > > > > > --Vahid
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Thanks!
> > > > --Vahid
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-07 Thread Vahid Hashemian
Hi John,

Thanks for checking, and sorry for the delay. I'm working on it, just
working through some issues with the release script / my environment.
Hopefully not much longer :)

Thanks,
--Vahid

On Tue, May 7, 2019 at 10:53 AM John Roesler  wrote:

> Hi Vahid,
>
> Can you let us know the status of the release? I don't mean to
> pressure you, but I actually just had someone ask me for a status
> update on some of my bugfixes.
>
> Thanks again for driving this!
> -John
>
> On Fri, May 3, 2019 at 5:51 PM Vahid Hashemian
>  wrote:
> >
> > Hi Sophie,
> >
> > Thanks for the heads-up. Once the fix is confirmed, could you please
> create
> > a ticket for it and assign it to 2.2.1 release?
> >
> > Thanks,
> > --Vahid
> >
> > On Fri, May 3, 2019 at 3:24 PM Sophie Blee-Goldman 
> > wrote:
> >
> > > Hey Vahid,
> > >
> > > We also have another minor bug fix we just uncovered and are hoping to
> get
> > > in today although I don't think there's a ticket for it atm...just
> waiting
> > > for the build to pass.
> > >
> > > Thanks for volunteering!
> > >
> > > Cheers,
> > > Sophie
> > >
> > > On Fri, May 3, 2019 at 3:16 PM Vahid Hashemian <
> vahid.hashem...@gmail.com>
> > > wrote:
> > >
> > > > Hi John,
> > > >
> > > > Thanks for confirming.
> > > > I'll wait for final bug fix PR for this issue to get merged so we can
> > > > safely resolve the ticket. That makes it easier with the release
> script.
> > > > Hopefully, the current build passes.
> > > >
> > > > --Vahid
> > > >
> > > > On Fri, May 3, 2019 at 3:07 PM John Roesler 
> wrote:
> > > >
> > > > > Hi Vahid,
> > > > >
> > > > > The fix is merged to 2.2. The ticket isn't resolved yet, because
> the
> > > > tests
> > > > > failed on the 2.1 merge, but I think the 2.2.1 release is unblocked
> > > now.
> > > > >
> > > > > Thanks,
> > > > > -John
> > > > >
> > > > > On Fri, May 3, 2019 at 10:41 AM Vahid Hashemian <
> > > > vahid.hashem...@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Thanks for the filter fix and the heads up John.
> > > > > > I'll wait for that to go through then.
> > > > > >
> > > > > > --Vahid
> > > > > >
> > > > > > On Fri, May 3, 2019 at 8:33 AM John Roesler 
> > > wrote:
> > > > > >
> > > > > > > Thanks for volunteering, Vahid!
> > > > > > >
> > > > > > > I noticed that the "unresolved issues" filter on the plan page
> was
> > > > > still
> > > > > > > set to 2.1.1 (I fixed it).
> > > > > > >
> > > > > > > There's one blocker left:
> > > > > > https://issues.apache.org/jira/browse/KAFKA-8289
> > > > > > > ,
> > > > > > > but it's merged to trunk and we're cherry-picking to 2.2 today.
> > > > > > >
> > > > > > > Thanks again!
> > > > > > > -John
> > > > > > >
> > > > > > > On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian <
> > > > > > vahid.hashem...@gmail.com
> > > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > If there are no objections on the proposed plan, I'll start
> > > > preparing
> > > > > > the
> > > > > > > > first release candidate.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > --Vahid
> > > > > > > >
> > > > > > > > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma <
> ism...@juma.me.uk>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks Vahid!
> > > > > > > > >
> > > > > > > > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > > > > > > > vahid.hashem...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi all,
> > > > > > > > > >
> > > > > > > > > > I'd like to volunteer for the release manager of the
> 2.2.1
> > > bug
> > > > > fix
> > > > > > > > > release.
> > > > > > > > > > Kafka 2.2.0 was released on March 22, 2019.
> > > > > > > > > >
> > > > > > > > > > At this point, there are 29 resolved JIRA issues
> scheduled
> > > for
> > > > > > > > inclusion
> > > > > > > > > in
> > > > > > > > > > 2.2.1:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > > > > > > > >
> > > > > > > > > > The release plan is documented here:
> > > > > > > > > >
> > > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > > > > > > > >
> > > > > > > > > > Thanks!
> > > > > > > > > > --Vahid
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > > --Vahid
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Thanks!
> > > > > > --Vahid
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Thanks!
> > > > --Vahid
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
>


-- 

Thanks!
--Vahid


[VOTE] 2.2.1 RC0

2019-05-08 Thread Vahid Hashemian
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 2.2.1, which
includes many bug fixes for Apache Kafka 2.2.

Release notes for the 2.2.1 release:
https://home.apache.org/~vahid/kafka-2.2.1-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Monday, May 13, 6:00 pm PT.

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~vahid/kafka-2.2.1-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~vahid/kafka-2.2.1-rc0/javadoc/

* Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
https://github.com/apache/kafka/releases/tag/2.2.1-rc0

* Documentation:
https://kafka.apache.org/22/documentation.html

* Protocol:
https://kafka.apache.org/22/protocol.html

* Successful Jenkins builds for the 2.2 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/106/

Thanks,
--Vahid


Re: [VOTE] 2.2.1 RC0

2019-05-10 Thread Vahid Hashemian
Hi Jason,

Sure. I'll wait for your PR to merge before cutting another RC.

Thanks!
--Vahid

On Fri, May 10, 2019 at 9:34 AM Jason Gustafson  wrote:

> Hi Vahid,
>
> I'd like to make the case for
> https://issues.apache.org/jira/browse/KAFKA-8335. This issue can cause
> unbounded growth in the __consumer_offsets topic when using transactions. I
> will have a patch ready today. Can we do another RC?
>
> Thanks,
> Jason
>
> On Wed, May 8, 2019 at 1:26 PM Vahid Hashemian 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 2.2.1, which
> > includes many bug fixes for Apache Kafka 2.2.
> >
> > Release notes for the 2.2.1 release:
> > https://home.apache.org/~vahid/kafka-2.2.1-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Monday, May 13, 6:00 pm PT.
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~vahid/kafka-2.2.1-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~vahid/kafka-2.2.1-rc0/javadoc/
> >
> > * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.2.1-rc0
> >
> > * Documentation:
> > https://kafka.apache.org/22/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/22/protocol.html
> >
> > * Successful Jenkins builds for the 2.2 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.2-jdk8/106/
> >
> > Thanks,
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [VOTE] 2.2.1 RC0

2019-05-12 Thread Vahid Hashemian
Hi Jonathan,

Thanks for reporting the issue.
Do you know if it is something that's introduced since 2.2? Did you run a
similar test with 2.2?

Thanks,
--Vahid

On Sun, May 12, 2019 at 2:22 PM Jonathan Santilli <
jonathansanti...@gmail.com> wrote:

> Hello Vahid,
>
>
> am testing one of our Kafka Stream Apps with the 2.2.1-rc, after few
> minutes, I see this WARN:
>
>
> 2019-05-09 13:14:37,025 WARN  [test-app-id-dc27624a-8e02-
> 4031-963b-7596a8a77097-StreamThread-1] internals.ProcessorStateManager (
> ProcessorStateManager.java:349) - task [0_0] Failed to write offset
> checkpoint file to [/tmp/kafka-stream-app/test-app-id/0_0/.checkpoint]
>
> java.io.FileNotFoundException:
> /tmp/kafka-stream-app/test-app-id/0_0/.checkpoint.tmp (No such file or
> directory)
>
> at java.io.FileOutputStream.open0(Native Method) ~[?:1.8.0_191]
>
> at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[?:1.8.0_191]
>
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> ~[?:1.8.0_191]
>
> at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
> ~[?:1.8.0_191]
>
> at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(
> OffsetCheckpoint.java:79) ~[kafka-streams-2.2.1.jar:?]
>
> at
>
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(
> ProcessorStateManager.java:347) [kafka-streams-2.2.1.jar:?]
>
> at org.apache.kafka.streams.processor.internals.StreamTask.commit(
> StreamTask.java:476) [kafka-streams-2.2.1.jar:?]
>
> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(
> StreamTask.java:598) [kafka-streams-2.2.1.jar:?]
>
> at org.apache.kafka.streams.processor.internals.StreamTask.close(
> StreamTask.java:724) [kafka-streams-2.2.1.jar:?]
>
> at org.apache.kafka.streams.processor.internals.AssignedTasks.close(
> AssignedTasks.java:337) [kafka-streams-2.2.1.jar:?]
>
> at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(
> TaskManager.java:267) [kafka-streams-2.2.1.jar:?]
>
> at
> org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(
> StreamThread.java:1208) [kafka-streams-2.2.1.jar:?]
>
> at org.apache.kafka.streams.processor.internals.StreamThread.run(
> StreamThread.java:785) [kafka-streams-2.2.1.jar:?]
>
> Checking the system, in fact, the folder does not exist, but, others were
> created:
>
> # ls /tmp/kafka-stream-app/test-app-id/
> # 1_0 1_1 1_2
>
> After restarting the App, the same WARN shows-up but in this case, the
> folders were created but not the .checkpoint.tmp file:
>
> # ls /tmp/kafka-stream-app/test-app-id/
> # 0_0 0_1 0_2 1_0 1_1 1_2
>
> Am just reporting this because I found it strange/suspicious.
>
>
> Cheers!
> --
> Jonathan
>
>
>
> On Wed, May 8, 2019 at 9:26 PM Vahid Hashemian 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 2.2.1, which
> > includes many bug fixes for Apache Kafka 2.2.
> >
> > Release notes for the 2.2.1 release:
> > https://home.apache.org/~vahid/kafka-2.2.1-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Monday, May 13, 6:00 pm PT.
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~vahid/kafka-2.2.1-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~vahid/kafka-2.2.1-rc0/javadoc/
> >
> > * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.2.1-rc0
> >
> > * Documentation:
> > https://kafka.apache.org/22/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/22/protocol.html
> >
> > * Successful Jenkins builds for the 2.2 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.2-jdk8/106/
> >
> > Thanks,
> > --Vahid
> >
>
>
> --
> Santilli Jonathan
>


-- 

Thanks!
--Vahid
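
For context on the path in the warning above: Kafka Streams writes a
per-task ".checkpoint" file under the configured state directory
(state.dir) plus the application.id, which is where the
/tmp/kafka-stream-app/test-app-id/0_0/.checkpoint path in the report comes
from. Below is a minimal configuration sketch; the broker address and the
directory path are placeholder values, and keeping state.dir outside /tmp
is shown only as a general precaution against tmp cleanup, not as a
confirmed fix for the issue reported here.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StateDirExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        // application.id becomes the subdirectory under state.dir,
        // e.g. <state.dir>/test-app-id/0_0/.checkpoint for task 0_0.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-app-id");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Placeholder path: keeps state (and the per-task .checkpoint files)
        // outside /tmp so periodic tmp cleanup cannot remove it.
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Trivial pass-through topology, just to make the sketch runnable.
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}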


Re: [VOTE] 2.2.1 RC0

2019-05-13 Thread Vahid Hashemian
Thanks Patrik for the reference.

That JIRA seems to be covering the exact same issue with multiple releases,
and is not marked as a blocker at this point.

--Vahid

On Mon, May 13, 2019 at 1:22 AM Patrik Kleindl  wrote:

> Hi
> This might be related to https://issues.apache.org/jira/browse/KAFKA-5998
> at least the behaviour is described there.
> Regards
> Patrik
>
> > Am 13.05.2019 um 00:23 schrieb Vahid Hashemian <
> vahid.hashem...@gmail.com>:
> >
> > Hi Jonathan,
> >
> > Thanks for reporting the issue.
> > Do you know if it is something that's introduced since 2.2? Did you run a
> > similar test with 2.2?
> >
> > Thanks,
> > --Vahid
> >
> > On Sun, May 12, 2019 at 2:22 PM Jonathan Santilli <
> > jonathansanti...@gmail.com> wrote:
> >
> >> Hello Vahid,
> >>
> >>
> >> am testing one of our Kafka Stream Apps with the 2.2.1-rc, after few
> >> minutes, I see this WARN:
> >>
> >>
> >> 2019-05-09 13:14:37,025 WARN  [test-app-id-dc27624a-8e02-
> >> 4031-963b-7596a8a77097-StreamThread-1] internals.ProcessorStateManager (
> >> ProcessorStateManager.java:349) - task [0_0] Failed to write offset
> >> checkpoint file to [/tmp/kafka-stream-app/test-app-id/0_0/.checkpoint]
> >>
> >> java.io.FileNotFoundException:
> >> /tmp/kafka-stream-app/test-app-id/0_0/.checkpoint.tmp (No such file or
> >> directory)
> >>
> >> at java.io.FileOutputStream.open0(Native Method) ~[?:1.8.0_191]
> >>
> >> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> ~[?:1.8.0_191]
> >>
> >> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> >> ~[?:1.8.0_191]
> >>
> >> at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
> >> ~[?:1.8.0_191]
> >>
> >> at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(
> >> OffsetCheckpoint.java:79) ~[kafka-streams-2.2.1.jar:?]
> >>
> >> at
> >>
> >>
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(
> >> ProcessorStateManager.java:347) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.StreamTask.commit(
> >> StreamTask.java:476) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(
> >> StreamTask.java:598) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.StreamTask.close(
> >> StreamTask.java:724) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.AssignedTasks.close(
> >> AssignedTasks.java:337) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(
> >> TaskManager.java:267) [kafka-streams-2.2.1.jar:?]
> >>
> >> at
> >>
> org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(
> >> StreamThread.java:1208) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.StreamThread.run(
> >> StreamThread.java:785) [kafka-streams-2.2.1.jar:?]
> >>
> >> Checking the system, in fact, the folder does not exist, but, others
> were
> >> created:
> >>
> >> # ls /tmp/kafka-stream-app/test-app-id/
> >> # 1_0 1_1 1_2
> >>
> >> After restarting the App, the same WARN shows-up but in this case, the
> >> folders were created but not the .checkpoint.tmp file:
> >>
> >> # ls /tmp/kafka-stream-app/test-app-id/
> >> # 0_0 0_1 0_2 1_0 1_1 1_2
> >>
> >> Am just reporting this because I found it strange/suspicious.
> >>
> >>
> >> Cheers!
> >> --
> >> Jonathan
> >>
> >>
> >>
> >> On Wed, May 8, 2019 at 9:26 PM Vahid Hashemian <
> vahid.hashem...@gmail.com>
> >> wrote:
> >>
> >>> Hello Kafka users, developers and client-developers,
> >>>
> >>> This is the first candidate for release of Apache Kafka 2.2.1, which
> >>> includes many bug fixes for Apache Kafka 2.2.
> >>>
> >>> Release notes for the 2.2.1 release:
> >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/RELEASE_NOTES.html
> >>>
> >>> *** Please download, test and vote by Monday, May 13, 6:00 pm PT.
> >>>
> >>> Kafka's KEYS file containing PGP keys we use to sign the release:
> >>> https://kafka.apache.org/KEYS
> >>>
> >>> * Release artifacts to be voted upon (source and binary):
> >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/
> >>>
> >>> * Maven artifacts to be voted upon:
> >>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>>
> >>> * Javadoc:
> >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/javadoc/
> >>>
> >>> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> >>> https://github.com/apache/kafka/releases/tag/2.2.1-rc0
> >>>
> >>> * Documentation:
> >>> https://kafka.apache.org/22/documentation.html
> >>>
> >>> * Protocol:
> >>> https://kafka.apache.org/22/protocol.html
> >>>
> >>> * Successful Jenkins builds for the 2.2 branch:
> >>> Unit/integration tests:
> >> https://builds.apache.org/job/kafka-2.2-jdk8/106/
> >>>
> >>> Thanks,
> >>> --Vahid
> >>>
> >>
> >>
> >> --
> >> Santilli Jonathan
> >>
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
>


-- 

Thanks!
--Vahid


Re: [VOTE] 2.2.1 RC0

2019-05-13 Thread Vahid Hashemian
Hi Jason,

Thanks for quick resolution on this ticket.
I'll work on generating RC1.

Thanks!
--Vahid

On Mon, May 13, 2019 at 9:02 AM Jason Gustafson  wrote:

> Hi Vahid,
>
> I merged the patch for KAFKA-8335 into 2.2.
>
> Thanks,
> Jason
>
> On Mon, May 13, 2019 at 7:17 AM Vahid Hashemian  >
> wrote:
>
> > Thanks Patrik for the reference.
> >
> > That JIRA seems to be covering the exact same issue with multiple
> releases,
> > and is not marked as a blocker at this point.
> >
> > --Vahid
> >
> > On Mon, May 13, 2019 at 1:22 AM Patrik Kleindl 
> wrote:
> >
> > > Hi
> > > This might be related to
> > https://issues.apache.org/jira/browse/KAFKA-5998
> > > at least the behaviour is described there.
> > > Regards
> > > Patrik
> > >
> > > > Am 13.05.2019 um 00:23 schrieb Vahid Hashemian <
> > > vahid.hashem...@gmail.com>:
> > > >
> > > > Hi Jonathan,
> > > >
> > > > Thanks for reporting the issue.
> > > > Do you know if it is something that's introduced since 2.2? Did you
> > run a
> > > > similar test with 2.2?
> > > >
> > > > Thanks,
> > > > --Vahid
> > > >
> > > > On Sun, May 12, 2019 at 2:22 PM Jonathan Santilli <
> > > > jonathansanti...@gmail.com> wrote:
> > > >
> > > >> Hello Vahid,
> > > >>
> > > >>
> > > >> am testing one of our Kafka Stream Apps with the 2.2.1-rc, after few
> > > >> minutes, I see this WARN:
> > > >>
> > > >>
> > > >> 2019-05-09 13:14:37,025 WARN  [test-app-id-dc27624a-8e02-
> > > >> 4031-963b-7596a8a77097-StreamThread-1]
> > internals.ProcessorStateManager (
> > > >> ProcessorStateManager.java:349) - task [0_0] Failed to write offset
> > > >> checkpoint file to
> [/tmp/kafka-stream-app/test-app-id/0_0/.checkpoint]
> > > >>
> > > >> java.io.FileNotFoundException:
> > > >> /tmp/kafka-stream-app/test-app-id/0_0/.checkpoint.tmp (No such file
> or
> > > >> directory)
> > > >>
> > > >> at java.io.FileOutputStream.open0(Native Method) ~[?:1.8.0_191]
> > > >>
> > > >> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> > > ~[?:1.8.0_191]
> > > >>
> > > >> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> > > >> ~[?:1.8.0_191]
> > > >>
> > > >> at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
> > > >> ~[?:1.8.0_191]
> > > >>
> > > >> at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(
> > > >> OffsetCheckpoint.java:79) ~[kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at
> > > >>
> > > >>
> > >
> >
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(
> > > >> ProcessorStateManager.java:347) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.StreamTask.commit(
> > > >> StreamTask.java:476) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(
> > > >> StreamTask.java:598) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.StreamTask.close(
> > > >> StreamTask.java:724) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.AssignedTasks.close(
> > > >> AssignedTasks.java:337) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at
> org.apache.kafka.streams.processor.internals.TaskManager.shutdown(
> > > >> TaskManager.java:267) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at
> > > >>
> > >
> >
> org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(
> > > >> StreamThread.java:1208) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.StreamThread.run(
> > > >> StreamThread.java:785) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> Checking the system, in fact, the folder does not exist, but, others
> > > were
> > > >> cr

[VOTE] 2.2.1 RC1

2019-05-13 Thread Vahid Hashemian
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 2.2.1.

Compared to RC0, this release candidate also fixes the following issues:

   - [KAFKA-6789] - Add retry logic in AdminClient requests
   - [KAFKA-8348] - Document of kafkaStreams improvement
   - [KAFKA-7633] - Kafka Connect requires permission to create internal
   topics even if they exist
   - [KAFKA-8240] - Source.equals() can fail with NPE
   - [KAFKA-8335] - Log cleaner skips Transactional mark and batch record,
   causing unlimited growth of __consumer_offsets
   - [KAFKA-8352] - Connect System Tests are failing with 404

Release notes for the 2.2.1 release:
https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Thursday, May 16, 9:00 pm PT.

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~vahid/kafka-2.2.1-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/

* Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
https://github.com/apache/kafka/releases/tag/2.2.1-rc1

* Documentation:
https://kafka.apache.org/22/documentation.html

* Protocol:
https://kafka.apache.org/22/protocol.html

* Successful Jenkins builds for the 2.2 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/

Thanks!
--Vahid


Re: [VOTE] 2.2.1 RC1

2019-05-16 Thread Vahid Hashemian
Since there is no vote on this RC yet, I'll extend the deadline to Monday,
May 20, at 9:00 am.

Thanks in advance for checking / testing / voting.

--Vahid


On Mon, May 13, 2019, 20:15 Vahid Hashemian 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 2.2.1.
>
> Compared to RC0, this release candidate also fixes the following issues:
>
>- [KAFKA-6789] - Add retry logic in AdminClient requests
>- [KAFKA-8348] - Document of kafkaStreams improvement
>- [KAFKA-7633] - Kafka Connect requires permission to create internal
>topics even if they exist
>- [KAFKA-8240] - Source.equals() can fail with NPE
>- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
>record, causing unlimited growth of __consumer_offsets
>- [KAFKA-8352] - Connect System Tests are failing with 404
>
> Release notes for the 2.2.1 release:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
>
> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> https://github.com/apache/kafka/releases/tag/2.2.1-rc1
>
> * Documentation:
> https://kafka.apache.org/22/documentation.html
>
> * Protocol:
> https://kafka.apache.org/22/protocol.html
>
> * Successful Jenkins builds for the 2.2 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
>
> Thanks!
> --Vahid
>


Re: [kafka-clients] Re: [VOTE] 2.2.1 RC1

2019-05-17 Thread Vahid Hashemian
Thanks Ismael, and no worries.
I'll look forward to hearing some feedback next week.

--Vahid

On Fri, May 17, 2019 at 12:41 AM Ismael Juma  wrote:

> Sorry for the delay Vahid. I suspect votes are more likely next week,
> after the feature freeze.
>
> Ismael
>
> On Thu, May 16, 2019 at 9:45 PM Vahid Hashemian 
> wrote:
>
>> Since there is no vote on this RC yet, I'll extend the deadline to Monday,
>> May 20, at 9:00 am.
>>
>> Thanks in advance for checking / testing / voting.
>>
>> --Vahid
>>
>>
>> On Mon, May 13, 2019, 20:15 Vahid Hashemian 
>> wrote:
>>
>> > Hello Kafka users, developers and client-developers,
>> >
>> > This is the second candidate for release of Apache Kafka 2.2.1.
>> >
>> > Compared to RC0, this release candidate also fixes the following issues:
>> >
>> >- [KAFKA-6789] - Add retry logic in AdminClient requests
>> >- [KAFKA-8348] - Document of kafkaStreams improvement
>> >- [KAFKA-7633] - Kafka Connect requires permission to create internal
>> >topics even if they exist
>> >- [KAFKA-8240] - Source.equals() can fail with NPE
>> >- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
>> >record, causing unlimited growth of __consumer_offsets
>> >- [KAFKA-8352] - Connect System Tests are failing with 404
>> >
>> > Release notes for the 2.2.1 release:
>> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
>> >
>> > *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
>> >
>> > Kafka's KEYS file containing PGP keys we use to sign the release:
>> > https://kafka.apache.org/KEYS
>> >
>> > * Release artifacts to be voted upon (source and binary):
>> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/
>> >
>> > * Maven artifacts to be voted upon:
>> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> >
>> > * Javadoc:
>> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
>> >
>> > * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
>> > https://github.com/apache/kafka/releases/tag/2.2.1-rc1
>> >
>> > * Documentation:
>> > https://kafka.apache.org/22/documentation.html
>> >
>> > * Protocol:
>> > https://kafka.apache.org/22/protocol.html
>> >
>> > * Successful Jenkins builds for the 2.2 branch:
>> > Unit/integration tests:
>> https://builds.apache.org/job/kafka-2.2-jdk8/115/
>> >
>> > Thanks!
>> > --Vahid
>> >
>>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAD5tkZbE-c6X2f4%2BKf%3DeX3WxVBJrXE%2BCqw9Z0%2BvufSZsOW1E%3Dw%40mail.gmail.com
> <https://groups.google.com/d/msgid/kafka-clients/CAD5tkZbE-c6X2f4%2BKf%3DeX3WxVBJrXE%2BCqw9Z0%2BvufSZsOW1E%3Dw%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
> For more options, visit https://groups.google.com/d/optout.
>


-- 

Thanks!
--Vahid


Re: [VOTE] 2.2.1 RC1

2019-05-22 Thread Vahid Hashemian
Bumping this thread to get some more votes, especially from committers, so
we can hopefully make a decision on this RC by the end of the week.

Thanks,
--Vahid

On Mon, May 13, 2019 at 8:15 PM Vahid Hashemian 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 2.2.1.
>
> Compared to RC0, this release candidate also fixes the following issues:
>
>- [KAFKA-6789] - Add retry logic in AdminClient requests
>- [KAFKA-8348] - Document of kafkaStreams improvement
>- [KAFKA-7633] - Kafka Connect requires permission to create internal
>topics even if they exist
>- [KAFKA-8240] - Source.equals() can fail with NPE
>- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
>record, causing unlimited growth of __consumer_offsets
>- [KAFKA-8352] - Connect System Tests are failing with 404
>
> Release notes for the 2.2.1 release:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
>
> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> https://github.com/apache/kafka/releases/tag/2.2.1-rc1
>
> * Documentation:
> https://kafka.apache.org/22/documentation.html
>
> * Protocol:
> https://kafka.apache.org/22/protocol.html
>
> * Successful Jenkins builds for the 2.2 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
>
> Thanks!
> --Vahid
>


-- 

Thanks!
--Vahid


Re: [VOTE] 2.2.1 RC1

2019-06-01 Thread Vahid Hashemian
I'm a +1 on this RC too. I compiled the source, ran quickstart and tests
successfully.

Therefore, 2.2.1 RC1 passes with the following +1 votes and no -1 or 0
votes:

Binding +1s: Harsha, Matthias, Vahid
Non-binding +1s: Jonathan, Jakub, Victor, Andrew, Mickael, Satish

Here are the vote threads:
- https://www.mail-archive.com/dev@kafka.apache.org/msg97862.html
- https://www.mail-archive.com/users@kafka.apache.org/msg34256.html

Thanks to everyone who spent time verifying this release candidate.

I'll proceed with the release process.

--Vahid


On Mon, May 13, 2019 at 8:15 PM Vahid Hashemian 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 2.2.1.
>
> Compared to RC0, this release candidate also fixes the following issues:
>
>- [KAFKA-6789] - Add retry logic in AdminClient requests
>- [KAFKA-8348] - Document of kafkaStreams improvement
>- [KAFKA-7633] - Kafka Connect requires permission to create internal
>topics even if they exist
>- [KAFKA-8240] - Source.equals() can fail with NPE
>- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
>record, causing unlimited growth of __consumer_offsets
>- [KAFKA-8352] - Connect System Tests are failing with 404
>
> Release notes for the 2.2.1 release:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
>
> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> https://github.com/apache/kafka/releases/tag/2.2.1-rc1
>
> * Documentation:
> https://kafka.apache.org/22/documentation.html
>
> * Protocol:
> https://kafka.apache.org/22/protocol.html
>
> * Successful Jenkins builds for the 2.2 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
>
> Thanks!
> --Vahid
>


-- 

Thanks!
--Vahid


Re: [VOTE] 2.2.1 RC1

2019-06-01 Thread Vahid Hashemian
Sorry for the confusion.

We are still one binding +1 (by PMC) short (current binding votes are by
Harsha and Matthias).
I'll keep the vote thread open and send the result once we have that vote.

Thanks!
--Vahid

On Sat, Jun 1, 2019 at 2:33 PM Vahid Hashemian 
wrote:

> I'm a +1 on this RC too. I compiled the source, ran quickstart and tests
> successfully.
>
> Therefore, 2.2.1 RC1 passes with the following +1 votes and no -1 or 0
> votes:
>
> Binding +1s: Harsha, Matthias, Vahid
> Non-binding +1s: Jonathan, Jakub, Victor, Andrew, Mickael, Satish
>
> Here are the vote threads:
> - https://www.mail-archive.com/dev@kafka.apache.org/msg97862.html
> - https://www.mail-archive.com/users@kafka.apache.org/msg34256.html
>
> Thanks to everyone who spent time verifying this release candidate.
>
> I'll proceed with the release process.
>
> --Vahid
>
>
> On Mon, May 13, 2019 at 8:15 PM Vahid Hashemian 
> wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the second candidate for release of Apache Kafka 2.2.1.
>>
>> Compared to RC0, this release candidate also fixes the following issues:
>>
>>- [KAFKA-6789] - Add retry logic in AdminClient requests
>>- [KAFKA-8348] - Document of kafkaStreams improvement
>>- [KAFKA-7633] - Kafka Connect requires permission to create internal
>>topics even if they exist
>>- [KAFKA-8240] - Source.equals() can fail with NPE
>>- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
>>record, causing unlimited growth of __consumer_offsets
>>- [KAFKA-8352] - Connect System Tests are failing with 404
>>
>> Release notes for the 2.2.1 release:
>> https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> https://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> https://home.apache.org/~vahid/kafka-2.2.1-rc1/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>>
>> * Javadoc:
>> https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
>>
>> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
>> https://github.com/apache/kafka/releases/tag/2.2.1-rc1
>>
>> * Documentation:
>> https://kafka.apache.org/22/documentation.html
>>
>> * Protocol:
>> https://kafka.apache.org/22/protocol.html
>>
>> * Successful Jenkins builds for the 2.2 branch:
>> Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
>>
>> Thanks!
>> --Vahid
>>
>
>
> --
>
> Thanks!
> --Vahid
>


-- 

Thanks!
--Vahid


Re: [VOTE] 2.2.1 RC1

2019-06-01 Thread Vahid Hashemian
Hi Gwen,

Thanks for reviewing the RC and also updating the KEYS file. I think that
step in the release process document is one of the post-approval to-dos; so
one fewer thing for me to do :)

I'll send the vote results shortly.

--Vahid

On Sat, Jun 1, 2019 at 7:38 PM Gwen Shapira  wrote:

> +1 (binding)
>
> Validated signatures and last good test.
>
> I also took the liberty of adding Vahid's keys to
> http://www.apache.org/dist/kafka/KEYS.
> The signature process
> (https://www.apache.org/dev/release-signing.html#keys-policy) requires
> that the keys used to sign the release will be added and in the past
> the PMC received emails expressing concern about the validity of our
> releases.
>
> Gwen
>
> On Sat, Jun 1, 2019 at 2:33 PM Vahid Hashemian
>  wrote:
> >
> > I'm a +1 on this RC too. I compiled the source, ran quickstart and tests
> > successfully.
> >
> > Therefore, 2.2.1 RC1 passes with the following +1 votes and no -1 or 0
> > votes:
> >
> > Binding +1s: Harsha, Matthias, Vahid
> > Non-binding +1s: Jonathan, Jakub, Victor, Andrew, Mickael, Satish
> >
> > Here are the vote threads:
> > - https://www.mail-archive.com/dev@kafka.apache.org/msg97862.html
> > - https://www.mail-archive.com/users@kafka.apache.org/msg34256.html
> >
> > Thanks to everyone who spent time verifying this release candidate.
> >
> > I'll proceed with the release process.
> >
> > --Vahid
> >
> >
> > On Mon, May 13, 2019 at 8:15 PM Vahid Hashemian <
> vahid.hashem...@gmail.com>
> > wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the second candidate for release of Apache Kafka 2.2.1.
> > >
> > > Compared to RC0, this release candidate also fixes the following
> issues:
> > >
> > >- [KAFKA-6789] - Add retry logic in AdminClient requests
> > >- [KAFKA-8348] - Document of kafkaStreams improvement
> > >- [KAFKA-7633] - Kafka Connect requires permission to create
> internal
> > >topics even if they exist
> > >- [KAFKA-8240] - Source.equals() can fail with NPE
> > >- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
> > >record, causing unlimited growth of __consumer_offsets
> > >- [KAFKA-8352] - Connect System Tests are failing with 404
> > >
> > > Release notes for the 2.2.1 release:
> > > https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~vahid/kafka-2.2.1-rc1/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
> > >
> > > * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> > > https://github.com/apache/kafka/releases/tag/2.2.1-rc1
> > >
> > > * Documentation:
> > > https://kafka.apache.org/22/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/22/protocol.html
> > >
> > > * Successful Jenkins builds for the 2.2 branch:
> > > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.2-jdk8/115/
> > >
> > > Thanks!
> > > --Vahid
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


-- 

Thanks!
--Vahid


[RESULTS] [VOTE] 2.2.1 RC1

2019-06-01 Thread Vahid Hashemian
This vote passes with 10 +1 votes (3 bindings) and no 0 or -1 votes.

+1 votes
PMC Members:
* Harsha
* Matthias
* Gwen

Committers:
* Vahid

Community:
* Jonathan
* Jakub
* Victor
* Andrew
* Mickael
* Satish

0 votes
* No votes

-1 votes
* No votes

Vote threads:
https://www.mail-archive.com/dev@kafka.apache.org/msg97862.html
https://www.mail-archive.com/users@kafka.apache.org/msg34256.html

I'll continue with the release process and the release announcement will
follow in the next few days.

Thanks!
--Vahid


[ANNOUNCE] Apache Kafka 2.2.1

2019-06-03 Thread Vahid Hashemian
The Apache Kafka community is pleased to announce the release for Apache
Kafka 2.2.1

This is a bugfix release for Kafka 2.2.0. All of the changes in this
release can be found in the release notes:
https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html

You can download the source and binary release from:
https://kafka.apache.org/downloads#2.2.1

---

Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an output
stream to one or more output topics, effectively transforming the input
streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might capture
every change to a table.

With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react to the
streams of data.
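
As a minimal, self-contained sketch of the Producer and Consumer APIs
described above (the broker address localhost:9092, the topic demo-topic,
and the group id demo-group are placeholder values):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerConsumerSketch {

    public static void main(String[] args) {
        // Producer API: publish a stream of records to a topic.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        producerProps.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }

        // Consumer API: subscribe to the topic and process the records
        // produced to it. A single poll is enough for this sketch; a real
        // consumer would poll in a loop.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                    record.offset(), record.key(), record.value());
            }
        }
    }
}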

Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you for the following 30 contributors to this release!

Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil Shah,
Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John Roesler,
Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh Nandakumar,
Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker, pkleindl,
Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian, Victoria
Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang

We welcome your help and feedback. For more information on how to report
problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!

Regards,
--Vahid Hashemian


Re: [VOTE] 2.3.0 RC2

2019-06-16 Thread Vahid Hashemian
+1 (non-binding)

I also verified signatures, built from source, and tested the quickstart
successfully on the built binary.

BTW, I don't see a link to documentation for 2.3. Is there a reason?

Thanks,
--Vahid

On Sat, Jun 15, 2019 at 6:38 PM Gwen Shapira  wrote:

> +1 (binding)
>
> Verified signatures, built from sources, ran quickstart on binary and
> checked out the passing jenkins build on the branch.
>
> Gwen
>
>
> On Thu, Jun 13, 2019 at 11:58 AM Colin McCabe  wrote:
> >
> > Hi all,
> >
> > Good news: I have run a junit test build for RC2, and it passed.  Check
> out https://builds.apache.org/job/kafka-2.3-jdk8/51/
> >
> > Also, the vote will go until Saturday, June 15th (sorry for the typo
> earlier in the vote end time).
> >
> > best,
> > Colin
> >
> >
> > On Wed, Jun 12, 2019, at 15:55, Colin McCabe wrote:
> > > Hi all,
> > >
> > > We discovered some problems with the first release candidate (RC1) of
> > > 2.3.0.  Specifically, KAFKA-8484 and KAFKA-8500.  I have created a new
> > > release candidate that includes fixes for these issues.
> > >
> > > Check out the release notes for the 2.3.0 release here:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc2/RELEASE_NOTES.html
> > >
> > > The vote will go until Friday, June 7th, or until we create another R
> > >
> > > * Kafka's KEYS file containing PGP keys we use to sign the release can
> > > be found here:
> > > https://kafka.apache.org/KEYS
> > >
> > > * The release artifacts to be voted upon (source and binary) are here:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc2/javadoc/
> > >
> > > * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> > > https://github.com/apache/kafka/releases/tag/2.3.0-rc2
> > >
> > > best,
> > > Colin
> > >
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


-- 

Thanks!
--Vahid


Re: [VOTE] KIP-396: Add Commit/List Offsets Operations to AdminClient

2019-08-14 Thread Vahid Hashemian
+1 (binding)

Thanks Mickael for the suggestion of simplifying offset
retrieval/alteration operations.

--Vahid

On Wed, Aug 14, 2019 at 4:42 PM Bill Bejeck  wrote:

> Thanks for the KIP Mickael, looks very useful.
> +1 (binding)
>
> -Bill
>
> On Wed, Aug 14, 2019 at 6:14 PM Harsha Chintalapani 
> wrote:
>
> > Thanks for the KIP Mickael. LGTM +1 (binding).
> > -Harsha
> >
> >
> > On Wed, Aug 14, 2019 at 1:10 PM, Colin McCabe 
> wrote:
> >
> > > Thanks, Mickael. +1 (binding)
> > >
> > > best,
> > > Colin
> > >
> > > On Wed, Aug 14, 2019, at 12:07, Gabor Somogyi wrote:
> > >
> > > +1 (non-binding)
> > > I've read it through in depth and as Jungtaek said Spark can make good
> > use
> > > of it.
> > >
> > > On Wed, 14 Aug 2019, 17:06 Jungtaek Lim,  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > I found it very useful for Spark's case. (Discussion on KIP-505
> described
> > > it.)
> > >
> > > Thanks for driving the effort!
> > >
> > > On Wed, Aug 14, 2019 at 8:49 PM Mickael Maison wrote:
> > >
> > > Hi Guozhang,
> > >
> > > Thanks for taking a look.
> > >
> > > 1. Right, I updated the titles of the code blocks
> > >
> > > 2. Yes that's a good idea. I've updated the KIP
> > >
> > > Thank you
> > >
> > > On Wed, Aug 14, 2019 at 11:05 AM Mickael Maison
> > >  wrote:
> > >
> > > Hi Colin,
> > >
> > > Thanks for raising these 2 valid points. I've updated the KIP
> > >
> > > accordingly.
> > >
> > > On Tue, Aug 13, 2019 at 9:50 PM Guozhang Wang  wrote:
> > >
> > > Hi Mickael,
> > >
> > > Thanks for the KIP!
> > >
> > > Just some minor comments.
> > >
> > > 1. Java class names are stale, e.g. "CommitOffsetsOptions.java" should
> > > be "AlterOffsetsOptions".
> > >
> > > 2. I'd suggest we change the future structure of "AlterOffsetsResult"
> > > to *KafkaFuture<Map<TopicPartition, KafkaFuture<Void>>>*
> > >
> > > This is because we will have a hierarchy of two layers of errors, since
> > > we need to find out the group coordinator first and then issue the
> > > commit offset request (see e.g. the ListConsumerGroupOffsetsResult
> > > which excludes partitions that have errors, or the DeleteMembersResult
> > > as part of KIP-345).
> > >
> > > If the discover-coordinator step returns a non-retriable error, we
> > > would set it on the first layer of the KafkaFuture, and the
> > > per-partition error would be set on the second layer of the
> > > KafkaFuture.
> > >
> > > Guozhang
> > >
> > > On Tue, Aug 13, 2019 at 9:36 AM Colin McCabe  wrote:
> > >
> > > Hi Mickael,
> > >
> > > Considering that KIP-496, which adds a way of deleting consumer offsets
> > > from AdminClient, looks like it is going to get in, this seems like
> > > functionality we should definitely have.
> > >
> > > For alterConsumerGroupOffsets, is the intention to ignore partitions
> > > that are not specified in the map? If so, we should specify that in the
> > > JavaDoc.
> > >
> > > isolationLevel seems like it should be an enum rather than a string.
> > > The existing enum is in org.apache.kafka.common.requests, so we should
> > > probably create a new one which is public in
> > > org.apache.kafka.clients.admin.
> > >
> > > best,
> > > Colin
> > >
> > > On Mon, Mar 25, 2019, at 06:10, Mickael Maison wrote:
> > >
> > > Bumping this thread once again
> > >
> > > Ismael, have I answered your questions?
> > > While this has received a few non-binding +1s, no committers have voted
> > > yet. If you have concerns or questions, please let me know.
> > >
> > > Thanks
> > >
> > > On Mon, Feb 11, 2019 at 11:51 AM Mickael Maison
> > >  wrote:
> > >
> > > Bumping this thread as it's been a couple of weeks.
> > >
> > > On Tue, Jan 22, 2019 at 2:26 PM Mickael Maison <
> > > mickael.mai...@gmail.com> wrote:
> > >
> > > Thanks Ismael for the feedback. I think your point has 2 parts:
> > > - Having the reset functionality in the AdminClient: The fact we have
> > > a command line tool illustrates that this operation is relatively
> > > common. It seems valuable to be able to perform this operation
> > > directly via a proper API in addition to the CLI tool.
> > > - Sending an OffsetCommit directly instead of relying on
> > > KafkaConsumer: The KafkaConsumer requires a lot of stuff to commit
> > > offsets. Its group cannot change so you need to start a new Consumer
> > > every time, that creates new connections and overall sends more
> > > requests. Also there are already a bunch of AdminClient APIs that have
> > > logic very close to what needs to be done to send a commit request,
> > > keeping the code
> >
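
To make the API shape under vote a bit more concrete, here is a rough
usage sketch. It assumes the alterConsumerGroupOffsets /
listConsumerGroupOffsets method names discussed in this thread and the
two-layer result structure Guozhang describes (group-level failures such
as coordinator lookup surface on the top-level future, per-partition
failures on the per-partition futures). The group, topic, and offset
values are placeholders, and the exact classes and signatures are
illustrative rather than the final released API.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GroupOffsetsSketch {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // List the committed offsets of a consumer group (placeholder name).
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("demo-group")
                     .partitionsToOffsetAndMetadata()
                     .get();
            committed.forEach((tp, om) ->
                System.out.println(tp + " -> " + (om == null ? "none" : om.offset())));

            // Alter (commit) the offset of one partition on behalf of the group.
            TopicPartition partition = new TopicPartition("demo-topic", 0); // placeholder
            Map<TopicPartition, OffsetAndMetadata> newOffsets =
                Collections.singletonMap(partition, new OffsetAndMetadata(42L));
            // The top-level future fails on group-level errors (e.g. coordinator
            // lookup); per-partition errors would surface on per-partition futures.
            admin.alterConsumerGroupOffsets("demo-group", newOffsets).all().get();
        }
    }
}
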

Re: [VOTE] KIP-352: Distinguish URPs caused by reassignment

2019-08-22 Thread Vahid Hashemian
+1 (binding)

Thanks Jason. This is super useful.

--Vahid

On Tue, Aug 20, 2019 at 10:55 AM Jason Gustafson  wrote:

> Hi All,
>
> I'd like to start a vote on KIP-352, which is a follow-up to KIP-455 to fix
> a long-known shortcoming of URP reporting and to improve reassignment
> monitoring:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-352%3A+Distinguish+URPs+caused+by+reassignment
> .
>
> Note that I have added one new metric following the discussion. It seemed
> useful to have a lag metric for reassigning partitions.
>
> Thanks,
> Jason
>


-- 

Thanks!
--Vahid


Re: Request for contributor permissions

2019-09-02 Thread Vahid Hashemian
Hi Wladimir,

I added your user.
Thanks for your interest in contributing to Kafka.

Regards,
--Vahid

On Sun, Sep 1, 2019 at 10:35 AM Wladimir Gaus  wrote:

> Hello everyone,
>
> I'm interested in supporting the development of the Kafka project.
> Please add me to the contributors list. My Jira ID is: wgaus
> Thank you in advance.
>
> Best,
> Wladimir Gaus
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New committer: Mickael Maison

2019-11-07 Thread Vahid Hashemian
Congrats Mickael,

Well deserved!

--Vahid

On Thu, Nov 7, 2019 at 9:10 PM Maulin Vasavada 
wrote:

> Congratulations Mickael!
>
> On Thu, Nov 7, 2019 at 8:27 PM Manikumar 
> wrote:
>
> > Congrats Mickeal!
> >
> > On Fri, Nov 8, 2019 at 9:05 AM Dong Lin  wrote:
> >
> > > Congratulations Mickael!
> > >
> > > On Thu, Nov 7, 2019 at 1:38 PM Jun Rao  wrote:
> > >
> > > > Hi, Everyone,
> > > >
> > > > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> > > > Mickael
> > > > Maison.
> > > >
> > > > Mickael has been contributing to Kafka since 2016. He proposed and
> > > > implemented multiple KIPs. He has also been promoting Kafka through
> > > blogs
> > > > and public talks.
> > > >
> > > > Congratulations, Mickael!
> > > >
> > > > Thanks,
> > > >
> > > > Jun (on behalf of the Apache Kafka PMC)
> > > >
> > >
> >
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New committer: John Roesler

2019-11-12 Thread Vahid Hashemian
Congratulations John!

--Vahid

On Tue, Nov 12, 2019 at 4:38 PM Adam Bellemare 
wrote:

> Congratulations John, and thanks for all your help on KIP-213!
>
> > On Nov 12, 2019, at 6:24 PM, Bill Bejeck  wrote:
> >
> > Congratulations John!
> >
> > On Tue, Nov 12, 2019 at 6:20 PM Matthias J. Sax 
> > wrote:
> >
> >> Congrats John!
> >>
> >>
> >>> On 11/12/19 2:52 PM, Boyang Chen wrote:
> >>> Great work John! Well deserved
> >>>
> >>> On Tue, Nov 12, 2019 at 1:56 PM Guozhang Wang 
> >> wrote:
> >>>
>  Hi Everyone,
> 
>  The PMC of Apache Kafka is pleased to announce a new Kafka committer,
> >> John
>  Roesler.
> 
>  John has been contributing to Apache Kafka since early 2018. His main
>  contributions are primarily around Kafka Streams, but have also
> included
>  improving our test coverage beyond Streams as well. Besides his own
> code
>  contributions, John has also actively participated on community
> >> discussions
>  and reviews including several other contributors' big proposals like
>  foreign-key join in Streams (KIP-213). He has also been writing,
> >> presenting
>  and evangelizing Apache Kafka in many venues.
> 
>  Congratulations, John! And look forward to more collaborations with
> you
> >> on
>  Apache Kafka.
> 
> 
>  Guozhang, on behalf of the Apache Kafka PMC
> 
> >>>
> >>
> >>
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Vahid Hashemian
Congrats Boyang!

--Vahid

On Tue, Jun 23, 2020 at 6:41 AM Wang (Leonard) Ge  wrote:

> Congrats Boyang! This is a great achievement.
>
> On Tue, Jun 23, 2020 at 10:33 AM Mickael Maison 
> wrote:
>
> > Congrats Boyang! Well deserved
> >
> > On Tue, Jun 23, 2020 at 8:20 AM Tom Bentley  wrote:
> > >
> > > Congratulations Boyang!
> > >
> > > On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna 
> > wrote:
> > >
> > > > Congrats, Boyang!
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > > On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
> > > >  wrote:
> > > > >
> > > > > Congrats, Boyang!
> > > > >
> > > > > -Konstantine
> > > > >
> > > > > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> > > > >  wrote:
> > > > >
> > > > > > Many Congratulations Boyang. Very well deserved.
> > > > > >
> > > > > > Regards,Navinder
> > > > > >
> > > > > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <
> > > > wang...@163.com>
> > > > > > wrote:
> > > > > >
> > > > > >  Congratulations, Boyang!
> > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Best,
> > > > > > Matt Wang
> > > > > >
> > > > > >
> > > > > > On 06/23/2020 07:59,Boyang Chen
> wrote:
> > > > > > Thanks a lot everyone, I really appreciate the recognition, and
> > hope to
> > > > > > make more solid contributions to the community in the future!
> > > > > >
> > > > > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax <
> mj...@apache.org>
> > > > wrote:
> > > > > >
> > > > > > Congrats! Well deserved!
> > > > > >
> > > > > > -Matthias
> > > > > >
> > > > > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > > > > Congratulations Boyang! Well deserved.
> > > > > >
> > > > > > -Bill
> > > > > >
> > > > > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe  >
> > > > wrote:
> > > > > >
> > > > > > Congratulations, Boyang!
> > > > > >
> > > > > > cheers,
> > > > > > Colin
> > > > > >
> > > > > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > > > > The PMC for Apache Kafka has invited Boyang Chen as a committer
> > and we
> > > > > > are
> > > > > > pleased to announce that he has accepted!
> > > > > >
> > > > > > Boyang has been active in the Kafka community more than two years
> > ago.
> > > > > > Since then he has presented his experience operating with Kafka
> > Streams
> > > > > > at
> > > > > > Pinterest as well as several feature development including
> > rebalance
> > > > > > improvements (KIP-345) and exactly-once scalability improvements
> > > > > > (KIP-447)
> > > > > > in various Kafka Summit and Kafka Meetups. More recently he's
> also
> > been
> > > > > > participating in Kafka broker development including
> post-Zookeeper
> > > > > > controller design (KIP-500). Besides all the code contributions,
> > Boyang
> > > > > > has
> > > > > > also helped reviewing even more PRs and KIPs than his own.
> > > > > >
> > > > > > Thanks for all the contributions Boyang! And look forward to more
> > > > > > collaborations with you on Apache Kafka.
> > > > > >
> > > > > >
> > > > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > >
> > > >
> >
>
>
> --
> Leonard Ge
> Software Engineer Intern - Confluent
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New committer: Xi Hu

2020-06-24 Thread Vahid Hashemian
Congrats Xi!

--Vahid

On Wed, Jun 24, 2020 at 2:56 PM Mickael Maison 
wrote:

> Congratulations Xi!
>
> On Wed, Jun 24, 2020 at 7:25 PM Matthias J. Sax  wrote:
> >
> > Congrats Xi!
> >
> > On 6/24/20 9:45 AM, Tom Bentley wrote:
> > > Congratulations Xi!
> > >
> > > On Wed, Jun 24, 2020 at 5:34 PM Guozhang Wang 
> wrote:
> > >
> > >> The PMC for Apache Kafka has invited Xi Hu as a committer and we are
> > >> pleased to announce that he has accepted!
> > >>
> > >> Xi Hu has been actively contributing to Kafka since 2016, and is well
> > >> recognized especially for his non-code contributions: he maintains a
> tech
> > >> blog post evangelizing Kafka in the Chinese speaking community (
> > >> https://www.cnblogs.com/huxi2b/), and is one of the most active
> answering
> > >> member in Zhihu (Chinese Reddit / StackOverflow) Kafka topic. He has
> > >> presented in Kafka meetup events in the past and authored a
> > >> book deep-diving on Kafka architecture design and operations as well (
> > >> https://www.amazon.cn/dp/B07JH9G2FL). Code wise, he has contributed
> 75
> > >> patches so far.
> > >>
> > >>
> > >> Thanks for all the contributions Xi. Congratulations!
> > >>
> > >> -- Guozhang, on behalf of the Apache Kafka PMC
> > >>
> > >
> >
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New Kafka PMC Member: Mickael Maison

2020-08-01 Thread Vahid Hashemian
Congratulations Mickael!
--Vahid

On Fri, Jul 31, 2020 at 9:32 PM John Roesler  wrote:

> Congrats, Mickael!
> -John
>
> On Fri, Jul 31, 2020, at 23:29, Gwen Shapira wrote:
> > Congratulations Mickael. Thrilled to have you as part of the PMC :)
> >
> > On Fri, Jul 31, 2020 at 8:14 AM Jun Rao  wrote:
> >
> > > Hi, Everyone,
> > >
> > > Mickael Maison has been a Kafka committer since Nov. 5, 2019. He has
> > > remained active in the community since becoming a committer. It's my
> > > pleasure to announce that Mickael is now a member of Kafka PMC.
> > >
> > > Congratulations Mickael!
> > >
> > > Jun
> > > on behalf of Apache Kafka PMC
> > >
> >
> >
> > --
> > Gwen Shapira
> > Engineering Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-11 Thread Vahid Hashemian
Congrats John!

--Vahid

On Tue, Aug 11, 2020, 02:37 Ismael Juma  wrote:

> Congratulations John!
>
> Ismael
>
> On Mon, Aug 10, 2020 at 1:11 PM Jun Rao  wrote:
>
> > Hi, Everyone,
> >
> > John Roesler has been a Kafka committer since Nov. 5, 2019. He has
> remained
> > active in the community since becoming a committer. It's my pleasure to
> > announce that John is now a member of Kafka PMC.
> >
> > Congratulations John!
> >
> > Jun
> > on behalf of Apache Kafka PMC
> >
>


Re: [DISCUSS] KIP-977: Partition-Level Throughput Metrics

2023-11-27 Thread Vahid Hashemian
Hi Qichao,

Thanks for proposing this KIP. It'd be super valuable to have the ability
to collect those partition-level metrics for Kafka topics.

Sorry I'm late to the discussion. I just wanted to bring up a point
for clarification and one question:

Let's assume that a production cluster cannot afford to enable high
verbosity on a permanent basis (at least not for all topics) due to
performance concerns.

Since this new config can be set dynamically, in case of an issue or
investigation that warrants obtaining partition level metrics, one can
simply enable high verbosity for select topic(s), temporarily collect
metrics at partition level, and then change the config back to the previous
setting. Since the config values are not set incrementally, the operator
would need to run a `describe` to get the existing config first, and then
amend it to enable high verbosity for the topic(s) of interest. Finally,
when the investigation concludes, the config has to be reverted to its
permanent setting.
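
In other words, the workflow would look roughly like the following sketch
(the property name metrics.verbosity and its value here are placeholders,
not necessarily what the KIP ends up with):

  # capture the current (permanent) setting before changing anything
  bin/kafka-configs.sh --bootstrap-server localhost:9092 \
    --entity-type brokers --entity-default --describe

  # temporarily switch the selected topic(s) to high verbosity
  bin/kafka-configs.sh --bootstrap-server localhost:9092 \
    --entity-type brokers --entity-default --alter \
    --add-config 'metrics.verbosity=<JSON value with level=high for those topics>'

  # when the investigation concludes, restore the captured value
  # (or delete the override to fall back to the static broker config)
  bin/kafka-configs.sh --bootstrap-server localhost:9092 \
    --entity-type brokers --entity-default --alter \
    --delete-config metrics.verbosity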

If the above execution path makes sense, in case the operator forgets to
take an inventory of the existing (permanent) config and simply overwrites
it, then that permanent config will be gone and not retrievable. Is this
correct?

We usually don't need to temporarily change broker configs and I see this
config as one that can be temporarily changed. So keeping track of what the
value was before the change is rather important.

Aside from this point, my question is: What's the impact of the `medium`
setting for `level`? I couldn't find it described in the KIP.

Thanks!
--Vahid



On Mon, Nov 13, 2023 at 5:34 AM Divij Vaidya 
wrote:

> Thank you for updating the KIP Qichao.
>
> I don't have any more questions or suggestions. Looks good to move forward
> from my perspective.
>
>
>
> --
> Divij Vaidya
>
>
>
> On Fri, Nov 10, 2023 at 2:25 PM Qichao Chu 
> wrote:
>
> > Thank you again for the nice suggestions, Jorge!
> > I will wait for Divij's response and move it to the vote stage once the
> > generic filter part reaches consensus.
> >
> > Qichao Chu
> > Software Engineer | Data - Kafka
> >
> >
> > On Fri, Nov 10, 2023 at 6:49 AM Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> > > Hi Qichao,
> > >
> > > Thanks for updating the KIP, all updates look good to me.
> > >
> > > Looking forward to see this KIP moving forward!
> > >
> > > Cheers,
> > > Jorge.
> > >
> > >
> > >
> > > On Wed, 8 Nov 2023 at 08:55, Qichao Chu 
> wrote:
> > >
> > > > Hi Divij,
> > > >
> > > > Thank you for the feedback. I updated the KIP to make it a little bit
> > > more
> > > > generic: filters will stay in an array instead of different top-level
> > > > objects. In this way, if we need language filters in the future. The
> > > logic
> > > > relationship of filters is also added.
> > > >
> > > > Hi Jorge,
> > > >
> > > > Thank you for the review and great comments. Here is the reply for
> each
> > > of
> > > > the suggestions:
> > > >
> > > > 1) The words describing the property are now updated to include more
> > > > details of the keys in the JSON. It also explicitly mentions the JSON
> > > > nature of the config now.
> > > > 2) The JSON entries should be non-conflict so the order is not
> > relevant.
> > > If
> > > > there's conflict, the conflict resolution rules are stated in the
> KIP.
> > To
> > > > make it more clear, ordering and duplication rules are updated in the
> > > > Restrictions section of the *level* property.
> > > > 3) Yeah we did take a look at the RecordingLevel config and it does
> not
> > > > work for this case. The RecodingLevel config does not offer the
> > > capability
> > > > of filtering and it has a drawback of needing to be added to all the
> > > future
> > > > sensors. To reduce the duplication, I propose we merge the
> > RecordingLevel
> > > > to this more generic config in the future. Please take a look into
> the
> > > > *Using
> > > > the Existing RecordingLevel Config* section under *Rejected
> > Alternatives*
> > > > for more details.
> > > > 4) This suggestion makes a lot of sense. My idea is to create a
> > > > table/form/doc in the documentation for the verbosity levels of all
> > > metric
> > > > series. If it's too verbose to be in the docs, I will update the KIP
> to
> > > > include this info. I will create a JIRA for this effort once the KIP
> is
> > > > approved.
> > > > 5) Sure we can expand to all other series, added to the KIP.
> > > > 6) Added a new section(*Working with the Configuration via CLI)* with
> > the
> > > > user experience details
> > > > 7) Links are updated.
> > > >
> > > > Please take another look and let me know if you have any more
> concerns.
> > > >
> > > > Best,
> > > > Qichao Chu
> > > > Software Engineer | Data - Kafka
> > > >
> > > >
> > > > On Wed, Nov 8, 2023 at 6:29 AM Jorge Esteban Quilcate Otoya <
> > > > quilcate.jo...@gmail.com> wrote:
> > > >
> > > > > Hi Qichao,
> > > > >

Re: [DISCUSS] KIP-977: Partition-Level Throughput Metrics

2023-11-28 Thread Vahid Hashemian
Hi Qichao,

Thanks for answering my questions and updating the KIP accordingly.

It looks good to me.

--Vahid


On Tue, Nov 28, 2023, 7:22 PM Qichao Chu  wrote:

> Hi Vahid,
>
> Thank you for taking the time to review the KIP and asking great questions.
>
> The execution path mentioned is exactly how this KIP is going to function.
> We believe this config, compared with many other configurations like quota,
> will not pose significant alteration to the broker's functionality. Thus
> overwriting the permanent config dynamically is acceptable. After thinking
> about this topic again, I think your question makes a lot of sense and we
> shouldn't override the permanent config. Permanent config usually
> represents the normal operation condition, so the temporal config should be
> removed after the cluster is restarted/re-provisioned to be 'clean'. This
> also helps to prevent undesirable behavior and to reduce the risk involved
> in operation. I have updated the KIP to mention this behavior.
>
> `Medium` level is not mentioned as we don't have any medium-level metrics
> for now. This is for future extension. I have also updated the KIP to
> reflect this information.
>
> Best,
> Qichao
>
>
>
> On Tue, Nov 28, 2023 at 9:22 AM Vahid Hashemian  wrote:
>
> > Hi Qichao,
> >
> > Thanks for proposing this KIP. It'd be super valuable to have the ability
> > to have those partition level metrics for Kafka topics.
> >
> > Sorry I'm late to the discussion. I just wanted to bring up a point
> > for clarification and one question:
> >
> > Let's assume that a production cluster cannot afford to enable high
> > verbosity on a permanent basis (at least not for all topics) due to
> > performance concerns.
> >
> > Since this new config can be set dynamically, in case of an issue or
> > investigation that warrants obtaining partition level metrics, one can
> > simply enable high verbosity for select topic(s), temporarily collect
> > metrics at partition level, and then change the config back to the
> previous
> > setting. Since the config values are not set incrementally, the operator
> > would need to run a `describe` to get the existing config first, and then
> > amend it to enable high verbosity for the topic(s) of interest. Finally,
> > when the investigation concludes, the config has to be reverted to its
> > permanent setting.
> >
> > If the above execution path makes sense, in case the operator forgets to
> > take an inventory of the existing (permanent) config and simply
> overwrites
> > it, then that permanent config will be gone and not retrievable. Is this
> > correct?
> >
> > We usually don't need to temporarily change broker configs and I see this
> > config as one that can be temporarily changed. So keeping track of what
> the
> > value was before the change is rather important.
> >
> > Aside from this point, my question is: What's the impact of `medium`
> > setting for `level`? I couldn't find it described in the KIP.
> >
> > Thanks!
> > --Vahid
> >
> >
> >
> > On Mon, Nov 13, 2023 at 5:34 AM Divij Vaidya 
> > wrote:
> >
> > > Thank you for updating the KIP Qichao.
> > >
> > > I don't have any more questions or suggestions. Looks good to move
> > forward
> > > from my perspective.
> > >
> > >
> > >
> > > --
> > > Divij Vaidya
> > >
> > >
> > >
> > > On Fri, Nov 10, 2023 at 2:25 PM Qichao Chu 
> > > wrote:
> > >
> > > > Thank you again for the nice suggestions, Jorge!
> > > > I will wait for Divij's response and move it to the vote stage once
> the
> > > > generic filter part reaches consensus.
> > > >
> > > > Qichao Chu
> > > > Software Engineer | Data - Kafka
> > > >
> > > >
> > > > On Fri, Nov 10, 2023 at 6:49 AM Jorge Esteban Quilcate Otoya <
> > > > quilcate.jo...@gmail.com> wrote:
> > > >
> > > > > Hi Qichao,
> > > > >
> > > > > Thanks for updating the KIP, all updates look good to me.
> > > > >
> > > > > Looking forward to see this KIP moving forward!
> > > > >
> > > > > Cheers,
> > > > > Jorge.
> > > > >
> > > > >
> > > > >
> > > > > On Wed, 8 Nov 2023 at 08:55, Qichao Chu 
> > > wro

Re: [VOTE] KIP-997: Partition-Level Throughput Metrics

2023-11-28 Thread Vahid Hashemian
+1 (binding).

Thanks!
--Vahid


On Tue, Nov 21, 2023, 1:00 AM Qichao Chu  wrote:

> Hi All,
>
> It would be nice if we could have more people to review and vote for this
> KIP.
> Many thanks!
>
> Qichao
>
>
> On Mon, Nov 20, 2023 at 2:43 PM Qichao Chu  wrote:
>
> > @Matthias: yeah it should be 977, sorry for the confusion.
> > Btw, do you want to cast another binding vote for it?
> >
> > Best,
> > Qichao Chu
> >
> >
> > On Fri, Nov 17, 2023 at 12:45 AM Matthias J. Sax 
> wrote:
> >
> >> This is KIP-977, right? Not as the subject says.
> >>
> >> Guess we won't be able to fix this now. Hope it does not cause confusion
> >> down the line...
> >>
> >>
> >> -Matthias
> >>
> >> On 11/16/23 4:43 AM, Kamal Chandraprakash wrote:
> >> > +1 (non-binding). Thanks for the KIP!
> >> >
> >> > On Thu, Nov 16, 2023 at 9:00 AM Satish Duggana <
> >> satish.dugg...@gmail.com>
> >> > wrote:
> >> >
> >> >> Thanks Qichao for the KIP.
> >> >>
> >> >> +1 (binding)
> >> >>
> >> >> ~Satish.
> >> >>
> >> >> On Thu, 16 Nov 2023 at 02:20, Jorge Esteban Quilcate Otoya
> >> >>  wrote:
> >> >>>
> >> >>> Qichao, thanks again for leading this proposal!
> >> >>>
> >> >>> +1 (non-binding)
> >> >>>
> >> >>> Cheers,
> >> >>> Jorge.
> >> >>>
> >> >>> On Wed, 15 Nov 2023 at 19:17, Divij Vaidya  >
> >> >> wrote:
> >> >>>
> >>  +1 (binding)
> >> 
> >>  I was involved in the discussion thread for this KIP and support it
> >> in
> >> >> its
> >>  current form.
> >> 
> >>  --
> >>  Divij Vaidya
> >> 
> >> 
> >> 
> >>  On Wed, Nov 15, 2023 at 10:55 AM Qichao Chu
>  >> >
> >>  wrote:
> >> 
> >> > Hi all,
> >> >
> >> > I'd like to call a vote for KIP-977: Partition-Level Throughput
> >> >> Metrics.
> >> >
> >> > Please take a look here:
> >> >
> >> >
> >> 
> >> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-977%3A+Partition-Level+Throughput+Metrics
> >> >
> >> > Best,
> >> > Qichao Chu
> >> >
> >> 
> >> >>
> >> >
> >>
> >
>


Re: [kafka-clients] [VOTE] 2.5.0 RC2

2020-03-30 Thread Vahid Hashemian
Hi David,

Thanks for running this release.

Sorry for the delay in bringing this up.
I just wanted to draw attention to
https://issues.apache.org/jira/browse/KAFKA-9731 that blocked us from
upgrading to 2.4.
Based on the earlier discussion, the fix may not require a lot of work.

Regards,
--Vahid

On Tue, Mar 17, 2020 at 8:10 AM David Arthur  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 2.5.0.
>
> * TLS 1.3 support (1.2 is now the default)
> * Co-groups for Kafka Streams
> * Incremental rebalance for Kafka Consumer
> * New metrics for better operational insight
> * Upgrade Zookeeper to 3.5.7
> * Deprecate support for Scala 2.11
>
>
>  Release notes for the 2.5.0 release:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday March 24, 2020 by 5pm PT.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc2/javadoc/
>
> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> https://github.com/apache/kafka/releases/tag/2.5.0-rc2
>
> * Documentation:
> https://kafka.apache.org/25/documentation.html
>
> * Protocol:
> https://kafka.apache.org/25/protocol.html
>
>
> I'm thrilled to be able to include links to both build jobs with
> successful builds! Thanks to everyone who has helped reduce our flaky test
> exposure these past few weeks :)
>
> * Successful Jenkins builds for the 2.5 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.5-jdk8/64/
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/2.5/42/
>
> --
> David Arthur
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rKfpaJL3WL_fA%2BNaTiRAe0Fiab0GRfOtcE-T1KP%3DLkCw%40mail.gmail.com
> 
> .
>


-- 

Thanks!
--Vahid


Re: [kafka-clients] [VOTE] 2.4.0 RC0

2019-11-17 Thread Vahid Hashemian
Thanks Manikumar for managing this release. Looking forward to it.

I built the binaries from source and was able to successfully run the
quickstarts.

However, this Streams unit test also fails for me consistently:

ClientMetricsTest. shouldAddCommitIdMetric

java.lang.AssertionError:
  Unexpected method call
StreamsMetricsImpl.addClientLevelImmutableMetric("commit-id", "The version
control commit ID of the Kafka Streams client", INFO, "unknown"):
StreamsMetricsImpl.addClientLevelImmutableMetric("commit-id", "The
version control commit ID of the Kafka Streams client", INFO,
and(not("unknown"), notNull())): expected: 1, actual: 0
at
org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:44)
at
org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101)
at
org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
...

Thanks,
--Vahid

On Thu, Nov 14, 2019 at 10:21 AM Manikumar 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 2.4.0.
> There is work in progress for couple blockers PRs. I am publishing RC0 to
> avoid further delays in testing the release.
>
> This release includes many new features, including:
> - Allow consumers to fetch from closest replica
> - Support for incremental cooperative rebalancing to the consumer
> rebalance protocol
> - MirrorMaker 2.0 (MM2), a new multi-cluster, cross-datacenter replication
> engine
> - New Java authorizer Interface
> - Support for  non-key joining in KTable
> - Administrative API for replica reassignment
> - Sticky partitioner
> - Return topic metadata and configs in CreateTopics response
> - Securing Internal connect REST endpoints
> - API to delete consumer offsets and expose it via the AdminClient.
>
> Release notes for the 2.4.0 release:
> https://home.apache.org/~manikumar/kafka-2.4.0-rc0/RELEASE_NOTES.html
>
> *** Please download, test  by  Thursday, November 20, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~manikumar/kafka-2.4.0-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~manikumar/kafka-2.4.0-rc0/javadoc/
>
> * Tag to be voted upon (off 2.4 branch) is the 2.4.0 tag:
> https://github.com/apache/kafka/releases/tag/2.4.0-rc0
>
> * Documentation:
> https://kafka.apache.org/24/documentation.html
>
> * Protocol:
> https://kafka.apache.org/24/protocol.html
>
> Thanks,
> Manikumar
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAMVt_Aw945uqcpisFjZHAR5m8Sidw6hW4ia%2B7%3DjxEfadmBPzcw%40mail.gmail.com
> 
> .
>


-- 

Thanks!
--Vahid


Re: [kafka-clients] [VOTE] 2.4.0 RC0

2019-11-18 Thread Vahid Hashemian
Thanks Bruno,

Just to clarify, I ran the tests from the command line: ./gradlew
streams:test

Regards,
--Vahid

On Mon, Nov 18, 2019 at 6:16 AM Bruno Cadonna  wrote:

> Hi,
>
> ClientMetricsTest.shouldAddCommitIdMetric should only fail if executed
> from an IDE. The test fails because the test expects a file on the
> class path which is not there when the test is executed from the IDE,
> but is there when the test is executed from gradle. I will try to fix
> the test so that it can also be executed from the IDE.
>
> Best,
> Bruno
>
> On Mon, Nov 18, 2019 at 6:51 AM Vahid Hashemian
>  wrote:
> >
> > Thanks Manikumar for managing this release. Looking forward to it.
> >
> > I built binary from the source and was able to successfully run the
> quickstarts.
> >
> > However, this streams unit test also fails for me constantly:
> >
> > ClientMetricsTest. shouldAddCommitIdMetric
> >
> > java.lang.AssertionError:
> >   Unexpected method call
> StreamsMetricsImpl.addClientLevelImmutableMetric("commit-id", "The version
> control commit ID of the Kafka Streams client", INFO, "unknown"):
> > StreamsMetricsImpl.addClientLevelImmutableMetric("commit-id", "The
> version control commit ID of the Kafka Streams client", INFO,
> and(not("unknown"), notNull())): expected: 1, actual: 0
> > at
> org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:44)
> > at
> org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101)
> > at
> org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
> > ...
> >
> > Thanks,
> > --Vahid
> >
> > On Thu, Nov 14, 2019 at 10:21 AM Manikumar 
> wrote:
> >>
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the first candidate for release of Apache Kafka 2.4.0.
> >> There is work in progress for couple blockers PRs. I am publishing RC0
> to avoid further delays in testing the release.
> >>
> >> This release includes many new features, including:
> >> - Allow consumers to fetch from closest replica
> >> - Support for incremental cooperative rebalancing to the consumer
> rebalance protocol
> >> - MirrorMaker 2.0 (MM2), a new multi-cluster, cross-datacenter
> replication engine
> >> - New Java authorizer Interface
> >> - Support for  non-key joining in KTable
> >> - Administrative API for replica reassignment
> >> - Sticky partitioner
> >> - Return topic metadata and configs in CreateTopics response
> >> - Securing Internal connect REST endpoints
> >> - API to delete consumer offsets and expose it via the AdminClient.
> >>
> >> Release notes for the 2.4.0 release:
> >> https://home.apache.org/~manikumar/kafka-2.4.0-rc0/RELEASE_NOTES.html
> >>
> >> *** Please download, test  by  Thursday, November 20, 9am PT
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> https://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> https://home.apache.org/~manikumar/kafka-2.4.0-rc0/
> >>
> >> * Maven artifacts to be voted upon:
> >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>
> >> * Javadoc:
> >> https://home.apache.org/~manikumar/kafka-2.4.0-rc0/javadoc/
> >>
> >> * Tag to be voted upon (off 2.4 branch) is the 2.4.0 tag:
> >> https://github.com/apache/kafka/releases/tag/2.4.0-rc0
> >>
> >> * Documentation:
> >> https://kafka.apache.org/24/documentation.html
> >>
> >> * Protocol:
> >> https://kafka.apache.org/24/protocol.html
> >>
> >> Thanks,
> >> Manikumar
> >>
> >> --
> >> You received this message because you are subscribed to the Google
> Groups "kafka-clients" group.
> >> To unsubscribe from this group and stop receiving emails from it, send
> an email to kafka-clients+unsubscr...@googlegroups.com.
> >> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAMVt_Aw945uqcpisFjZHAR5m8Sidw6hW4ia%2B7%3DjxEfadmBPzcw%40mail.gmail.com
> .
> >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to kafka-clients+unsubscr...@googlegroups.com.
> > To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAHR2v2mKJtHG6S9P%3Dmw08SxbWjQCowp8cpZNpzr9acW1EcdegQ%40mail.gmail.com
> .
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-22 Thread Vahid Hashemian
Thanks Satish for drafting the KIP. Looks good overall. I would suggest
emphasizing the default behavior of the --disable-rack-aware option when it
is used with --execute.
Also, it would be great to emphasize that the new format for
--disable-rack-aware (which now takes a true/false value) would not impact
the existing usages (e.g. with --generate) that did not require a value for
the option.

Viktor, to answer your first question, in my experience the assignment json
file is not always created by the same command (through --generate option):

   - Sometimes when a broker is not healthy we manually update the existing
   assignment to change partition replicas to reduce load on the degraded
   broker.
   - In generating a full partition assignment plan, we also want to use a
   custom assignment strategy to have more control over partition placements
   rather than relying on the default strategy used by Kafka.

In these scenarios, it would be very helpful to have the option of
enforcing rack awareness with the command's --execute option.
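
For illustration, the kind of usage I have in mind with a hand-edited plan is
roughly the following (a sketch only; the flag syntax follows the KIP's
current proposal and may still change, and the host and file names are just
placeholders):

  # execute a manually edited plan and have the tool verify rack awareness
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --reassignment-json-file custom-reassignment.json \
    --disable-rack-aware false --execute

  # explicitly opt out of the rack-awareness verification
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --reassignment-json-file custom-reassignment.json \
    --disable-rack-aware true --execute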

Regards,
--Vahid

On Fri, Nov 22, 2019 at 2:57 AM Viktor Somogyi-Vass 
wrote:

> Hi Satish,
>
> Couple of questions/suggestions:
> 1. You say that when you execute the planned reassignment then it would
> throw an error if the generated reassignment doesn't comply with the
> rack-aware requirement. Opposed to this: why don't you have the --generate
> option to generate a rack-aware reassignment plan? This way users won't
> have to do the extra round.
> 2. Please move your KIP under
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> ,
> people will have a hard time finding it if it's under KIP-36.
> (@Stan fyi:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-548+Add+Option+to+enforce+rack-aware+custom+partition+reassignment+execution
> )
>
> Thanks,
> Viktor
>
> On Fri, Nov 22, 2019 at 11:37 AM Stanislav Kozlovski <
> stanis...@confluent.io>
> wrote:
>
> > Hello Satish,
> >
> > Could you provide a link to the KIP? I am unable to find it in the KIP
> > parent page
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> >
> > Thanks,
> > Stanislav
> >
> > On Fri, Nov 22, 2019 at 8:21 AM Satish Bellapu 
> > wrote:
> >
> > > Hi All,
> > >
> > > This [KIP-548] is basically extending the capabilities of
> > > "kafka-reassign-partitions" tool by adding rack-aware verification
> option
> > > when used along with custom or manually generated reassignment planner
> > with
> > > --execute scenario.
> > >
> > > @sbellapu.
> > >
> >
> >
> > --
> > Best,
> > Stanislav
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-23 Thread Vahid Hashemian
Hi Satish,

Thanks for the update. What you suggest in the KIP is ideal; the only issue
is that the --execute option will no longer be backward compatible, so the
same command that used to work before may stop working once this suggestion
is implemented. One option for keeping backward compatibility would be to
introduce a new option, such as --enable-rack-aware, that either works only
with --execute or also works with other options. This would not necessarily
be the best option, but at least it would not be a breaking change.

Let's see if others have better ideas.

Thanks,
--Vahid

On Fri, Nov 22, 2019 at 5:59 PM Satish Bellapu 
wrote:

> Hi Vahid,
> After re-thinking on this, i have following updates on the KIP, with
> aligning to the other options on ReassignPartitionsCommand,
>
> The --execute command by default takes rack awareness into consideration,
> and if the custom generated reassignment planner has conflicts along with
> the racks then it will throw the error msg with appropriate reason and
> conflict of partitions along with the racks info. The users need to
> explicitly choose the option --disable-rack-aware if they want to ignore
> the rack awareness.
>
> By this change the usage of options in --execute command will be aligned
> with --generate option, the rack awareness will be considered by default both
> for --generate as well as --execute unless explicitly set to
> --disable-rack-aware.
>
> Let me know whats your thoughts on the same.
>
> --sbellapu
>
> On 2019/11/22 16:32:08, Vahid Hashemian 
> wrote:
> > Thanks Satish for drafting the KIP. Looks good overall. I would suggest
> > emphasizing on the default option for --disable-rack-aware option when
> used
> > with --execute option.
> > Also, it would be great to also emphasize that the new format for
> > --disable-rack-aware (which now takes a true/false value) would not
> impact
> > the existing usages (e.g. with --generate option) that did not require a
> > value for the option.
> >
> > Victor, to answer your first question, in my experience the assignment
> json
> > file is not always created by the same command (through --generate
> option):
> >
> >- Sometimes when a broker is not healthy we manually update the
> existing
> >assignment to change partition replicas to reduce load on the degraded
> >broker.
> >- In generating full partition assignment plan we also want use some
> >custom assignment strategy to have more control over partition
> placements
> >and do not use the default strategy used by Kafka.
> >
> > In these scenarios, it would be very helpful to have the option of
> > enforcing rack awareness with the command's --execute option.
> >
> > Regards,
> > --Vahid
> >
> > On Fri, Nov 22, 2019 at 2:57 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> > wrote:
> >
> > > Hi Satish,
> > >
> > > Couple of questions/suggestions:
> > > 1. You say that when you execute the planned reassignment then it would
> > > throw an error if the generated reassignment doesn't comply with the
> > > rack-aware requirement. Opposed to this: why don't you have the
> --generate
> > > option to generate a rack-aware reassignment plan? This way users won't
> > > have to do the extra round.
> > > 2. Please move your KIP under
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > > ,
> > > people will have a hard time finding it if it's under KIP-36.
> > > (@Stan fyi:
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-548+Add+Option+to+enforce+rack-aware+custom+partition+reassignment+execution
> > > )
> > >
> > > Thanks,
> > > Viktor
> > >
> > > On Fri, Nov 22, 2019 at 11:37 AM Stanislav Kozlovski <
> > > stanis...@confluent.io>
> > > wrote:
> > >
> > > > Hello Satish,
> > > >
> > > > Could you provide a link to the KIP? I am unable to find it in the
> KIP
> > > > parent page
> > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > > >
> > > > Thanks,
> > > > Stanislav
> > > >
> > > > On Fri, Nov 22, 2019 at 8:21 AM Satish Bellapu <
> satishbabu...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > This [KIP-548] is basically extending the capabilities of
> > > > > "kafka-reassign-partitions" tool by adding rack-aware verification
> > > option
> > > > > when used along with custom or manually generated reassignment
> planner
> > > > with
> > > > > --execute scenario.
> > > > >
> > > > > @sbellapu.
> > > > >
> > > >
> > > >
> > > > --
> > > > Best,
> > > > Stanislav
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [kafka-clients] [ANNOUNCE] Apache Kafka 2.2.2

2019-12-01 Thread Vahid Hashemian
Awesome. Thanks for managing this release Randall!

Regards,
--Vahid

On Sun, Dec 1, 2019 at 5:45 PM Randall Hauch  wrote:

> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.2.2
>
> This is a bugfix release for Apache Kafka 2.2.
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.2.2/RELEASE_NOTES.html
>
> You can download the source and binary release from:
> https://kafka.apache.org/downloads#2.2.2
>
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
> ** The Producer API allows an application to publish a stream records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you for the following 41 contributors to this release!
>
> A. Sophie Blee-Goldman, Matthias J. Sax, Bill Bejeck, Jason Gustafson,
> Chris Egerton, Boyang Chen, Alex Diachenko, cpettitt-confluent, Magesh
> Nandakumar, Randall Hauch, Ismael Juma, John Roesler, Konstantine
> Karantasis, Mickael Maison, Nacho Muñoz Gómez, Nigel Liang, Paul, Rajini
> Sivaram, Robert Yokota, Stanislav Kozlovski, Vahid Hashemian, Victoria
> Bialas, cadonna, cwildman, mjarvie, sdreynolds, slim, vinoth chandar,
> wenhoujx, Arjun Satish, Chia-Ping Tsai, Colin P. Mccabe, David Arthur,
> Dhruvil Shah, Greg Harris, Gunnar Morling, Hai-Dang Dam, Lifei Chen, Lucas
> Bradstreet, Manikumar Reddy, Michał Borowiecki
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
>
> Regards,
> Randall Hauch
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CALYgK0EsNFakX7F0FDkXvMNmUe8g8w-GNRM7EJjD9CJLK7sn0A%40mail.gmail.com
> <https://groups.google.com/d/msgid/kafka-clients/CALYgK0EsNFakX7F0FDkXvMNmUe8g8w-GNRM7EJjD9CJLK7sn0A%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New Kafka PMC Members: Colin, Vahid and Manikumar

2020-01-15 Thread Vahid Hashemian
Thank you all.

Regards,
--Vahid

On Wed, Jan 15, 2020 at 9:15 AM Colin McCabe  wrote:

> Thanks, everyone!
>
> best,
> Colin
>
> On Wed, Jan 15, 2020, at 07:50, Sean Glover wrote:
> > Congratulations Colin, Vahid and Manikumar and thank you for all your
> > excellent work on Apache Kafka!
> >
> > On Wed, Jan 15, 2020 at 8:42 AM Ron Dagostino  wrote:
> >
> > > Congratulations!
> > >
> > > > On Jan 15, 2020, at 5:04 AM, Viktor Somogyi-Vass <
> > > viktorsomo...@gmail.com> wrote:
> > > >
> > > > Congrats to you guys, it's a great accomplishment! :)
> > > >
> > > >> On Wed, Jan 15, 2020 at 10:20 AM David Jacot 
> > > wrote:
> > > >>
> > > >> Congrats!
> > > >>
> > > >>> On Wed, Jan 15, 2020 at 12:00 AM James Cheng  >
> > > wrote:
> > > >>>
> > > >>> Congrats Colin, Vahid, and Manikumar!
> > > >>>
> > > >>> -James
> > > >>>
> > > >>>> On Jan 14, 2020, at 10:59 AM, Tom Bentley 
> > > wrote:
> > > >>>>
> > > >>>> Congratulations!
> > > >>>>
> > > >>>> On Tue, Jan 14, 2020 at 6:57 PM Rajini Sivaram <
> > > >> rajinisiva...@gmail.com>
> > > >>>> wrote:
> > > >>>>
> > > >>>>> Congratulations Colin, Vahid and Manikumar!
> > > >>>>>
> > > >>>>> Regards,
> > > >>>>> Rajini
> > > >>>>>
> > > >>>>> On Tue, Jan 14, 2020 at 6:32 PM Mickael Maison <
> > > >>> mickael.mai...@gmail.com>
> > > >>>>> wrote:
> > > >>>>>
> > > >>>>>> Congrats Colin, Vahid and Manikumar!
> > > >>>>>>
> > > >>>>>> On Tue, Jan 14, 2020 at 5:43 PM Ismael Juma 
> > > >> wrote:
> > > >>>>>>>
> > > >>>>>>> Congratulations Colin, Vahid and Manikumar!
> > > >>>>>>>
> > > >>>>>>> Ismael
> > > >>>>>>>
> > > >>>>>>> On Tue, Jan 14, 2020 at 9:30 AM Gwen Shapira <
> g...@confluent.io>
> > > >>>>> wrote:
> > > >>>>>>>
> > > >>>>>>>> Hi everyone,
> > > >>>>>>>>
> > > >>>>>>>> I'm happy to announce that Colin McCabe, Vahid Hashemian and
> > > >>>>> Manikumar
> > > >>>>>>>> Reddy are now members of Apache Kafka PMC.
> > > >>>>>>>>
> > > >>>>>>>> Colin and Manikumar became committers on Sept 2018 and Vahid
> on
> > > Jan
> > > >>>>>>>> 2019. They all contributed many patches, code reviews and
> > > >>>>> participated
> > > >>>>>>>> in many KIP discussions. We appreciate their contributions
> and are
> > > >>>>>>>> looking forward to many more to come.
> > > >>>>>>>>
> > > >>>>>>>> Congrats Colin, Vahid and Manikumar!
> > > >>>>>>>>
> > > >>>>>>>> Gwen, on behalf of Apache Kafka PMC
> > > >>>>>>>>
> > > >>>>>>
> > > >>>>>
> > > >>>
> > > >>>
> > > >>
> > >
> >
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Vahid Hashemian
Congratulations Konstantine!

Regards,
--Vahid

On Wed, Feb 26, 2020 at 6:49 PM Boyang Chen 
wrote:

> Congrats Konstantine!
>
> On Wed, Feb 26, 2020 at 6:32 PM Manikumar 
> wrote:
>
> > Congrats Konstantine!
>
>
> > On Thu, Feb 27, 2020 at 7:46 AM Matthias J. Sax 
> wrote:
> >
> > > -BEGIN PGP SIGNED MESSAGE-
> > > Hash: SHA512
> > >
> > > Congrats!
> > >
> > > On 2/27/20 2:21 AM, Jeremy Custenborder wrote:
> > > > Congrats Konstantine!
> > > >
> > > > On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck 
> > > > wrote:
> > > >>
> > > >> Congratulations Konstantine! Well deserved.
> > > >>
> > > >> -Bill
> > > >>
> > > >> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson
> > > >>  wrote:
> > > >>
> > > >>> The PMC for Apache Kafka has invited Konstantine Karantasis as
> > > >>> a committer and we are pleased to announce that he has
> > > >>> accepted!
> > > >>>
> > > >>> Konstantine has contributed 56 patches and helped to review
> > > >>> even more. His recent work includes a major overhaul of the
> > > >>> Connect task management system in order to support incremental
> > > >>> rebalancing. In addition to code contributions, Konstantine
> > > >>> helps the community in many other ways including talks at
> > > >>> meetups and at Kafka Summit and answering questions on
> > > >>> stackoverflow. He consistently shows good judgement in design
> > > >>> and a careful attention to details when it comes to code.
> > > >>>
> > > >>> Thanks for all the contributions and looking forward to more!
> > > >>>
> > > >>> Jason, on behalf of the Apache Kafka PMC
> > > >>>
> > > -BEGIN PGP SIGNATURE-
> > >
> > > iQIzBAEBCgAdFiEEI8mthP+5zxXZZdDSO4miYXKq/OgFAl5XJl0ACgkQO4miYXKq
> > > /OjEERAAp08DioD903r9aqGCJ3oHbkUi2ZwEr0yeu1veFD3rGFflhni7sm4J87/Z
> > > NArUdpHQ7+99YATtm+HY8gHN+eQeVY1+NV4lX77vPNLEONx09aIxbcSnc4Ih7JnX
> > > XxcHHBeR6N9EJkMQ82nOUk5cQaDiIdkOvbOv8+NoOcZC6RMy9PKsFgwd8NTMKL8l
> > > nqG8KwnV6WYNOJ05BszFSpTwJBJBg8zhZlRcmAF7iRD0cLzM4BReOmEVCcoas7ar
> > > RK38okIAALDRkC9JNYpQ/s0si4V+OwP4igp0MAjM+Y2NVLhC6kK6uqNzfeD21M6U
> > > mWm7nE9Tbh/K+8hgZbqfprN6vw6+NfU8dwPD0iaEOfisXwbavCfDeonwSWK0BoHo
> > > zEeHRGEx7e2FHWp8KyC6XgfFWmkWJP6tCWiTtCFEScxSTzZUC+cG+a5PF1n6hIHo
> > > /CH3Oml2ZGDxoEl1zt8Hs5AgKW8X4PQCsfA4LWqA4GgR6PPFPn6g8mX3/AR3wkyn
> > > 8Dmlh3k8ZtsW8wX26IYBywm/yyjbnlSzRVSgAHAbpaIqe5PoMG+5fdc1tnO/2Tuf
> > > jd1BjbgAD6u5BksmIGBeZXADblQ/qqfp5Q+WRTJSYLItf8HMNAZoqJFp0bRy/QXP
> > > 5EZzI9J+ngJG8MYr08UcWQMZt0ytwBVTX/+FVC8Rx5r0D0fqizo=
> > > =68IH
> > > -END PGP SIGNATURE-
> > >
> >
>


-- 

Thanks!
--Vahid


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-07 Thread Vahid Hashemian
+1 (binding)

Verified signature, built from source, and ran quickstart
successfully (using openjdk version "11.0.6").
I also ran the unit tests locally and hit a few flaky test failures, for
which there are already open JIRAs:

ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
ConsumerBounceTest.testCloseDuringRebalance

ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign

SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
SaslMultiMechanismConsumerTest.testCoordinatorFailover

Thanks for running the release Bill.

Regards,
--Vahid

On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe  wrote:

> +1 (binding)
>
> Checked the git hash and branch, looked at the docs a bit.  Ran quickstart
> (although not the connect or streams parts).  Looks good.
>
> best,
> Colin
>
>
> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> > +1 (binding)
> >
> > Download kafka_2.13-2.4.1 and verified signature, ran quickstart,
> > everything looks good.
> >
> > Thanks for running this release, Bill!
> >
> > -David
> >
> >
> >
> > On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska 
> wrote:
> >
> > > Hi Bill,
> > >
> > > I built from source and ran unit and integration tests. They passed.
> > > There was a large number of skipped tests, but I'm assuming that is
> > > intentional.
> > >
> > > Cheers
> > > Eno
> > >
> > > On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I ran:
> > > > $
> https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
> > > 
> > > 2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0 <
> > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0>
> > > >
> > > > All checksums and signatures are good and all unit and integration
> tests
> > > that were executed passed successfully.
> > > >
> > > > - Eric
> > > >
> > > > > On Mar 2, 2020, at 6:39 PM, Bill Bejeck  wrote:
> > > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > > This is the first candidate for release of Apache Kafka 2.4.1.
> > > > >
> > > > > This is a bug fix release and it includes fixes and improvements
> from
> > > 38
> > > > > JIRAs, including a few critical bugs.
> > > > >
> > > > > Release notes for the 2.4.1 release:
> > > > >
> https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/RELEASE_NOTES.html
> > > > >
> > > > > *Please download, test and vote by Thursday, March 5, 9 am PT*
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > https://kafka.apache.org/KEYS
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/
> > > > >
> > > > > * Maven artifacts to be voted upon:
> > > > >
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > >
> > > > > * Javadoc:
> > > > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/javadoc/
> > > > >
> > > > > * Tag to be voted upon (off 2.4 branch) is the 2.4.1 tag:
> > > > > https://github.com/apache/kafka/releases/tag/2.4.1-rc0
> > > > >
> > > > > * Documentation:
> > > > > https://kafka.apache.org/24/documentation.html
> > > > >
> > > > > * Protocol:
> > > > > https://kafka.apache.org/24/protocol.html
> > > > >
> > > > > * Successful Jenkins builds for the 2.4 branch:
> > > > > Unit/integration tests: Links to successful unit/integration test
> > > build to
> > > > > follow
> > > > > System tests:
> > > > > https://jenkins.confluent.io/job/system-test-kafka/job/2.4/152/
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Bill Bejeck
> > > >
> > >
> >
> >
> > --
> > David Arthur
> >
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] Apache Kafka 2.4.1

2020-03-13 Thread Vahid Hashemian
Thanks a lot for running this release Bill!

Regards,
--Vahid

On Fri, Mar 13, 2020 at 2:56 AM Mickael Maison 
wrote:

> Thanks Bill for managing this release!
>
> On Fri, Mar 13, 2020 at 5:58 AM Guozhang Wang  wrote:
> >
> > Thanks Bill for driving this. And Many thanks to all who've contributed
> to
> > this release!
> >
> >
> > Guozhang
> >
> > On Thu, Mar 12, 2020 at 3:00 PM Matthias J. Sax 
> wrote:
> >
> > > Thanks for driving the release Bill!
> > >
> > > -Matthias
> > >
> > > On 3/12/20 1:22 PM, Bill Bejeck wrote:
> > > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > > Kafka 2.4.1
> > > >
> > > > This is a bug fix release and it includes fixes and improvements
> from 39
> > > > JIRAs, including a few critical bugs.
> > > >
> > > > All of the changes in this release can be found in the release notes:
> > > > https://www.apache.org/dist/kafka/2.4.1/RELEASE_NOTES.html
> > > >
> > > >
> > > > You can download the source and binary release (Scala 2.11, 2.12, and
> > > 2.13)
> > > > from:
> > > > https://kafka.apache.org/downloads#2.4.1
> > > >
> > > >
> > >
> ---
> > > >
> > > >
> > > > Apache Kafka is a distributed streaming platform with four core APIs:
> > > >
> > > >
> > > > ** The Producer API allows an application to publish a stream
> records to
> > > > one or more Kafka topics.
> > > >
> > > > ** The Consumer API allows an application to subscribe to one or more
> > > > topics and process the stream of records produced to them.
> > > >
> > > > ** The Streams API allows an application to act as a stream
> processor,
> > > > consuming an input stream from one or more topics and producing an
> > > > output stream to one or more output topics, effectively transforming
> the
> > > > input streams to output streams.
> > > >
> > > > ** The Connector API allows building and running reusable producers
> or
> > > > consumers that connect Kafka topics to existing applications or data
> > > > systems. For example, a connector to a relational database might
> > > > capture every change to a table.
> > > >
> > > >
> > > > With these APIs, Kafka can be used for two broad classes of
> application:
> > > >
> > > > ** Building real-time streaming data pipelines that reliably get data
> > > > between systems or applications.
> > > >
> > > > ** Building real-time streaming applications that transform or react
> > > > to the streams of data.
> > > >
> > > >
> > > > Apache Kafka is in use at large and small companies worldwide,
> including
> > > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> Rabobank,
> > > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > >
> > > > A big thank you for the following 35 contributors to this release!
> > > >
> > > > A. Sophie Blee-Goldman, Alex Kokachev, bill, Bill Bejeck, Boyang
> Chen,
> > > > Brian Bushree, Brian Byrne, Bruno Cadonna, Chia-Ping Tsai, Chris
> Egerton,
> > > > Colin Patrick McCabe, David Jacot, David Kim, David Mao, Dhruvil
> Shah,
> > > > Gunnar Morling, Guozhang Wang, huxi, Ismael Juma, Ivan Yurchenko,
> Jason
> > > > Gustafson, John Roesler, Konstantine Karantasis, Lev Zemlyanov,
> Manikumar
> > > > Reddy, Matthew Wong, Matthias J. Sax, Michael Gyarmathy, Michael
> Viamari,
> > > > Nigel Liang, Rajini Sivaram, Randall Hauch, Tomislav, Vikas Singh,
> Xin
> > > Wang
> > > >
> > > > We welcome your help and feedback. For more information on how to
> > > > report problems, and to get involved, visit the project website at
> > > > https://kafka.apache.org/
> > > >
> > > > Thank you!
> > > >
> > > >
> > > > Regards,
> > > >
> > > > Bill Bejeck
> > > >
> > >
> > >
> >
> > --
> > -- Guozhang
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] KIP-712: Shallow Mirroring

2021-02-10 Thread Vahid Hashemian
Retitled the thread to conform to the common format.

On Fri, Feb 5, 2021 at 4:00 PM Ning Zhang  wrote:

> Hello Henry,
>
> This is a very interesting proposal.
> https://issues.apache.org/jira/browse/KAFKA-10728 reflects the similar
> concern of re-compressing data in mirror maker.
>
> Probably one thing may need to clarify is: how "shallow" mirroring is only
> applied to mirrormaker use case, if the changes need to be made on generic
> consumer and producer (e.g. by adding `fetch.raw.bytes` and
> `send.raw.bytes` to producer and consumer config)
>
> On 2021/02/05 00:59:57, Henry Cai  wrote:
> > Dear Community members,
> >
> > We are proposing a new feature to improve the performance of Kafka mirror
> > maker:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-712%3A+Shallow+Mirroring
> >
> > The current Kafka MirrorMaker process (with the underlying Consumer and
> > Producer library) uses significant CPU cycles and memory to
> > decompress/recompress, deserialize/re-serialize messages and copy
> multiple
> > times of messages bytes along the mirroring/replicating stages.
> >
> > The KIP proposes a *shallow mirror* feature which brings back the shallow
> > iterator concept to the mirror process and also proposes to skip the
> > unnecessary message decompression and recompression steps.  We argue in
> > many cases users just want a simple replication pipeline to replicate the
> > message as it is from the source cluster to the destination cluster.  In
> > many cases the messages in the source cluster are already compressed and
> > properly batched, users just need an identical copy of the message bytes
> > through the mirroring without any transformation or repartitioning.
> >
> > We have a prototype implementation in house with MirrorMaker v1 and
> > observed *CPU usage dropped from 50% to 15%* for some mirror pipelines.
> >
> > We name this feature: *shallow mirroring* since it has some resemblance
> to
> > the old Kafka 0.7 namesake feature but the implementations are not quite
> > the same.  ‘*Shallow*’ means 1. we *shallowly* iterate RecordBatches
> inside
> > MemoryRecords structure instead of deep iterating records inside
> > RecordBatch; 2. We *shallowly* copy (share) pointers inside ByteBuffer
> > instead of deep copying and deserializing bytes into objects.
> >
> > Please share discussions/feedback along this email thread.
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] KIP-712: Shallow Mirroring

2021-02-22 Thread Vahid Hashemian
As Henry mentions in the KIP, we are seeing significant improvements in
efficiency from the mirroring enhancement proposed in this KIP, and we
believe it would be equally beneficial to everyone who runs Kafka and Kafka
MirrorMaker at scale.

I'm bumping up this thread in case anyone has additional feedback or
comments.
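
To make the proposal concrete, the shape of the configuration under
discussion is roughly the following (the property names come from the KIP
discussion and are not final):

  # MirrorMaker consumer.properties: hand compressed record batches to the
  # mirroring path as raw bytes, skipping decompression and deserialization
  fetch.raw.bytes=true

  # MirrorMaker producer.properties: forward the raw batches as-is, skipping
  # re-serialization and re-compression
  send.raw.bytes=true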

Thanks,
--Vahid

On Sat, Feb 13, 2021, 13:59 Ryanne Dolan  wrote:

> Glad to hear that latency and thruput aren't negatively affected somehow. I
> would love to see this KIP move forward.
>
> Ryanne
>
> On Sat, Feb 13, 2021, 3:00 PM Henry Cai  wrote:
>
> > Ryanne,
> >
> > Yes, network performance is also important.  In our deployment, we are
> > bottlenecked on the CPU/memory on the mirror hosts.  We are using c5.2x
> and
> > m5.2x nodes in AWS, before the deployment, CPU would peak to 80% but
> there
> > is enough network bandwidth left on those hosts.  Having said that, we
> > maintain the same network throughput before and after the switch.
> >
> > On Fri, Feb 12, 2021 at 12:20 PM Ryanne Dolan 
> > wrote:
> >
> >> Hey Henry, great KIP. The performance improvements are impressive!
> >> However, often cpu, ram, gc are not the metrics most important to a
> >> replication pipeline -- often the network is mostly saturated anyway. Do
> >> you know how this change affects latency or thruput? I suspect less GC
> >> pressure means slightly less p99 latency, but it would be great to see
> that
> >> confirmed. I don't think it's necessary that this KIP improves these
> >> metrics, but I think it's important to show that they at least aren't
> made
> >> worse.
> >>
> >> I suspect any improvement in MM1 would be magnified in MM2, given there
> >> is a lot more machinery between consumer and producer in MM2.
> >>
> >>
> >> I'd like to do some performance analysis based on these changes. Looking
> >> forward to a PR!
> >>
> >> Ryanne
> >>
> >> On Wed, Feb 10, 2021, 3:50 PM Henry Cai  wrote:
> >>
> >>> On the question "whether shallow mirror is only applied on mirror maker
> >>> v1", the code change is mostly on consumer and producer code path, the
> >>> change to mirrormaker v1 is very trivial.  We chose to modify the
> >>> consumer/producer path (instead of creating a new mirror product) so
> other
> >>> use cases can use that feature as well.  The change to mirror maker v2
> >>> should be straightforward as well but we don't have that environment in
> >>> house.  I think the community can easily port this change to mirror
> maker
> >>> v2.
> >>>
> >>>
> >>>
> >>> On Wed, Feb 10, 2021 at 12:58 PM Vahid Hashemian <
> >>> vahid.hashem...@gmail.com> wrote:
> >>>
> >>>> Retitled the thread to conform to the common format.
> >>>>
> >>>> On Fri, Feb 5, 2021 at 4:00 PM Ning Zhang 
> >>>> wrote:
> >>>>
> >>>> > Hello Henry,
> >>>> >
> >>>> > This is a very interesting proposal.
> >>>> > https://issues.apache.org/jira/browse/KAFKA-10728 reflects the
> >>>> similar
> >>>> > concern of re-compressing data in mirror maker.
> >>>> >
> >>>> > Probably one thing may need to clarify is: how "shallow" mirroring
> is
> >>>> only
> >>>> > applied to mirrormaker use case, if the changes need to be made on
> >>>> generic
> >>>> > consumer and producer (e.g. by adding `fetch.raw.bytes` and
> >>>> > `send.raw.bytes` to producer and consumer config)
> >>>> >
> >>>> > On 2021/02/05 00:59:57, Henry Cai 
> wrote:
> >>>> > > Dear Community members,
> >>>> > >
> >>>> > > We are proposing a new feature to improve the performance of Kafka
> >>>> mirror
> >>>> > > maker:
> >>>> > >
> >>>> >
> >>>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-712%3A+Shallow+Mirroring
> >>>> > >
> >>>> > > The current Kafka MirrorMaker process (with the underlying
> Consumer
> >>>> and
> >>>> > > Producer library) uses significant CPU cycles and memory to
> >>>> > > decompress/recompress, deserialize/re-serialize messages and copy
> >

Re: [ANNOUNCE] New committer: David Jacot

2020-10-18 Thread Vahid Hashemian
Congrats David!

--Vahid

On Sun, Oct 18, 2020 at 4:23 PM Satish Duggana 
wrote:

> Congratulations David!
>
> On Sat, Oct 17, 2020 at 10:46 AM Boyang Chen 
> wrote:
> >
> > Congrats David, well deserved!
> >
> > On Fri, Oct 16, 2020 at 6:45 PM John Roesler 
> wrote:
> >
> > > Congratulations, David!
> > > -John
> > >
> > > On Fri, Oct 16, 2020, at 20:15, Konstantine Karantasis wrote:
> > > > Congrats, David!
> > > >
> > > > Konstantine
> > > >
> > > >
> > > > On Fri, Oct 16, 2020 at 3:36 PM Ismael Juma 
> wrote:
> > > >
> > > > > Congratulations David!
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Fri, Oct 16, 2020 at 9:01 AM Gwen Shapira 
> > > wrote:
> > > > >
> > > > > > The PMC for Apache Kafka has invited David Jacot as a committer,
> and
> > > > > > we are excited to say that he accepted!
> > > > > >
> > > > > > David Jacot has been contributing to Apache Kafka since July
> 2015 (!)
> > > > > > and has been very active since August 2019. He contributed
> several
> > > > > > notable KIPs:
> > > > > >
> > > > > > KIP-511: Collect and Expose Client Name and Version in Brokers
> > > > > > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies:
> > > > > > KIP-570: Add leader epoch in StopReplicaRequest
> > > > > > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > > > > > Operations
> > > > > > KIP-496 Added an API for the deletion of consumer offsets
> > > > > >
> > > > > > In addition, David Jacot reviewed many community contributions
> and
> > > > > > showed great technical and architectural taste. Great reviews are
> > > hard
> > > > > > and often thankless work - but this is what makes Kafka a great
> > > > > > product and helps us grow our community.
> > > > > >
> > > > > > Thanks for all the contributions, David! Looking forward to more
> > > > > > collaboration in the Apache Kafka community.
> > > > > >
> > > > > > --
> > > > > > Gwen Shapira
> > > > > >
> > > > >
> > > >
> > >
>


-- 

Thanks!
--Vahid


[jira] [Resolved] (KAFKA-7604) Flaky Test `ConsumerCoordinatorTest.testRebalanceAfterTopicUnavailableWithPatternSubscribe`

2018-11-10 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-7604.

Resolution: Fixed

> Flaky Test 
> `ConsumerCoordinatorTest.testRebalanceAfterTopicUnavailableWithPatternSubscribe`
> ---
>
> Key: KAFKA-7604
> URL: https://issues.apache.org/jira/browse/KAFKA-7604
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> {code}
> java.lang.AssertionError: Metadata refresh requested unnecessarily
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest.unavailableTopicTest(ConsumerCoordinatorTest.java:1034)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest.testRebalanceAfterTopicUnavailableWithPatternSubscribe(ConsumerCoordinatorTest.java:984)
> {code}
> The problem seems to be a race condition in the test case with the heartbeat 
> thread and the foreground thread unsafely attempting to update metadata at 
> the same time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6951) Implement offset expiration semantics for unsubscribed topics

2018-05-25 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6951:
--

 Summary: Implement offset expiration semantics for unsubscribed 
topics
 Key: KAFKA-6951
 URL: https://issues.apache.org/jira/browse/KAFKA-6951
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian
 Fix For: 2.1.0


[This 
portion|https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets#KIP-211:ReviseExpirationSemanticsofConsumerGroupOffsets-UnsubscribingfromaTopic]
 of KIP-211 will be implemented separately from the main PR.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6956) Use Java AdminClient in BrokerApiVersionsCommand

2018-05-27 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-6956.

Resolution: Duplicate

> Use Java AdminClient in BrokerApiVersionsCommand
> 
>
> Key: KAFKA-6956
> URL: https://issues.apache.org/jira/browse/KAFKA-6956
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>    Assignee: Vahid Hashemian
>Priority: Major
>
> The Scala AdminClient was introduced as a stop gap until we had an officially 
> supported API. The Java AdminClient is the supported API so we should migrate 
> all usages to it and remove the Scala AdminClient. This JIRA is for using the 
> Java AdminClient in BrokerApiVersionsCommand. We would need to verify that 
> the necessary APIs are available via the Java AdminClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7026) Sticky assignor could assign a partition to multiple consumers

2018-06-08 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-7026:
--

 Summary: Sticky assignor could assign a partition to multiple 
consumers
 Key: KAFKA-7026
 URL: https://issues.apache.org/jira/browse/KAFKA-7026
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian


In the following scenario sticky assignor assigns a topic partition to two 
consumers in the group:
 # Create a topic {{test}} with a single partition
 # Start consumer {{c1}} in group {{sticky-group}} ({{c1}} becomes group leader 
and gets {{test-0}})
 # Start consumer {{c2}}  in group {{sticky-group}} ({{c1}} holds onto 
{{test-0}}, {{c2}} does not get any partition) 
 # Pause {{c1}} (e.g. using Java debugger) ({{c2}} becomes leader and takes 
over {{test-0}}, {{c1}} leaves the group)
 # Resume {{c1}}

At this point both {{c1}} and {{c2}} will have {{test-0}} assigned to them.

 

The reason is {{c1}} still has kept its previous assignment ({{test-0}}) from 
the last assignment it received from the leader (itself) and did not get the 
next round of assignments (when {{c2}} became leader) because it was paused. 
Both {{c1}} and {{c2}} enter the rebalance supplying {{test-0}} as their 
existing assignment. The sticky assignor code does not currently check for this 
duplication.

 


Note: This issue was originally reported on 
[StackOverflow|https://stackoverflow.com/questions/50761842/kafka-stickyassignor-breaking-delivery-to-single-consumer-in-the-group].
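
A minimal sketch, not the actual assignor code, of the kind of leader-side validation the
last paragraph says is missing: when two members both claim the same partition from a previous
generation, only one claim should be honored. All names here are illustrative.

{code:java}
import org.apache.kafka.common.TopicPartition;
import java.util.*;

public class DuplicateClaimCheckSketch {
    /**
     * Given each member's previously owned partitions (as reported in its
     * subscription userdata), keep only the first claim for any partition
     * that is claimed by more than one member.
     */
    static Map<String, List<TopicPartition>> dropDuplicateClaims(
            Map<String, List<TopicPartition>> claimedByMember) {
        Set<TopicPartition> seen = new HashSet<>();
        Map<String, List<TopicPartition>> result = new LinkedHashMap<>();
        for (Map.Entry<String, List<TopicPartition>> e : claimedByMember.entrySet()) {
            List<TopicPartition> kept = new ArrayList<>();
            for (TopicPartition tp : e.getValue()) {
                if (seen.add(tp)) {   // first claim wins
                    kept.add(tp);
                }                     // later claims are dropped and the partition
                                      // would be reassigned normally
            }
            result.put(e.getKey(), kept);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<TopicPartition>> claims = new LinkedHashMap<>();
        claims.put("c1", Collections.singletonList(new TopicPartition("test", 0)));
        claims.put("c2", Collections.singletonList(new TopicPartition("test", 0)));
        System.out.println(dropDuplicateClaims(claims)); // {c1=[test-0], c2=[]}
    }
}
{code}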



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7141) kafka-consumer-group doesn't describe existing group

2018-07-10 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-7141.

Resolution: Not A Problem

> kafka-consumer-group doesn't describe existing group
> 
>
> Key: KAFKA-7141
> URL: https://issues.apache.org/jira/browse/KAFKA-7141
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.11.0.0, 1.0.1
>Reporter: Bohdana Panchenko
>Priority: Major
>
> I am running two consumers: akka-stream-kafka consumer with standard config 
> section as described in the 
> [https://doc.akka.io/docs/akka-stream-kafka/current/consumer.html] and  
> kafka-console-consumer.
> akka-stream-kafka consumer configuration looks like this
> akka.kafka.consumer {
>   kafka-clients {
>     group.id = "myakkastreamkafka-1"
>     enable.auto.commit = false
>   }
> }
>  
>  I am able to see both groups with the command
>  
>  *kafka-consumer-groups --bootstrap-server 127.0.0.1:9092 --list*
>  _Note: This will not show information about old Zookeeper-based consumers._
>  
>  _myakkastreamkafka-1_
>  _console-consumer-57171_
> I am able to view details about the console consumer group:
> *kafka-consumer-groups --describe --bootstrap-server 127.0.0.1:9092 --group 
> console-consumer-57171*
>  _Note: This will not show information about old Zookeeper-based consumers._
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> STREAM-TEST 0 0 0 0 consumer-1-6b928e07-196a-4322-9928-068681617878 /172.19.0.4 consumer-1
> But the command to describe my akka stream consumer gives me empty output:
> *kafka-consumer-groups --describe --bootstrap-server 127.0.0.1:9092 --group 
> myakkastreamkafka-1*
>  _Note: This will not show information about old Zookeeper-based consumers._
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
>  
> That is strange. Can you please check the issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7156) Deleting topics with long names can bring all brokers to unrecoverable state

2018-07-13 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-7156.

Resolution: Duplicate

> Deleting topics with long names can bring all brokers to unrecoverable state
> 
>
> Key: KAFKA-7156
> URL: https://issues.apache.org/jira/browse/KAFKA-7156
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Petr Pchelko
>Priority: Major
>
> Kafka limit for the topic name is 249 symbols, so creating a topic with a 
> name 248 symbol long is possible. However, when deleting the topic, Kafka 
> tries to rename the data directory for the topic to add some hash and 
> `-deleted` in the data directory, so that the resulting file name exceeds the 
> 255 symbol file name limit in most of the Unix file systems. This provokes a  
> java.nio.file.FileSystemException which in turn immediately shuts down all 
> the brokers. Further attemts to restart the broker fail with the same 
> exception. The only way to resurrect the cluster is to manually delete the 
> affected topic from zookeeper and from the disk on all the broker machines.
> Steps to reproduce:
> (Note: delete.topic.enable=true must be set in the config)
> {code:java}
> > kafka-topics.sh --zookeeper localhost:2181 --create --topic 
> > 
> >  --partitions 1 --replication-factor 1
> > kafka-topics.sh --zookeeper localhost:2181 --delete --topic 
> > aaa
>  {code}
> After these 2 commands executed all the brokers where this topic is 
> replicated immediately shut down with the following logs:
> {code:java}
> ERROR Error while renaming dir for 
> -0
>  in log dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
> java.nio.file.FileSystemException: 
> /tmp/kafka-logs/-0
>  -> 
> /tmp/kafka-logs/-0.093fd1e1728f438ea990cbad8a514b9f-delete:
>  File name too long
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:457)
> at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
> at java.nio.file.Files.move(Files.java:1395)
> ...
> Suppressed: java.nio.file.FileSystemException: 
> /tmp/kafka-logs/-0
>  -> 
> /tmp/kafka-logs/-0.093fd1e1728f438ea990cbad8a514b9f-delete:
>  File name too long
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
> at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
> at java.nio.file.Files.move(Files.java:1395)
> at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
> ... 23 more
> [2018-07-12 13:34:45,847] INFO [ReplicaM
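
The arithmetic behind the failure, as a small illustrative sketch (the 32-character hash suffix
format is taken from the rename target in the log above; 255 is the usual per-file-name limit on
Linux filesystems, and the topic-name length follows the report):

{code:java}
import java.util.Collections;

public class DeletedDirNameLengthSketch {
    public static void main(String[] args) {
        // A 248-character topic name, just under Kafka's 249-character limit.
        String topic = String.join("", Collections.nCopies(248, "a"));
        String logDir = topic + "-0";   // "<topic>-<partition>"
        // On delete the directory is renamed to "<dir>.<32-char-hash>-delete",
        // e.g. "...-0.093fd1e1728f438ea990cbad8a514b9f-delete" as in the log above.
        String renamed = logDir + "." + "093fd1e1728f438ea990cbad8a514b9f" + "-delete";
        System.out.println(logDir.length());   // 250 -> still under the 255 limit
        System.out.println(renamed.length());  // 290 -> exceeds the 255-character
                                               //        file-name limit, rename fails
    }
}
{code}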

[jira] [Resolved] (KAFKA-6717) TopicPartition Assigned twice to a consumer group for 2 consumer instances

2018-07-17 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-6717.

Resolution: Duplicate

Marking it as duplicate to keep all the discussion in the other JIRA.

> TopicPartition Assigned twice to a consumer group for 2 consumer instances 
> --
>
> Key: KAFKA-6717
> URL: https://issues.apache.org/jira/browse/KAFKA-6717
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1
>Reporter: Yuancheng PENG
>Priority: Major
>
> I'm using {{StickyAssignor}} for consuming more than 100 topics with a certain 
> pattern.
> There are 10 consumers with the same group id.
> I expected each topic-partition to be assigned to only one consumer instance. 
> However, some topic partitions are assigned twice, to 2 different 
> instances, hence the consumer group processes duplicate messages.
> {code:java}
> props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, 
> Collections.singletonList(StickyAssignor.class));
> KafkaConsumer c = new KafkaConsumer<>(props);
> c.subscribe(Pattern.compile(TOPIC_PATTERN), new 
> NoOpConsumerRebalanceListener());
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6055) Running tools on Windows fail due to misconfigured JVM config

2017-10-11 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6055:
--

 Summary: Running tools on Windows fail due to misconfigured JVM 
config
 Key: KAFKA-6055
 URL: https://issues.apache.org/jira/browse/KAFKA-6055
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian
Priority: Blocker
 Fix For: 1.0.0


This affects the current trunk and 1.0.0 RC0.

When running any of the Windows commands under {{bin/windows}} the following 
error is returned:

{code}
Missing +/- setting for VM option 'ExplicitGCInvokesConcurrent'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6075) Kafka cannot recover after an unclean shutdown on Windows

2017-10-17 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6075:
--

 Summary: Kafka cannot recover after an unclean shutdown on Windows
 Key: KAFKA-6075
 URL: https://issues.apache.org/jira/browse/KAFKA-6075
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.1
Reporter: Vahid Hashemian


Kafka cannot recover from an unclean shutdown of a broker on Windows. Steps to 
reproduce from a fresh build:
# Start zookeeper
# Start a broker
# Create a topic {{test}}
# Do an unclean shutdown of broker (find the process id by {{wmic process where 
"caption = 'java.exe' and commandline like '%server.properties%'" get 
processid}}), then kill the process by {{taskkill /pid  /f}}
# Start the broker again

This leads to the following errors:
{code}
[2017-10-17 17:13:24,819] ERROR Error while loading log dir C:\tmp\kafka-logs 
(kafka.log.LogManager)
java.nio.file.FileSystemException: 
C:\tmp\kafka-logs\test-0\.timeindex: The process cannot 
access the file because it is being used by another process.

at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at 
sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:333)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:295)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:295)
at kafka.log.Log.loadSegments(Log.scala:404)
at kafka.log.Log.(Log.scala:201)
at kafka.log.Log$.apply(Log.scala:1729)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:221)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$8$$anonfun$apply$16$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:292)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2017-10-17 17:13:24,819] ERROR Error while deleting the clean shutdown file in 
dir C:\tmp\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: 
C:\tmp\kafka-logs\test-0\.timeindex: The process cannot 
access the file because it is being used by another process.

at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at 
sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:333)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:295)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:295)
at kafka.log.Log.loadSegments(Log.scala:404)
at kafka.log.Log.(Log.scala:201)
at kafka.log.Log$.apply(Log.scala:1729)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:221)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$8$$anonfun$apply$16$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:292)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
at 
java.util.concurrent.Executors$RunnableAdapter.call(

[jira] [Created] (KAFKA-6100) Streams quick start crashes Java on Windows

2017-10-20 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6100:
--

 Summary: Streams quick start crashes Java on Windows 
 Key: KAFKA-6100
 URL: https://issues.apache.org/jira/browse/KAFKA-6100
 Project: Kafka
  Issue Type: Bug
  Components: streams
 Environment: Windows 10 VM
Reporter: Vahid Hashemian
 Attachments: Screen Shot 2017-10-20 at 11.53.14 AM.png

*This issue was detected in 1.0.0 RC2.*

The following step in streams quick start crashes Java on Windows 10:
{{bin/kafka-run-class.sh 
org.apache.kafka.streams.examples.wordcount.WordCountDemo}}

I tracked this down to [this 
change|https://github.com/apache/kafka/commit/196bcfca0c56420793f85514d1602bde564b0651#diff-6512f838e273b79676cac5f72456127fR67],
 and it seems the new version of RocksDB is to blame. I tried the quick start 
with the previous version of RocksDB (5.7.3) and did not run into this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6110) Warning when running the broker on Windows

2017-10-23 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6110:
--

 Summary: Warning when running the broker on Windows
 Key: KAFKA-6110
 URL: https://issues.apache.org/jira/browse/KAFKA-6110
 Project: Kafka
  Issue Type: Bug
Reporter: Vahid Hashemian
Priority: Minor


The following warning appears in the broker log at startup:
{code}
[2017-10-23 15:29:49,370] WARN Error processing 
kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
 (com.yammer.metrics.reporting.JmxReporter)
javax.management.MalformedObjectNameException: Invalid character ':' in value 
part of property
at javax.management.ObjectName.construct(ObjectName.java:618)
at javax.management.ObjectName.(ObjectName.java:1382)
at 
com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
at 
com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
at 
com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
at 
com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
at 
kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
at kafka.log.LogManager.newGauge(LogManager.scala:50)
at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.log.LogManager.(LogManager.scala:116)
at kafka.log.LogManager$.apply(LogManager.scala:799)
at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
at 
kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:92)
at kafka.Kafka.main(Kafka.scala)
{code}
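
For context on why the value is rejected (illustrative only, not a proposed patch): a JMX
ObjectName property value may not contain ':' unless it is quoted, which is exactly what the
Windows drive-letter path trips over. A minimal sketch:

{code:java}
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class ObjectNameQuotingSketch {
    public static void main(String[] args) throws MalformedObjectNameException {
        String logDir = "C:\\tmp\\kafka-logs";

        // Unquoted: the ':' inside the property value is illegal, so this throws
        // MalformedObjectNameException, which is what the warning above wraps.
        try {
            new ObjectName("kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=" + logDir);
        } catch (MalformedObjectNameException e) {
            System.out.println("rejected: " + e.getMessage());
        }

        // Quoted per the JMX ObjectName rules, the same value is accepted.
        ObjectName ok = new ObjectName(
                "kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory="
                        + ObjectName.quote(logDir));
        System.out.println("accepted: " + ok);
    }
}
{code}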



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6110) Warning when running the broker on Windows

2017-11-07 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-6110.

   Resolution: Duplicate
Fix Version/s: 1.1.0

> Warning when running the broker on Windows
> --
>
> Key: KAFKA-6110
> URL: https://issues.apache.org/jira/browse/KAFKA-6110
> Project: Kafka
>  Issue Type: Bug
> Environment: Windows 10 VM
>    Reporter: Vahid Hashemian
>Priority: Minor
> Fix For: 1.1.0
>
>
> *This issue exists in 1.0.0-RC2.*
> The following warning appears in the broker log at startup:
> {code}
> [2017-10-23 15:29:49,370] WARN Error processing 
> kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
>  (com.yammer.metrics.reporting.JmxReporter)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at javax.management.ObjectName.construct(ObjectName.java:618)
> at javax.management.ObjectName.(ObjectName.java:1382)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at 
> com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
> at 
> kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
> at kafka.log.LogManager.newGauge(LogManager.scala:50)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at kafka.log.LogManager.(LogManager.scala:116)
> at kafka.log.LogManager$.apply(LogManager.scala:799)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-7993) Revert/Update the fix for KAFKA-7937 when coordinator lookup retries is implemented

2019-02-24 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-7993:
--

 Summary: Revert/Update the fix for KAFKA-7937 when coordinator 
lookup retries is implemented 
 Key: KAFKA-7993
 URL: https://issues.apache.org/jira/browse/KAFKA-7993
 Project: Kafka
  Issue Type: Improvement
Reporter: Vahid Hashemian


Since the new {{AdminClient}} API does not support coordinator lookup retries, 
[KAFKA-7937|https://issues.apache.org/jira/browse/KAFKA-7937] improved some 
unit tests by adding a wait until the coordinator is available. 
[KAFKA-6789|https://issues.apache.org/jira/browse/KAFKA-6789] is an open ticket 
to add the retry to the new API. Once that is implemented the unit test fix 
should be reverted / updated accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7962) StickyAssignor: throws NullPointerException during assignments if topic is deleted

2019-02-27 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-7962.

   Resolution: Fixed
 Reviewer: Vahid Hashemian
Fix Version/s: 2.3.0

> StickyAssignor: throws NullPointerException during assignments if topic is 
> deleted
> --
>
> Key: KAFKA-7962
> URL: https://issues.apache.org/jira/browse/KAFKA-7962
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.1.0
> Environment: 1. MacOS, com.salesforce.kafka.test.KafkaTestUtils (kind 
> of embedded kafka integration tests)
> 2. Linux, dockerised kafka and our service
>Reporter: Oleg Smirnov
>Assignee: huxihx
>Priority: Major
> Fix For: 2.3.0
>
> Attachments: NPE-StickyAssignor-issues.apache.log
>
>
> Integration tests with com.salesforce.kafka.test.KafkaTestUtils, local 
> setup, StickyAssignor used; local topics are created/removed, and one topic is 
> created at the beginning of the test and then deleted without unsubscribing from it.
> Same happens in real environment.
>  
>  # have single "topic" with 1 partition
>  # single consumer subscribed to this "topic" (StickyAssignor)
>  # delete "topic"
> =>
>  * rebalance starts, topic partition(s) is revoked
>  * on assignment StickyAssignor throws exception (line 223), because 
> partitionsPerTopic.get("topic") returns null in the for loop (topic deleted - no 
> partitions are present)
>  
> In the provided log part, tearDown() causes topic deletion while the consumer 
> is still running and tries to poll data from the topic.
> RangeAssignor works fine (revokes the partition, assigns an empty set).
> The problem has no workaround (like handling it in onPartitionsAssigned and 
> removing the unsubscribed topic), because everything happens before the listener is called.
>  
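
A minimal sketch of the kind of null guard the assignment path needs (illustrative, not the
actual StickyAssignor fix): skip topics whose metadata has disappeared instead of dereferencing
a null partition count.

{code:java}
import org.apache.kafka.common.TopicPartition;
import java.util.*;

public class DeletedTopicGuardSketch {
    /**
     * Expand a subscription into TopicPartitions, skipping topics whose
     * metadata has disappeared (e.g. the topic was deleted mid-rebalance)
     * instead of hitting a NullPointerException.
     */
    static List<TopicPartition> partitionsFor(Map<String, Integer> partitionsPerTopic,
                                              Collection<String> subscribedTopics) {
        List<TopicPartition> result = new ArrayList<>();
        for (String topic : subscribedTopics) {
            Integer numPartitions = partitionsPerTopic.get(topic);
            if (numPartitions == null) {
                continue; // topic deleted: nothing to assign, don't NPE
            }
            for (int p = 0; p < numPartitions; p++) {
                result.add(new TopicPartition(topic, p));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> meta = new HashMap<>();   // "topic" already deleted
        System.out.println(partitionsFor(meta, Collections.singletonList("topic"))); // []
    }
}
{code}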



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-05-03 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-7946.

   Resolution: Fixed
Fix Version/s: 2.2.1

> Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
> ---
>
> Key: KAFKA-7946
> URL: https://issues.apache.org/jira/browse/KAFKA-7946
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1, 2.2.2
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.NullPointerException at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-05-03 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-8289.

Resolution: Fixed

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results].
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session time window.
> time ticker A -> (4,A) / 25s,
> time ticker B -> (4, B) / 25s  all send to the same topic 
> below is my stream app code 
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{here is my log print}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-

[jira] [Assigned] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-12 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-5434:
--

Assignee: Vahid Hashemian

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>    Assignee: Vahid Hashemian
>
> Hi,
> if I specify the --partition option for the console consumer with a partition 
> that does not exist for the topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata, but when it receives the topic 
> information and doesn't find the requested partition in that metadata, 
> the code simply retries.
> Would it be worth checking whether the partition exists, using the partitionFor 
> method, before calling assign in the seek of the BaseConsumer, and throwing 
> an exception so an error is printed on the console?
> Thanks,
> Paolo
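
A small sketch of the check Paolo suggests, using the public consumer API (the method and
exception choices here are illustrative; the console consumer's own code path differs):

{code:java}
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import java.util.Collections;
import java.util.List;

public class PartitionExistenceCheckSketch {
    /**
     * Fail fast instead of retrying metadata forever when the requested
     * partition does not exist for the topic.
     */
    static void assignOrFail(KafkaConsumer<?, ?> consumer, String topic, int partition) {
        List<PartitionInfo> partitions = consumer.partitionsFor(topic);
        boolean exists = partitions != null
                && partitions.stream().anyMatch(p -> p.partition() == partition);
        if (!exists) {
            throw new IllegalArgumentException(
                    "Partition " + partition + " does not exist for topic " + topic);
        }
        consumer.assign(Collections.singletonList(new TopicPartition(topic, partition)));
    }
}
{code}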



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2017-06-12 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047074#comment-16047074
 ] 

Vahid Hashemian commented on KAFKA-3129:


[~pmishra01] I tried this on Ubuntu, Windows 7, and Windows 10 but was not able 
to reproduce it after a few tries.
Please note that the default {{acks}} value has changed from 0 to 1 based on 
[this PR|https://github.com/apache/kafka/pull/1795]. So if you'd like to try 
producing with {{acks=0}} you'd have to override the default.
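
For anyone re-running the experiment with the Java producer instead of the console tool, a
hedged sketch of the equivalent acks=0 override (topic name and message scheme follow the
report; everything else here is illustrative):

{code:java}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class AcksZeroSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // acks=0: fire-and-forget, no broker acknowledgement; this overrides the
        // default of 1 mentioned above and matches what the original report exercised.
        props.put(ProducerConfig.ACKS_CONFIG, "0");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 1; i <= 1_000_000; i++) {
                producer.send(new ProducerRecord<>("test", Integer.toString(i)));
            }
        } // close() flushes outstanding sends
    }
}
{code}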

> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Dustin Cote
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. About half the time it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-13 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16048030#comment-16048030
 ] 

Vahid Hashemian commented on KAFKA-5434:


[~ppatierno] Sure. Feel free to assign the JIRA to yourself.

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>    Assignee: Vahid Hashemian
>
> Hi,
> if I specify the --partition option for the console consumer with a partition 
> that does not exist for the topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata, but when it receives the topic 
> information and doesn't find the requested partition in that metadata, 
> the code simply retries.
> Would it be worth checking whether the partition exists, using the partitionFor 
> method, before calling assign in the seek of the BaseConsumer, and throwing 
> an exception so an error is printed on the console?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-13 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16048048#comment-16048048
 ] 

Vahid Hashemian commented on KAFKA-5434:


Yeah, I couldn't assign it to you either. You can take over the JIRA whenever 
you get access.

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Vahid Hashemian
>
> Hi,
> if I specify the --partition option for the console consumer with a partition 
> that does not exist for the topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata, but when it receives the topic 
> information and doesn't find the requested partition in that metadata, 
> the code simply retries.
> Would it be worth checking whether the partition exists, using the partitionFor 
> method, before calling assign in the seek of the BaseConsumer, and throwing 
> an exception so an error is printed on the console?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4585) Offset fetch and commit requests use the same permissions

2017-06-13 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4585:
---
Labels: kip  (was: needs-kip)

> Offset fetch and commit requests use the same permissions
> -
>
> Key: KAFKA-4585
> URL: https://issues.apache.org/jira/browse/KAFKA-4585
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Vahid Hashemian
>  Labels: kip
>
> Currently the handling of permissions for consumer groups seems a bit odd 
> because most of the requests use the Read permission on the Group (join, 
> sync, heartbeat, leave, offset commit, and offset fetch). This means you 
> cannot lock down certain functionality for certain users. For this issue I'll 
> highlight a realistic issue since conflating the ability to perform most of 
> these operations may not be a serious issue.
> In particular, if you want tooling for monitoring offsets (i.e. you want to 
> be able to read from all groups) but don't want that tool to be able to write 
> offsets, you currently cannot achieve this. Part of the reason this seems odd 
> to me is that any operation which can mutate state seems like it should be a 
> Write operation (i.e. joining, syncing, leaving, and committing; maybe 
> heartbeat as well). However, [~hachikuji] has mentioned that the use of Read 
> may have been intentional. If that is the case, changing at least offset 
> fetch to be a Describe operation instead would allow isolating the mutating 
> vs non-mutating request types.
> Note that this would require a KIP and would potentially have some 
> compatibility implications. Note however, that if we went with the Describe 
> option, Describe is allowed by default when Read, Write, or Delete are 
> allowed, so this may not have to have any compatibility issues (if the user 
> previously allowed Read, they'd still have all the same capabilities as 
> before).
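
For illustration of the monitoring use case above, a hedged sketch using the Java Admin client
(which post-dates this ticket) to grant only Describe on a group to a read-only principal; the
principal and group names are made up. Under the proposed change, such a grant would be enough
to fetch offsets but not to commit them.

{code:java}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;
import java.util.Collections;
import java.util.Properties;

public class MonitorOnlyAclSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Allow a monitoring principal to Describe a group without granting the
            // Read permission that joining/committing requires -- the separation
            // this ticket asks for.
            AclBinding describeOnly = new AclBinding(
                    new ResourcePattern(ResourceType.GROUP, "my-group", PatternType.LITERAL),
                    new AccessControlEntry("User:offset-monitor", "*",
                            AclOperation.DESCRIBE, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singletonList(describeOnly)).all().get();
        }
    }
}
{code}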



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Work started] (KAFKA-5370) Replace uses of old consumer with the new consumer

2017-06-13 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5370 started by Vahid Hashemian.
--
> Replace uses of old consumer with the new consumer 
> ---
>
> Key: KAFKA-5370
> URL: https://issues.apache.org/jira/browse/KAFKA-5370
> Project: Kafka
>  Issue Type: Improvement
>    Reporter: Vahid Hashemian
>    Assignee: Vahid Hashemian
>Priority: Minor
>
> Where possible, use the new consumer in tools and tests instead of the old 
> consumer, and remove the deprecation warning.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5348) kafka-consumer-groups.sh refuses to remove groups without ids

2017-06-13 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16048486#comment-16048486
 ] 

Vahid Hashemian commented on KAFKA-5348:


[~bobrik] In the scenario you described I assume some consumer id exists under 
the {{/ids}} path. By design, the consumer group (for old consumers) can be 
deleted only if there is no active consumer in the group. There is an active 
consumer in the group iff the path {{/ids}} exists for this group and 
there are consumer ids inside it. If this is not what you're 
experiencing please advise and perhaps provide steps to reproduce. Thanks.
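
A small illustrative sketch of the membership check described above for old (ZooKeeper-based)
groups; the path follows the layout in the report and everything else is made up.

{code:java}
import org.apache.zookeeper.ZooKeeper;

public class ZkGroupLivenessSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative only: mirrors the check described above for old
        // (ZooKeeper-based) consumer groups.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        String idsPath = "/kafka/logs/consumers/console-consumer-4107/ids";
        boolean active = zk.exists(idsPath, false) != null
                && !zk.getChildren(idsPath, false).isEmpty();
        System.out.println(active
                ? "group has live members; deletion is refused"
                : "no live members; group is eligible for deletion");
        zk.close();
    }
}
{code}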


> kafka-consumer-groups.sh refuses to remove groups without ids
> -
>
> Key: KAFKA-5348
> URL: https://issues.apache.org/jira/browse/KAFKA-5348
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.2.0
>Reporter: Ivan Babrou
>Assignee: Vahid Hashemian
>
> In zookeeper I have:
> {noformat}
> [zk: foo(CONNECTED) 37] ls /kafka/logs/consumers/console-consumer-4107
> [offsets]
> {noformat}
> This consumer group also shows up when I list consumer groups:
> {noformat}
> $ /usr/local/kafka/bin/kafka-consumer-groups.sh --zookeeper 
> foo:2181/kafka/logs --list | fgrep console-consumer-4107
> Note: This will only show information about consumers that use ZooKeeper (not 
> those using the Java consumer API).
> console-consumer-4107
> {noformat}
> But I cannot remove this group:
> {noformat}
> $ /usr/local/kafka/bin/kafka-consumer-groups.sh --zookeeper 
> 36zk1.in.pdx.cfdata.org:2181/kafka/logs --delete --group console-consumer-4107
> Note: This will only show information about consumers that use ZooKeeper (not 
> those using the Java consumer API).
> Error: Delete for group 'console-consumer-4107' failed because group does not 
> exist.
> {noformat}
> I ran tcpdump and it turns out that /ids path is checked:
> {noformat}
> $.e.P.fP...&..<...//kafka/logs/consumers/console-consumer-4107/ids.
> {noformat}
> I think kafka should not check for /ids, it should check for / instead here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

