[GitHub] kafka pull request #3999: MINOR: Add attributes `processedKeys` and `process...

2017-10-02 Thread mewwts
GitHub user mewwts opened a pull request:

https://github.com/apache/kafka/pull/3999

MINOR: Add attributes `processedKeys` and `processedValues` to 
MockProcessorSupplier

This would allow for easier testing of topologies using the following 
pattern:
```Scala
// in Scala
val builder = new KStreamBuilder
val stream: KStream[K, V] = builder.stream(KSerde, VSerde, topic)

val processedStream: KStream[K, VR] = createTopology(stream, builder)

val processorSupplier = new MyMockProcessorSupplier[K, VR]
processedStream.process(processorSupplier)

val streamDriver = new MyKStreamTestDriver(builder, 
TestUtils.tempDirectory())
streamDriver.setTime(0L)

streamDriver.process(topic, somethingK, somethingV)
streamDriver.flushState()

val results = (processorSupplier.processedKeys zip 
processorSupplier.processedValues).toMap
results(expectedK) should be(expectedVR)
```
This works without breaking any existing tests that rely on the `processed` 
`ArrayList`. Of course it's not as elegant as rewriting the logic here, as 
we're (almost) duplicating the information in the `processed` array.
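
For reference, a minimal sketch (the processor shape is simplified and the class name is hypothetical; only the attribute names come from this PR) of how the new attributes could sit alongside the existing `processed` list:
```java
import java.util.ArrayList;
import java.util.List;

import org.apache.kafka.streams.processor.AbstractProcessor;

// Hypothetical mock processor illustrating the proposed attributes.
public class MyMockProcessor<K, V> extends AbstractProcessor<K, V> {

    public final List<String> processed = new ArrayList<>();   // existing behaviour
    public final List<K> processedKeys = new ArrayList<>();     // proposed
    public final List<V> processedValues = new ArrayList<>();   // proposed

    @Override
    public void process(final K key, final V value) {
        processed.add(key + ":" + value);  // existing string-based bookkeeping is kept
        processedKeys.add(key);            // new: raw keys, usable directly in assertions
        processedValues.add(value);        // new: raw values
    }
}
```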

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mewwts/kafka add-processed-keys-and-values

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3999.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3999


commit ab8d7071ffbf7231f5bc82265f654616acd8b483
Author: Mats Julian Olsen 
Date:   2017-10-02T07:04:21Z

Add attributes `processedKeys` and `processedValues`

to MockProcessorSupplier.




---


[jira] [Created] (KAFKA-6000) streams 0.10.2.1 - kafka 0.11.0.1 state restore not working

2017-10-02 Thread Bart Vercammen (JIRA)
Bart Vercammen created KAFKA-6000:
-

 Summary: streams 0.10.2.1 - kafka 0.11.0.1 state restore not 
working
 Key: KAFKA-6000
 URL: https://issues.apache.org/jira/browse/KAFKA-6000
 Project: Kafka
  Issue Type: Bug
  Components: core, streams
Affects Versions: 0.11.0.0, 0.10.2.1
Reporter: Bart Vercammen
Priority: Blocker


Potential interop issue between Kafka Streams (0.10.2.1) and Kafka (0.11.0.1)

{noformat}
11:24:16.416 [StreamThread-3] DEBUG rocessorStateManager - task [0_2] 
Registering state store lateststate to its state manager 
11:24:16.472 [StreamThread-3] TRACE rocessorStateManager - task [0_2] Restoring 
state store lateststate from changelog topic scratch.lateststate.dsh 
11:24:16.472 [StreamThread-3] DEBUG  o.a.k.c.c.i.Fetcher - Resetting offset for 
partition scratch.lateststate.dsh-2 to latest offset. 
11:24:16.472 [StreamThread-3] DEBUG  o.a.k.c.c.i.Fetcher - Partition 
scratch.lateststate.dsh-2 is unknown for fetching offset, wait for metadata 
refresh 
11:24:16.474 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Sending 
ListOffsetRequest (type=ListOffsetRequest, replicaId=-1, 
partitionTimestamps={scratch.lateststate.dsh-2=-1}, minVersion=0) to broker 
broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.476 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Received 
ListOffsetResponse 
{responses=[{topic=scratch.lateststate.dsh,partition_responses=[{partition=2,error_code=0,timestamp=-1,offset=1773763}]}]}
 from broker broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.476 [StreamThread-3] DEBUG  o.a.k.c.c.i.Fetcher - Handling 
ListOffsetResponse response for scratch.lateststate.dsh-2. Fetched offset 
1773763, timestamp -1 
11:24:16.477 [StreamThread-3] DEBUG  o.a.k.c.c.i.Fetcher - Resetting offset for 
partition scratch.lateststate.dsh-2 to earliest offset. 
11:24:16.478 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Sending 
ListOffsetRequest (type=ListOffsetRequest, replicaId=-1, 
partitionTimestamps={scratch.lateststate.dsh-2=-2}, minVersion=0) to broker 
broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.480 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Received 
ListOffsetResponse 
{responses=[{topic=scratch.lateststate.dsh,partition_responses=[{partition=2,error_code=0,timestamp=-1,offset=0}]}]}
 from broker broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.481 [StreamThread-3] DEBUG  o.a.k.c.c.i.Fetcher - Handling 
ListOffsetResponse response for scratch.lateststate.dsh-2. Fetched offset 0, 
timestamp -1 
11:24:16.483 [StreamThread-3] DEBUG rocessorStateManager - restoring partition 
scratch.lateststate.dsh-2 from offset 0 to endOffset 1773763 
11:24:16.484 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Added fetch request 
for partition scratch.lateststate.dsh-2 at offset 0 to node 
broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.485 [StreamThread-3] DEBUG  o.a.k.c.c.i.Fetcher - Sending fetch for 
partitions [scratch.lateststate.dsh-2] to broker 
broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.486 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Skipping fetch for 
partition scratch.lateststate.dsh-2 because there is an in-flight request to 
broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.490 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Adding fetched 
record for partition scratch.lateststate.dsh-2 with offset 0 to buffered record 
list 
11:24:16.492 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Received 3 records 
in fetch response for partition scratch.lateststate.dsh-2 with offset 0 
11:24:16.493 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Returning fetched 
records at offset 0 for assigned partition scratch.lateststate.dsh-2 and update 
position to 1586527 
11:24:16.494 [StreamThread-3] DEBUG  o.a.k.c.c.i.Fetcher - Ignoring fetched 
records for scratch.lateststate.dsh-2 at offset 0 since the current position is 
1586527 
11:24:16.496 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Added fetch request 
for partition scratch.lateststate.dsh-2 at offset 1586527 to node 
broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.496 [StreamThread-3] DEBUG  o.a.k.c.c.i.Fetcher - Sending fetch for 
partitions [scratch.lateststate.dsh-2] to broker 
broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.498 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Skipping fetch for 
partition scratch.lateststate.dsh-2 because there is an in-flight request to 
broker-1.tt.kafka.marathon.mesos:9091 (id: 1002 rack: null) 
11:24:16.499 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Adding fetched 
record for partition scratch.lateststate.dsh-2 with offset 1586527 to buffered 
record list 
11:24:16.500 [StreamThread-3] TRACE  o.a.k.c.c.i.Fetcher - Received 0 records 
in fetch response for partition scratch.la

Re: [DISCUSS] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-10-02 Thread Tom Bentley
One question I have is about whether/how to scope throttling to a
reassignment. Currently throttles are only loosely associated with
reassignment: You can start a reassignment without any throttling, add
throttling to an in-flight reassignment, and remember/forget to remove
throttling after the reassignment is complete. There is great flexibility
in that, but also the risk that you forget to remove the throttle(s).

Just adding an API for setting the throttled rate makes this situation
worse: while it's nice to be able to auto-remove the throttled rate, what
about the config for the throttled replicas? Also, you might add a throttle
thinking a reassignment is in-flight when it has in fact just finished:
those throttles will then hang around until they are reset or until the end
of the next reassignment. For these reasons it would be good if the throttle
were more directly scoped to the reassignment.
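
For reference, here is a minimal sketch, assuming the AdminClient topic-config
API available since 0.11, of how the topic-level throttled-replicas lists can be
cleared today independently of any reassignment (the topic name is a
placeholder; the broker-level rate configs are not touched here):
```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ClearTopicThrottles {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // An empty list effectively removes the per-topic throttle; note that the
            // (non-incremental) alterConfigs call replaces the topic's dynamic config.
            Config cleared = new Config(Arrays.asList(
                    new ConfigEntry("leader.replication.throttled.replicas", ""),
                    new ConfigEntry("follower.replication.throttled.replicas", "")));
            admin.alterConfigs(Collections.singletonMap(topic, cleared)).all().get();
        }
    }
}
```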

On the other hand, taking LinkedIn's Cruise Control as an example, it
seems to modify the reassignment znode directly and incrementally, so
there is no notion of "the reassignment". Reassignments will be running
continuously, with partitions added before all of the current partitions
have completed. If there is no meaningful cluster-wide "reassignment" then
it would be better to remove the throttle by changing the list of
replicas as each replica catches up.

I'm interested in any use cases people can share on this, as I'd like the
throttle API to be useful for a broad range of use cases, rather than being
too narrowly focussed on what's needed by the existing CLI tools.

Thanks,

Tom




On 28 September 2017 at 17:22, Tom Bentley  wrote:

> I'm starting to think about KIP-179 again. In order to have more
> manageably-scoped KIPs and PRs I think it might be worth factoring-out the
> throttling part into a separate KIP. Wdyt?
>
> Keeping the throttling discussion in this thread for the moment...
>
> The throttling behaviour is currently spread across the
> `(leader|follower).replication.throttled.replicas` topic config and the
> `(leader|follower).replication.throttled.rate` dynamic broker config.
> It's not really clear to me exactly what "removing the throttle" is
> supposed to mean. I mean we could reset the rate to Long.MAX_VALUE or we
> could change the list of replicas to an empty list. The
> ReassignPartitionsCommand does both, but there is some small utility in
> leaving the rate, but clearing the list, if you've discovered the "right"
> rate for your cluster/workload and want it to be sticky for next time.
> Does anyone do this in practice?
>
> With regards to throttling, it would be
>>> worth thinking about a way where the throttling configs can be
>>> automatically removed without the user having to re-run the tool.
>>>
>>
>> Isn't that just a matter of updating the topic configs for
>> (leader|follower).replication.throttled.replicas at the same time we
>> remove the reassignment znode? That leaves open the question about whether
>> to reset the rates at the same time.
>>
>
> Thinking some more about my "update the configs at the same time we remove
> the reassignment znode" suggestion. The reassignment znode is persistent,
> so the reassignment will survive a zookeeper restart. If there was a flag
> for the auto-removal of the throttle it would likewise need to be
> persistent. Otherwise a ZK restart would remember the reassignment, but
> forget about the preference for auto removal of throttles. So, we would use
> a persistent znode (a child of the reassignment path, perhaps) to store a
> flag for throttle removal.
>
> Thoughts?
>
> Cheers,
>
> Tom
>


[GitHub] kafka pull request #4000: KAFKA-5445: Document exceptions thrown by AdminCli...

2017-10-02 Thread adyach
GitHub user adyach opened a pull request:

https://github.com/apache/kafka/pull/4000

KAFKA-5445: Document exceptions thrown by AdminClient methods

Exceptions are processed internally in KafkaAdminClient without being thrown 
to the client code, hence the exceptions are documented in an 
unusual way.
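
For illustration, a minimal sketch of where these exceptions actually surface: 
the AdminClient methods return futures, and errors such as TopicExistsException 
arrive as the cause of an ExecutionException (topic name and settings below are 
placeholders):
```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public class CreateTopicErrorHandling {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("my-topic", 3, (short) 1);
            try {
                // The call itself does not throw; failures complete the returned futures.
                admin.createTopics(Collections.singleton(topic)).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TopicExistsException) {
                    System.out.println("Topic already exists: " + e.getCause().getMessage());
                } else {
                    throw e;
                }
            }
        }
    }
}
```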

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/adyach/kafka KAFKA-5445

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4000.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4000


commit 5dfc375469608cbe997e40377c08c0489a12c1f0
Author: Andrey Dyachkov 
Date:   2017-09-28T16:04:31Z

kafka-5445: create topic exception description

commit 29c626bc75b1e981dae0b82061941d6c252d5760
Author: Andrey Dyachkov 
Date:   2017-10-02T12:45:20Z

kafka-5445: rest of the commands description




---


[DISCUSS] URIs on Producer and Consumer

2017-10-02 Thread Clebert Suconic
At ActiveMQ and ActiveMQ Artemis, ConnectionFactories have an
interesting feature where you can pass parameters through a URI.

I was looking at the Producer and Consumer APIs, and these two classes use
an approach that I considered dated back on Artemis, as it resembles HornetQ:

Instead of passing a Properties (aka HashMaps), users would be able to
create a Consumer or Producer by simply doing:

new Consumer("tcp::/host:port?properties=values;properties=values...etc");

Example:


Instead of the following:

Map<String, Object> config = new HashMap<>();
config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:");
config.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, -2);
new KafkaConsumer<>(config, new ByteArrayDeserializer(), new
ByteArrayDeserializer());



Someone could do

new KafkaConsumer<>("tcp://localhost:?receive.buffer.bytes=-2",
new ByteArrayDeserializer(), new ByteArrayDeserializer());



I don't know if that little API improvement would be welcomed. I would be
able to send a Pull Request, but I don't want to do it if it wouldn't
be welcomed in the first place.


Just an idea...  let me know if that is welcomed or not.

If so I can forward the discussion into how I would implement it.
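
To make the idea concrete, here is a rough sketch of how such a constructor
could map a URI onto the existing Properties-based configuration. The helper
class and its parsing rules are purely hypothetical, not an existing Kafka API:
```java
import java.net.URI;
import java.util.Properties;

public final class UriConfig {

    // Hypothetical: "tcp://host:port?key=value&key=value" -> client Properties.
    public static Properties fromUri(String uriString) {
        URI uri = URI.create(uriString);
        Properties props = new Properties();
        props.put("bootstrap.servers", uri.getHost() + ":" + uri.getPort());
        String query = uri.getQuery();
        if (query != null && !query.isEmpty()) {
            for (String pair : query.split("[&;]")) {   // accept '&' or ';' separators
                String[] kv = pair.split("=", 2);
                props.put(kv[0], kv.length > 1 ? kv[1] : "");
            }
        }
        return props;
    }
}
```
A consumer could then be built with something like
new KafkaConsumer<>(UriConfig.fromUri("tcp://localhost:9092?receive.buffer.bytes=-2"),
new ByteArrayDeserializer(), new ByteArrayDeserializer()) (the port here is an
arbitrary placeholder).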


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-10-02 Thread Tom Bentley
Hi All,

I've updated KIP-201 again so there is now a single policy interface (and
thus a single key by which to configure it) for topic creation,
modification, deletion and record deletion, which each have their own
validation method.

There are still a few loose ends:

1. I currently propose validateAlterTopic(), but it would be possible to be
more fine-grained about this: validateAlterConfig(), validateAddPartitions()
and validateReassignPartitions(), for example (see the sketch after this
list). Obviously this results in a policy method per operation, and makes it
clearer what is being changed. I guess the downside is that it's more work
for the implementer, and potentially makes it harder to change the interface
in the future.

2. A couple of TODOs about what the TopicState interface should return when
a topic's partitions are being reassigned.
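
To make point 1 concrete, here is a rough sketch of what a single policy
interface with one validation method per operation could look like; all names
here are illustrative only, not the KIP's proposed API:
```java
import java.util.Map;

import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.errors.PolicyViolationException;

// Illustrative only: one policy class, one config key, one method per operation.
public interface TopicManagementPolicy extends Configurable, AutoCloseable {

    // Minimal view of the request being validated; a real interface would also
    // expose the current topic state, requested configs, assignments, etc.
    interface RequestMetadata {
        String topic();
        Map<String, String> configs();
    }

    void validateCreateTopic(RequestMetadata request) throws PolicyViolationException;

    void validateAlterConfig(RequestMetadata request) throws PolicyViolationException;

    void validateAddPartitions(RequestMetadata request) throws PolicyViolationException;

    void validateReassignPartitions(RequestMetadata request) throws PolicyViolationException;

    void validateDeleteTopic(RequestMetadata request) throws PolicyViolationException;

    void validateDeleteRecords(RequestMetadata request) throws PolicyViolationException;
}
```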

Your thoughts on these or any other points are welcome.

Thanks,

Tom

On 27 September 2017 at 11:45, Paolo Patierno  wrote:

> Hi Ismael,
>
>
>   1.  I don't have a real requirement now, but "deleting" is an operation
> that could be really dangerous, so it's always better to have a way to
> exert more control over it. I know that we have the authorizer for
> that (delete on topic), but fine-grained control could be better (as
> already happens for topic deletion).
>   2.  I know about the problem of restarting brokers due to changes in
> policies, but what do you mean by doing that on the clients?
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: isma...@gmail.com  on behalf of Ismael Juma <
> ism...@juma.me.uk>
> Sent: Wednesday, September 27, 2017 10:30 AM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
>
> A couple of questions:
>
> 1. Is this a concrete requirement from a user or is it hypothetical?
> 2. You sure you would want to do this in the broker instead of the clients?
> It's worth remembering that updating broker policies involves a rolling
> restart of the cluster, so it's not the right place for things that change
> frequently.
>
> Ismael
>
> On Wed, Sep 27, 2017 at 11:26 AM, Paolo Patierno 
> wrote:
>
> > Hi Ismael,
> >
> > regarding motivations for delete records, as I said during the discussion
> > on KIP-204, it gives the possibility to avoid deleting messages for
> > specific partitions (inside the topic) starting from a specific offset.
> > I could think of some user solutions where they know exactly what the
> > partitions mean in a specific topic (because they are using a custom
> > partitioner on the producer side), so they know what kind of messages are
> > inside a partition, allowing them to delete those but not the others. With
> > such a policy a user could also check the timestamp related to the offset
> > to allow or deny deletion on a time basis.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: isma...@gmail.com  on behalf of Ismael Juma <
> > ism...@juma.me.uk>
> > Sent: Wednesday, September 27, 2017 10:18 AM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
> >
> > A couple more comments:
> >
> > 1. "If this KIP is accepted for Kafka 1.1.0 this removal could happen in
> > Kafka 1.2.0 or a later release." -> we only remove code in major
> releases.
> > So, if it's deprecated in 1.1.0, it would be removed in 2.0.0.
> >
> > 2. Deleting all messages in a topic is not really the same as deleting a
> > topic. The latter will cause consumers and producers to error out while
> the
> > former will not. It would be good to motivate the need for the delete
> > records policy more.
> >
> > Ismael
> >
> > On Wed, Sep 27, 2017 at 11:12 AM, Ismael Juma  wrote:
> >
> > > Another quick comment: the KIP states that having multiple interfaces
> > > implies that the logic must be in 2 places. That is not true because the
> > same
> > > class can implement multiple interfaces (this aspect was considered
> when
> > we
> > > decided to introduce policies incrementally).
> > >
> > > The main reason why I think the original approach doesn't work well is
> > > that there is no direct mapping between an operation and the policy.
> That
> > > is, we initially thought we would have create/alter/delete topics, but
> > that
> > > didn't work out as the alter case is better served by multiple request
> > > types. Given that, it's a bit awkward to maintain the original approach
> > and
> >

[GitHub] kafka-site pull request #77: MINOR: Add streams child topics to left-hand na...

2017-10-02 Thread joel-hamill
Github user joel-hamill commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/77#discussion_r142175889
  
--- Diff: includes/_nav.htm ---
@@ -11,6 +11,12 @@
 getting started
 APIs
 kafka streams
+
--- End diff --

@guozhangwang here you go: 

![image](https://user-images.githubusercontent.com/11722533/31085984-7244dd88-a74d-11e7-919d-a3130ea918ac.png)



---


[jira] [Created] (KAFKA-6001) Remove <Bytes, byte[]> from usages of Materialized in Streams

2017-10-02 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-6001:
-

 Summary: Remove <Bytes, byte[]> from usages of Materialized in 
Streams
 Key: KAFKA-6001
 URL: https://issues.apache.org/jira/browse/KAFKA-6001
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Damian Guy
Assignee: Damian Guy
 Fix For: 1.0.0


We can remove `<Bytes, byte[]>` from usages of `Materialized` in the DSL. This 
will make the API a little nicer to work with. `<Bytes, byte[]>` is already 
enforced.
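
For illustration, a hedged sketch of the kind of call-site noise this targets, 
assuming the 1.0.0 DSL (topic and store names are placeholders):
{code}
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> input = builder.stream("input-topic");

// Today: the store type parameters are often spelled out at the call site.
KTable<String, Long> before = input.groupByKey()
        .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-before"));

// Nicer: the <Bytes, byte[]> bound is already implied by the method signature,
// so the explicit type witness can be dropped.
KTable<String, Long> after = input.groupByKey()
        .count(Materialized.as("counts-after"));
{code}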



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4001: KAFKA-6001: remove <Bytes, byte[]> from Materializ...

2017-10-02 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/4001

KAFKA-6001: remove <Bytes, byte[]> from Materialized usages

Make the API simpler by removing `<Bytes, byte[]>` from usages of 
`Materialized`. This is already enforced by 
`Materialized.as(KeyValueBytesStore)` etc.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka remove-types-from-materialized

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4001.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4001


commit ccb10351f890e36f2fbf92b1bb75ae15143028c0
Author: Damian Guy 
Date:   2017-10-02T15:58:03Z

remove <Bytes, byte[]> from Materialized usages




---


[GitHub] kafka pull request #4002: KAFKA-5989: resume consumption of tasks that have ...

2017-10-02 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/4002

KAFKA-5989: resume consumption of tasks that have state stores but no 
changelogging

Stores where logging is disabled were never consumed, as the partitions 
were paused but never resumed.
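
For context, a minimal sketch (assuming the 1.0.0 DSL; topic and store names are 
placeholders) of the kind of setup affected, i.e. a task whose state store has 
changelogging disabled:
```java
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class LoggingDisabledStore {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // A table backed by a store with logging disabled: no changelog topic is
        // created, yet the task's partitions still go through the restoration
        // pause/resume path that this fix addresses.
        KTable<String, String> table = builder.table("compacted-topic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("latest-values")
                        .withLoggingDisabled());
    }
}
```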

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka restore

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4002


commit 1e1dec0db90f6d0a28a4d6b497593e22523f4380
Author: Damian Guy 
Date:   2017-10-02T17:37:44Z

resume consumption of tasks that have state stores but no changelogging




---


[jira] [Created] (KAFKA-6002) Kafka Connect Transform transforming JSON string into actual object

2017-10-02 Thread Edvard Poliakov (JIRA)
Edvard Poliakov created KAFKA-6002:
--

 Summary: Kafka Connect Transform transforming JSON string into 
actual object
 Key: KAFKA-6002
 URL: https://issues.apache.org/jira/browse/KAFKA-6002
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Edvard Poliakov
Priority: Minor


My colleague and I have been working on a new Transform that takes a JSON 
string and transforms it into an actual object, like this:

{code} 
{
  "a" : "{\"b\": 23}"
}
{code}
into
{code}
{
  "a" : {
   "b" : 23
  }
}
{code}

There is no robust way of building a Schema from a JSON object itself, as it 
can be something like an empty array or a null, which doesn't provide any info 
about the schema of the object. So I see two options here.

1. Have the transform take the schema as a transform parameter (a rough 
skeleton is sketched below). The problem I found with this is that it is not 
clear what JSON schema specification should be used for this. I assume it would 
be reasonable to use http://json-schema.org/, but it doesn't seem that Kafka 
Connect supports it currently; moreover, reading through the JsonConverter class 
in Kafka Connect, I am not able to understand what spec the JSON schema used in 
that class follows, for example the {{asConnectSchema}} method on {{JsonConverter}}.

2. On each object received, keep updating the schema, but I can't see a 
standard and robust way of handling edge cases.

I am happy to create a pull request for this transform, if we can agree on 
something here. :)
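
For discussion purposes, a rough skeleton of option 1 as a Connect 
Transformation, where the schema comes in as a transform config; the config 
names and class name are illustrative only:
{code}
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.transforms.Transformation;

public class JsonStringToStruct<R extends ConnectRecord<R>> implements Transformation<R> {

    private String field;        // string field holding the embedded JSON
    private Schema targetSchema; // built from the configured schema definition

    @Override
    public void configure(Map<String, ?> configs) {
        this.field = (String) configs.get("field");
        // TODO: parse configs.get("schema.json") into a Connect Schema; the open
        // question in this issue is which JSON schema dialect to accept here.
    }

    @Override
    public R apply(R record) {
        // TODO: read the configured field from record.value(), parse the JSON
        // string and rebuild the value as a Struct conforming to targetSchema.
        return record;
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef()
                .define("field", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH,
                        "Field containing the JSON string to expand")
                .define("schema.json", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH,
                        "Schema to apply to the parsed JSON");
    }

    @Override
    public void close() {
    }
}
{code}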



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5985) Mention the need to close store iterators

2017-10-02 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-5985.
---
Resolution: Fixed

Issue resolved by pull request 3994
[https://github.com/apache/kafka/pull/3994]

> Mention the need to close store iterators
> -
>
> Key: KAFKA-5985
> URL: https://issues.apache.org/jira/browse/KAFKA-5985
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 0.11.0.0
>Reporter: Stanislav Chizhov
>Assignee: Bill Bejeck
> Fix For: 1.0.0
>
>
> Store iterators should be closed in all/most of the cases, but currently it 
> is not consistently reflected in the documentation and javadocs. For instance 
>  
> https://kafka.apache.org/0110/documentation/streams/developer-guide#streams_developer-guide_interactive-queries_custom-stores
>  does not mention the need to close an iterator and provides an example that 
> does not do that. 
> Some of the fetch methods do mention the need to close an iterator returned 
> (e.g. 
> https://kafka.apache.org/0110/javadoc/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.html#range(K,%20K)),
>  but others do not: 
> https://kafka.apache.org/0110/javadoc/org/apache/kafka/streams/state/ReadOnlyWindowStore.html#fetch(K,%20long,%20long)
> It makes sense to: 
> - update javadoc for all store methods that do return iterators to reflect 
> that the iterator returned needs to be closed
> - mention it in the documentation and update related examples.
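
As a concrete illustration of the pattern the updated javadocs/docs should 
recommend (a minimal sketch, assuming a running KafkaStreams instance and a 
key-value store named "counts"):
{code}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class CloseIteratorExample {

    static void printRange(KafkaStreams streams) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("counts", QueryableStoreTypes.<String, Long>keyValueStore());

        // KeyValueIterator extends Closeable: try-with-resources releases the
        // underlying (e.g. RocksDB) resources even if iteration fails.
        try (KeyValueIterator<String, Long> iter = store.range("a", "z")) {
            while (iter.hasNext()) {
                KeyValue<String, Long> entry = iter.next();
                System.out.println(entry.key + " -> " + entry.value);
            }
        }
    }
}
{code}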



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3994: KAFKA-5985: update javadoc regarding closing itera...

2017-10-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3994


---


[GitHub] kafka pull request #4003: MINOR: add suppress warnings annotations

2017-10-02 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4003

MINOR: add suppress warnings annotations



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka minor-deprecated

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4003.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4003


commit 09d5eb7f81eecaed6c7f58c7c78156ee6005d324
Author: Matthias J. Sax 
Date:   2017-10-02T19:19:02Z

MINOR: add suppress warnings annotations




---


Build failed in Jenkins: kafka-trunk-jdk7 #2836

2017-10-02 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] KAFKA-5985; update javadoc regarding closing iterators

--
[...truncated 370.10 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs PASSED

kafka.server.MultipleListenersWithAdditionalJaasContextTest > 
testProduceConsume STARTED

kafka.se

[GitHub] kafka pull request #3405: KAFKA-5495: Update docs to use `kafka-consumer-gro...

2017-10-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3405


---


[jira] [Created] (KAFKA-6003) Replication Fetcher thread for a partition with no data fails to start

2017-10-02 Thread Stanislav Chizhov (JIRA)
Stanislav Chizhov created KAFKA-6003:


 Summary: Replication Fetcher thread for a partition with no data 
fails to start
 Key: KAFKA-6003
 URL: https://issues.apache.org/jira/browse/KAFKA-6003
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 0.11.0.1
Reporter: Stanislav Chizhov


If a partition of a topic with an idempotent producer has no data on one of the 
brokers, but it does exist on the others, and some of the segments for this 
partition have already been deleted, then the replication thread responsible for 
this partition on the broker with no data fails to start with an out-of-order 
sequence exception:
{code}
[2017-10-02 09:44:23,825] ERROR [ReplicaFetcherThread-2-4]: Error due to 
(kafka.server.ReplicaFetcherThread)
kafka.common.KafkaException: error processing data for partition 
[stage.data.adevents.v2,20] offset 1660336429
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:203)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:174)
at scala.Option.foreach(Option.scala:257)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:174)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:171)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:171)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:171)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:171)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:169)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:112)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
Caused by: org.apache.kafka.common.errors.OutOfOrderSequenceException: Invalid 
sequence number for new epoch: 0 (request epoch), 154277489 (seq. number)
{code}
We run Kafka 0.11.0.1 and we ran into a situation where one of the replication 
threads was stopped for a few days, while everything else on that broker was 
functional. This is our staging cluster and retention is less than a day, so at 
the moment we have a broker which cannot start replication for a few partitions. 
I was also able to reproduce this in my local test environment.
Another possible case is a disk failure, or any situation where deleting all 
the data for the partition on a broker previously helped, since the broker would 
just fetch all the data from the other replicas. Now this does not work for 
topics with idempotent producers. It might also affect other non-idempotent 
topics if they are unlucky enough to share the same replication fetcher thread. 

This seems to be caused by this logic: 
https://github.com/apache/kafka/blob/0.11.0.1/core/src/main/scala/kafka/log/ProducerStateManager.scala#L119

and might be fixed in the scope of 
https://issues.apache.org/jira/browse/KAFKA-5793.

However, any hints on how to get those partitions back to a fully replicated 
state are highly appreciated.
Any hints on how to get this broker 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka-site issue #77: MINOR: Add streams child topics to left-hand nav

2017-10-02 Thread dguy
Github user dguy commented on the issue:

https://github.com/apache/kafka-site/pull/77
  
@miguno


---


[GitHub] kafka pull request #3971: MINOR: additional kip-182 doc updates

2017-10-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3971


---


[GitHub] kafka-site issue #77: MINOR: Add streams child topics to left-hand nav

2017-10-02 Thread joel-hamill
Github user joel-hamill commented on the issue:

https://github.com/apache/kafka-site/pull/77
  
@derrickdoo ping ^


---


[GitHub] kafka-site pull request #78: MINOR: Add header items

2017-10-02 Thread joel-hamill
Github user joel-hamill closed the pull request at:

https://github.com/apache/kafka-site/pull/78


---


[GitHub] kafka pull request #3970: KAFKA-5225: StreamsResetter doesn't allow custom C...

2017-10-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3970


---


[jira] [Created] (KAFKA-6004) Enable custom authentication plugins to return error messages to clients

2017-10-02 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-6004:
-

 Summary: Enable custom authentication plugins to return error 
messages to clients
 Key: KAFKA-6004
 URL: https://issues.apache.org/jira/browse/KAFKA-6004
 Project: Kafka
  Issue Type: Improvement
  Components: security
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 1.0.1


KIP-152 enables authentication failures to be returned to clients to simplify 
diagnosis of security configuration issues. At the moment, a fixed message is 
returned to clients by SaslServerAuthenticator which says "Authentication 
failed due to invalid credentials with SASL mechanism $mechanism".

We have added an error message string to SaslAuthenticateResponse to return 
custom messages from the broker to clients. Custom SASL server implementations 
may want to return more specific error messages in some cases. We should allow 
this by returning error messages from specific exceptions (e.g. 
org.apache.kafka.common.errors.AuthenticationException) in 
SaslAuthenticateResponse. It would be better not to return the error message 
from SaslException since it may contain information that we do not want to leak 
to clients.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-206: Add support for UUID serialization and deserialization

2017-10-02 Thread Jakub Scholz
Hi,

Unless there are some further discussion points, I will put this KIP for
vote tomorrow around this time.

Thanks & Regards
Jakub

On Tue, Sep 26, 2017 at 7:27 PM, Jakub Scholz  wrote:

> Hi Ted,
>
> Thanks. The link to this thread is there now.
>
> Regards
> Jakub
>
> On Tue, Sep 26, 2017 at 7:22 PM, Ted Yu  wrote:
>
>> Please add link to Discussion thread field.
>>
>> Looks good overall.
>>
>> On Tue, Sep 26, 2017 at 10:18 AM, Jakub Scholz  wrote:
>>
>> > Hi,
>> >
>> > I'd like to start a discussion for KIP-206. It is about adding
>> serializers
>> > and deserializers for UUIDs. The details can be found on the wiki:
>> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > 206%3A+Add+support+for+UUID+serialization+and+deserialization
>> >
>> > Thanks & Regards
>> > Jakub
>> >
>>
>
>


Jenkins build is back to normal : kafka-trunk-jdk7 #2837

2017-10-02 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3954: KAFKA-5758: Don't fail fetch request if replica is...

2017-10-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3954


---


[GitHub] kafka-site issue #77: MINOR: Add streams child topics to left-hand nav

2017-10-02 Thread joel-hamill
Github user joel-hamill commented on the issue:

https://github.com/apache/kafka-site/pull/77
  
after talking with @derrickdoo, I have removed the changes to the left-hand 
nav. 


---


[GitHub] kafka pull request #4004: KAFKA-6003: Accept appends on replicas uncondition...

2017-10-02 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/4004

KAFKA-6003: Accept appends on replicas unconditionally when local producer 
state doesn't exist

Without this patch, if the replica's log was somehow truncated before
the leader's, it is possible for the replica fetcher thread to
continuously throw an OutOfOrderSequenceException because the
incoming sequence would be non-zero and there is no local state.

This patch changes the behavior so that the replica state is updated to
the leader's state if there was no local state for the producer at the
time of the append.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-6003-handle-unknown-producer-on-replica

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4004.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4004


commit 341a0d2ba3ec0716f1830860869bf773f1bf8d85
Author: Apurva Mehta 
Date:   2017-10-02T22:41:19Z

KAFKA-6003: Accept appends on replicas unconditionally when local
producer state doesn't exist.

Without this patch, if the replica's log was somehow truncated before
the leader's, it is possible for the replica fetcher thread to
continuously throw an OutOfOrderSequenceException because the
incoming sequence would be non-zero and there is no local state.

This patch changes the behavior so that the replica state is updated to
the leader's state if there was no local state for the producer at the
time of the append.




---


Consumer Offsets partition skew on Kafka 0.10.1.1

2017-10-02 Thread Marcos Juarez
I was investigating some performance issues we're seeing in one of our
production clusters, and I ran into extremely unbalanced offset partitions
for the __consumer_offsets topic.  I only pasted the top 8 below, out of 50
total.  As you can see, between the top 5 partitions, those servers have to
handle 83% of the commit volume, and brokers 9 and 10 show up repeatedly
as leader as well as in the replica lists.


Partition  Offsets         Percentage  Leader  Replicas    ISR
6          52,761,610,477  34.24%      10      (10,6,7)    (7,6,10)
5          46,196,021,230  29.98%      9       (9,5,6)     (5,6,9)
42         17,530,298,423  11.38%      10      (10,9,11)   (10,11,9)
31         12,927,081,106  8.39%       11      (11,9,10)   (10,11,9)
0          8,557,903,671   5.55%       4       (4,12,1)    (4,12,1)
2          3,969,232,652   2.58%       6       (6,2,3)     (6,3,2)
49         3,555,754,347   2.31%       5       (5,11,7)    (5,7,11)
33         2,273,951,745   1.48%       1       (1,11,12)   (1,12,11)

Those brokers (9, 10 and 11) also happen to be the ones we're having
performance issues with.  We can't be sure yet if this is the cause of the
performance issues, but it's looking extremely likely.

So, I was wondering, what can be done to "rebalance" these consumer
offsets?  As far as I know this was decided automatically; I don't
believe we ever changed a setting related to this.  I also don't
believe we can influence which partition gets which offsets when
consuming.

It would also be interesting to know what algorithm/pattern is used to
decide the consumer offsets partition, and whether this is something we can
change or influence.
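
For reference, the offsets partition for a group is chosen deterministically
from the group id: as far as I know it is a non-negative hash of the group id
modulo the partition count of __consumer_offsets (offsets.topic.num.partitions,
50 by default). A minimal sketch of that mapping (the helper name is
illustrative, not a Kafka API):
```java
public final class OffsetsPartition {

    // Mirrors the broker-side mapping: non-negative hash of the group id,
    // modulo the number of __consumer_offsets partitions.
    static int offsetsPartitionFor(String groupId, int offsetsTopicPartitions) {
        return (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
    }

    public static void main(String[] args) {
        // e.g. which of the 50 partitions receives commits for this group
        System.out.println(offsetsPartitionFor("my-consumer-group", 50));
    }
}
```
If that is right, the only practical levers are the group ids themselves and the
partition count of the offsets topic, which would explain why a handful of very
high-volume groups can skew a few partitions.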

Thanks,

Marcos Juarez


Build failed in Jenkins: kafka-trunk-jdk8 #2092

2017-10-02 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5758; Don't fail fetch request if replica is no longer a follower

--
[...truncated 3.86 MB...]
org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRequestProcessingOrder STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRequestProcessingOrder PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToLeader STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector STARTED

org.apache.kafka.connect.r

[GitHub] kafka pull request #4005: MINOR: fix JavaDocs warnings

2017-10-02 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4005

MINOR: fix JavaDocs warnings

 - add some missing annotations for deprecated methods

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka minor-fix-javadoc-warnings

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4005.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4005


commit 3d58e8a58e346cba87559f13caac115d878df4b2
Author: Matthias J. Sax 
Date:   2017-10-03T03:15:17Z

MINOR: fix JavaDocs warnings
 - add some missing annotations for deprecated methods




---


[jira] [Resolved] (KAFKA-5995) Rename AlterReplicaDir to AlterReplicaDirs

2017-10-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-5995.

   Resolution: Fixed
Fix Version/s: 1.0.0

Issue resolved by pull request 3993
[https://github.com/apache/kafka/pull/3993]

> Rename AlterReplicaDir to AlterReplicaDirs
> --
>
> Key: KAFKA-5995
> URL: https://issues.apache.org/jira/browse/KAFKA-5995
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.0.0
>
>
> This is needed to follow the naming convention of other AdminClient methods 
> that are plural.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3993: KAFKA-5995; Rename AlterReplicaDir to AlterReplica...

2017-10-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3993


---


[VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-02 Thread Paolo Patierno
Hi all,

I didn't see any further discussion around this KIP, so I'd like to start the 
vote for it.

Just for reference : 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-204+%3A+adding+records+deletion+operation+to+the+new+Admin+Client+API


Thanks,

Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-10-02 Thread Paolo Patierno
Just as a reminder for other committers, in order to gather more binding votes, for 
now we have ...


binding

+1 Ismael Juma

+1 Guozhang Wang


non binding

+1 Mickael Maison

+1 Vahid Hashemian


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: isma...@gmail.com  on behalf of Ismael Juma 

Sent: Tuesday, September 26, 2017 7:46 AM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

Removals can only happen in major releases.

Ismael

On Tue, Sep 26, 2017 at 8:37 AM, Paolo Patierno  wrote:

> Hi devs,
>
> I know that we are already voting for this (+1 bindings from Ismael Juma
> and Guozhang Wang, +1 non binding from Mickael Maison) but I'd like to ask
> a question about the possible cycle release for this change.
>
> We are really close to the 1.0.0 release which will have the
> --new-consumer deprecation. The current KIP-176 proposes to remove it in
> the 2.0.0 release but I'm starting to think that it could happen even in
> one year or so while we could have more releases in the middle (1.x.y)
> every 4 months.
>
> Maybe we could have this KIP-176 included in the 1.1.0 release ? Wdyt ?
>
>
> Thanks.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Mickael Maison 
> Sent: Thursday, September 7, 2017 9:53 AM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for
> tools
>
> +1 (non binding)
> Thanks
>
> On Thu, Sep 7, 2017 at 9:09 AM, Paolo Patierno  wrote:
> > KIP updated to clarify it will be removed in the 2.0.0 version.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Windows Embedded & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Vahid S Hashemian 
> > Sent: Wednesday, September 6, 2017 11:45 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for
> tools
> >
> > +1. Thanks for the KIP.
> >
> > --Vahid
> >
> >
> >
> > From:   Guozhang Wang 
> > To: "dev@kafka.apache.org" 
> > Date:   09/06/2017 03:41 PM
> > Subject:Re: [VOTE] KIP-176 : Remove deprecated new-consumer
> option
> > for tools
> >
> >
> >
> > +1. Thanks.
> >
> > On Wed, Sep 6, 2017 at 7:57 AM, Ismael Juma  wrote:
> >
> >> Thanks for the KIP. +1 (binding). Please make it clear in the KIP that
> >> removal will happen in 2.0.0.
> >>
> >> Ismael
> >>
> >> On Tue, Aug 8, 2017 at 11:53 AM, Paolo Patierno 
> >> wrote:
> >>
> >> > Hi devs,
> >> >
> >> >
> >> > I didn't see any more comments about this KIP. The JIRAs related to
> > the
> >> > first step (so making --new-consumer as deprecated with warning
> > messages)
> >> > are merged.
> >> >
> >> > I'd like to start a vote for this KIP.
> >> >
> >> >
> >> > Thanks,
> >> >
> >> >
> >> > Paolo Patierno
> >> > Senior Software Engineer (IoT) @ Red Hat
> >> > Microsoft MVP on Windows Embedded & IoT
> >> > Microsoft Azure Advisor
> >> >
> >> > Twitter : @ppatierno
> >> > Linkedin : paolopatierno
> >> > Blog : DevExperience
> >> >
> >>
> >
> >
> >
> > --
> > -- Guozhang
> >
> >
> >
> >
>


[GitHub] kafka-site issue #77: MINOR: Add streams child topics to left-hand nav

2017-10-02 Thread miguno
Github user miguno commented on the issue:

https://github.com/apache/kafka-site/pull/77
  
Could you please edit/update the PR title then?

I understand that this PR is now a change to the content pane of the main 
page (kafka.apache.org) only?


---