[jira] [Commented] (KAFKA-2711) SaslClientAuthenticator no longer needs KerberosNameParser in constructor

2015-10-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982019#comment-14982019
 ] 

Ismael Juma commented on KAFKA-2711:


SASL_KERBEROS_PRINCIPAL_TO_LOCAL_RULES is only used in 
KerberosName.shortName(). The other methods in KerberosName are independent of 
that config. Maybe this code can be refactored to make this clearer.
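The refactoring hinted at above can be sketched as follows: parsing a principal stays config-free, and only `shortName` takes the principal-to-local rules as a parameter instead of constructor state. This is an illustrative simplification with hypothetical names, not Kafka's actual `KerberosName` API, and the rule evaluation itself is elided.

```java
import java.util.List;

// Sketch: parsing is independent of SASL_KERBEROS_PRINCIPAL_TO_LOCAL_RULES;
// only shortName needs the rules, so it takes them as an argument.
public class KerberosNameSketch {
    private final String serviceName; // first component of service/host@REALM
    private final String hostName;
    private final String realm;

    KerberosNameSketch(String serviceName, String hostName, String realm) {
        this.serviceName = serviceName;
        this.hostName = hostName;
        this.realm = realm;
    }

    // Parsing a principal of the form service/host@REALM needs no configuration.
    static KerberosNameSketch parse(String principal) {
        String[] atSplit = principal.split("@", 2);
        String[] slashSplit = atSplit[0].split("/", 2);
        return new KerberosNameSketch(
                slashSplit[0],
                slashSplit.length > 1 ? slashSplit[1] : "",
                atSplit.length > 1 ? atSplit[1] : "");
    }

    public String hostName() { return hostName; }
    public String realm() { return realm; }

    // Only this method depends on the principal-to-local rules. Real rule
    // evaluation is elided here; the default rule keeps the first component.
    public String shortName(List<String> rules) {
        return serviceName;
    }

    public static void main(String[] args) {
        KerberosNameSketch name = parse("kafka/broker1@EXAMPLE.COM");
        System.out.println(name.shortName(List.of("DEFAULT")));
    }
}
```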

> SaslClientAuthenticator no longer needs KerberosNameParser in constructor
> -
>
> Key: KAFKA-2711
> URL: https://issues.apache.org/jira/browse/KAFKA-2711
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Priority: Minor
> Fix For: 0.9.1
>
>
> Since the sasl client doesn't need to know the principal name, we don't need 
> to pass in KerberosNameParser to SaslClientAuthenticator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2687: Add support for ListGroups and Des...

2015-10-30 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/388

KAFKA-2687: Add support for ListGroups and DescribeGroup APIs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka K2687

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/388.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #388


commit f5f9a9159bd34b0ebb5fa1ed818faea2a5a3c44c
Author: Jason Gustafson 
Date:   2015-10-28T22:34:04Z

KAFKA-2687: Add support for ListGroups and DescribeGroup APIs




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2687) Allow GroupMetadataRequest to return member metadata when received by group coordinator

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982035#comment-14982035
 ] 

ASF GitHub Bot commented on KAFKA-2687:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/388

KAFKA-2687: Add support for ListGroups and DescribeGroup APIs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka K2687

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/388.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #388


commit f5f9a9159bd34b0ebb5fa1ed818faea2a5a3c44c
Author: Jason Gustafson 
Date:   2015-10-28T22:34:04Z

KAFKA-2687: Add support for ListGroups and DescribeGroup APIs




> Allow GroupMetadataRequest to return member metadata when received by group 
> coordinator
> ---
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup
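The request schema quoted above can be made concrete with a small encoding sketch. This assumes Kafka's conventional wire primitives (strings as int16-length-prefixed UTF-8, booleans as a single byte); the class and method names are illustrative, not part of the proposal.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of how the proposed GroupMetadataRequest body could be laid out on
// the wire: GroupId as an int16-length-prefixed UTF-8 string, followed by
// the IncludeMetadata flag as one byte.
public class GroupMetadataRequestSketch {

    public static ByteBuffer encode(String groupId, boolean includeMetadata) {
        byte[] id = groupId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + id.length + 1);
        buf.putShort((short) id.length);           // GroupId length prefix
        buf.put(id);                               // GroupId bytes
        buf.put((byte) (includeMetadata ? 1 : 0)); // IncludeMetadata flag
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        // 2-byte length prefix + 18 bytes of group id + 1-byte flag = 21 bytes
        System.out.println(encode("mirror-maker-group", true).remaining());
    }
}
```

A coordinator-discovery-only client would pass `includeMetadata = false`, which is the overhead-saving path the description mentions.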





[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-30 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982071#comment-14982071
 ] 

Onur Karaman commented on KAFKA-2017:
-

Thanks for the patch! I'm still looking through GroupMetadataManager.scala.

Regarding the schema evolution: what was the reason for making v1 represent the 
offset schema and v2 the group schema? I think we followed a similar pattern 
for OffsetFetchRequest and OffsetCommitRequest (where one version represented 
commits to zookeeper and another represented commits to kafka), but I think the 
circumstances were different there in that kafka-based offset storage was meant 
to replace zookeeper-based offset storage.

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to failover to 
> a new coordinator without forcing all the consumers rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.





[GitHub] kafka pull request: MINOR: ignore subproject .gitignore file for e...

2015-10-30 Thread vesense
GitHub user vesense opened a pull request:

https://github.com/apache/kafka/pull/389

MINOR: ignore subproject .gitignore file for eclipse IDE



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/389.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #389


commit 8403829dd194ad99f507acc94a01274a6d2b5f39
Author: vesense 
Date:   2015-10-30T08:34:21Z

ignore subproject .gitignore file






Build failed in Jenkins: kafka_system_tests #125

2015-10-30 Thread ewen
See 

--
[...truncated 845 lines...]
test_id:
2015-10-30--001.kafkatest.tests.replication_test.ReplicationTest.test_replication_with_broker_failure.interbroker_security_protocol=PLAINTEXT.failure_mode=clean_bounce
status: PASS
run time:   8 minutes 16.855 seconds

test_id:
2015-10-30--001.kafkatest.tests.replication_test.ReplicationTest.test_replication_with_broker_failure.interbroker_security_protocol=PLAINTEXT.failure_mode=hard_shutdown
status: PASS
run time:   3 minutes 57.930 seconds

test_id:
2015-10-30--001.kafkatest.tests.replication_test.ReplicationTest.test_replication_with_broker_failure.interbroker_security_protocol=PLAINTEXT.failure_mode=clean_shutdown
status: PASS
run time:   3 minutes 48.704 seconds

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_consumer_throughput.interbroker_security_protocol=PLAINTEXT.security_protocol=SSL.new_consumer=True
status: PASS
run time:   5 minutes 26.742 seconds
{"records_per_sec": 211416.4905, "mb_per_sec": 20.1622}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_consumer_throughput.interbroker_security_protocol=SSL.security_protocol=SSL.new_consumer=True
status: PASS
run time:   5 minutes 41.551 seconds
{"records_per_sec": 203778.045, "mb_per_sec": 19.4338}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_consumer_throughput.security_protocol=PLAINTEXT.new_consumer=False
status: PASS
run time:   4 minutes 6.595 seconds
{"records_per_sec": 292971.6111, "mb_per_sec": 27.94}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_consumer_throughput.security_protocol=PLAINTEXT.new_consumer=True
status: PASS
run time:   4 minutes 9.358 seconds
{"records_per_sec": 280946.2269, "mb_per_sec": 26.7931}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_end_to_end_latency.interbroker_security_protocol=PLAINTEXT.security_protocol=PLAINTEXT
status: PASS
run time:   3 minutes 2.451 seconds
{"latency_99th_ms": 61.0, "latency_50th_ms": 3.0, "latency_999th_ms": 95.0}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_end_to_end_latency.interbroker_security_protocol=SSL.security_protocol=SSL
status: PASS
run time:   3 minutes 48.627 seconds
{"latency_99th_ms": 71.0, "latency_50th_ms": 5.0, "latency_999th_ms": 112.0}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_end_to_end_latency.interbroker_security_protocol=PLAINTEXT.security_protocol=SSL
status: PASS
run time:   3 minutes 20.761 seconds
{"latency_99th_ms": 66.0, "latency_50th_ms": 3.0, "latency_999th_ms": 113.0}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_long_term_producer_throughput.interbroker_security_protocol=PLAINTEXT.security_protocol=PLAINTEXT
status: PASS
run time:   3 minutes 16.971 seconds
{"0": {"records_per_sec": 110517.997856, "mb_per_sec": 10.54}}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_long_term_producer_throughput.interbroker_security_protocol=SSL.security_protocol=SSL
status: PASS
run time:   4 minutes 30.356 seconds
{"0": {"records_per_sec": 63249.906706, "mb_per_sec": 6.03}}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_long_term_producer_throughput.interbroker_security_protocol=PLAINTEXT.security_protocol=SSL
status: PASS
run time:   4 minutes 30.196 seconds
{"0": {"records_per_sec": 65408.212655, "mb_per_sec": 6.24}}

test_id:
2015-10-30--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_and_consumer.interbroker_security_protocol=PLAINTEXT.security_protocol=SSL.new_consumer=True
status: PASS
run time:   4 minutes 40.917 seconds
{"consumer": {"records_per_sec": 0.0, "mb_per_sec": 0.0}, "producer": 
{"records_per_sec": 63693.050451,

[GitHub] kafka pull request: KAFKA-2711; SaslClientAuthenticator no longer ...

2015-10-30 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/390

KAFKA-2711; SaslClientAuthenticator no longer needs KerberosNameParser in 
constructor

Also refactor `KerberosNameParser` and `KerberosName` to make the code
clearer and easier to use when `shortName` is not needed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2711

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/390.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #390


commit 751e52ffee04401a6bf8170469594fe5f526bff1
Author: Ismael Juma 
Date:   2015-10-30T11:13:11Z

Remove unnecessary usage of `KerberosNameParser` in 
`SaslClientAuthenticator`

Also refactor `KerberosNameParser` and `KerberosName` to make the code
clearer.






[jira] [Commented] (KAFKA-2711) SaslClientAuthenticator no longer needs KerberosNameParser in constructor

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982384#comment-14982384
 ] 

ASF GitHub Bot commented on KAFKA-2711:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/390

KAFKA-2711; SaslClientAuthenticator no longer needs KerberosNameParser in 
constructor

Also refactor `KerberosNameParser` and `KerberosName` to make the code
clearer and easier to use when `shortName` is not needed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2711

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/390.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #390


commit 751e52ffee04401a6bf8170469594fe5f526bff1
Author: Ismael Juma 
Date:   2015-10-30T11:13:11Z

Remove unnecessary usage of `KerberosNameParser` in 
`SaslClientAuthenticator`

Also refactor `KerberosNameParser` and `KerberosName` to make the code
clearer.




> SaslClientAuthenticator no longer needs KerberosNameParser in constructor
> -
>
> Key: KAFKA-2711
> URL: https://issues.apache.org/jira/browse/KAFKA-2711
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Priority: Minor
> Fix For: 0.9.1
>
>
> Since the sasl client doesn't need to know the principal name, we don't need 
> to pass in KerberosNameParser to SaslClientAuthenticator.





[jira] [Assigned] (KAFKA-2711) SaslClientAuthenticator no longer needs KerberosNameParser in constructor

2015-10-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-2711:
--

Assignee: Ismael Juma

> SaslClientAuthenticator no longer needs KerberosNameParser in constructor
> -
>
> Key: KAFKA-2711
> URL: https://issues.apache.org/jira/browse/KAFKA-2711
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Minor
> Fix For: 0.9.1
>
>
> Since the sasl client doesn't need to know the principal name, we don't need 
> to pass in KerberosNameParser to SaslClientAuthenticator.





[jira] [Updated] (KAFKA-2711) SaslClientAuthenticator no longer needs KerberosNameParser in constructor

2015-10-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2711:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> SaslClientAuthenticator no longer needs KerberosNameParser in constructor
> -
>
> Key: KAFKA-2711
> URL: https://issues.apache.org/jira/browse/KAFKA-2711
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Minor
> Fix For: 0.9.1
>
>
> Since the sasl client doesn't need to know the principal name, we don't need 
> to pass in KerberosNameParser to SaslClientAuthenticator.





[jira] [Updated] (KAFKA-2690) Protect passwords from logging

2015-10-30 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Nowak updated KAFKA-2690:
---
Status: Patch Available  (was: In Progress)

> Protect passwords from logging
> --
>
> Key: KAFKA-2690
> URL: https://issues.apache.org/jira/browse/KAFKA-2690
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jakub Nowak
> Fix For: 0.9.0.0
>
>
> We currently store the key (ssl.key.password), keystore 
> (ssl.keystore.password) and truststore (ssl.truststore.password) passwords as 
> a String in `KafkaConfig`, `ConsumerConfig` and `ProducerConfig`.
> The problem with this approach is that we may accidentally log the password 
> when logging the config.
> A possible solution is to introduce a new `ConfigDef.Type` that overrides 
> `toString` so that the value is hidden.
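The solution proposed in the description can be sketched as a small wrapper type whose `toString` never reveals the secret. The class name and the "[hidden]" placeholder are illustrative assumptions here, not Kafka's actual API.

```java
// Sketch of a password-holding config value: logging or printing the config
// shows a placeholder, and the secret is only reachable via an explicit accessor.
public class Password {
    private final String value;

    public Password(String value) {
        this.value = value;
    }

    // Explicit accessor for code that genuinely needs the secret.
    public String value() {
        return value;
    }

    // Accidental logging of the config prints the placeholder, not the secret.
    @Override
    public String toString() {
        return "[hidden]";
    }

    public static void main(String[] args) {
        Password p = new Password("secret");
        System.out.println(p);         // placeholder
        System.out.println(p.value()); // actual secret, requested explicitly
    }
}
```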





[jira] [Commented] (KAFKA-1255) Offset in RecordMetadata is Incorrect with New Producer Ack = -1

2015-10-30 Thread rathdeep (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982574#comment-14982574
 ] 

rathdeep commented on KAFKA-1255:
-

When is the fix for this bug expected to be released? Is there any workaround to 
get the correct offset with ack = -1?

> Offset in RecordMetadata is Incorrect with New Producer Ack = -1
> 
>
> Key: KAFKA-1255
> URL: https://issues.apache.org/jira/browse/KAFKA-1255
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Jay Kreps
> Fix For: 0.10.0.0
>
> Attachments: sendwithAckMinusOne, sendwithAckOne
>
>
> With the new producer's integration test, one observation is that when 
> producer ack = -1, the returned offset is incorrect.
> Output files with two scenarios (send 100 messages with ack = 1 and -1) 
> attached.





[jira] [Commented] (KAFKA-1260) Integration Test for New Producer Part II: Broker Failure Handling

2015-10-30 Thread rathdeep (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982578#comment-14982578
 ] 

rathdeep commented on KAFKA-1260:
-

When is the fix for this bug expected to be released? Is there any workaround to 
get the correct offset with ack = -1?


> Integration Test for New Producer Part II: Broker Failure Handling
> --
>
> Key: KAFKA-1260
> URL: https://issues.apache.org/jira/browse/KAFKA-1260
> Project: Kafka
>  Issue Type: Sub-task
>  Components: producer 
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Attachments: KAFKA-1260-fix.patch, KAFKA-1260.patch, 
> KAFKA-1260_2014-02-13_15:14:21.patch, KAFKA-1260_2014-02-14_15:00:16.patch, 
> KAFKA-1260_2014-02-19_13:49:19.patch, KAFKA-1260_2014-02-19_15:55:06.patch, 
> KAFKA-1260_2014-02-20_15:26:54.patch, KAFKA-1260_2014-02-20_15:45:11.patch, 
> KAFKA-1260_2014-02-25_16:58:27.patch, KAFKA-1260_2014-02-25_17:46:14.patch, 
> KAFKA-1260_2014-02-26_09:47:08.patch, KAFKA-1260_2014-02-26_13:49:24.patch, 
> KAFKA-1260_2014-02-27_09:37:17.patch
>
>






[jira] [Updated] (KAFKA-2704) SimpleConsumer should throw InterruptedException when interrupted

2015-10-30 Thread Hitoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitoshi Ozawa updated KAFKA-2704:
-
Reviewer: Jun Rao  (was: Gwen Shapira)

> SimpleConsumer should throw InterruptedException when interrupted
> -
>
> Key: KAFKA-2704
> URL: https://issues.apache.org/jira/browse/KAFKA-2704
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Hitoshi Ozawa
> Attachments: simpleconsumer_2.patch
>
>
> SimpleConsumer does not throw InterruptedException when interrupted





[jira] [Updated] (KAFKA-2703) SimpleConsumer.scala is declared as threadsafe but is not

2015-10-30 Thread Hitoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitoshi Ozawa updated KAFKA-2703:
-
Reviewer: Jun Rao  (was: Gwen Shapira)

> SimpleConsumer.scala is declared as threadsafe but is not
> -
>
> Key: KAFKA-2703
> URL: https://issues.apache.org/jira/browse/KAFKA-2703
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Hitoshi Ozawa
>  Labels: thread-safe
> Attachments: simpleconsumer.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> SimpleConsumer.scala is declared as threadsafe but is not. It is also missing 
> the @throws[IOException] annotation on its methods.





[GitHub] kafka pull request: MINOR: Fix homophone typo in Design documentat...

2015-10-30 Thread chrnola
GitHub user chrnola opened a pull request:

https://github.com/apache/kafka/pull/391

MINOR: Fix homophone typo in Design documentation

Noticed that there was a small typo in section 4.1 of the Design 
documentation on the 
[website](https://kafka.apache.org/documentation.html#majordesignelements) 
('new' vs. 'knew'). This patch corrects that.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrnola/kafka minor/design-doc-typo

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/391.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #391


commit 3cd72ecd4880206d10b0b32cf2f42ac814e263aa
Author: Chris Pinola 
Date:   2015-10-30T13:44:28Z

MINOR: Fix homophone typo in Design documentation






[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-30 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982636#comment-14982636
 ] 

Grant Henke commented on KAFKA-2702:


[~gwenshap] Yeah, there is definitely some config cleanup needed. This is for a 
similar reason to what [~jkreps] was saying:
{quote}
Originally the presence or absence of a default indicated whether something was 
required
{quote}

Before doing that I want to be sure we agree on the approach. Should we:
A. Keep the required field, adjust the sort, and clean up configs.
B. Remove the required field, and clean up configs, adding defaults where needed.


> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.
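The suggested fix (sorting the no-default parameters by priority as well) can be sketched with a simplified comparator. `ConfigKey` and `Importance` below are stand-ins for `ConfigDef`'s internals, not the real classes.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: required (no-default) keys first, ordered by importance within each
// group, so a mandatory high-importance key like group.id sorts to the top
// instead of landing below low-priority no-default settings.
public class ConfigSortSketch {
    enum Importance { HIGH, MEDIUM, LOW }

    // Simplified stand-in for ConfigDef's per-key metadata.
    record ConfigKey(String name, Importance importance, boolean hasDefault) {}

    static List<ConfigKey> sorted(List<ConfigKey> keys) {
        List<ConfigKey> copy = new ArrayList<>(keys);
        copy.sort(Comparator
                .comparing(ConfigKey::hasDefault)             // required (false) first
                .thenComparing(k -> k.importance().ordinal()) // then by importance
                .thenComparing(ConfigKey::name));             // stable tiebreak
        return copy;
    }

    public static void main(String[] args) {
        List<ConfigKey> keys = List.of(
                new ConfigKey("sasl.kerberos.service.name", Importance.LOW, false),
                new ConfigKey("group.id", Importance.HIGH, false),
                new ConfigKey("fetch.min.bytes", Importance.HIGH, true));
        System.out.println(sorted(keys).get(0).name());
    }
}
```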





[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-30 Thread Andrii Biletskyi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982679#comment-14982679
 ] 

Andrii Biletskyi commented on KAFKA-2702:
-

It's been a while, but yes, as far as I remember I added the required field 
because not all configs had default values, and we couldn't instantiate a Config 
unless every setting had a value: either the default, or one supplied by the 
user (config file).

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.





[GitHub] kafka pull request: MINOR: Update system test MANIFEST.in

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/385




[GitHub] kafka pull request: KAFKA-2711; SaslClientAuthenticator no longer ...

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/390




[jira] [Updated] (KAFKA-2711) SaslClientAuthenticator no longer needs KerberosNameParser in constructor

2015-10-30 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2711:
---
   Resolution: Fixed
Fix Version/s: (was: 0.9.1)
   0.9.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 390
[https://github.com/apache/kafka/pull/390]

> SaslClientAuthenticator no longer needs KerberosNameParser in constructor
> -
>
> Key: KAFKA-2711
> URL: https://issues.apache.org/jira/browse/KAFKA-2711
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> Since the sasl client doesn't need to know the principal name, we don't need 
> to pass in KerberosNameParser to SaslClientAuthenticator.





[jira] [Commented] (KAFKA-2711) SaslClientAuthenticator no longer needs KerberosNameParser in constructor

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982747#comment-14982747
 ] 

ASF GitHub Bot commented on KAFKA-2711:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/390


> SaslClientAuthenticator no longer needs KerberosNameParser in constructor
> -
>
> Key: KAFKA-2711
> URL: https://issues.apache.org/jira/browse/KAFKA-2711
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> Since the sasl client doesn't need to know the principal name, we don't need 
> to pass in KerberosNameParser to SaslClientAuthenticator.





Build failed in Jenkins: kafka-trunk-jdk7 #739

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Update system test MANIFEST.in

[junrao] KAFKA-2711; SaslClientAuthenticator no longer needs KerberosNameParser

--
[...truncated 1671 lines...]

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #78

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Update system test MANIFEST.in

[junrao] KAFKA-2711; SaslClientAuthenticator no longer needs KerberosNameParser

--
[...truncated 4585 lines...]

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:runtime:processTestResources
:copycat:runtime:testClasses
:copycat:runtime:checkstyleTest
:copycat:runtime:test

org.apache.kafka.copycat.runtime.WorkerTest > testAddRemoveConnector PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testReconfigureConnectorTasks PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testPollsInBackground PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testSendRecordsConvertsData PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testPollsInBackground PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testDeliverConvertsData PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitTaskFlushFailure PASSED

org.apache.k

[jira] [Commented] (KAFKA-2681) SASL authentication in official docs

2015-10-30 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983043#comment-14983043
 ] 

Sriharsha Chintalapani commented on KAFKA-2681:
---

[~junrao] Here is the wiki 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2697) add leave group logic to the consumer

2015-10-30 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983047#comment-14983047
 ] 

Jun Rao commented on KAFKA-2697:


[~onurkaraman], do you plan to work on this before the 0.9.0 release? We plan 
to cut the release branch in a week or so.

> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.
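The change described above can be sketched in plain Java. The `Coordinator` interface, the `LeaveGroupOnCloseSketch` class, and `sendLeaveGroup` are hypothetical stand-ins for illustration only, not the actual KafkaConsumer or coordinator API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the behavior described above: close() makes a best-effort
// leave-group call before shutting down, so the coordinator can rebalance
// immediately instead of waiting out the session timeout. The Coordinator
// interface and class name are illustrative, not the KafkaConsumer API.
public class LeaveGroupOnCloseSketch implements AutoCloseable {
    interface Coordinator {
        void sendLeaveGroup(String memberId);
    }

    private final Coordinator coordinator;
    private final String memberId;
    private boolean closed = false;

    LeaveGroupOnCloseSketch(Coordinator coordinator, String memberId) {
        this.coordinator = coordinator;
        this.memberId = memberId;
    }

    @Override
    public void close() {
        if (closed) return;                        // close() stays idempotent
        closed = true;
        try {
            coordinator.sendLeaveGroup(memberId);  // best effort: a failure here
        } catch (RuntimeException e) {             // must not abort shutdown
            // would be logged and ignored
        }
    }

    public static void main(String[] args) {
        List<String> sent = new ArrayList<>();
        try (LeaveGroupOnCloseSketch consumer =
                     new LeaveGroupOnCloseSketch(sent::add, "member-1")) {
            // consume...
        }
        System.out.println(sent);                  // [member-1]
    }
}
```

Keeping the request best-effort matters: the broker will still expire the member via the session timeout if the request is lost, so close() need not block on a response.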





[jira] [Commented] (KAFKA-2697) add leave group logic to the consumer

2015-10-30 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983052#comment-14983052
 ] 

Onur Karaman commented on KAFKA-2697:
-

Hi Jun. I have some scrap code that I used while testing the broker side 
changes here:
https://github.com/onurkaraman/kafka/commit/cdb559f0c73417aa69ef750f06b791822bae4a48

I could try to get something out this weekend.

> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.





[jira] [Commented] (KAFKA-2681) SASL authentication in official docs

2015-10-30 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983104#comment-14983104
 ] 

Gwen Shapira commented on KAFKA-2681:
-

[~harsha_ch], if you don't mind, I can add the wiki to the official docs and 
submit the PR

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





[jira] [Updated] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-10-30 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2687:
---
Summary: Add support for ListGroups and DescribeGroup APIs  (was: Allow 
GroupMetadataRequest to return member metadata when received by group 
coordinator)

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup
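As a rough illustration of the proposed layout, the two leading request fields could be serialized as below, mirroring Kafka's convention of int16-length-prefixed strings. `GroupMetadataRequestSketch` and its methods are hypothetical and do not reflect Kafka's actual protocol Schema classes:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative serialization of the two leading fields of the proposed
// GroupMetadataRequest (GroupId string, IncludeMetadata boolean). Kafka's
// real wire format lives in its protocol Schema classes; this sketch only
// mirrors the int16-length-prefixed string convention.
public class GroupMetadataRequestSketch {

    public static ByteBuffer encode(String groupId, boolean includeMetadata) {
        byte[] id = groupId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + id.length + 1);
        buf.putShort((short) id.length);           // int16 length prefix
        buf.put(id);                               // UTF-8 group id bytes
        buf.put((byte) (includeMetadata ? 1 : 0)); // boolean as a single byte
        buf.flip();
        return buf;
    }

    public static String decodeGroupId(ByteBuffer buf) {
        byte[] id = new byte[buf.getShort()];
        buf.get(id);
        return new String(id, StandardCharsets.UTF_8);
    }

    public static boolean decodeIncludeMetadata(ByteBuffer buf) {
        return buf.get() == 1;
    }

    public static void main(String[] args) {
        ByteBuffer buf = encode("console-consumer-1", true);
        System.out.println(decodeGroupId(buf));         // console-consumer-1
        System.out.println(decodeIncludeMetadata(buf)); // true
    }
}
```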





[jira] [Commented] (KAFKA-2681) SASL authentication in official docs

2015-10-30 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983162#comment-14983162
 ] 

Sriharsha Chintalapani commented on KAFKA-2681:
---

[~gwenshap] Please go ahead. Thanks.

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





One more Kafka Meetup hosted by LinkedIn in 2015 (this time in San Francisco) - does anyone want to talk?

2015-10-30 Thread Ed Yakabosky
Hi all,

LinkedIn is hoping to host one more Apache Kafka meetup this year on
November 18 in our San Francisco office.  We're working on building the
agenda now.  Does anyone want to talk?  Please send me (and Clark) a
private email with a short description of what you would be talking about
if interested.

-- 
Thanks,

Ed Yakabosky
Technical Program Management @ LinkedIn


[jira] [Commented] (KAFKA-2681) SASL authentication in official docs

2015-10-30 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983202#comment-14983202
 ] 

Jun Rao commented on KAFKA-2681:


[~sriharsha], thanks for your help!

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-30 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983215#comment-14983215
 ] 

Gwen Shapira commented on KAFKA-2702:
-

I'd prefer B, and I think that both Jay and you mentioned the same - the 
"required" field is not needed and just adds confusion.

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.
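The priority-aware ordering suggested above can be sketched with a composed comparator. `ConfigEntry` and the `HIGH`/`MEDIUM`/`LOW` priorities below are stand-ins for ConfigDef's internal types, not the actual Kafka classes:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Sketch of the suggested ordering: parameters without a default
// ("required") sort first, and within each group entries sort by priority.
public class ConfigDocSortSketch {
    enum Priority { HIGH, MEDIUM, LOW }   // ordinal(): HIGH=0 .. LOW=2

    static class ConfigEntry {
        final String name;
        final boolean hasDefault;
        final Priority priority;

        ConfigEntry(String name, boolean hasDefault, Priority priority) {
            this.name = name;
            this.hasDefault = hasDefault;
            this.priority = priority;
        }
    }

    static List<String> sortedNames(List<ConfigEntry> entries) {
        List<ConfigEntry> copy = new ArrayList<>(entries);
        copy.sort(Comparator
                .<ConfigEntry, Boolean>comparing(e -> e.hasDefault)  // no default first
                .thenComparing(e -> e.priority.ordinal())            // then HIGH before LOW
                .thenComparing(e -> e.name));                        // stable tiebreak
        List<String> names = new ArrayList<>();
        for (ConfigEntry e : copy) names.add(e.name);
        return names;
    }

    public static void main(String[] args) {
        List<ConfigEntry> entries = Arrays.asList(
                new ConfigEntry("sasl.kerberos.service.name", false, Priority.LOW),
                new ConfigEntry("group.id", false, Priority.HIGH),
                new ConfigEntry("fetch.min.bytes", true, Priority.MEDIUM));
        // group.id now precedes the optional, low-priority SASL setting
        System.out.println(sortedNames(entries));
        // [group.id, sasl.kerberos.service.name, fetch.min.bytes]
    }
}
```

With this ordering, a mandatory no-default config such as group.id lands at the top of the generated table even though optional no-default configs exist.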





[jira] [Created] (KAFKA-2713) Copycat worker should not call connector's/task's start methods in the control thread

2015-10-30 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2713:


 Summary: Copycat worker should not call connector's/task's start 
methods in the control thread
 Key: KAFKA-2713
 URL: https://issues.apache.org/jira/browse/KAFKA-2713
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Currently the DistributedHerder calls start() methods in the same thread as the 
group membership is handled. This is simple and makes lifecycles easier to 
reason about, but also means that user code (and even code that can simply 
block for a long time, like sink task's connectors) can potentially block group 
membership, which in turn causes the worker to fall out of the group.

To avoid this, we should run these methods in the worker thread for each 
connector/task.
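The fix described above amounts to handing the blocking call to a per-connector worker thread. This is a minimal sketch of that pattern; `startConnector` and `shutdown` are illustrative names, not the DistributedHerder API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the fix: the control thread submits the (potentially blocking)
// start() call to a dedicated worker executor and returns immediately, so
// group-membership handling is never blocked by user code.
public class AsyncStartSketch {
    private final ExecutorService workerExecutor = Executors.newSingleThreadExecutor();

    // Called from the control thread; must not block on user code.
    public void startConnector(Runnable userStart) {
        workerExecutor.submit(userStart);   // user code runs off the control thread
    }

    public void shutdown() throws InterruptedException {
        workerExecutor.shutdown();
        workerExecutor.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncStartSketch herder = new AsyncStartSketch();
        CountDownLatch started = new CountDownLatch(1);
        // A "slow" connector start() no longer stalls the caller.
        herder.startConnector(() -> {
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            started.countDown();
        });
        System.out.println("control thread returned immediately");
        started.await();                    // the connector still starts eventually
        herder.shutdown();
        System.out.println("connector started");
    }
}
```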





[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983248#comment-14983248
 ] 

Ismael Juma commented on KAFKA-2702:


For SSL and SASL configs, a default of null can be used for all optional 
configs to fit within the original design of the library.

Personally, I'd prefer if the library supported optional types in a way where 
one could leverage the type system (i.e. Option and Optional), but that's a 
bigger change and needs to be carefully considered. Relying on each config to 
follow a convention in order to choose the right default for the optional value 
for a given type is error-prone and leads to inconsistencies (-1, null, empty 
list, false, etc.).
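The Optional-based alternative mentioned above can be sketched as follows. `OptionalConfigSketch` is a hypothetical toy, not the actual ConfigDef API; the point is that absence is represented uniformly as `Optional.empty()` rather than a per-type sentinel:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of the Optional-based approach: an absent config is uniformly
// Optional.empty() instead of a per-type sentinel (-1, null, empty list,
// false). Not the actual ConfigDef API.
public class OptionalConfigSketch {
    private final Map<String, Object> values = new HashMap<>();

    public void set(String key, Object value) {
        values.put(key, value);
    }

    // One lookup method works for every type; callers decide the fallback.
    public <T> Optional<T> get(String key, Class<T> type) {
        return Optional.ofNullable(values.get(key)).map(type::cast);
    }

    public static void main(String[] args) {
        OptionalConfigSketch config = new OptionalConfigSketch();
        config.set("ssl.keystore.location", "/etc/kafka/keystore.jks");

        // Present value
        System.out.println(config.get("ssl.keystore.location", String.class).orElse("<unset>"));
        // Absent value: no per-type sentinel needed, just a uniform fallback
        System.out.println(config.get("sasl.kerberos.service.name", String.class).orElse("<unset>"));
    }
}
```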

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.





[GitHub] kafka pull request: MINOR: Removed previous system_test folder

2015-10-30 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/392

MINOR: Removed previous system_test folder

@ewencp Nothing too complicated here

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka minor-remove-system-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/392.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #392


commit 81df4b946721dea99d401e7a7038b8345ea0411c
Author: Geoff Anderson 
Date:   2015-10-30T21:11:53Z

Removed previous system_test folder




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2714) Add integration tests for exceptional cases in Fetching for new consumer

2015-10-30 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-2714:
---

 Summary: Add integration tests for exceptional cases in Fetching 
for new consumer
 Key: KAFKA-2714
 URL: https://issues.apache.org/jira/browse/KAFKA-2714
 Project: Kafka
  Issue Type: Test
Reporter: Anna Povzner
Assignee: Anna Povzner


We currently don't have integration tests for exceptional cases in fetches for 
new consumer. This ticket is to create the following test scenarios:
1. When reset policy is NONE, verify that NoOffsetForPartitionException is 
thrown if no initial position is set.
2. When reset policy is NONE, verify that OffsetOutOfRange is thrown if you 
seek out of range.
3. Verify RecordTooLargeException is thrown if a message is too large for the 
configured fetch size.
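The first scenario can be modeled in miniature: with reset policy NONE, asking for a fetch position that was never set should fail loudly rather than silently resetting. `ResetPolicySketch`, `OffsetResetPolicy`, and `positionFor` are illustrative names, not the KafkaConsumer API (which throws NoOffsetForPartitionException in this case):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Toy model of scenario 1 above: reset policy NONE means "never guess an
// offset" - a lookup with no prior position must throw instead of resetting.
public class ResetPolicySketch {
    enum OffsetResetPolicy { EARLIEST, LATEST, NONE }

    private final Map<String, Long> positions = new HashMap<>();
    private final OffsetResetPolicy policy;

    ResetPolicySketch(OffsetResetPolicy policy) { this.policy = policy; }

    void seek(String partition, long offset) { positions.put(partition, offset); }

    long positionFor(String partition) {
        Long pos = positions.get(partition);
        if (pos != null) return pos;
        switch (policy) {
            case EARLIEST: return 0L;
            case LATEST:   return Long.MAX_VALUE;
            default:
                // mirrors NoOffsetForPartitionException in spirit
                throw new NoSuchElementException("no offset for " + partition);
        }
    }

    public static void main(String[] args) {
        ResetPolicySketch consumer = new ResetPolicySketch(OffsetResetPolicy.NONE);
        consumer.seek("topic-0", 42L);
        System.out.println(consumer.positionFor("topic-0")); // 42
        try {
            consumer.positionFor("topic-1");                 // never positioned
        } catch (NoSuchElementException e) {
            System.out.println("threw as expected: " + e.getMessage());
        }
    }
}
```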





[GitHub] kafka pull request: KAFKA-2714: Added integration tests for except...

2015-10-30 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/384




[jira] [Commented] (KAFKA-2714) Add integration tests for exceptional cases in Fetching for new consumer

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983348#comment-14983348
 ] 

ASF GitHub Bot commented on KAFKA-2714:
---

Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/384


> Add integration tests for exceptional cases in Fetching for new consumer
> 
>
> Key: KAFKA-2714
> URL: https://issues.apache.org/jira/browse/KAFKA-2714
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> We currently don't have integration tests for exceptional cases in fetches 
> for new consumer. This ticket is to create the following test scenarios:
> 1. When reset policy is NONE, verify that NoOffsetForPartitionException is 
> thrown if no initial position is set.
> 2. When reset policy is NONE, verify that OffsetOutOfRange is thrown if you 
> seek out of range.
> 3. Verify RecordTooLargeException is thrown if a message is too large for the 
> configured fetch size.





[GitHub] kafka pull request: KAFKA-2714: Added integration tests for except...

2015-10-30 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/393

KAFKA-2714: Added integration tests for exceptional cases in fetching



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-84

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/393.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #393


commit 4f175d813270ae2943dd4466c51bedbf11e819ea
Author: Anna Povzner 
Date:   2015-10-29T22:21:01Z

MINOR: Added integration tests for exceptional cases in fetching

commit 6aa6aaf4b4f3ebb4f101fc0a5088195d52a2a0ba
Author: Anna Povzner 
Date:   2015-10-30T00:00:50Z

MINOR: Checking correct values in exceptions thrown in integration tests 
for exceptional cases in fetching






[jira] [Commented] (KAFKA-2714) Add integration tests for exceptional cases in Fetching for new consumer

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983358#comment-14983358
 ] 

ASF GitHub Bot commented on KAFKA-2714:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/393

KAFKA-2714: Added integration tests for exceptional cases in fetching



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-84

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/393.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #393


commit 4f175d813270ae2943dd4466c51bedbf11e819ea
Author: Anna Povzner 
Date:   2015-10-29T22:21:01Z

MINOR: Added integration tests for exceptional cases in fetching

commit 6aa6aaf4b4f3ebb4f101fc0a5088195d52a2a0ba
Author: Anna Povzner 
Date:   2015-10-30T00:00:50Z

MINOR: Checking correct values in exceptions thrown in integration tests 
for exceptional cases in fetching




> Add integration tests for exceptional cases in Fetching for new consumer
> 
>
> Key: KAFKA-2714
> URL: https://issues.apache.org/jira/browse/KAFKA-2714
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> We currently don't have integration tests for exceptional cases in fetches 
> for new consumer. This ticket is to create the following test scenarios:
> 1. When reset policy is NONE, verify that NoOffsetForPartitionException is 
> thrown if no initial position is set.
> 2. When reset policy is NONE, verify that OffsetOutOfRange is thrown if you 
> seek out of range.
> 3. Verify RecordTooLargeException is thrown if a message is too large for the 
> configured fetch size.





[jira] [Work started] (KAFKA-2714) Add integration tests for exceptional cases in Fetching for new consumer

2015-10-30 Thread Anna Povzner (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2714 started by Anna Povzner.
---
> Add integration tests for exceptional cases in Fetching for new consumer
> 
>
> Key: KAFKA-2714
> URL: https://issues.apache.org/jira/browse/KAFKA-2714
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> We currently don't have integration tests for exceptional cases in fetches 
> for new consumer. This ticket is to create the following test scenarios:
> 1. When reset policy is NONE, verify that NoOffsetForPartitionException is 
> thrown if no initial position is set.
> 2. When reset policy is NONE, verify that OffsetOutOfRange is thrown if you 
> seek out of range.
> 3. Verify RecordTooLargeException is thrown if a message is too large for the 
> configured fetch size.





[jira] [Commented] (KAFKA-1694) kafka command line and centralized operations

2015-10-30 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983366#comment-14983366
 ] 

Grant Henke commented on KAFKA-1694:


[~abiletskyi], are you still able to work on this? I would like to pick the 
work up if not. I am looking forward to getting this functionality into Kafka 
as soon as possible.

> kafka command line and centralized operations
> -
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
>Priority: Critical
> Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, 
> KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
> KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
> KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, 
> KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements





[GitHub] kafka pull request: KAFKA-2714: Added integration tests for except...

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/393




[jira] [Resolved] (KAFKA-2714) Add integration tests for exceptional cases in Fetching for new consumer

2015-10-30 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2714.
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 393
[https://github.com/apache/kafka/pull/393]

> Add integration tests for exceptional cases in Fetching for new consumer
> 
>
> Key: KAFKA-2714
> URL: https://issues.apache.org/jira/browse/KAFKA-2714
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.9.0.0
>
>
> We currently don't have integration tests for exceptional cases in fetches 
> for new consumer. This ticket is to create the following test scenarios:
> 1. When reset policy is NONE, verify that NoOffsetForPartitionException is 
> thrown if no initial position is set.
> 2. When reset policy is NONE, verify that OffsetOutOfRange is thrown if you 
> seek out of range.
> 3. Verify RecordTooLargeException is thrown if a message is too large for the 
> configured fetch size.





[jira] [Commented] (KAFKA-2714) Add integration tests for exceptional cases in Fetching for new consumer

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983409#comment-14983409
 ] 

ASF GitHub Bot commented on KAFKA-2714:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/393


> Add integration tests for exceptional cases in Fetching for new consumer
> 
>
> Key: KAFKA-2714
> URL: https://issues.apache.org/jira/browse/KAFKA-2714
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.9.0.0
>
>
> We currently don't have integration tests for exceptional cases in fetches 
> for new consumer. This ticket is to create the following test scenarios:
> 1. When reset policy is NONE, verify that NoOffsetForPartitionException is 
> thrown if no initial position is set.
> 2. When reset policy is NONE, verify that OffsetOutOfRange is thrown if you 
> seek out of range.
> 3. Verify RecordTooLargeException is thrown if a message is too large for the 
> configured fetch size.





[jira] [Commented] (KAFKA-2369) Add Copycat REST API

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983430#comment-14983430
 ] 

ASF GitHub Bot commented on KAFKA-2369:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/378


> Add Copycat REST API
> 
>
> Key: KAFKA-2369
> URL: https://issues.apache.org/jira/browse/KAFKA-2369
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add a REST API for Copycat. At a minimum, for a single worker this should 
> support:
> * add/remove connector
> * connector status
> * task status
> * worker status
> In distributed mode this should handle forwarding if necessary, but it may 
> make sense to defer the distributed support for a later JIRA.
> This will require the addition of new dependencies to support implementing 
> the REST API.
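The minimum operations listed above map naturally onto HTTP calls; a sketch against a hypothetical local worker (the worker URL, connector name, config keys, and exact endpoint paths below are assumptions for illustration, not taken from the ticket):

```shell
# Sketch of driving the minimum API surface with curl; the worker URL,
# connector name, and config keys are illustrative assumptions.
WORKER=http://localhost:8083
NAME=local-file-source
PAYLOAD=$(printf '{"name":"%s","config":{"connector.class":"FileStreamSource","topic":"test"}}' "$NAME")
echo "$PAYLOAD"
# add a connector:
#   curl -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" "$WORKER/connectors"
# connector / task status:
#   curl "$WORKER/connectors/$NAME/status"
# remove a connector:
#   curl -X DELETE "$WORKER/connectors/$NAME"
```

The curl calls are commented out because they require a running worker; the payload construction itself is self-contained.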





[GitHub] kafka pull request: KAFKA-2369: Add REST API for Copycat.

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/378




[jira] [Resolved] (KAFKA-2369) Add Copycat REST API

2015-10-30 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2369.
-
Resolution: Fixed

Issue resolved by pull request 378
[https://github.com/apache/kafka/pull/378]

> Add Copycat REST API
> 
>
> Key: KAFKA-2369
> URL: https://issues.apache.org/jira/browse/KAFKA-2369
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add a REST API for Copycat. At a minimum, for a single worker this should 
> support:
> * add/remove connector
> * connector status
> * task status
> * worker status
> In distributed mode this should handle forwarding if necessary, but it may 
> make sense to defer the distributed support for a later JIRA.
> This will require the addition of new dependencies to support implementing 
> the REST API.





[jira] [Created] (KAFKA-2715) Remove the previous system test folder

2015-10-30 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-2715:
-

 Summary: Remove the previous system test folder
 Key: KAFKA-2715
 URL: https://issues.apache.org/jira/browse/KAFKA-2715
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Geoff Anderson
Assignee: Geoff Anderson


As part of KAFKA-25, we want to remove the existing system tests





[GitHub] kafka pull request: KAFKA-2715: Removed previous system_test folde...

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/392




[jira] [Resolved] (KAFKA-2715) Remove the previous system test folder

2015-10-30 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2715.
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 392
[https://github.com/apache/kafka/pull/392]

> Remove the previous system test folder
> --
>
> Key: KAFKA-2715
> URL: https://issues.apache.org/jira/browse/KAFKA-2715
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>  Labels: test
> Fix For: 0.9.0.0
>
>
> As part of KAFKA-25, we want to remove the existing system tests





[jira] [Commented] (KAFKA-2715) Remove the previous system test folder

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983469#comment-14983469
 ] 

ASF GitHub Bot commented on KAFKA-2715:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/392


> Remove the previous system test folder
> --
>
> Key: KAFKA-2715
> URL: https://issues.apache.org/jira/browse/KAFKA-2715
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>  Labels: test
> Fix For: 0.9.0.0
>
>
> As part of KAFKA-25, we want to remove the existing system tests





Build failed in Jenkins: kafka-trunk-jdk8 #79

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2714: Added integration tests for exceptional cases in fetching

[cshapi] KAFKA-2369: Add REST API for Copycat.

[cshapi] KAFKA-2715: Removed previous system_test folder

--
[...truncated 233 lines...]
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:380:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:75:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:194:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  maybeSetDefaultProperty(producerProps, 
ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
16 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:core:processResources UP-TO-DATE
:core:classes
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:core:copyDependantLibs UP-TO-DATE
:core:jar UP-TO-DATE
:examples:compileJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
1 warning

:examples:processResources UP-TO-DATE
:examples:classes
:examples:jar
:streams:compileJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:streams:processResources UP-TO-DATE
:streams:classes
:clients:javadoc:1166:
 warning - T

[jira] [Updated] (KAFKA-1763) validate_index_log in system tests runs remotely but uses local paths

2015-10-30 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-1763:
-
Resolution: Invalid
Status: Resolved  (was: Patch Available)

No longer valid since the old system tests have been removed.

> validate_index_log in system tests runs remotely but uses local paths
> -
>
> Key: KAFKA-1763
> URL: https://issues.apache.org/jira/browse/KAFKA-1763
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1763.patch
>
>
> validate_index_log is the only validation step in the system tests that needs 
> to execute a Kafka binary and it's currently doing so remotely, like the rest 
> of the test binaries. However, this is probably incorrect since it looks like 
> logs are synced back to the driver host and in other cases are operated on 
> locally. It looks like validate_index_log mixes up local/remote paths, 
> causing an exception in DumpLogSegments:
> {quote}
> 2014-11-10 12:09:57,665 - DEBUG - executing command [ssh vagrant@worker1 -o 
> 'HostName 127.0.0.1' -o 'Port ' -o 'UserKnownHostsFile /dev/null' -o 
> 'StrictHostKeyChecking no' -o 'PasswordAuthentication no' -o 'IdentityFile 
> /Users/ewencp/.vagrant.d/insecure_private_key' -o 'IdentitiesOnly yes' -o 
> 'LogLevel FATAL'  '/opt/kafka/bin/kafka-run-class.sh 
> kafka.tools.DumpLogSegments  --file 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.index
>  --verify-index-only 2>&1'] (system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - Dumping 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.index
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - Exception in thread "main" 
> java.io.FileNotFoundException: 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.log
>  (No such file or directory) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at java.io.FileInputStream.open(Native 
> Method) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> java.io.FileInputStream.(FileInputStream.java:146) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.utils.Utils$.openChannel(Utils.scala:162) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.log.FileMessageSet.(FileMessageSet.scala:74) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.tools.DumpLogSegments$.kafka$tools$DumpLogSegments$$dumpIndex(DumpLogSegments.scala:108)
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.tools.DumpLogSegments$$anonfun$main$1.apply(DumpLogSegments.scala:80) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments$$anonfun$main$1.apply(DumpLogSegments.scala:73) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments$.main(DumpLogSegments.scala:73) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments.main(DumpLogSegments.scala) 
> (kafka_system_test_utils)
> {quote}





[jira] [Resolved] (KAFKA-1771) replicate_testsuite data verification broken if num_partitions > replica_factor

2015-10-30 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-1771.
--
Resolution: Invalid

No longer valid since the old system tests have been removed.

> replicate_testsuite data verification broken if num_partitions > 
> replica_factor
> ---
>
> Key: KAFKA-1771
> URL: https://issues.apache.org/jira/browse/KAFKA-1771
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: kafka-1771.wip.patch
>
>
> As discussed in KAFKA-1763,   testcase_0131,  testcase_0132, and 
> testcase_0133 currently fail with an exception:
> {quote}
> Traceback (most recent call last):
> File
> "/mnt/u001/kafka_replication_system_test/system_test/replication_testsuite/
> replica_basic_test.py", line 434, in runTest
> kafka_system_test_utils.validate_simple_consumer_data_matched_across_replic
> as(self.systemTestEnv, self.testcaseEnv)
> File
> "/mnt/u001/kafka_replication_system_test/system_test/utils/kafka_system_tes
> t_utils.py", line 2223, in
> validate_simple_consumer_data_matched_across_replicas
> replicaIdxMsgIdList[replicaIdx - 1][topicPartition] = consumerMsgIdList
> IndexError: list index out of range
> {quote}
> The root cause seems to be kafka_system_test_utils.start_simple_consumer. The 
> current logic seems incorrect. It should be generating one consumer per 
> partition per replica so it can verify the data from all sources, but it 
> currently has a loop involving the list of brokers, where that loop variable 
> isn't even used.
> But probably a bigger issue is that it's generating multiple processes in the 
> background. It records pids to the single well-known entity pid path, which 
> means only the last pid is saved and we could easily leave zombie processes 
> if one of them hangs for some reason.
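The fix direction described above (one consumer per partition per replica, each pid recorded in its own file) can be sketched as follows; `start_consumer` is a stand-in for the real launcher, and the replica/partition counts are illustrative:

```shell
# One consumer per (replica, partition), each with its own pid file,
# instead of one loop over brokers with an unused loop variable and a
# single shared pid path.
mkdir -p pids
start_consumer() { :; }   # stand-in for the real consumer launcher
num_replicas=3
num_partitions=6
for replica in $(seq 1 "$num_replicas"); do
  for partition in $(seq 0 $((num_partitions - 1))); do
    start_consumer "$replica" "$partition" &
    echo $! > "pids/consumer-r${replica}-p${partition}.pid"
  done
done
wait
ls pids | wc -l
```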





[jira] [Resolved] (KAFKA-1734) System test metric plotting nonexistent file warnings

2015-10-30 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-1734.
--
Resolution: Invalid

No longer valid since the old system tests have been removed.

> System test metric plotting nonexistent file warnings
> -
>
> Key: KAFKA-1734
> URL: https://issues.apache.org/jira/browse/KAFKA-1734
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew Olson
>Priority: Minor
>
> Running the system tests (trunk code), there are many "The file ... does not 
> exist for plotting (metrics)" warning messages, for example,
> {noformat}
> 2014-10-27 14:47:58,478 - WARNING - The file 
> /opt/kafka/system_test/replication_testsuite/testcase_0007/logs/broker-3/metrics/kafka.network.RequestMetrics.Produce-RemoteTimeMs.csv
>  does not exist for plotting (metrics)
> {noformat}
> Looks like the generated metric file names only include the last part of the 
> metric, e.g. "Produce-RemoteTimeMs.csv" not 
> "kafka.network.RequestMetrics.Produce-RemoteTimeMs.csv".
> {noformat}
> $ ls 
> /opt/kafka/system_test/replication_testsuite/testcase_0007/logs/broker-3/metrics/*Produce*
> /opt/kafka/system_test/replication_testsuite/testcase_0007/logs/broker-3/metrics/Produce-RemoteTimeMs.csv
> {noformat}





[GitHub] kafka pull request: KAFKA-2562: update kafka scripts to use new to...

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/242




[jira] [Commented] (KAFKA-2562) check Kafka scripts for 0.9.0.0

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983503#comment-14983503
 ] 

ASF GitHub Bot commented on KAFKA-2562:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/242


> check Kafka scripts for 0.9.0.0
> ---
>
> Key: KAFKA-2562
> URL: https://issues.apache.org/jira/browse/KAFKA-2562
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to make a pass to make sure all scripts in bin/ are up to date for 
> 0.9.0.0. For example, bin/kafka-producer-perf-test.sh currently still points 
> to kafka.tools.ProducerPerformance and it should be changed to 
> org.apache.kafka.clients.tools.ProducerPerformance.
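The per-script fix described above amounts to rewriting the old tool class name; a sketch against a stand-in script (the script body is an assumption, and `sed -i` is used in its GNU form):

```shell
# Rewrite the old tool class name to the new one in a stand-in script,
# as the ticket describes for bin/kafka-producer-perf-test.sh.
tmp=$(mktemp)
echo 'exec $(dirname $0)/kafka-run-class.sh kafka.tools.ProducerPerformance "$@"' > "$tmp"
sed -i 's/kafka\.tools\.ProducerPerformance/org.apache.kafka.clients.tools.ProducerPerformance/' "$tmp"
cat "$tmp"
```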





[jira] [Updated] (KAFKA-2562) check Kafka scripts for 0.9.0.0

2015-10-30 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2562:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 242
[https://github.com/apache/kafka/pull/242]

> check Kafka scripts for 0.9.0.0
> ---
>
> Key: KAFKA-2562
> URL: https://issues.apache.org/jira/browse/KAFKA-2562
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to make a pass to make sure all scripts in bin/ are up to date for 
> 0.9.0.0. For example, bin/kafka-producer-perf-test.sh currently still points 
> to kafka.tools.ProducerPerformance and it should be changed to 
> org.apache.kafka.clients.tools.ProducerPerformance.





[jira] [Commented] (KAFKA-2691) Improve handling of authorization failure during metadata refresh

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983605#comment-14983605
 ] 

ASF GitHub Bot commented on KAFKA-2691:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/394

KAFKA-2691: Improve handling of authorization failure during metadata 
refresh



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2691

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #394


commit 722980095636f8d0f633aa1f40a6aa5736facdbe
Author: Jason Gustafson 
Date:   2015-10-30T23:40:00Z

KAFKA-2691: Improve handling of authorization failure during metadata 
refresh




> Improve handling of authorization failure during metadata refresh
> -
>
> Key: KAFKA-2691
> URL: https://issues.apache.org/jira/browse/KAFKA-2691
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> There are two problems, one more severe than the other:
> 1. The consumer blocks indefinitely if there is non-transient authorization 
> failure during metadata refresh due to KAFKA-2391
> 2. We get a TimeoutException instead of an AuthorizationException in the 
> producer for the same case
> If the fix for KAFKA-2391 is to add a timeout, then we will have issue `2` in 
> both producer and consumer.





[GitHub] kafka pull request: KAFKA-2691: Improve handling of authorization ...

2015-10-30 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/394

KAFKA-2691: Improve handling of authorization failure during metadata 
refresh



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2691

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #394


commit 722980095636f8d0f633aa1f40a6aa5736facdbe
Author: Jason Gustafson 
Date:   2015-10-30T23:40:00Z

KAFKA-2691: Improve handling of authorization failure during metadata 
refresh






[GitHub] kafka pull request: HOTFIX: log4j-appender not getting built

2015-10-30 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/395

HOTFIX: log4j-appender not getting built



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka HOTFIX-Log4jAppender

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/395.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #395


commit ad5f992a9a5f18e2d4286f9f10735d3e179a9c26
Author: Ashish Singh 
Date:   2015-10-30T23:46:20Z

HOTFIX: log4j-appender not getting built






[GitHub] kafka pull request: MINOR: Remove unreachable if check

2015-10-30 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/396

MINOR: Remove unreachable if check

@gwenshap @granders could you guys take a look at this trivial change. This 
piece makes one think that for SSL, not passing `new_consumer=True` should be 
fine. It was fine until recently.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka MinorTestChange

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/396.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #396


commit 7d7c9d339e6e98e73610c339960c55e42987c9c9
Author: Ashish Singh 
Date:   2015-10-31T00:05:00Z

MINOR: Remove unreachable if check






Build failed in Jenkins: kafka-trunk-jdk8 #80

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2562: update kafka scripts to use new tools/code

--
[...truncated 3956 lines...]

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOf

[GitHub] kafka pull request: HOTFIX: log4j-appender not getting built

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/395




Jenkins build is back to normal : kafka-trunk-jdk7 #740

2015-10-30 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #81

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[cshapi] HOTFIX: log4j-appender not getting built

--
[...truncated 147 lines...]
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:264:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:380:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:75:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:194:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  maybeSetDefaultProperty(producerProps, 
ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
^
:234:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
Console.readLine().equalsIgnoreCase("y")
^
:353:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
if (!Console.readLine().equalsIgnoreCase("y")) {
 ^
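
The scalac output above is dominated by deprecation warnings for uses of `OffsetCommitRequest.DEFAULT_TIMESTAMP` and older `ProducerConfig` keys. As an illustrative sketch (hypothetical names, not Kafka code), javac raises the same class of warning when a member marked `@Deprecated` is referenced:

```java
// Hypothetical sketch, not Kafka code: call sites of @Deprecated members
// produce "uses or overrides a deprecated API" warnings at compile time,
// mirroring the scalac warnings in the log above.
class OffsetDefaults {
    /** @deprecated retained only for wire-format compatibility. */
    @Deprecated
    static final long DEFAULT_TIMESTAMP = -1L;
}

public class Demo {
    public static void main(String[] args) {
        @SuppressWarnings("deprecation") // or migrate to the replacement API
        long ts = OffsetDefaults.DEFAULT_TIMESTAMP;
        System.out.println(ts); // prints -1
    }
}
```

Compiling without the suppression reproduces the warning; the usual fix is migrating call sites to whatever replacement the Javadoc names rather than suppressing.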


Build failed in Jenkins: kafka-trunk-jdk7 #741

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[cshapi] HOTFIX: log4j-appender not getting built

--
[...truncated 481 lines...]
 ^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warnings; re-run with -feature for details
18 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:log4j-appender:javadoc UP-TO-DATE
:kafka-trunk-jdk7:core:javadoc
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warnings; re-run with -feature 
for details
[ant:scaladoc] 
:28:
 warning: Could not find any member to link for "NoReplicaOnlineException".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1160:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:1334:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1293:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:490:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:455:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:1276:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1250:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1438:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1415:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:274:
 warning: Could not
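
The repeated "a pure expression does nothing in statement position" warnings above flag bare references such as `ControllerStats.uncleanLeaderElectionRate`; a plausible reading is that the reference exists only for its side effect of forcing object initialisation (metric registration). Java has no direct analogue of the Scala warning, but the initialisation side effect can be sketched (hypothetical class, not Kafka code):

```java
// Hypothetical sketch: reading a non-constant static field forces the owning
// class to initialise, running its static block as a side effect.
class StatsSketch {
    static boolean registered;
    static { registered = true; } // stands in for metric registration
    static long uncleanLeaderElectionRate; // non-final, so a read triggers init
}

public class Demo {
    public static void main(String[] args) {
        long unused = StatsSketch.uncleanLeaderElectionRate; // touch to force init
        System.out.println(StatsSketch.registered); // prints true
    }
}
```

If that is indeed why the bare references exist, the warning is cosmetic; assigning the value to an unused local (as above) is one way to silence it.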

[jira] [Created] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-10-30 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2716:
-

 Summary: Make Kafka core not depend on log4j-appender
 Key: KAFKA-2716
 URL: https://issues.apache.org/jira/browse/KAFKA-2716
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh


Investigate why core needs to depend on log4j-appender. AFAIK, there is no real
dependency; however, if the dependency is removed, tests won't build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-10-30 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983702#comment-14983702
 ] 

Ashish K Singh commented on KAFKA-2716:
---

[~ijuma] let's keep the discussion here, so that it is not lost in some closed
PR.

> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no
> real dependency; however, if the dependency is removed, tests won't build.





[GitHub] kafka pull request: KAFKA-2574: Add ducktape based ssl test for Ka...

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/319




[jira] [Commented] (KAFKA-2574) Add ducktape based ssl test for KafkaLog4jAppender

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983706#comment-14983706
 ] 

ASF GitHub Bot commented on KAFKA-2574:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/319


> Add ducktape based ssl test for KafkaLog4jAppender
> --
>
> Key: KAFKA-2574
> URL: https://issues.apache.org/jira/browse/KAFKA-2574
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> KAFKA-2447 adds support for using SSL in KafkaLog4jAppender and KAFKA-2531 
> adds basic ducktape based tests for KafkaLog4jAppender. This should add 
> ducktape based ssl tests for KafkaLog4jAppender.





[jira] [Resolved] (KAFKA-2574) Add ducktape based ssl test for KafkaLog4jAppender

2015-10-30 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2574.
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 319
[https://github.com/apache/kafka/pull/319]

> Add ducktape based ssl test for KafkaLog4jAppender
> --
>
> Key: KAFKA-2574
> URL: https://issues.apache.org/jira/browse/KAFKA-2574
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> KAFKA-2447 adds support for using SSL in KafkaLog4jAppender and KAFKA-2531 
> adds basic ducktape based tests for KafkaLog4jAppender. This should add 
> ducktape based ssl tests for KafkaLog4jAppender.





[GitHub] kafka pull request: KAFKA-2660; Correct cleanableRatio calculation

2015-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/316




[jira] [Commented] (KAFKA-2660) Correct cleanableRatio calculation

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983725#comment-14983725
 ] 

ASF GitHub Bot commented on KAFKA-2660:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/316


> Correct cleanableRatio calculation
> --
>
> Key: KAFKA-2660
> URL: https://issues.apache.org/jira/browse/KAFKA-2660
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 0.9.0.0
>
>
> There is a bug in LogToClean that causes cleanableRatio to be over-estimated.





[jira] [Resolved] (KAFKA-2660) Correct cleanableRatio calculation

2015-10-30 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2660.

Resolution: Fixed

Issue resolved by pull request 316
[https://github.com/apache/kafka/pull/316]

> Correct cleanableRatio calculation
> --
>
> Key: KAFKA-2660
> URL: https://issues.apache.org/jira/browse/KAFKA-2660
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 0.9.0.0
>
>
> There is a bug in LogToClean that causes cleanableRatio to be over-estimated.





Build failed in Jenkins: kafka-trunk-jdk8 #82

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2574: Add ducktape based ssl test for KafkaLog4jAppender

--
[...truncated 419 lines...]
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala UP-TO-DATE
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:javadoc
cache fileHashes.bin 
(/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/.gradle/2.8/taskArtifacts/fileHashes.bin)
 is corrupt. Discarding.
:kafka-trunk-jdk8:core:javadoc
:kafka-trunk-jdk8:core:javadocJar
:kafka-trunk-jdk8:core:scaladoc
[ant:scaladoc] Element 
'/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/build/resources/main'
 does not exist.
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaServer.scala:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaServer.scala:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,


[GitHub] kafka pull request: MINOR: getRootLogger() should be accessed in s...

2015-10-30 Thread vesense
GitHub user vesense opened a pull request:

https://github.com/apache/kafka/pull/397

MINOR: getRootLogger() should be accessed in static way



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/397.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #397


commit 1befbb5b368e407aa0886787f8f06f5a22c66bc1
Author: Xin Wang 
Date:   2015-10-31T01:30:36Z

getRootLogger() should be accessed in static way
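
The fix this PR title describes is a static-member access style issue: in log4j 1.x, `Logger.getRootLogger()` is a static method, and calling it through an instance reference compiles but obscures that no instance state is involved (IDE and Checkstyle lints flag it). A dependency-free sketch of the same pattern (hypothetical class, not the actual PR diff):

```java
// Hypothetical sketch of the "static member accessed via instance" smell:
// both calls resolve to the same static method, but the class-qualified
// form makes that explicit.
class LoggerSketch {
    static String getRootLogger() { return "ROOT"; }
}

public class Demo {
    public static void main(String[] args) {
        LoggerSketch instance = new LoggerSketch();
        String viaInstance = instance.getRootLogger(); // compiles, but lint-flagged
        String viaClass = LoggerSketch.getRootLogger(); // preferred static access
        System.out.println(viaInstance.equals(viaClass)); // prints true
    }
}
```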






Build failed in Jenkins: kafka-trunk-jdk7 #742

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2574: Add ducktape based ssl test for KafkaLog4jAppender

[junrao] KAFKA-2660; Correct cleanableRatio calculation

--
[...truncated 329 lines...]
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala UP-TO-DATE
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:javadoc
:kafka-trunk-jdk7:core:javadoc
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk7:core:scaladocJar
:kafka-trunk-jdk7:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,
   

Build failed in Jenkins: kafka-trunk-jdk8 #83

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2660; Correct cleanableRatio calculation

--
[...truncated 365 lines...]
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala UP-TO-DATE
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:javadoc
:kafka-trunk-jdk8:core:javadoc
:kafka-trunk-jdk8:core:javadocJar
:kafka-trunk-jdk8:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^


[jira] [Commented] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-10-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983837#comment-14983837
 ] 

Ismael Juma commented on KAFKA-2716:


It's worth saying that we can't change this after the release without 
potentially breaking users of the appender who get it via a dependency on core.

> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no
> real dependency; however, if the dependency is removed, tests won't build.





[jira] [Commented] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-10-30 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983838#comment-14983838
 ] 

Gwen Shapira commented on KAFKA-2716:
-

Log4j Appender was in core in 0.8.2; we moved it out so users can use the log4j
appender without having to carry the rest of Kafka along.

Perhaps this is why the dependency exists: to avoid breaking existing usage?

> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no
> real dependency; however, if the dependency is removed, tests won't build.





[jira] [Commented] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-10-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983842#comment-14983842
 ] 

Ismael Juma commented on KAFKA-2716:


That is a plausible explanation [~gwenshap] (assuming we kept the package and 
class name the same). In that case, maybe there is nothing to be done.

> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no
> real dependency; however, if the dependency is removed, tests won't build.


