Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-12-08 Thread Jan Filipiak

Hi,

sorry for the late reply, busy times :-/

Let me ask you one thing. Since the timeout
argument seems to be settled, I see no further argument
from your side except "I don't want to".

Can you see that connections.max.idle.ms is exactly the time
that expresses "we expect the client to be away for this long,
and come back and continue"?

I also clarified some points inline.

Best Jan




On 05.12.2017 23:14, Colin McCabe wrote:

On Tue, Dec 5, 2017, at 13:13, Jan Filipiak wrote:

Hi Colin

Addressing the topic of how to manage slots from the other thread.
With tcp connections all this comes for free essentially.

Hi Jan,

I don't think that it's accurate to say that cache management "comes for
free" by coupling the incremental fetch session with the TCP session.
When a new TCP session is started by a fetch request, you still have to
decide whether to grant that request an incremental fetch session or
not.  If your answer is that you always grant the request, I would argue
that you do not have cache management.

First, I would say the client has a big say in this. If the client
is not going to issue incremental fetches, it shouldn't ask for a cache;
when the client asks for the cache, we still have every option to deny it.



I guess you could argue that timeouts are cache management, but I don't
find that argument persuasive.  Anyone could just create a lot of TCP
sessions and use a lot of resources, in that case.  So there is
essentially no limit on memory use.  In any case, TCP sessions don't
help us implement fetch session timeouts.

We still have every option to deny the request and keep the state.
What you want seems like a max-connections/IP safeguard.
I can currently take down a broker easily with too many connections.



I would still argue we disable it by default: add a flag in the
broker to ask the leader to maintain the cache while replicating, and also
make it optional in consumers (defaulting to off) so one can turn it on
where it really hurts, most prominently MirrorMaker and audit consumers.

I agree with Jason's point from earlier in the thread.  Adding extra
configuration knobs that aren't really necessary can harm usability.
Certainly asking people to manually turn on a feature "where it really
hurts" seems to fall in that category, when we could easily enable it
automatically for them.

This doesn't make much sense to me. You also wanted to implement
a "turn off in case of bug" knob. Having the client indicate whether the
feature will be used seems reasonable to me.



Otherwise I left a few remarks in-line, which should help to understand
my view of the situation better

Best Jan


On 05.12.2017 08:06, Colin McCabe wrote:

On Mon, Dec 4, 2017, at 02:27, Jan Filipiak wrote:

On 03.12.2017 21:55, Colin McCabe wrote:

On Sat, Dec 2, 2017, at 23:21, Becket Qin wrote:

Thanks for the explanation, Colin. A few more questions.


The session epoch is not complex.  It's just a number which increments
on each incremental fetch.  The session epoch is also useful for
debugging-- it allows you to match up requests and responses when
looking at log files.
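
To make the bookkeeping concrete, here is a minimal sketch (illustrative names and classes, not the KIP's actual code) of a per-session epoch that is checked and advanced on every incremental fetch:

{code}
// Sketch only: one expected epoch per fetch session.
public class FetchSession {
    private final int sessionId;
    private int epoch = 0;

    public FetchSession(int sessionId) {
        this.sessionId = sessionId;
    }

    /** Accepts the request only if it carries the expected epoch,
     *  then advances the epoch for the next incremental fetch. */
    public synchronized boolean checkAndAdvance(int requestEpoch) {
        if (requestEpoch != epoch) {
            return false; // stale, duplicated, or out-of-order request
        }
        epoch++;
        return true;
    }
}
{code}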

Currently each request in Kafka has a correlation id to help match the
requests and responses. Is epoch doing something differently?

Hi Becket,

The correlation ID is used within a single TCP session, to uniquely
associate a request with a response.  The correlation ID is not unique
(and has no meaning) outside the context of that single TCP session.

Keep in mind, NetworkClient is in charge of TCP sessions, and generally
tries to hide that information from the upper layers of the code.  So
when you submit a request to NetworkClient, you don't know if that
request creates a TCP session, or reuses an existing one.
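
As a rough illustration of that contract (a sketch with made-up names, not NetworkClient's actual code), the ids only need to be unique within the one connection that issued them:

{code}
import java.util.HashMap;
import java.util.Map;

// Sketch only: correlation ids pair a response with its request
// within a single TCP session; they carry no meaning outside it.
public class CorrelationTracker {
    private int nextCorrelationId = 0;
    private final Map<Integer, String> inFlight = new HashMap<>();

    /** Stamp an outgoing request and remember it until the response arrives. */
    public int register(String requestDescription) {
        int id = nextCorrelationId++;
        inFlight.put(id, requestDescription);
        return id;
    }

    /** Match an incoming response back to the request that produced it. */
    public String complete(int correlationId) {
        return inFlight.remove(correlationId);
    }
}
{code}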

Unfortunately, this doesn't work.  Imagine the client misses an
incremental fetch response about a partition.  And then the partition is
never updated after that.  The client has no way to know about the
partition, since it won't be included in any future incremental fetch
responses.  And there are no offsets to compare, since the partition is
simply omitted from the response.

I am curious in which situation the follower would miss a response
for a partition. If the entire FetchResponse is lost (e.g. timeout), the
follower would disconnect and retry. That will result in sending a full
FetchRequest.

Basically, you are proposing that we rely on TCP for reliable delivery
in a distributed system.  That isn't a good idea for a bunch of
different reasons.  First of all, TCP timeouts tend to be very long.  So
if the TCP session timing out is your error detection mechanism, you
have to wait minutes for messages to timeout.  Of course, we add a
timeout on top of that after which we declare the connection bad and
manually close it.  But just because the session is closed on one end
doesn't mean that the other end knows that it is closed.  So the leader
may have to wait quite a long time before TCP decides that yes,
connection X from the follower is

Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-12-08 Thread Ismael Juma
One correction below.

On Fri, Dec 8, 2017 at 11:16 AM, Jan Filipiak 
wrote:

> We only check max.message.bytes too late to guard against consumer stalling.
> We don't have a notion of max.networkpacket.size before we allocate the
> bytebuffer to read it into.


We do: socket.request.max.bytes.

Ismael


[GitHub] kafka pull request #4280: KAFKA-6289: NetworkClient should not expose failed...

2017-12-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4280


---


[jira] [Resolved] (KAFKA-6289) NetworkClient should not return internal failed api version responses from poll

2017-12-08 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6289.
---
   Resolution: Fixed
Fix Version/s: 1.0.1
   1.1.0

Issue resolved by pull request 4280
[https://github.com/apache/kafka/pull/4280]

> NetworkClient should not return internal failed api version responses from 
> poll
> ---
>
> Key: KAFKA-6289
> URL: https://issues.apache.org/jira/browse/KAFKA-6289
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 1.1.0, 1.0.1
>
>
> In the AdminClient, if the initial ApiVersion request sent to the broker 
> fails, we see the following obscure message:
> {code}
> [2017-11-30 17:18:48,677] ERROR Internal server error on -2: server returned 
> information about unknown correlation ID 0.  requestHeader = 
> {api_key=18,api_version=1,correlation_id=0,client_id=adminclient-3} 
> (org.apache.kafka.clients.admin.KafkaAdminClient)
> {code}
> What's happening is that the response to the internal ApiVersion request 
> which is received in NetworkClient is mistakenly being sent to the upper 
> layer (the admin client in this case). The admin wasn't expecting it, so we 
> see this message. Instead, the request should be handled internally in 
> NetworkClient.





Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-12-08 Thread Jan Filipiak


On 08.12.2017 10:43, Ismael Juma wrote:

One correction below.

On Fri, Dec 8, 2017 at 11:16 AM, Jan Filipiak 
wrote:


We only check max.message.bytes too late to guard against consumer stalling.
We don't have a notion of max.networkpacket.size before we allocate the
bytebuffer to read it into.


We do: socket.request.max.bytes.

Ismael



Perfect, I didn't know we have this in the meantime. :) Good that we have it.

It's a very good safeguard, and a nice fail-fast for dodgy clients or
network interfaces.
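
To make the fail-fast concrete, a minimal sketch (my illustration under assumed names, not the broker's actual code) of checking the declared request size against the limit before allocating the buffer:

{code}
import java.nio.ByteBuffer;

public class RequestSizeGuard {
    // Assumed limit for illustration; the broker takes it from the
    // socket.request.max.bytes config (default 100 MB).
    private static final int MAX_REQUEST_BYTES = 100 * 1024 * 1024;

    /** Reads the 4-byte size prefix and refuses oversized requests
     *  before the receive buffer is ever allocated. */
    public static ByteBuffer allocateForRequest(ByteBuffer sizePrefix) {
        int declared = sizePrefix.getInt();
        if (declared < 0 || declared > MAX_REQUEST_BYTES) {
            throw new IllegalArgumentException(
                "Request of " + declared + " bytes exceeds the limit");
        }
        return ByteBuffer.allocate(declared); // safe: bounded before allocation
    }
}
{code}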




[VOTE] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-12-08 Thread Steven Aerts
Hello everybody,


I think KIP-218 is crystallized enough to start voting.

KIP documentation:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-218%3A+Make+KafkaFuture.Function+java+8+lambda+compatible


Thanks,


   Steven


Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2017-12-08 Thread Rajini Sivaram
Hi Jun,

Thank you for the review.

1. No, I am hoping to migrate partitions to new threads. We just need to
ensure they don't run concurrently.

2. AdminClient has a validateOnly option for AlterConfigs. Any exceptions
or return value of false from Reconfigurable#validate would fail the
AlterConfigsRequest.

3. Yes, we will support describe and alter of configs with listener prefix.
We will not allow alterConfigs() of security configs without the listener
prefix, since we need to prevent the whole cluster being made unusable.

4. Thank you, will make a note of that.

5. When we are upgrading (from 1.0 to 2.0 for example), my understanding is
that we set inter.broker.protocol.version=1.0, do a rolling upgrade of the
whole cluster and when all brokers are at 2.0, we do another rolling
upgrade with inter.broker.protocol.version=2.0. Jason's suggestion was to
avoid the second rolling upgrade by enabling dynamic update of
inter.broker.protocol.version. To set inter.broker.protocol.version=2.0
dynamically, we need to verify not just that the current broker is on
version 2.0, but that all brokers in the cluster are on version 2.0 (I
thought that was the reason for the second rolling upgrade). The broker
version in ZK would enable that verification before performing the update.

6. The config source would be STATIC_BROKER_CONFIG/DYNAMIC_BROKER_CONFIG,
the config name would be listener.name.listenerA.configX. And synonyms list
in describeConfigs() would list  listener.name.listenerA.configX as well as
configX.

7. I think `default` is an overused terminology already. When configs are
described, they return a flag indicating if the value is a default. And in
the broker, we have so many levels of configs already and we are adding so
many more, that it may be better to use a different term. It doesn't have
to be synonyms, but since we want to use the same term for topics and
brokers and we have listener configs which override non-prefixed security
configs, perhaps it is ok?

Regards,

Rajini



On Wed, Dec 6, 2017 at 11:50 PM, Jun Rao  wrote:

> A couple more things.
>
> 6. For the SSL/SASL configurations with the listener prefix, do we need
> another level in config_source since it's neither topic nor broker?
>
> 7. For include_synonyms in DescribeConfigs, the name makes sense for the
> topic level configs. Not sure if it makes sense for other hierarchies.
> Perhaps sth more generic like default will be better?
>
> Thanks,
>
> Jun
>
> On Wed, Dec 6, 2017 at 3:41 PM, Jun Rao  wrote:
>
> > Hi, Rajini,
> >
> > Thanks for the kip. Looks good overall. A few comments below.
> >
> > 1. "num.replica.fetchers: Affinity of partitions to threads will be
> > preserved for ordering." Does that mean the new fetcher threads won't be
> > used until new partitions are added? This may be limiting.
> >
> > 2. I am wondering how the result from reporter.validate(Map<String, ?>
> > configs) will be used. If it returns false, do we return false to the
> admin
> > client for the validation call, which will seem a bit weird?
> >
> > 3. For the SSL and SASL configuration changes, do we support those with
> > the listener prefix (e.g., external-ssl-listener.ssl.keystore.location).
> > If so, do we plan to include them in the result of describeConfigs()?
> >
> > 4. "Updates to advertised.listeners will re-register the new listener in
> > ZK". To support this, we will likely need additional logic in the
> > controller such that the controller can broadcast the metadata with the
> new
> > listeners to every broker.
> >
> > 5. Including broker version in broker registration in ZK. I am not sure
> > the usage of that. Each broker knows its version. So, is that for the
> > controller?
> >
> > Jun
> >
> >
> >
> > On Tue, Dec 5, 2017 at 11:05 AM, Colin McCabe 
> wrote:
> >
> >> On Tue, Dec 5, 2017, at 06:01, Rajini Sivaram wrote:
> >> > Hi Colin,
> >> >
> >> > KAFKA-5722 already has an owner, so I didn't want to confuse the two
> >> > KIPs.  They can be done independently of each other. The goal is to
> try
> >> and
> >> > validate every config to the minimum validation already in the broker
> >> for
> >> > the static configs, but in some cases to a more restrictive level. So
> a
> >> > typo like a file-not-found or class-not-found would definitely fail
> the
> >> > AlterConfigs request (validation is performed by AlterConfigs
> regardless
> >> > of validateOnly flag). I am working out the additional validation I
> can
> >> > perform as I implement updates for each config. For example,
> >> > inter-broker keystore update will not be allowed unless it can be
> >> > verified against the currently configured truststore.
> >>
> >> HI Rajini,
> >>
> >> I agree.  It's probably better to avoid expanding the scope of KIP-226.
> >> I hope we can get to KAFKA-5722 soon, though, since it will really
> >> improve the user experience for this feature.
> >>
> >> regards,
> >> Colin
> >>
> >> >
> >> > On Sat, Dec 2, 2017 at 10:15 PM, Colin McCabe 
> >> wrote:
> >> >
> >> > >

[jira] [Created] (KAFKA-6330) KafkaZkClient request queue time metric

2017-12-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-6330:
--

 Summary: KafkaZkClient request queue time metric
 Key: KAFKA-6330
 URL: https://issues.apache.org/jira/browse/KAFKA-6330
 Project: Kafka
  Issue Type: Bug
  Components: zkclient
Reporter: Ismael Juma


KafkaZkClient has a latency metric which measures the time it takes to send a 
request and receive the corresponding response.

If ZooKeeperClient's `maxInFlightRequests` (10 by default) is reached, a 
request may be held for some time before sending starts. This time is not 
currently measured, and it may be useful to know if requests are spending longer 
than usual in the `queue` (conceptually, as the current implementation doesn't 
use a queue).

This would require a KIP.
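
A minimal sketch of what such a queue-time measurement could look like (illustrative names, not KafkaZkClient's actual code):

{code}
import java.util.concurrent.TimeUnit;

// Sketch only: record the enqueue timestamp per request so the time spent
// waiting behind maxInFlightRequests can be reported separately from the
// send/receive latency that is already measured.
public class TimedRequest {
    private final long enqueuedNanos = System.nanoTime();
    private long sendStartNanos = -1;

    public void onSendStarted() {
        sendStartNanos = System.nanoTime();
    }

    /** Time the request spent conceptually "queued" before sending began. */
    public long queueTimeMs() {
        return TimeUnit.NANOSECONDS.toMillis(sendStartNanos - enqueuedNanos);
    }
}
{code}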





[jira] [Created] (KAFKA-6331) Transient failure in kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpointkafka.api.AdminClientIntegrationTest.testAlterReplicaLogDirs

2017-12-08 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-6331:


 Summary: Transient failure in 
kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpointkafka.api.AdminClientIntegrationTest.testAlterReplicaLogDirs
 Key: KAFKA-6331
 URL: https://issues.apache.org/jira/browse/KAFKA-6331
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Guozhang Wang


Saw this error once on Jenkins: 
https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/3025/testReport/junit/kafka.api/AdminClientIntegrationTest/testAlterReplicaLogDirs/

{code}
Stacktrace

java.lang.AssertionError: timed out waiting for message produce
at kafka.utils.TestUtils$.fail(TestUtils.scala:347)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:861)
at 
kafka.api.AdminClientIntegrationTest.testAlterReplicaLogDirs(AdminClientIntegrationTest.scala:357)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:844)
Standard Output

[2017-12-07 19:22:56,297] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:22:59,447] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:22:59,453] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:23:01,335] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '99134641238966279' does not match current 
session '99134641238966277' (kafka.zk.KafkaZkClient$CheckedEphemeral:71)
[2017-12-07 19:23:04,695] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:23:04,760] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:23:06,764] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '99134641586700293' does not match current 
session '99134641586700295' (kafka.zk.KafkaZkClient$CheckedEphemeral:71)
[2017-12-07 19:23:09,379] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:23:09,387] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:23:11,533] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:23:11,539] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-12-07 19:23:13,022] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '99134642031034375' does not match current 
session '99134642031034373' (kafka.zk.KafkaZkClient$CheckedEphemeral:71)
[2017-12-07 19:23:14,667] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.

[jira] [Created] (KAFKA-6332) Kafka system tests should use nc instead of log grep to detect start-up

2017-12-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-6332:
--

 Summary: Kafka system tests should use nc instead of log grep to 
detect start-up
 Key: KAFKA-6332
 URL: https://issues.apache.org/jira/browse/KAFKA-6332
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma


[~ewencp] suggested using an `nc -z` test instead of grepping the logs, for a 
more reliable check. This came up when the system tests were broken by a log 
improvement change.

Reference: https://github.com/apache/kafka/pull/3834
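
For reference, an `nc -z` check simply attempts a TCP connection and reports whether anything accepted it; a minimal sketch of the equivalent probe (my illustration):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch only: succeeding as soon as the port accepts a connection is a
// more reliable readiness signal than grepping for a particular log line.
public class PortProbe {
    public static boolean isListening(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // not listening (or unreachable) yet
        }
    }
}
{code}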





[jira] [Resolved] (KAFKA-5702) Refactor StreamThread to separate concerns and enable better testability

2017-12-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5702.
--
Resolution: Fixed

> Refactor StreamThread to separate concerns and enable better testability
> 
>
> Key: KAFKA-5702
> URL: https://issues.apache.org/jira/browse/KAFKA-5702
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> {{StreamThread}} does a lot of stuff, e.g., managing and creating tasks, 
> getting data from consumers, updating standby tasks, punctuating, rebalancing, 
> etc. With the current design it is extremely hard to reason about and is 
> quite tightly coupled. 
> We need to start to tease out some of the separate concerns from 
> StreamThread, e.g., TaskManager, RebalanceListener, etc. 





[jira] [Resolved] (KAFKA-5002) Stream doesn't seem to consider partitions for processing which are being consumed

2017-12-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5002.
--
Resolution: Not A Problem

> Stream doesn't seem to consider partitions for processing which are being 
> consumed
> -
>
> Key: KAFKA-5002
> URL: https://issues.apache.org/jira/browse/KAFKA-5002
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: Windows 8.1
>Reporter: Mustak
>  Labels: patch
>
> Kafka Streams doesn't seem to consider a particular partition for processing 
> if that partition is being consumed by some consumer. For example, suppose I 
> have two topics t1 and t2, each with two partitions p1 and p2, and a streams 
> process is running which consumes data from these topics and produces output 
> to topic t3, which has two partitions. This kind of topology works, but if I 
> start a consumer which consumes data from topic t1 partition p1, then the 
> streams logic doesn't consider p1 for processing and the stream doesn't 
> provide any output related to that partition. I think the streams logic 
> should consider partitions which are being consumed.





[GitHub] kafka pull request #4305: KAFKA-6318: StreamsResetter should return non-zero...

2017-12-08 Thread shivsantham
GitHub user shivsantham opened a pull request:

https://github.com/apache/kafka/pull/4305

KAFKA-6318: StreamsResetter should return non-zero return code on error

If users specify a non-existent topic as an input parameter, StreamsResetter 
only prints out an error message that the topic was not found, but the return 
code is still zero. We should return a non-zero return code for this case.
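
A minimal sketch of the intended behaviour (illustrative only, not the actual StreamsResetter code):

{code}
import java.util.Set;

// Sketch only: report a missing input topic through the exit code so
// scripts can detect the failure, instead of only printing an error.
public class ResetterExitCode {
    static int run(Set<String> existingTopics, String inputTopic) {
        if (!existingTopics.contains(inputTopic)) {
            System.err.println("ERROR: input topic " + inputTopic + " not found");
            return 1; // non-zero return code on error
        }
        // ... perform the actual reset ...
        return 0;
    }

    public static void main(String[] args) {
        System.exit(run(Set.of("orders"), "payments")); // exits with code 1
    }
}
{code}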



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shivsantham/kafka KAFKA-6318

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4305.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4305


commit a9696d03bd6f9e03071b3d76b3ad51a411766557
Author: siva santhalingam 
Date:   2017-12-08T18:05:55Z

KAFKA-6318: StreamsResetter should return non-zero return code on error




---


Build failed in Jenkins: kafka-trunk-jdk8 #2265

2017-12-08 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6289; NetworkClient should not expose failed internal 
ApiVersions

--
[...truncated 3.37 MB...]
kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kaf

Re: [VOTE] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-12-08 Thread Ted Yu
+1

On Fri, Dec 8, 2017 at 3:49 AM, Steven Aerts  wrote:

> Hello everybody,
>
>
> I think KIP-218 is crystallized enough to start voting.
>
> KIP documentation:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 218%3A+Make+KafkaFuture.Function+java+8+lambda+compatible
>
>
> Thanks,
>
>
>Steven
>


[GitHub] kafka pull request #4306: KAFKA-6331; Fix transient failure in AdminClientIn...

2017-12-08 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/4306

KAFKA-6331; Fix transient failure in 
AdminClientIntegrationTest.testAlterReplicaLogDirs

*More detailed description of your change,
if necessary. The PR title and PR message become
the squashed commit message, so use a separate
comment to ping reviewers.*

*Summary of testing strategy (including rationale)
for the feature or bug fix. Unit and/or integration
tests are expected for any behaviour change and
system tests should be considered for larger changes.*

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-6331

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4306.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4306


commit f5c364e774a7a3b75fc49f8f5eb6bb707d1e799d
Author: Dong Lin 
Date:   2017-12-08T18:34:16Z

KAFKA-6331; Fix transient failure in 
AdminClientIntegrationTest.testAlterReplicaLogDirs




---


Re: [VOTE] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-12-08 Thread Tom Bentley
+1

On 8 December 2017 at 18:34, Ted Yu  wrote:

> +1
>
> On Fri, Dec 8, 2017 at 3:49 AM, Steven Aerts 
> wrote:
>
> > Hello everybody,
> >
> >
> > I think KIP-218 is crystallized enough to start voting.
> >
> > KIP documentation:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 218%3A+Make+KafkaFuture.Function+java+8+lambda+compatible
> >
> >
> > Thanks,
> >
> >
> >Steven
> >
>


[GitHub] kafka pull request #4307: KAFKA-6307 mBeanName should be removed before retu...

2017-12-08 Thread tedyu
GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4307

KAFKA-6307 mBeanName should be removed before returning from 
JmxReporter#removeAttribute()


### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4307.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4307


commit 82d3f60eaf0548b478c7d873c5fbf4b930a11473
Author: tedyu 
Date:   2017-12-08T19:08:30Z

KAFKA-6307 mBeanName should be removed before returning from 
JmxReporter#removeAttribute()




---


Re: [VOTE] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-12-08 Thread Mickael Maison
+1 (non binding)
Thanks for the KIP

On Fri, Dec 8, 2017 at 6:53 PM, Tom Bentley  wrote:
> +1
>
> On 8 December 2017 at 18:34, Ted Yu  wrote:
>
>> +1
>>
>> On Fri, Dec 8, 2017 at 3:49 AM, Steven Aerts 
>> wrote:
>>
>> > Hello everybody,
>> >
>> >
>> > I think KIP-218 is crystallized enough to start voting.
>> >
>> > KIP documentation:
>> >
>> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > 218%3A+Make+KafkaFuture.Function+java+8+lambda+compatible
>> >
>> >
>> > Thanks,
>> >
>> >
>> >Steven
>> >
>>


[jira] [Created] (KAFKA-6333) java.awt.headless should not be on commandline

2017-12-08 Thread Fabrice Bacchella (JIRA)
Fabrice Bacchella created KAFKA-6333:


 Summary: java.awt.headless should not be on commandline
 Key: KAFKA-6333
 URL: https://issues.apache.org/jira/browse/KAFKA-6333
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.0.0
Reporter: Fabrice Bacchella
Priority: Trivial


The option -Djava.awt.headless=true is defined in KAFKA_JVM_PERFORMANCE_OPTS.

But it should not be present on the command line at all. It is only useful for 
applications that can run in both a headless and a traditional environment. 
Kafka is a server, so the property should be set in the main class. This helps 
reduce clutter on the command line.

See http://www.oracle.com/technetwork/articles/javase/headless-136834.html for 
more details.
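
A minimal sketch of the suggestion (an illustrative main class, not Kafka's actual entry point):

{code}
// Sketch only: set the property programmatically, before any AWT-related
// code can run, instead of passing -Djava.awt.headless=true on the
// command line.
public class ServerMain {
    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");
        // ... start the server ...
    }
}
{code}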





[jira] [Created] (KAFKA-6334) Minor documentation typo

2017-12-08 Thread Andrew Olson (JIRA)
Andrew Olson created KAFKA-6334:
---

 Summary: Minor documentation typo
 Key: KAFKA-6334
 URL: https://issues.apache.org/jira/browse/KAFKA-6334
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0
Reporter: Andrew Olson
Priority: Trivial


At [1]:

{quote}
0.11.0 consumers support backwards compatibility with brokers 0.10.0 brokers 
and upward, so it is possible to upgrade the clients first before the brokers
{quote}

Specifically the "brokers 0.10.0 brokers" wording.

[1] http://kafka.apache.org/documentation.html#upgrade_11_message_format





Re: KStream: Error Reading and Writing Avro records

2017-12-08 Thread Matthias J. Sax
Cross posted at SO:
https://stackoverflow.com/questions/47712933/kstream-error-reading-and-writing-avro-records

I put an answer there.

-Matthias

On 12/7/17 10:29 PM, Somasundaram Sekar wrote:
> I'm trying to write avro records that I read from a topic into another
> topic, intending to augment them with a transformation once I get this
> routing working. I have used the KStream-with-avro code from one of the
> examples, with some modifications to connect to the Schema Registry for
> retrieving the avro schema.
> 
> 
> 
> streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG,
>     "mysql-stream-processing");
> streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
>     bootstrapServers);
> streamsConfiguration.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
>     schemaRegistryUrl);
> final Serde<GenericRecord> keySerde = new GenericAvroSerde(
>     new CachedSchemaRegistryClient(schemaRegistryUrl, 100),
>     Collections.singletonMap(
>         AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
>         schemaRegistryUrl));
> final Serde<GenericRecord> valueSerde = new GenericAvroSerde(
>     new CachedSchemaRegistryClient(schemaRegistryUrl, 100),
>     Collections.singletonMap(
>         AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
>         schemaRegistryUrl));
> streamsConfiguration.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
> streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10 * 1000);
>
> final KStreamBuilder builder = new KStreamBuilder();
>
> final KStream<GenericRecord, GenericRecord> record =
>     builder.stream("dbserver1.employees.employees");
>
> record.print(keySerde, valueSerde);
>
> record.to(keySerde, valueSerde, "newtopic");
>
> record.foreach((key, val) ->
>     System.out.println(key.toString() + "  " + val.toString()));
>
> final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);
>
> streams.cleanUp();
> streams.start();
> 
> 
> 
> When run, print() works, as I can see the records in the console, but I'm
> unable to get the records written to "newtopic"; it fails with the error
> below.
> 
> 
> 
> Exception in thread "StreamThread-1"
> org.apache.kafka.streams.errors.StreamsException: Exception caught in
> process. taskId=0_0, processor=KSTREAM-SOURCE-00,
> topic=dbserver1.employees.employees, partition=0, offset=0
> 
> at
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:217)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:627)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
> 
> Caused by: org.apache.kafka.streams.errors.StreamsException: A serializer
> (key: io.confluent.examples.streams.utils.GenericAvroSerializer / value:
> io.confluent.examples.streams.utils.GenericAvroSerializer) is not
> compatible to the actual key or value type (key type: [B / value type: [B).
> Change the default Serdes in StreamConfig or provide correct Serdes via
> method parameters.
> 
> at
> org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:81)
> 
> at
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
> 
> at
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:70)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:198)
> 
> ... 2 more
> 
> Caused by: java.lang.ClassCastException: [B cannot be cast to
> org.apache.avro.generic.GenericRecord
> 
> at
> io.confluent.examples.streams.utils.GenericAvroSerializer.serialize(GenericAvroSerializer.java:25)
> 
> at
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:77)
> 
> at
> org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:79)
> 
> ... 5 more
> 
> 
> 
> Regards,
> 
> Somasundaram S
> 
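
For reference, the exception text itself points at the likely fix: the source stream above is created without serdes, so the default byte-array serdes apply and the records arrive as byte[]. A hedged sketch of the change, reusing the serdes from the quoted code:

{code}
// Sketch only: replaces the builder.stream(...) line in the quoted code.
// Supplying the Avro serdes here deserializes records to GenericRecord up
// front instead of letting byte[] reach the Avro serializer at the sink.
final KStream<GenericRecord, GenericRecord> record =
        builder.stream(keySerde, valueSerde, "dbserver1.employees.employees");
{code}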





[GitHub] kafka pull request #4308: MINOR: make addWaiter public and fix exception han...

2017-12-08 Thread xvrl
GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/4308

MINOR: make addWaiter public and fix exception handling

KafkaFuture.thenApply(...) only allows invoking a callback on normal 
completion.
Making KafkaFuture.addWaiter(...) public makes it possible to invoke a
callback on exceptional completion as well.

Exceptions thrown by waiters could have prevented other waiters from
executing, possibly breaking KafkaFuture.allOf(), so it seemed advisable
to wrap waiters to catch and log exceptions before making this API public.
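
A minimal sketch of the wrapping described above (illustrative, not the PR's actual code):

{code}
import java.util.function.BiConsumer;

// Sketch only: wrap each waiter so one that throws cannot prevent
// later waiters from running.
public class SafeWaiter {
    public static <T> BiConsumer<T, Throwable> wrap(BiConsumer<T, Throwable> waiter) {
        return (value, error) -> {
            try {
                waiter.accept(value, error);
            } catch (Throwable t) {
                System.err.println("waiter threw: " + t); // log, don't propagate
            }
        };
    }
}
{code}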

cc @cmccabe @ijuma 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka make-add-waiter-public

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4308.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4308


commit 40908e15e75ba0ed15a27904e06e17ebce7f44e6
Author: Xavier Léauté 
Date:   2017-12-08T22:00:51Z

make addWaiter public and fix exception handling

KafkaFuture.thenApply(...) only allows invoking a callback on normal 
completion.
Making KafkaFuture.addWaiter(...) public makes it possible to invoke a
callback on exceptional completion as well.

Exceptions thrown by waiters could have prevented other waiters from
executing, possibly breaking KafkaFuture.allOf(), so it seemed advisable
to wrap waiters to catch and log exceptions before making this API public.




---


[jira] [Created] (KAFKA-6335) SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails intermittently

2017-12-08 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-6335:
-

 Summary: 
SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails 
intermittently
 Key: KAFKA-6335
 URL: https://issues.apache.org/jira/browse/KAFKA-6335
 Project: Kafka
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


From 
https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/3045/testReport/junit/kafka.security.auth/SimpleAclAuthorizerTest/testHighConcurrencyModificationOfResourceAcls/ :
{code}
java.lang.AssertionError: expected acls Set(User:36 has Allow permission for 
operations: Read from hosts: *, User:7 has Allow permission for operations: 
Read from hosts: *, User:21 has Allow permission for operations: Read from 
hosts: *, User:39 has Allow permission for operations: Read from hosts: *, 
User:43 has Allow permission for operations: Read from hosts: *, User:3 has 
Allow permission for operations: Read from hosts: *, User:35 has Allow 
permission for operations: Read from hosts: *, User:15 has Allow permission for 
operations: Read from hosts: *, User:16 has Allow permission for operations: 
Read from hosts: *, User:22 has Allow permission for operations: Read from 
hosts: *, User:26 has Allow permission for operations: Read from hosts: *, 
User:11 has Allow permission for operations: Read from hosts: *, User:38 has 
Allow permission for operations: Read from hosts: *, User:8 has Allow 
permission for operations: Read from hosts: *, User:28 has Allow permission for 
operations: Read from hosts: *, User:32 has Allow permission for operations: 
Read from hosts: *, User:25 has Allow permission for operations: Read from 
hosts: *, User:41 has Allow permission for operations: Read from hosts: *, 
User:44 has Allow permission for operations: Read from hosts: *, User:48 has 
Allow permission for operations: Read from hosts: *, User:2 has Allow 
permission for operations: Read from hosts: *, User:9 has Allow permission for 
operations: Read from hosts: *, User:14 has Allow permission for operations: 
Read from hosts: *, User:46 has Allow permission for operations: Read from 
hosts: *, User:13 has Allow permission for operations: Read from hosts: *, 
User:5 has Allow permission for operations: Read from hosts: *, User:29 has 
Allow permission for operations: Read from hosts: *, User:45 has Allow 
permission for operations: Read from hosts: *, User:6 has Allow permission for 
operations: Read from hosts: *, User:37 has Allow permission for operations: 
Read from hosts: *, User:23 has Allow permission for operations: Read from 
hosts: *, User:19 has Allow permission for operations: Read from hosts: *, 
User:24 has Allow permission for operations: Read from hosts: *, User:17 has 
Allow permission for operations: Read from hosts: *, User:34 has Allow 
permission for operations: Read from hosts: *, User:12 has Allow permission for 
operations: Read from hosts: *, User:42 has Allow permission for operations: 
Read from hosts: *, User:4 has Allow permission for operations: Read from 
hosts: *, User:47 has Allow permission for operations: Read from hosts: *, 
User:18 has Allow permission for operations: Read from hosts: *, User:31 has 
Allow permission for operations: Read from hosts: *, User:49 has Allow 
permission for operations: Read from hosts: *, User:33 has Allow permission for 
operations: Read from hosts: *, User:1 has Allow permission for operations: 
Read from hosts: *, User:27 has Allow permission for operations: Read from 
hosts: *) but got Set(User:36 has Allow permission for operations: Read from 
hosts: *, User:7 has Allow permission for operations: Read from hosts: *, 
User:21 has Allow permission for operations: Read from hosts: *, User:39 has 
Allow permission for operations: Read from hosts: *, User:43 has Allow 
permission for operations: Read from hosts: *, User:3 has Allow permission for 
operations: Read from hosts: *, User:35 has Allow permission for operations: 
Read from hosts: *, User:15 has Allow permission for operations: Read from 
hosts: *, User:16 has Allow permission for operations: Read from hosts: *, 
User:22 has Allow permission for operations: Read from hosts: *, User:26 has 
Allow permission for operations: Read from hosts: *, User:11 has Allow 
permission for operations: Read from hosts: *, User:38 has Allow permission for 
operations: Read from hosts: *, User:8 has Allow permission for operations: 
Read from hosts: *, User:28 has Allow permission for operations: Read from 
hosts: *, User:32 has Allow permission for operations: Read from hosts: *, 
User:25 has Allow permission for operations: Read from hosts: *, User:41 has 
Allow permission for operations: Read from hosts: *, User:44 has Allow 
permission for operations: Read from hosts: *, User:48 has Allow permission for 
operations: Read from hosts: *, User:2 has Allow permission for operations: 
Read from hosts: *, User:9 has Allow permission for 

[jira] [Created] (KAFKA-6336) when using assign() with kafka consumer the KafkaConsumerGroup command doesn't show those consumers

2017-12-08 Thread Neerja Khattar (JIRA)
Neerja Khattar created KAFKA-6336:
-

 Summary: when using assign() with kafka consumer the 
KafkaConsumerGroup command doesn't show those consumers
 Key: KAFKA-6336
 URL: https://issues.apache.org/jira/browse/KAFKA-6336
 Project: Kafka
  Issue Type: Bug
Reporter: Neerja Khattar


The issue is that when consumers use assign() rather than subscribe(), the 
ConsumerGroup command is not able to get the lag for their commits; it doesn't 
even list those groups.

The JMX tool also doesn't show lag properly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-232: Detect outdated metadata by adding ControllerMetadataEpoch field

2017-12-08 Thread Jun Rao
Hi, Dong,

Thanks for the reply. A few more points below.

For dealing with how to prevent a consumer switching from a new leader to
an old leader, your suggestion to refresh metadata on consumer restart
until it sees a metadata version >= the one associated with the offset
works too, as long as we guarantee that the cached metadata versions on the
brokers only go up.
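
A minimal sketch of that scheme (illustrative names, not the KIP's actual types): the metadata version is persisted next to the committed offset and compared against freshly fetched metadata on restart.

{code}
// Sketch only: pair each committed offset with the metadata version that
// was current when it was committed; after a restart, refuse to act on
// metadata older than that version.
public class OffsetWithVersion {
    final long offset;
    final int metadataVersion;

    OffsetWithVersion(long offset, int metadataVersion) {
        this.offset = offset;
        this.metadataVersion = metadataVersion;
    }

    /** True once freshly fetched metadata is at least as new as the
     *  metadata that accompanied the committed offset. */
    boolean isMetadataUsable(int fetchedVersion) {
        return fetchedVersion >= metadataVersion;
    }
}
{code}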

The second discussion point is on whether the metadata versioning should be
per partition or global. For the partition level versioning, you were
concerned about the performance. Given that metadata updates are rare, I am
not sure if it's a big concern though. Doing a million if tests is probably
going to take less than 1ms. Another thing is that the metadata version
seems to need to survive controller failover. In your current approach, a
consumer may not be able to wait on the right version of the metadata after
the consumer restart since the metadata version may have been recycled on
the server side due to a controller failover while the consumer is down.
The partition level leaderEpoch survives controller failure and won't have
this issue.

Lastly, neither your proposal nor mine addresses the issue of how to guarantee
that a consumer detects that its metadata is outdated. Currently, the consumer
is not guaranteed to fetch metadata from every broker within some bounded
period of time. Maybe this is out of the scope of your KIP, but one idea is to
force the consumer to refresh metadata from the controller periodically.

Jun


On Thu, Dec 7, 2017 at 11:25 AM, Dong Lin  wrote:

> Hey Jun,
>
> Thanks much for the comments. Great point particularly regarding (3). I
> haven't thought about this before.
>
> It seems that there are two possible ways in which the version number can be
> used. One solution is for the client to check the version number at the time
> it receives the MetadataResponse. And if the version number in the
> MetadataResponse is smaller than the version number in the client's cache,
> the client will be forced to fetch metadata again.  Another solution, as
> you have suggested, is for broker to check the version number at the time
> it receives a request from client. The broker will reject the request if
> the version is smaller than the version in broker's cache.
>
> I am not very sure that the second solution can address the problem here.
> In the scenario described in the JIRA ticket, broker's cache may be
> outdated because it has not processed the LeaderAndIsrRequest from the
> controller. Thus it may still process client's request even if the version
> in client's request is actually outdated. Does this make sense?
>
> IMO, it seems that we can address problem (3) by saving the metadata
> version together with the offset. After consumer starts, it will keep
> fetching metadata until the metadata version >= the version saved with the
> offset of this partition.
>
> Regarding problems (1) and (2): Currently we use the version number in the
> MetadataResponse to ensure that the metadata does not go back in time.
> There are two alternative solutions to address problems (1) and (2). One
> solution is for client to enumerate all partitions in the MetadataResponse,
> compare their epoch with those in the cached metadata, and rejects the
> MetadataResponse iff any leader epoch is smaller. The main concern is that
> the MetadataResponse currently contains information about all partitions in
> the entire cluster, so it may slow down the client's performance if we were
> to do it.
> The other solution is for client to enumerate partitions for only topics
> registered in the org.apache.kafka.clients.Metadata, which will be an
> empty
> set for producer and the set of subscribed partitions for consumer. But
> this degrades to all topics if consumer subscribes to topics in the cluster
> by pattern.
>
> Note that client will only be forced to update metadata if the version in
> the MetadataResponse is smaller than the version in the cached metadata. In
> general it should not be a problem. It can be a problem only if some broker
> is particularly slower than other brokers in processing
> UpdateMetadataRequest. When this is the case, it means that the broker is
> also particularly slower in processing LeaderAndIsrRequest, which can cause
> problem anyway because some partition will probably have no leader during
> this period. I am not sure problems (1) and (2) cause more problem than
> what we already have.
>
> Thanks,
> Dong
>
>
> On Wed, Dec 6, 2017 at 6:42 PM, Jun Rao  wrote:
>
> > Hi, Dong,
> >
> > Great finding on the issue. It's a real problem. A few comments about the
> > KIP. (1) I am not sure about updating controller_metadata_epoch on every
> > UpdateMetadataRequest. Currently, the controller can send
> > UpdateMetadataRequest when there is no actual metadata change. Doing this
> > may require unnecessary metadata refresh on the client. (2)
> > controller_metadata_epoch is global across all topics. This means that a
> > client may be forced to update its met

Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2017-12-08 Thread Jun Rao
Hi, Rajini,

Thanks for the reply. They all make sense.

5. Got it. Note that currently, only live brokers are registered in ZK.
Another thing is that I am not sure that we want every broker to read all
broker registrations directly from ZK. It's probably better to have the
controller propagate this information. That will require changing the
UpdateMetadataRequest protocol though. So, I am not sure if you want to do
that in the same KIP.

Jun



On Fri, Dec 8, 2017 at 6:07 AM, Rajini Sivaram 
wrote:

> Hi Jun,
>
> Thank you for the review.
>
> 1. No, I am hoping to migrate partitions to new threads. We just need to
> ensure they don't run concurrently.
>
> 2. AdminClient has a validateOnly option for AlterConfigs. Any exceptions
> or return value of false from Reconfigurable#validate would fail the
> AlterConfigsRequest.
>
> 3. Yes, we will support describe and alter of configs with listener prefix.
> We will not allow alterConfigs() of security configs without the listener
> prefix, since we need to prevent the whole cluster being made unusable.
>
> 4. Thank you, will make a note of that.
>
> 5. When we are upgrading (from 1.0 to 2.0 for example), my understanding is
> that we set inter.broker.protocol.version=1.0, do a rolling upgrade of the
> whole cluster and when all brokers are at 2.0, we do another rolling
> upgrade with inter.broker.protocol.version=2.0. Jason's suggestion was to
> avoid the second rolling upgrade by enabling dynamic update of
> inter.broker.protocol.version. To set inter.broker.protocol.version=2.0
> dynamically, we need to verify not just that the current broker is on
> version 2.0, but that all brokers in the cluster are on version 2.0 (I
> thought that was the reason for the second rolling upgrade). The broker
> version in ZK would enable that verification before performing the update.
>
> 6. The config source would be STATIC_BROKER_CONFIG/DYNAMIC_BROKER_CONFIG,
> the config name would be listener.name.listenerA.configX. And synonyms
> list
> in describeConfigs() would list  listener.name.listenerA.configX as well
> as
> configX.
>
> 7. I think `default` is an overused terminology already. When configs are
> described, they return a flag indicating if the value is a default. And in
> the broker, we have so many levels of configs already and we are adding so
> many more, that it may be better to use a different term. It doesn't have
> to be synonyms, but since we want to use the same term for topics and
> brokers and we have listener configs which override non-prefixed security
> configs, perhaps it is ok?
>
> Regards,
>
> Rajini
>
>
>
> On Wed, Dec 6, 2017 at 11:50 PM, Jun Rao  wrote:
>
> > A couple more things.
> >
> > 6. For the SSL/SASL configurations with the listener prefix, do we need
> > another level in config_source since it's neither topic nor broker?
> >
> > 7. For include_synonyms in DescribeConfigs, the name makes sense for the
> > topic level configs. Not sure if it makes sense for other hierarchies.
> > Perhaps sth more generic like default will be better?
> >
> > Thanks,
> >
> > Jun
> >
> > On Wed, Dec 6, 2017 at 3:41 PM, Jun Rao  wrote:
> >
> > > Hi, Rajini,
> > >
> > > Thanks for the kip. Looks good overall. A few comments below.
> > >
> > > 1. "num.replica.fetchers: Affinity of partitions to threads will be
> > > preserved for ordering." Does that mean the new fetcher threads won't
> be
> > > used until new partitions are added? This may be limiting.
> > >
> > > 2. I am wondering how the result from reporter.validate(Map<String, ?>
> > > configs) will be used. If it returns false, do we return false to the
> > admin
> > > client for the validation call, which will seem a bit weird?
> > >
> > > 3. For the SSL and SASL configuration changes, do we support those with
> > > the listener prefix (e.g., external-ssl-listener.ssl.keystore.location).
> > > If so, do we plan to include them in the result of describeConfigs()?
> > >
> > > 4. "Updates to advertised.listeners will re-register the new listener
> in
> > > ZK". To support this, we will likely need additional logic in the
> > > controller such that the controller can broadcast the metadata with the
> > new
> > > listeners to every broker.
> > >
> > > 5. Including broker version in broker registration in ZK. I am not sure
> > > the usage of that. Each broker knows its version. So, is that for the
> > > controller?
> > >
> > > Jun
> > >
> > >
> > >
> > > On Tue, Dec 5, 2017 at 11:05 AM, Colin McCabe 
> > wrote:
> > >
> > >> On Tue, Dec 5, 2017, at 06:01, Rajini Sivaram wrote:
> > >> > Hi Colin,
> > >> >
> > >> > KAFKA-5722 already has an owner, so I didn't want to confuse the two
> > >> > KIPs.  They can be done independently of each other. The goal is to
> > try
> > >> and
> > >> > validate every config to the minimum validation already in the
> broker
> > >> for
> > >> > the static configs, but in some cases to a more restrictive level.
> So
> > a
> > >> > typo like a file-not-found or class-not-found would definitel

Re: [DISCUSS] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-12-08 Thread Xavier Léauté
Hi Steven,

I noticed you are making KafkaFuture.addWaiter(...) public as part of your
PR. This is a very useful method to add, and you should mention it in the
KIP; however, addWaiter currently doesn't guard against exceptions thrown
inside the BiConsumer function, which is something we should probably
fix before making it public.

I was about to make the necessary exception handling changes as part of
https://github.com/apache/kafka/pull/4308 until someone pointed out your
KIP to me. Since you already have a PR out, it might be worth incorporating
my fixes (and the extra docs), what do you think?

I'll rebase my PR onto yours to make it easier to merge.

Thanks!
Xavier


On Mon, Dec 4, 2017 at 4:03 AM Steven Aerts  wrote:

> Tom,
>
> Thanks for the review.
> I updated the motivation a little bit; it's better, but I have to admit it
> can be improved.
> I made addWaiter public.
>
> Enjoy,
>
> Steven
>
>
>
> Op ma 4 dec. 2017 om 11:01 schreef Tom Bentley :
>
> > Hi Steven,
> >
> > Thanks for updating the KIP. I have a couple of points:
> >
> > 1. Typo in the first sentence of the Motivation. Also what does "empty
> > public abstract classes with one abstract method" mean -- if it's got one
> > abstract method in what way is it empty?
> > 2. From an entirely self-centred point of view, the main thing that's
> > missing for my work in KIP-183 is that addWaiter() needs to be public.
> >
> > Thanks again,
> >
> > Tom
> >
> > On 2 December 2017 at 10:07, Steven Aerts 
> wrote:
> >
> > > Hi Tom,
> > >
> > > I just made changes to the proposal of KIP-218, to make everything more
> > > backwards compatible as suggested by Colin.
> > > For me it is now in a state where it starts to become final.
> > >
> > > I propose to wait a few days so everybody can take a look and open the
> > > votes when I do not receive any major comments.
> > >
> > > Does that sound ok for you?
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 218%3A+Make+KafkaFuture.Function+java+8+lambda+compatible
> > >
> > > Thanks for your patience,
> > >
> > >
> > >Steven
> > >
> > >
> > > Op vr 1 dec. 2017 om 11:55 schreef Tom Bentley  >:
> > >
> > > > Hi Steven,
> > > >
> > > > I'm particularly interested in seeing progress on this KIP as the
> work
> > > for
> > > > KIP-183 needs a public version of BiConsumer. Do you have any idea
> when
> > > the
> > > > KIP might be ready for voting?
> > > >
> > > > Thanks,
> > > >
> > > > Tom
> > > >
> > > > On 10 November 2017 at 13:38, Steven Aerts 
> > > wrote:
> > > >
> > > > > Colin, Ben,
> > > > >
> > > > > Thanks for the input.
> > > > >
> > > > > I will work out this proposal, so I get an idea of the impact.
> > > > >
> > > > > Do you think it is a good idea to line up the new method names with
> > > those
> > > > > of CompletableFuture?
> > > > >
> > > > > Thanks,
> > > > >
> > > > >
> > > > >Steven
> > > > >
> > > > > Op vr 10 nov. 2017 om 12:12 schreef Ben Stopford  >:
> > > > >
> > > > > > Sounds like a good middle ground to me. What do you think Steven?
> > > > > >
> > > > > > On Mon, Nov 6, 2017 at 8:18 PM Colin McCabe 
> > > > wrote:
> > > > > >
> > > > > > > It would definitely be nice to use the jdk8
> CompletableFuture.  I
> > > > think
> > > > > > > that's a bit of a separate discussion, though, since it has
> such
> > > > heavy
> > > > > > > compatibility implications.
> > > > > > >
> > > > > > > How about making KIP-218 backwards compatible?  As a starting
> > > point,
> > > > > you
> > > > > > > can change KafkaFuture#BiConsumer to an interface with no
> > > > compatibility
> > > > > > > implications, since there are currently no public functions
> > exposed
> > > > > that
> > > > > > > use it.  That leaves KafkaFuture#Function, which is publicly
> used
> > > > now.
> > > > > > >
> > > > > > > For the purposes of KIP-218, how about adding a new interface
> > > > > > > FunctionInterface?  Then you can add a function like this:
> > > > > > >
> > > > > > > > >  public abstract <R> KafkaFuture<R> thenApply(FunctionInterface<T, R> function);
> > > > > > >
> > > > > > > And mark the older declaration as deprecated:
> > > > > > >
> > > > > > > >  @deprecated
> > > > > > > > >  public abstract <R> KafkaFuture<R> thenApply(Function<T, R> function);
> > > > > > >
> > > > > > > This is a 100% compatible way to make things nicer for java 8.
> > > > > > >
> > > > > > > cheers,
> > > > > > > Colin
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Nov 2, 2017, at 10:38, Steven Aerts wrote:
> > > > > > > > Hi Tom,
> > > > > > > >
> > > > > > > > Nice observation.
> > > > > > > > I changed "Rejected Alternatives" section to "Other
> > > Alternatives",
> > > > as
> > > > > > > > I see myself as too much of an outsider to the kafka
> community
> > to
> > > > be
> > > > > > > > able to decide without this discussion.
> > > > > > > >
> > > > > > > > I see two major factors to decide:
> > > > > > > >  - how soon will KIP-118 (drop support of java 7) be
> > implemented?
> 

Re: [VOTE] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-12-08 Thread Xavier Léauté
Hi Steven, I just posted in the discussion thread; there's just one tiny fix
that I think would be useful to add while we're making changes to this API.
Do you mind having a look?

On Fri, Dec 8, 2017 at 11:37 AM Mickael Maison 
wrote:

> +1 (non binding)
> Thanks for the KIP
>
> On Fri, Dec 8, 2017 at 6:53 PM, Tom Bentley  wrote:
> > +1
> >
> > On 8 December 2017 at 18:34, Ted Yu  wrote:
> >
> >> +1
> >>
> >> On Fri, Dec 8, 2017 at 3:49 AM, Steven Aerts 
> >> wrote:
> >>
> >> > Hello everybody,
> >> >
> >> >
> >> > I think KIP-218 is crystallized enough to start voting.
> >> >
> >> > KIP documentation:
> >> >
> >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> > 218%3A+Make+KafkaFuture.Function+java+8+lambda+compatible
> >> >
> >> >
> >> > Thanks,
> >> >
> >> >
> >> >Steven
> >> >
> >>
>


Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-12-08 Thread Jun Rao
Hi, Jiangjie,

What I described is almost the same as yours. The only extra thing is to
scan the log segment from the identified index entry a bit more to find a
file position that ends at a message set boundary and is less than the
partition level fetch size. This way, we still preserve the current
semantic of not returning more bytes than fetch size unless there is a
single message set larger than the fetch size.
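
To make the scan concrete, here is a rough sketch (illustrative types and
names, not the real broker classes) of picking the end position from the
offset index so that the returned bytes stay under the fetch size:

    import java.util.List;

    final class IndexEntry {
        final long offset;    // first offset of a message set
        final long position;  // file position of that message set in the segment
        IndexEntry(long offset, long position) {
            this.offset = offset;
            this.position = position;
        }
    }

    final class FetchBoundary {
        // Given the start position p0 and the index entries at or after it,
        // return the largest message set boundary p1 with p1 - p0 <= fetchSize.
        static long endPosition(List<IndexEntry> entries, long p0, int fetchSize) {
            long p1 = p0;
            for (IndexEntry e : entries) {
                if (e.position - p0 > fetchSize)
                    break;           // the next entry would exceed the fetch size
                p1 = e.position;     // still within budget; advance the boundary
            }
            return p1;               // serve the bytes in [p0, p1)
        }
    }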

In a typical cluster at LinkedIn, what's the percentage of idle
partitions?

Thanks,

Jun


On Wed, Dec 6, 2017 at 6:57 PM, Becket Qin  wrote:

> Hi Jun,
>
> Yes, we still need to handle the corner case. And you are right, it is all
> about trade-off between simplicity and the performance gain.
>
> I was thinking that the brokers always return at least
> log.index.interval.bytes per partition to the consumer, just like we will
> return at least one message to the user. This way we don't need to worry
> about the case that the fetch size is smaller than the index interval. We
> may just need to let users know this behavior change.
>
> Not sure if I completely understand your solution, but I think we are
> thinking about the same thing, i.e. for the first fetch asking for offset x0, we
> will need to do a binary search to find the position p0, and then the
> broker will iterate over the index entries starting from the first index
> entry whose file position is greater than p0 until it reaches the entry (x1,
> p1) so that p1 - p0 is just under the fetch size, but the next entry will
> exceed the fetch size. We then return the bytes from p0 to p1. Meanwhile
> the broker caches the next fetch (x1, p1). So when the next fetch comes, it
> will just iterate over the offset index entries starting at (x1, p1).
>
> It is true that in the above approach, the log compacted topic needs to be
> handled. It seems that this can be solved by checking whether the cached
> index and the new log index are still the same index object. If they are
> not the same, we can fall back to binary search with the cached offset. It
> is admittedly more complicated than the current logic, but given the binary
> search logic already exists, it seems the additional object sanity check is
> not too much work.
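>
> As a rough sketch (illustrative types, not the real LogSegment/OffsetIndex
> classes), the sanity check and fallback could look like:
>
>     final class CachedFetch {
>         Object index;    // the index object seen at the previous fetch
>         long offset;     // next fetch offset (x1)
>         long position;   // next fetch file position (p1)
>
>         // Reuse the cached position only if the segment still exposes the
>         // same index object; otherwise (e.g., the segment was replaced by
>         // log compaction) fall back to a binary search from the offset.
>         long startPosition(Object liveIndex, java.util.function.LongUnaryOperator binarySearch) {
>             return this.index == liveIndex       // identity check, not equals()
>                     ? this.position
>                     : binarySearch.applyAsLong(this.offset);
>         }
>     }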
>
> Not sure if the above implementation is simple enough to justify the
> performance improvement. Let me know if you see potential complexity.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
>
>
> On Wed, Dec 6, 2017 at 4:48 PM, Jun Rao  wrote:
>
> > Hi, Becket,
> >
> > Yes, I agree that it's rare to have the fetch size smaller than index
> > interval. It's just that we still need additional code to handle the rare
> > case.
> >
> > If you go this far, a more general approach (i.e., without returning at
> the
> > index boundary) is the following. We can cache the following metadata for
> > the next fetch offset: the file position in the log segment, the first
> > index slot at or after the file position. When serving a fetch request,
> we
> > scan the index entries from the cached index slot until we hit the fetch
> > size. We can then send the data at the message set boundary and update
> the
> > cached metadata for the next fetch offset. This is kind of complicated,
> but
> > probably not more than your approach if the corner case has to be
> handled.
> >
> > In both the above approach and your approach, we need the additional
> logic
> > to handle compacted topic since a log segment (and therefore its index)
> can
> > be replaced between two consecutive fetch requests.
> >
> > Overall, I agree that the general approach that you proposed applies more
> > widely since we get the benefit even when all topics are high volume.
> It's
> > just that it would be better if we could think of a simpler
> implementation.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Dec 5, 2017 at 9:38 PM, Becket Qin  wrote:
> >
> > > Hi Jun,
> > >
> > > That is true, but in reality it seems rare that the fetch size is
> smaller
> > > than index interval. In the worst case, we may need to do another look
> > up.
> > > In the future, when we have the mechanism to inform the clients about
> the
> > > broker configurations, the clients may want to configure
> correspondingly
> > as
> > > well, e.g. max message size, max timestamp difference, etc.
> > >
> > > On the other hand, we are not guaranteeing that the returned bytes in a
> > > partition is always bounded by the per partition fetch size, because we
> > are
> > > going to return at least one message, so the per partition fetch size
> > seems
> > > already a soft limit. Since we are introducing a new fetch protocol and
> > > this is related, it might be worth considering this option.
> > >
> > > BTW, one reason I bring this up again is that yesterday we had a
> > > presentation from Uber regarding end-to-end latency. And they are
> > > seeing this binary search behavior impacting the latency due to page
> > > in/out of the index file.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > 

[GitHub] kafka pull request #3570: KAFKA-4218: KIP-149, Enabling withKey interfaces i...

2017-12-08 Thread jeyhunkarimov
Github user jeyhunkarimov closed the pull request at:

https://github.com/apache/kafka/pull/3570


---


Re: [DISCUSS] KIP-237: More Controller Health Metrics

2017-12-08 Thread Jun Rao
Hi, Dong,

Thanks for the KIP. It looks reasonable.

Just one minor comment. In the following metric, it seems that RequestRate
is better than EventRate.

kafka.controller:type=ControllerChannelManager,name=EventRateAndQueueTimeMs,
brokerId=someId

Jun

On Wed, Dec 6, 2017 at 6:21 PM, Dong Lin  wrote:

> Hi all,
>
> I have created KIP-237: More Controller Health Metrics
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 237%3A+More+Controller+Health+Metrics
> .
>
> The KIP proposes to add a few more metrics to help monitor Kafka Controller
> health. Feedback and suggestions are welcome!
>
> Thanks,
> Dong
>


[GitHub] kafka pull request #4309: KAFKA-4218: Enable access to key in ValueTransform...

2017-12-08 Thread jeyhunkarimov
GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/4309

KAFKA-4218: Enable access to key in ValueTransformer and ValueMapper

This PR is a partial implementation of KIP-149. As the discussion for 
this KIP is still ongoing, I made a PR on the "safe" portions of the KIP (so 
that it can be included in the next release) which are 1) `ValueMapperWithKey`, 
2) `ValueTransformerWithKeySupplier`, and 3) `ValueTransformerWithKey`.




You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KIP-149_hope_last

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4309.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4309


commit 91c1108c72f2f3b3097dcff3bd2aad237789215e
Author: Jeyhun Karimov 
Date:   2017-12-09T00:56:36Z

Submit the first version of KIP-149




---


Re: [DISCUSS] KIP-236 Interruptible Partition Reassignment

2017-12-08 Thread Jun Rao
Hi, Tom,

Thanks for the KIP. It definitely addresses one of the pain points in
partition reassignment. Another issue it addresses is the ZK node
size limit when writing the reassignment JSON.

My only concern is that the KIP needs to create one watcher per reassigned
partition. This could add overhead in ZK and complexity for debugging when
lots of partitions are being reassigned simultaneously. We could
potentially improve this by introducing a separate ZK path for change
notification as we do for configs. For example, every time we change the
assignment for a set of partitions, we could further write a sequential
node /admin/reassignment_changes/[change_x]. That way, the controller only
needs to watch the change path. Once a change is triggered, the controller
can read everything under /admin/reassignments/.
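
For example, a minimal sketch of writing such a notification with the plain
ZooKeeper client (the path and payload are illustrative):

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    final class ReassignmentNotifier {
        // After updating /admin/reassignments/..., drop a sequential znode
        // under a single change path; the controller watches only this path
        // and re-reads /admin/reassignments/ when a child appears.
        static void notifyChange(ZooKeeper zk, String changedPartitions)
                throws KeeperException, InterruptedException {
            zk.create("/admin/reassignment_changes/change_",
                      changedPartitions.getBytes(StandardCharsets.UTF_8),
                      ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.PERSISTENT_SEQUENTIAL); // ZK appends a counter suffix
        }
    }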

Jun


On Wed, Dec 6, 2017 at 1:19 PM, Tom Bentley  wrote:

> Hi,
>
> This is still very new, but I wanted some quick feedback on a preliminary
> KIP which could, I think, help with providing an AdminClient API for
> partition reassignment.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 236%3A+Interruptible+Partition+Reassignment
>
> I wasn't sure whether to start fleshing out a whole AdminClient API in this
> KIP (which would make it very big, and difficult to read), or whether to
> break it down into smaller KIPs (which makes it easier to read and
> implement in pieces, but harder to get a high-level picture of the ultimate
> destination). For now I've gone for a very small initial KIP, but I'm happy
> to sketch the bigger picture here if people are interested.
>
> Cheers,
>
> Tom
>


[jira] [Created] (KAFKA-6337) Error for partition [__consumer_offsets,15] to broker

2017-12-08 Thread Abhi (JIRA)
Abhi created KAFKA-6337:
---

 Summary: Error for partition [__consumer_offsets,15] to broker
 Key: KAFKA-6337
 URL: https://issues.apache.org/jira/browse/KAFKA-6337
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.2.0
 Environment: Windows running Kafka(0.10.2.0)
3 ZK instances running on 3 different Windows servers, 7 Kafka broker nodes
running on a single Windows machine with a different disk for the logs directory.
Reporter: Abhi


Hello *

I have been running Kafka (0.10.2.0) on Windows for the past year...

But of late there have been unique broker issues that I have observed 4-5 times
in the last 4 months.

Kafka setup config...

3 ZK instances running on 3 different Windows servers, 7 Kafka broker nodes
running on a single Windows machine with a different disk for the logs directory

My Kafka has 2 topics with 50 partitions each, and a replication factor of 3.

My partition selection logic: each message has a unique ID, the partition is
selected as (unique ID % 50), and the Kafka producer API is then called to
route the message to that topic partition.
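
For reference, a minimal sketch of that routing (the topic name, serializers,
and the ID are illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class PartitionRouting {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "1.1.1.2:9093");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.LongSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<Long, String> producer = new KafkaProducer<>(props)) {
                long uniqueId = 12345L;
                int partition = (int) (uniqueId % 50);   // 50 partitions per topic
                // Explicit partition: the producer routes to exactly this partition.
                producer.send(new ProducerRecord<>("my-topic", partition, uniqueId, "payload"));
            }
        }
    }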

Each of my broker properties files looks like this:

{{broker.id=0
port:9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
offsets.retention.minutes=360
advertised.host.name=1.1.1.2
advertised.port:9093
# A comma separated list of directories under which to store log files
log.dirs=C:\\kafka_2.10-0.10.2.0-SNAPSHOT\\data\\kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.minutes=360
log.segment.bytes=52428800
log.retention.check.interval.ms=30
log.cleaner.enable=true
log.cleanup.policy=delete
log.cleaner.min.cleanable.ratio=0.5
log.cleaner.backoff.ms=15000
log.segment.delete.delay.ms=6000
auto.create.topics.enable=false
zookeeper.connect=1.1.1.2:2181,1.1.1.3:2182,1.1.1.4:2183
zookeeper.connection.timeout.ms=6000
}}
But of late there has been a unique case cropping up in the Kafka broker
nodes:
_[2017-12-02 02:47:40,024] ERROR [ReplicaFetcherThread-0-4], Error for 
partition [__consumer_offsets,15] to broker 
4:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)_

The entire server.log is filled with these entries, and it has grown very large.
Please help me understand under what circumstances these can occur, and what
measures I need to take.

Please help; this is the third time in the last three Saturdays that I have
faced a similar issue.

Courtesy
Abhi



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6338) java.lang.NoClassDefFoundError: org/apache/kafka/common/network/LoginType

2017-12-08 Thread Ronald van de Kuil (JIRA)
Ronald van de Kuil created KAFKA-6338:
-

 Summary: java.lang.NoClassDefFoundError: 
org/apache/kafka/common/network/LoginType
 Key: KAFKA-6338
 URL: https://issues.apache.org/jira/browse/KAFKA-6338
 Project: Kafka
  Issue Type: Test
Affects Versions: 1.0.0
Reporter: Ronald van de Kuil
Priority: Minor


I have just set up a kerberized Kafka cluster with Ranger 0.7.1 and Kafka 1.0.0.

It all seems to work fine as I see that authorisation policies are enforced and 
audit logging is present.

On startup of a Kafka server I see a stack trace, but it does not seem to matter.

My wish is to keep the logs tidy and free of false alerts.

I wonder whether I have an issue somewhere.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)