[jira] [Created] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Ronald van de Kuil (JIRA)
Ronald van de Kuil created KAFKA-5997:
-

 Summary: acks=all does not seem to be honoured
 Key: KAFKA-5997
 URL: https://issues.apache.org/jira/browse/KAFKA-5997
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.11.0.0
Reporter: Ronald van de Kuil


I have a 3 node Kafka cluster. I made a topic with 1 partition with a 
replication factor of 2. 

The replicas landed on broker 1 and 3. 

When I stopped the leader, I could still produce to the topic. Also, I saw the
consumer consume the message.

I produced with acks=all:

[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
values: 
acks = all

The documentation says this about acks=all:
"This means the leader will wait for the full set of in-sync replicas to 
acknowledge the record."

The producer API returns the offset, which is present in the RecordMetadata.

I would have expected that the producer could not publish, because only 1 replica
is in sync.

This is the output of the topic state:

Topic: topic1    PartitionCount: 1    ReplicationFactor: 2    Configs:
    Topic: topic1    Partition: 0    Leader: 1    Replicas: 1,3    Isr: 1

I am wondering whether this is a bug in the current version of the API, or
whether the documentation (or my understanding) needs an update. I expect the
former.

If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Ronald van de Kuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ronald van de Kuil resolved KAFKA-5997.
---
Resolution: Not A Bug
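The resolution reflects expected behaviour: acks=all waits only for the replicas currently in the ISR, and here the ISR had shrunk to the leader alone, so a single acknowledgement satisfies acks=all. To make produce requests fail in this situation, the topic (or broker) needs min.insync.replicas raised above 1 — a sketch of the relevant setting (where to apply it depends on the deployment):

```properties
# Broker-wide default in server.properties, or a per-topic override.
# With acks=all, produce requests fail with NotEnoughReplicasException
# when fewer than this many replicas are in sync.
min.insync.replicas=2
```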

> acks=all does not seem to be honoured
> -
>
> Key: KAFKA-5997
> URL: https://issues.apache.org/jira/browse/KAFKA-5997
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.11.0.0
>Reporter: Ronald van de Kuil





[jira] [Created] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Yogesh BG (JIRA)
Yogesh BG created KAFKA-5998:


 Summary: /.checkpoint.tmp Not found exception
 Key: KAFKA-5998
 URL: https://issues.apache.org/jira/browse/KAFKA-5998
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.0
Reporter: Yogesh BG
Priority: Trivial


I have one Kafka broker and one Kafka Streams instance running. It had been
running for two days under a load of around 2500 messages per second. On the
third day I started getting the exception below for some of the partitions. I
have 16 partitions; only 0_0 and 0_1 give this error:

09:43:25.955 [ks_0_inst-StreamThread-6] WARN  o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
java.io.FileNotFoundException: /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or directory)
	at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
	at java.io.FileOutputStream.<init>(FileOutputStream.java:221) ~[na:1.7.0_111]
	at java.io.FileOutputStream.<init>(FileOutputStream.java:171) ~[na:1.7.0_111]
	at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73) ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324) ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
09:43:25.974 [ks_0_inst-StreamThread-15] WARN  o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
java.io.FileNotFoundException: /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or directory)
	at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
	at java.io.FileOutputStream.<init>(FileOutputStream.java:221) ~[na:1.7.0_111]
	at java.io.FileOutputStream.<init>(FileOutputStream.java:171) ~[na:1.7.0_111]
	at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73) ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324) ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260) [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
	at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254) [rtp-kafkastreams-1.0-SNAPSHO
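The failing call is the write-then-rename pattern used for checkpoint files. A simplified sketch of that pattern (illustrative only, not the actual OffsetCheckpoint code) shows why opening the .tmp file throws FileNotFoundException when the task directory (e.g. 0_1/) has been removed between commits:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch of a write-then-rename checkpoint: data goes to a ".tmp"
// file first, which is then renamed over the real checkpoint file.
// FileOutputStream throws FileNotFoundException if the parent
// directory no longer exists -- matching the reported stack trace.
public class CheckpointSketch {
    static void writeCheckpoint(File dir, String contents) throws IOException {
        File tmp = new File(dir, ".checkpoint.tmp");
        // This open is what fails in the report if `dir` was deleted
        // (e.g. by task-directory cleanup) between commits:
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write(contents.getBytes("UTF-8"));
        }
        File target = new File(dir, ".checkpoint");
        if (!tmp.renameTo(target)) {
            throw new IOException("failed to rename " + tmp + " to " + target);
        }
    }
}
```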

[GitHub] kafka pull request #3998: Fix array index out of bounds

2017-09-30 Thread yew1eb
GitHub user yew1eb opened a pull request:

https://github.com/apache/kafka/pull/3998

Fix array index out of bounds

This array access might be out of bounds, as the index might be equal to 
the array length.
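Since the PR diff itself is not shown here, the general shape of this off-by-one pattern can be illustrated (a hypothetical example, not the actual Kafka change):

```java
// Hypothetical illustration of the off-by-one bug the PR title
// describes: an index computed as a position can equal the array
// length, so it must be bounds-checked with strict inequality.
public class BoundsDemo {
    static int valueAtOrDefault(int[] values, int index, int fallback) {
        // Buggy form would be `index <= values.length`, which throws
        // ArrayIndexOutOfBoundsException when index == values.length.
        if (index >= 0 && index < values.length) {
            return values[index];
        }
        return fallback;
    }

    public static void main(String[] args) {
        int[] xs = {10, 20, 30};
        // index == length: safely falls back instead of throwing
        System.out.println(valueAtOrDefault(xs, 3, -1));
    }
}
```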

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yew1eb/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3998.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3998


commit 4c37969ecb12a39320cd2a11add74f3d5ce6a99a
Author: zhouhai02 
Date:   2017-09-30T10:46:55Z

fix array index out of bounds




---


Re: How is CorrelationId used for matching request and response

2017-09-30 Thread Jay Kreps
Yes the idea of the correlation id is to make it easier for the client to
match a particular response to the request it answers. Kafka’s protocol
allows sending multiple requests without waiting for the response. In
theory you can just rely on ordering, but that can be a bit fragile if the
client has any kind of bug. So this id is an explicit check—a response with
id 42 is the answer to the request you sent with id 42. Hope that helps!
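The client-side check described here can be sketched as follows (a simplified illustration of the idea; the class and method names are invented, and Kafka's real NetworkClient keeps a similar per-connection in-flight queue):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of correlation-id matching for pipelined requests: the
// client records ids of in-flight requests in send order, and each
// response must carry the id of the oldest outstanding request.
public class CorrelationDemo {
    private int nextId = 0;
    private final Deque<Integer> inFlight = new ArrayDeque<>();

    // "Send" a request: assign the next correlation id and record it.
    int send() {
        int id = nextId++;
        inFlight.addLast(id);
        return id;
    }

    // "Receive" a response: its id must match the oldest in-flight
    // request; a mismatch means the connection state is broken.
    void receive(int responseId) {
        int expected = inFlight.removeFirst();
        if (responseId != expected) {
            throw new IllegalStateException(
                "Correlation id mismatch: expected " + expected
                + ", got " + responseId);
        }
    }

    public static void main(String[] args) {
        CorrelationDemo client = new CorrelationDemo();
        int a = client.send(); // pipelined: second request goes out
        int b = client.send(); // before the first response arrives
        client.receive(a);     // oldest in-flight request: ok
        client.receive(b);     // ok
        System.out.println("all responses matched");
    }
}
```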

-Jay

On Fri, Sep 29, 2017 at 4:52 PM Ted Yu  wrote:

> Which release / version are you looking at ?
> In trunk branch, I only see one toSend():
>
> protected Send toSend(String destination, ResponseHeader header, short
> apiVersion) {
>
> return new NetworkSend(destination, serialize(apiVersion, header));
>
> On Fri, Sep 29, 2017 at 4:49 PM, Javed, Haseeb <
> javed...@buckeyemail.osu.edu
> > wrote:
>
> > The Kafka protocol guide mentions that each request and response contains
> > a correlationId which is a user-supplied integer to match requests and
> > corresponding responses. However, when I look at the code in the class
> > AbstractResponse, we have a method defined as following:
> >
> >
> > public Send toSend(String destination, RequestHeader requestHeader) {
> > return toSend(destination, requestHeader.apiVersion(), requestHeader.
> > toResponseHeader());
> > }
> >
> > So basically we are just using the requestHeader to generate the
> > responseHeader, so doesn't this pretty much guarantee that the
> > correlationId for the Request and the Response would always be the same,
> > or am I missing something?
> >
> >
> >
>


[GitHub] kafka pull request #3996: KAFKA-5746; Return 0.0 from Metric.value() instead...

2017-09-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3996


---


Re: How is CorrelationId used for matching request and response

2017-09-30 Thread Javed, Haseeb
Thanks all for reaching out.



Ted - I am looking at the 0.11.0 release. Particularly here 
https://github.com/apache/kafka/blob/0.11.0/clients/src/main/java/org/apache/kafka/common/requests/AbstractResponse.java

In this release, the Server uses the following method in almost all cases 
(ApiVersionResponses.unsupportedVersionSend(...) being the only exception)

public Send toSend(String destination, RequestHeader requestHeader) {
return toSend(destination, requestHeader.apiVersion(), 
requestHeader.toResponseHeader());
}


Jay - I understand the purpose of correlationId, but what I don't understand is
how the request/response matching logic is implemented. From the code, I see
that the server always uses the request header to generate the response header,
so in effect both request and response headers end up having the same
correlationId. There seems to be no situation where the response and request
could possibly have different correlationIds.



Haseeb




Re: How is CorrelationId used for matching request and response

2017-09-30 Thread Ismael Juma
Hi Haseeb,

That is the point: the server should always send a response with the same
correlation id as the request it received. If there's a bug in the networking
layer where a response is never sent back, or there is reordering somewhere in
the stack, the client will detect it when the correlation id does not match.
Does this help?

Ismael

>