[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-10-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187353#comment-16187353
 ] 

Ted Yu commented on KAFKA-5998:
---

Patch v1 has been attached.
If the .tmp file cannot be renamed, the checkpoint is not taken. The patch 
doesn't change that behavior.

But the patch tries to avoid the error shown above.
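
For context, a minimal sketch of the write-then-rename pattern that 
{{OffsetCheckpoint.write}} follows (the class, helper names, and payload 
handling here are illustrative, not the actual patch):
{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

class CheckpointSketch {
    // Sketch only: open the tmp file, write, sync, then swap it in.
    // The FileNotFoundException above is thrown at the FileOutputStream
    // step when the task directory no longer exists.
    static void writeCheckpoint(File dir, byte[] payload) throws IOException {
        File tmp = new File(dir, ".checkpoint.tmp");
        File target = new File(dir, ".checkpoint");
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write(payload);
            out.getFD().sync(); // make the tmp file durable before the swap
        }
        if (!tmp.renameTo(target)) {
            // if the rename fails, the checkpoint is simply not taken
            System.err.println("Failed to rename " + tmp + " to " + target);
        }
    }
}
{code}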

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Yogesh BG
> Attachments: 5998.v1.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running. They have 
> been running for two days under a load of around 2500 msgs per second. On the 
> third day I am getting the exception below for some of the partitions; I have 
> 16 partitions, and only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.

[jira] [Commented] (KAFKA-5679) Add logging to distinguish between internally and externally initiated shutdown of Kafka

2017-10-01 Thread Jacek Laskowski (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187359#comment-16187359
 ] 

Jacek Laskowski commented on KAFKA-5679:


[~rsivaram] The order matters! Thanks for the tip!

> Add logging to distinguish between internally and externally initiated 
> shutdown of Kafka
> 
>
> Key: KAFKA-5679
> URL: https://issues.apache.org/jira/browse/KAFKA-5679
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> Currently, if there is an internal error that triggers a shutdown of the 
> Kafka server, the {{Exit}} class is used, which begins the shutdown 
> procedure. The other way a shutdown is triggered is by {{SIGTERM}} or some 
> other signal.
> We would like to distinguish between shutdowns due to internal errors and 
> external signals; this helps when debugging. In particular, a natural question 
> when a broker shuts down unexpectedly is: "did the deployment system send 
> the signal, or is there some unlogged fatal error in the broker?" 
> Today, we rely on callers of {{Exit}} to log the error before making the 
> call. However, this won't always have 100% coverage. It would be good to add 
> a log message in {{Exit}} to record that an exit method was invoked 
> explicitly. 
> We could also add a signal handler to log when {{SIGTERM}}, {{SIGINT}}, etc. 
> are received ({{SIGKILL}} cannot be caught); see the sketch below.
> This would make operating Kafka a bit easier.
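> A hedged sketch of both ideas (class and method names here are illustrative, 
> not Kafka's actual {{Exit}} implementation):
> {code}
> import sun.misc.Signal;
>
> class ExitSketch {
>     // Log inside exit() so internally triggered shutdowns are always recorded.
>     static void exit(int statusCode, String message) {
>         System.err.println("Exiting with status " + statusCode + ": " + message);
>         System.exit(statusCode);
>     }
>
>     // Log externally sent signals, then terminate with the conventional status.
>     static void registerSignalLogger() {
>         for (String sig : new String[] {"TERM", "INT", "HUP"}) {
>             Signal.handle(new Signal(sig), s -> {
>                 System.err.println("Received external signal: SIG" + s.getName());
>                 System.exit(128 + s.getNumber());
>             });
>         }
>     }
> }
> {code}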



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-10-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187353#comment-16187353
 ] 

Ted Yu edited comment on KAFKA-5998 at 10/1/17 3:10 PM:


Patch v1 has been attached.
If the .tmp file cannot be renamed to the checkpoint file, the checkpoint is not 
taken. The patch doesn't change that behavior.

But the patch tries to avoid the error shown above.


was (Author: yuzhih...@gmail.com):
Patch v1 has been attached. 
If the .tmp file cannot be renamed, the checkpoint is not taken. The patch 
doesn't change that behavior. 

But the patch tries to avoid the error shown above

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Yogesh BG
> Attachments: 5998.v1.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running. They have 
> been running for two days under a load of around 2500 msgs per second. On the 
> third day I am getting the exception below for some of the partitions; I have 
> 16 partitions, and only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-ja

[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-10-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-5998:
--
Attachment: 5998.v2.txt

Patch v2 enables logging when the rename to the checkpoint file fails.

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Yogesh BG
> Attachments: 5998.v1.txt, 5998.v2.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running. They have 
> been running for two days under a load of around 2500 msgs per second. On the 
> third day I am getting the exception below for some of the partitions; I have 
> 16 partitions, and only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.inter

[jira] [Updated] (KAFKA-5967) Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries()

2017-10-01 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5967:
---
Affects Version/s: 0.11.0.1

> Ineffective check of negative value in 
> CompositeReadOnlyKeyValueStore#approximateNumEntries()
> -
>
> Key: KAFKA-5967
> URL: https://issues.apache.org/jira/browse/KAFKA-5967
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 1.0.0, 0.11.0.2
>
>
> {code}
> long total = 0;
> for (ReadOnlyKeyValueStore<K, V> store : stores) {
>     total += store.approximateNumEntries();
> }
> return total < 0 ? Long.MAX_VALUE : total;
> {code}
> The check for a negative value seems to account for overflow wrapping.
> However, wrapping can happen within the for loop, so the check should be 
> performed inside the loop, as in the sketch below.
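> A minimal sketch of that suggestion (the generic parameters are assumed):
> {code}
> long total = 0;
> for (ReadOnlyKeyValueStore<K, V> store : stores) {
>     total += store.approximateNumEntries();
>     if (total < 0) {
>         return Long.MAX_VALUE; // saturate as soon as the running sum wraps
>     }
> }
> return total;
> {code}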



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5967) Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries()

2017-10-01 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5967:
---
Fix Version/s: 0.11.0.2

> Ineffective check of negative value in 
> CompositeReadOnlyKeyValueStore#approximateNumEntries()
> -
>
> Key: KAFKA-5967
> URL: https://issues.apache.org/jira/browse/KAFKA-5967
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 1.0.0, 0.11.0.2
>
>
> {code}
> long total = 0;
> for (ReadOnlyKeyValueStore<K, V> store : stores) {
>     total += store.approximateNumEntries();
> }
> return total < 0 ? Long.MAX_VALUE : total;
> {code}
> The check for a negative value seems to account for overflow wrapping.
> However, wrapping can happen within the for loop, so the check should be 
> performed inside the loop.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5967) Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries()

2017-10-01 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5967:
---
Fix Version/s: 1.0.0

> Ineffective check of negative value in 
> CompositeReadOnlyKeyValueStore#approximateNumEntries()
> -
>
> Key: KAFKA-5967
> URL: https://issues.apache.org/jira/browse/KAFKA-5967
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 1.0.0, 0.11.0.2
>
>
> {code}
> long total = 0;
> for (ReadOnlyKeyValueStore<K, V> store : stores) {
>     total += store.approximateNumEntries();
> }
> return total < 0 ? Long.MAX_VALUE : total;
> {code}
> The check for a negative value seems to account for overflow wrapping.
> However, wrapping can happen within the for loop, so the check should be 
> performed inside the loop.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-10-01 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5998:
---
Affects Version/s: 0.11.0.1

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Yogesh BG
> Attachments: 5998.v1.txt, 5998.v2.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running. They have 
> been running for two days under a load of around 2500 msgs per second. On the 
> third day I am getting the exception below for some of the partitions; I have 
> 16 partitions, and only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.meas

[jira] [Commented] (KAFKA-5651) KIP-182: Reduce Streams DSL overloads and allow easier use of custom storage engines

2017-10-01 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187543#comment-16187543
 ] 

Matthias J. Sax commented on KAFKA-5651:


[~damianguy] Can this be closed?

> KIP-182: Reduce Streams DSL overloads and allow easier use of custom storage 
> engines
> 
>
> Key: KAFKA-5651
> URL: https://issues.apache.org/jira/browse/KAFKA-5651
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5988) Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE

2017-10-01 Thread siva santhalingam (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187642#comment-16187642
 ] 

siva santhalingam commented on KAFKA-5988:
--

[~te...@apache.org] Can I pick this up? This is going to be my second 
contribution to Kafka, and I believe this will be a simple change.

> Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE
> 
>
> Key: KAFKA-5988
> URL: https://issues.apache.org/jira/browse/KAFKA-5988
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> StreamThread#STREAM_THREAD_ID_SEQUENCE is used for naming (numbering) 
> StreamThreads.
> It is used in create(), which is called from a loop in the KafkaStreams ctor.
> We can remove STREAM_THREAD_ID_SEQUENCE and pass the loop index to create(), 
> as in the sketch below.
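> An illustrative sketch of the proposal (signatures hypothetical, not the 
> actual StreamThread API):
> {code}
> // In the KafkaStreams constructor: pass the loop index into create()
> // instead of having create() read a static AtomicInteger.
> for (int i = 0; i < threads.length; i++) {
>     threads[i] = StreamThread.create(i + 1 /* threadIdx */, config);
> }
> {code}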



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5988) Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE

2017-10-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187645#comment-16187645
 ] 

Ted Yu commented on KAFKA-5988:
---

Please go ahead.

> Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE
> 
>
> Key: KAFKA-5988
> URL: https://issues.apache.org/jira/browse/KAFKA-5988
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> StreamThread#STREAM_THREAD_ID_SEQUENCE is used for naming (numbering) 
> StreamThreads.
> It is used in create(), which is called from a loop in the KafkaStreams ctor.
> We can remove STREAM_THREAD_ID_SEQUENCE and pass the loop index to create().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5999) Offset Fetch Request

2017-10-01 Thread Zhao Weilong (JIRA)
Zhao Weilong created KAFKA-5999:
---

 Summary: Offset Fetch Request
 Key: KAFKA-5999
 URL: https://issues.apache.org/jira/browse/KAFKA-5999
 Project: Kafka
  Issue Type: Improvement
Reporter: Zhao Weilong


New Kafka (found in 10.2.1) supports a new feature for fetching all topics, 
which is done by setting the number of topics to -1. (v2)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-3360) Add a protocol page/section to the official Kafka documentation

2017-10-01 Thread Zhao Weilong (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187661#comment-16187661
 ] 

Zhao Weilong commented on KAFKA-3360:
-

Hi, two things I have found during my use of Kafka. (11_10.2.1)
1. For Offset Fetch Request, the new version supports fetching all topics, 
which is done by setting the topic count to -1. (v2, detail in `package 
org.apache.kafka.common.protocol.types; -> NULLABLE_BYTES`).
2. For Offset Response, there is an error. The correct scheme for v1 is:
{code}
ListOffsetResponse => [TopicName [PartitionOffsets]]
  PartitionOffsets => Partition ErrorCode Timestamp Offset
  Partition => int32
  ErrorCode => int16
  Timestamp => int64
  Offset => int64
{code}

Please double-check. I am looking forward to your reply. Thank you.

> Add a protocol page/section to the official Kafka documentation
> ---
>
> Key: KAFKA-3360
> URL: https://issues.apache.org/jira/browse/KAFKA-3360
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> This is an umbrella jira to track adding a protocol page/section to the 
> official Kafka documentation. It lays out subtasks for initial content and 
> follow up improvements and fixes. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5988) Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE

2017-10-01 Thread siva santhalingam (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

siva santhalingam reassigned KAFKA-5988:


Assignee: siva santhalingam

> Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE
> 
>
> Key: KAFKA-5988
> URL: https://issues.apache.org/jira/browse/KAFKA-5988
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>
> StreamThread#STREAM_THREAD_ID_SEQUENCE is used for naming (numbering) 
> StreamThreads.
> It is used in create(), which is called from a loop in the KafkaStreams ctor.
> We can remove STREAM_THREAD_ID_SEQUENCE and pass the loop index to create().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5988) Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE

2017-10-01 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187683#comment-16187683
 ] 

Matthias J. Sax commented on KAFKA-5988:


[~sssanthalingam] [~tedyu] Thinking about this, I am not sure if we should 
apply this change. While the JIRA makes sense for the current code base, we 
want to consider dynamic scaling of threads within a single {{KafkaStreams}} 
instance, and thus keeping the atomic counter would be "future proof", as it 
allows us to assign new unique IDs to threads we start later. A local variable 
within the {{KafkaStreams}} constructor would not help for this case. \cc 
[~guozhang] [~damianguy] [~bbejeck]

> Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE
> 
>
> Key: KAFKA-5988
> URL: https://issues.apache.org/jira/browse/KAFKA-5988
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>
> StreamThread#STREAM_THREAD_ID_SEQUENCE is used for naming (numbering) 
> StreamThreads.
> It is used in create(), which is called from a loop in the KafkaStreams ctor.
> We can remove STREAM_THREAD_ID_SEQUENCE and pass the loop index to create().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5972) Flatten SMT does not work with null values

2017-10-01 Thread Tomas Zuklys (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187687#comment-16187687
 ] 

Tomas Zuklys commented on KAFKA-5972:
-

Hi Siva,

Sure, you can assign it to yourself.

> Flatten SMT does not work with null values
> --
>
> Key: KAFKA-5972
> URL: https://issues.apache.org/jira/browse/KAFKA-5972
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Tomas Zuklys
>Priority: Minor
>  Labels: easyfix, patch
> Attachments: kafka-transforms.patch
>
>
> Hi,
> I noticed a bug in the Flatten SMT while testing the different SMTs that are 
> provided out of the box.
> The Flatten SMT does not work as expected with schemaless JSON that has 
> properties with null values. 
> Example json:
> {code}
>   {A={D=dValue, B=null, C=cValue}}
> {code}
> The issue is in the if statement that checks for a null value.
> Current version:
> {code}
> for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
>     final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
>     Object value = entry.getValue();
>     if (value == null) {
>         newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
>         return;
>     }
>     ...
> {code}
> should be
> {code}
> for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
>     final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
>     Object value = entry.getValue();
>     if (value == null) {
>         newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
>         continue;
>     }
> {code}
> I have attached a patch containing the fix for this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)