[jira] [Created] (KAFKA-5628) Kafka Startup fails on corrupted index files

2017-07-23 Thread Prasanna Gautam (JIRA)
Prasanna Gautam created KAFKA-5628:
--

 Summary: Kafka Startup fails on corrupted index files
 Key: KAFKA-5628
 URL: https://issues.apache.org/jira/browse/KAFKA-5628
 Project: Kafka
  Issue Type: Bug
Reporter: Prasanna Gautam








[jira] [Assigned] (KAFKA-5628) Kafka Startup fails on corrupted index files

2017-07-23 Thread Prasanna Gautam (JIRA)
 [ 
https://issues.apache.org/jira/browse/KAFKA-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Gautam reassigned KAFKA-5628:
--

 Assignee: Jun Rao
Affects Version/s: 0.10.2.0
  Environment: Ubuntu 14.04, Java 8(1.8.0_65)
  Description: 
One of our Kafka brokers shut down after a load test, and while there are some 
corrupted index files, the broker is failing to start with an unsafe memory 
access error:


{code:java}
[2017-07-23 15:52:32,019] FATAL Fatal error during KafkaServerStartable 
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.InternalError: a fault occurred in a recent unsafe memory access 
operation in compiled Java code
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:53)
at org.apache.kafka.common.utils.Utils.readFully(Utils.java:854)
at org.apache.kafka.common.utils.Utils.readFullyOrFail(Utils.java:827)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.loadRecord(FileLogInputStream.java:136)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.record(FileLogInputStream.java:149)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:225)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:224)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:224)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:231)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegments(Log.scala:188)
at kafka.log.Log.<init>(Log.scala:116)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:157)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

This doesn't seem to be the same as 
https://issues.apache.org/jira/browse/KAFKA-1554, because these topics are 
actively in use and the other empty indices are recovered fine.

It seems the machine had died because the disk was full.
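
For context on where this error comes from: "a fault occurred in a recent unsafe memory access operation" is the JVM's way of reporting a hardware-level fault (typically SIGBUS) during an Unsafe-backed memory access, most commonly seen when touching a memory-mapped file whose backing file is shorter than the mapping, which is exactly the state a disk-full crash can leave an index or segment file in. A minimal, hypothetical sketch (the file name and sizes are made up) that tends to reproduce the same class of failure on Linux:

{code:java}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class TruncatedMmapDemo {
    public static void main(String[] args) throws Exception {
        // Create a file and map a region, then shrink the file underneath the mapping,
        // similar to an index file left short when the disk filled up.
        try (RandomAccessFile raf = new RandomAccessFile("demo.index", "rw")) {
            raf.setLength(8192);
            MappedByteBuffer buf = raf.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, 8192);
            raf.setLength(0);
            // Touching a page beyond the new end of file raises SIGBUS, which the JVM
            // surfaces as java.lang.InternalError (or may crash outright, depending on
            // platform and JIT state).
            System.out.println(buf.get(4096));
        }
    }
}
{code}

The exact code path differs from the broker's (the trace above goes through FileChannel.read), but a file left shorter than expected by a full disk is the usual trigger for this class of error.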


> Kafka Startup fails on corrupted index files
> 
>
> Key: KAFKA-5628
> URL: https://issues.apache.org/jira/browse/KAFKA-5628
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
> Environment: Ubuntu 14.04, Java 8(1.8.0_65)
>Reporter: Prasanna Gautam
>Assignee: Jun Rao
>
> One of our Kafka brokers shut down after a load test, and while there are some 
> corrupted index files, the broker is failing to start with an unsafe memory 
> access error:
> {code:java}
> [2017-07-23 15:52:32,019] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.lang.InternalError: a fault occurred in a recent unsafe memory access 
> operation in compiled Java code
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:53)
> at org.apache.kafka.common.utils.Utils.readFully(Utils.java:854)
> at org.apache.kafka.common.utils.Utils.readFullyOrFail(Utils.java:827)
> at 
> org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.loadRecord(FileLogInputStream.java:136)
> at 
> org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.record(FileLogInputStream.java:149)
> at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:225)
> at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:224)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.

[jira] [Updated] (KAFKA-5628) Kafka Startup fails on corrupted index files

2017-07-23 Thread Prasanna Gautam (JIRA)
 [ 
https://issues.apache.org/jira/browse/KAFKA-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Gautam updated KAFKA-5628:
---
Description: 
One of our Kafka brokers shut down after a load test, and while there are some 
corrupted index files, the broker is failing to start with an unsafe memory 
access error:


{code:java}
[2017-07-23 15:52:32,019] FATAL Fatal error during KafkaServerStartable 
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.InternalError: a fault occurred in a recent unsafe memory access 
operation in compiled Java code
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:53)
at org.apache.kafka.common.utils.Utils.readFully(Utils.java:854)
at org.apache.kafka.common.utils.Utils.readFullyOrFail(Utils.java:827)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.loadRecord(FileLogInputStream.java:136)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.record(FileLogInputStream.java:149)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:225)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:224)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:224)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:231)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegments(Log.scala:188)
at kafka.log.Log.<init>(Log.scala:116)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:157)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

This doesn't seem to be the same as 
https://issues.apache.org/jira/browse/KAFKA-1554, because these topics are 
actively in use and the other empty indices are recovered fine.

Kafka on the machine had died because the disk was full, and the problem went 
away once the disk issue was resolved. Should Kafka check available disk space 
at startup and refuse to continue if the disk is full?
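
If such a startup check were added, one way to sketch it (the class, threshold, and wiring below are illustrative assumptions, not an existing Kafka API) would be to verify free space in every configured log directory before log loading begins:

{code:java}
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

// Hypothetical startup guard: refuse to start if any log directory is (nearly) full.
public class LogDirDiskCheck {

    // Illustrative threshold; Kafka 0.10.2 has no such setting.
    private static final long MIN_FREE_BYTES = 64L * 1024 * 1024;

    public static void verify(List<String> logDirs) throws IOException {
        for (String dir : logDirs) {
            FileStore store = Files.getFileStore(Paths.get(dir));
            long usable = store.getUsableSpace();
            if (usable < MIN_FREE_BYTES) {
                throw new IllegalStateException("Refusing to start: log directory " + dir
                        + " has only " + usable + " bytes free");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Example invocation with a made-up log.dirs value.
        verify(Arrays.asList("/tmp/kafka-logs"));
        System.out.println("All log directories have enough free space.");
    }
}
{code}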

  was:
One of our Kafka brokers shut down after a load test, and while there are some 
corrupted index files, the broker is failing to start with an unsafe memory 
access error:


{code:java}
[2017-07-23 15:52:32,019] FATAL Fatal error during KafkaServerStartable 
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.InternalError: a fault occurred in a recent unsafe memory access 
operation in compiled Java code
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:53)
at org.apache.kafka.common.utils.Utils.readFully(Utils.java:854)
at org.apache.kafka.common.utils.Utils.readFullyOrFail(Utils.java:827)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.loadRecord(FileLogInputStream.java:136)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.record(FileLogInputStream.java:149)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:225)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:224)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:224)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:231)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.s

[jira] [Updated] (KAFKA-5628) Kafka Startup fails on corrupted index files

2017-07-23 Thread Prasanna Gautam (JIRA)
 [ 
https://issues.apache.org/jira/browse/KAFKA-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Gautam updated KAFKA-5628:
---
   Priority: Minor  (was: Major)
Description: 
One of our Kafka brokers shut down after a load test, and while there are some 
corrupted index files, the broker is failing to start with an unsafe memory 
access error:


{code:java}
[2017-07-23 15:52:32,019] FATAL Fatal error during KafkaServerStartable 
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.InternalError: a fault occurred in a recent unsafe memory access 
operation in compiled Java code
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:53)
at org.apache.kafka.common.utils.Utils.readFully(Utils.java:854)
at org.apache.kafka.common.utils.Utils.readFullyOrFail(Utils.java:827)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.loadRecord(FileLogInputStream.java:136)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.record(FileLogInputStream.java:149)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:225)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:224)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:224)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:231)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegments(Log.scala:188)
at kafka.log.Log.<init>(Log.scala:116)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:157)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

This doesn't seem to be the same as 
https://issues.apache.org/jira/browse/KAFKA-1554, because these topics are 
actively in use and the other empty indices are recovered fine.

It seems the machine had died because the disk was full, and the problem went 
away once the disk issue was resolved. Should Kafka check available disk space 
at startup and refuse to continue if the disk is full?

  was:
One of our Kafka brokers shut down after a load test, and while there are some 
corrupted index files, the broker is failing to start with an unsafe memory 
access error:


{code:java}
[2017-07-23 15:52:32,019] FATAL Fatal error during KafkaServerStartable 
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.InternalError: a fault occurred in a recent unsafe memory access 
operation in compiled Java code
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:53)
at org.apache.kafka.common.utils.Utils.readFully(Utils.java:854)
at org.apache.kafka.common.utils.Utils.readFullyOrFail(Utils.java:827)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.loadRecord(FileLogInputStream.java:136)
at 
org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.record(FileLogInputStream.java:149)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:225)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:224)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:224)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:231)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.muta

[jira] [Commented] (KAFKA-2526) Console Producer / Consumer's serde config is not working

2017-07-23 Thread Aish Raj Dahal (JIRA)
[ 
https://issues.apache.org/jira/browse/KAFKA-2526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16097776#comment-16097776
 ] 

Aish Raj Dahal commented on KAFKA-2526:
---

[~mgharat]: Hey Mayuresh! Are you working on this? If not, I would like to get 
started on it.

> Console Producer / Consumer's serde config is not working
> -
>
> Key: KAFKA-2526
> URL: https://issues.apache.org/jira/browse/KAFKA-2526
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
>  Labels: newbie
>
> Although the console producer lets one specify the key and value serializers, 
> they are actually not used, since 1) it always serializes the input string via 
> String.getBytes (hence always presuming the string serializer), and 2) the 
> setting is actually only passed into the old producer. The same issues exist in 
> the console consumer.
> In addition, the configs in the console producer are messy: we have 1) some 
> config values exposed as command-line parameters, 2) some config values in 
> --producer-property, and 3) some in --property.
> It would be great to clean up the configs in both the console producer and 
> consumer, put them into a single --property parameter which could possibly also 
> take a file to read property values from, and only leave --new-producer as the 
> other command-line parameter.
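
As a rough sketch of the kind of fix being asked for, the hard-coded String.getBytes call could be replaced by a serializer instantiated from the user-supplied properties. The helper below is an assumption for illustration (the property key and default follow the regular producer's conventions), not the console producer's actual code:

{code:java}
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical helper: build the value serializer from user-supplied properties
// instead of always falling back to String.getBytes.
public class ConsoleSerdeConfig {

    @SuppressWarnings({"unchecked", "rawtypes"})
    public static Serializer<Object> valueSerializer(Properties props) throws Exception {
        String className = props.getProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        Serializer<Object> serializer =
                (Serializer<Object>) Class.forName(className).newInstance();
        // Let the serializer see the full config, mirroring what the producer does.
        serializer.configure((Map) props, false);
        return serializer;
    }
}
{code}

The key serializer would be handled the same way, with isKey set to true in configure().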





[jira] [Created] (KAFKA-5629) Console Consumer overrides auto.offset.reset property when provided on the command line without warning about it.

2017-07-23 Thread JIRA
Sönke Liebau created KAFKA-5629:
---

 Summary: Console Consumer overrides auto.offset.reset property 
when provided on the command line without warning about it.
 Key: KAFKA-5629
 URL: https://issues.apache.org/jira/browse/KAFKA-5629
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.11.0.0
Reporter: Sönke Liebau
Assignee: Sönke Liebau
Priority: Trivial


The console consumer allows consumer options to be specified on the command line 
with the --consumer-property parameter.

In the case of auto.offset.reset, however, this setting is always silently 
ignored, because the behavior is controlled via the --from-beginning parameter.
I believe that behavior is fine, but we should log a warning whenever 
auto.offset.reset is specified on the command line and then overridden in the 
code, to avoid potential confusion.
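
A minimal sketch of the proposed warning, assuming the --consumer-property values have already been parsed into a Properties object (the class and method here are illustrative, not the console consumer's actual structure):

{code:java}
import java.util.Properties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical check run while the console consumer builds its config.
public class OffsetResetWarning {

    private static final Logger log = LoggerFactory.getLogger(OffsetResetWarning.class);

    public static void applyOffsetReset(Properties consumerProps, boolean fromBeginning) {
        String requested = consumerProps.getProperty("auto.offset.reset");
        String effective = fromBeginning ? "earliest" : "latest";
        if (requested != null && !requested.equals(effective)) {
            // Warn instead of silently dropping the user's setting.
            log.warn("auto.offset.reset={} was supplied via --consumer-property but is "
                    + "overridden to {} (controlled by --from-beginning)", requested, effective);
        }
        consumerProps.setProperty("auto.offset.reset", effective);
    }
}
{code}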



