[jira] [Commented] (KAFKA-1555) provide strong consistency with reasonable availability
[ https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122771#comment-14122771 ] saurabh agarwal commented on KAFKA-1555: Jun, Yes. This approach works fine for our use case. It is critical for our application not to lose the message. Setting it up at the topic level is fine as well. > provide strong consistency with reasonable availability > --- > > Key: KAFKA-1555 > URL: https://issues.apache.org/jira/browse/KAFKA-1555 > Project: Kafka > Issue Type: Improvement > Components: controller >Affects Versions: 0.8.1.1 >Reporter: Jiang Wu >Assignee: Gwen Shapira > Fix For: 0.8.2 > > > In a mission critical application, we expect a kafka cluster with 3 brokers > to satisfy two requirements: > 1. When 1 broker is down, no message loss or service blocking happens. > 2. In worse cases, such as when two brokers are down, service can be blocked, but > no message loss happens. > We found that the current kafka version (0.8.1.1) cannot achieve the requirements > due to three of its behaviors: > 1. when choosing a new leader from 2 followers in ISR, the one with fewer > messages may be chosen as the leader. > 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it > has fewer messages than the leader. > 3. ISR can contain only 1 broker, therefore acknowledged messages may be > stored in only 1 broker. > The following is an analytical proof. > We consider a cluster with 3 brokers and a topic with 3 replicas, and assume > that at the beginning, all 3 replicas, leader A, followers B and C, are in > sync, i.e., they have the same messages and are all in ISR. > According to the value of request.required.acks (acks for short), there are > the following cases. > 1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement. > 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this > time, although C hasn't received m, C is still in ISR. 
If A is killed, C can > be elected as the new leader, and consumers will miss m. > 3. acks=-1. B and C restart and are removed from ISR. Producer sends a > message m to A, and receives an acknowledgement. Disk failure happens in A > before B and C replicate m. Message m is lost. > In summary, no existing configuration can satisfy the requirements. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
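The approach agreed on in this thread can be summarized as a configuration sketch. The property names below reflect the direction this ticket converged on (a topic- or broker-level minimum ISR size combined with acks=-1); treat them as illustrative for this discussion rather than settings available in 0.8.1.1:

```properties
# Broker/topic level (illustrative; this is the fix direction, not a 0.8.1.1 setting):
# refuse to acknowledge a write unless at least 2 replicas are in sync,
# so an acknowledged message survives any single broker failure.
min.insync.replicas=2

# Producer side: wait for acknowledgement from all in-sync replicas.
request.required.acks=-1
```

With 3 replicas this addresses both requirements from the description: one broker down leaves 2 in-sync replicas (no loss, no blocking), while two brokers down blocks producers rather than accepting writes held by only one broker.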
[jira] [Commented] (KAFKA-1377) transient unit test failure in LogOffsetTest
[ https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123041#comment-14123041 ] Manikumar Reddy commented on KAFKA-1377: Yes, these failures are consistent on my machine. My machine configuration: 32 bit Ubuntu OS, i5 processor, 4GB > transient unit test failure in LogOffsetTest > > > Key: KAFKA-1377 > URL: https://issues.apache.org/jira/browse/KAFKA-1377 > Project: Kafka > Issue Type: Bug > Components: core >Reporter: Jun Rao >Assignee: Jun Rao > Labels: newbie > Fix For: 0.9.0 > > Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, > KAFKA-1377_2014-04-11_18:14:45.patch > > > Saw the following transient unit test failure. > kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED > junit.framework.AssertionFailedError: expected: but > was: > at junit.framework.Assert.fail(Assert.java:47) > at junit.framework.Assert.failNotEquals(Assert.java:277) > at junit.framework.Assert.assertEquals(Assert.java:64) > at junit.framework.Assert.assertEquals(Assert.java:71) > at > kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-1577) Exception in ConnectionQuotas while shutting down
[ https://issues.apache.org/jira/browse/KAFKA-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123408#comment-14123408 ] Joel Koshy commented on KAFKA-1577: --- The exception would only occur if the socket and channel were closed in the first place, no? > Exception in ConnectionQuotas while shutting down > - > > Key: KAFKA-1577 > URL: https://issues.apache.org/jira/browse/KAFKA-1577 > Project: Kafka > Issue Type: Bug > Components: core >Reporter: Joel Koshy >Assignee: Sriharsha Chintalapani > Labels: newbie > Attachments: KAFKA-1577.patch, KAFKA-1577_2014-08-20_19:57:44.patch, > KAFKA-1577_2014-08-26_07:33:13.patch, > KAFKA-1577_check_counter_before_decrementing.patch > > > {code} > [2014-08-07 19:38:08,228] ERROR Uncaught exception in thread > 'kafka-network-thread-9092-0': (kafka.utils.Utils$) > java.util.NoSuchElementException: None.get > at scala.None$.get(Option.scala:185) > at scala.None$.get(Option.scala:183) > at kafka.network.ConnectionQuotas.dec(SocketServer.scala:471) > at kafka.network.AbstractServerThread.close(SocketServer.scala:158) > at kafka.network.AbstractServerThread.close(SocketServer.scala:150) > at kafka.network.AbstractServerThread.closeAll(SocketServer.scala:171) > at kafka.network.Processor.run(SocketServer.scala:338) > at java.lang.Thread.run(Thread.java:662) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-1510) Force offset commits when migrating consumer offsets from zookeeper to kafka
[ https://issues.apache.org/jira/browse/KAFKA-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Koshy updated KAFKA-1510: -- Resolution: Fixed Status: Resolved (was: Patch Available) Thanks for the patch. +1 and committed to trunk. > Force offset commits when migrating consumer offsets from zookeeper to kafka > > > Key: KAFKA-1510 > URL: https://issues.apache.org/jira/browse/KAFKA-1510 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8.2 >Reporter: Joel Koshy >Assignee: Joel Koshy > Labels: newbie > Fix For: 0.8.2 > > Attachments: > Patch_to_push_unfiltered_offsets_to_both_Kafka_and_potentially_Zookeeper_when_Kafka_is_con.patch, > Unfiltered_offsets_commit_to_kafka_rebased.patch > > > When migrating consumer offsets from ZooKeeper to kafka, we have to turn on > dual-commit (i.e., the consumers will commit offsets to both zookeeper and > kafka) in addition to setting offsets.storage to kafka. However, we only commit offsets if they have changed since the last commit. For low-volume topics or for topics that receive data in bursts, > offsets may not move for a long period of time. Therefore we may want to > force the commit (even if offsets have not changed) when migrating (i.e., > when dual-commit is enabled) - we can add a minimum interval threshold (say, > force a commit after every 10 auto-commits), and also force a commit on rebalance and > shutdown. > Also, I think it is safe to switch the default for offsets.storage from > zookeeper to kafka and set the default to dual-commit (for people who have > not migrated yet). We have deployed this to the largest consumers at linkedin > and have not seen any issues so far (except for the migration caveat that > this jira will resolve). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
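The forced-commit policy described above can be sketched as follows. The class and parameter names are hypothetical (this is not Kafka code): commit whenever offsets changed, and additionally force a commit every N auto-commits while dual-commit is enabled.

```scala
// Hypothetical sketch of the proposed policy; names are illustrative, not Kafka APIs.
class MigrationCommitPolicy(forceEveryN: Int, dualCommitEnabled: Boolean) {
  private var autoCommitsSinceForce = 0

  // Called on each auto-commit tick; returns true if a commit should be sent,
  // even when offsets have not moved (e.g., a low-volume or bursty topic).
  def shouldCommit(offsetsChanged: Boolean): Boolean = {
    autoCommitsSinceForce += 1
    val force = dualCommitEnabled && autoCommitsSinceForce >= forceEveryN
    if (offsetsChanged || force) {
      autoCommitsSinceForce = 0
      true
    } else false
  }
}
```

Rebalance and shutdown would bypass the counter entirely and always commit, per the description.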
[jira] [Commented] (KAFKA-1577) Exception in ConnectionQuotas while shutting down
[ https://issues.apache.org/jira/browse/KAFKA-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123463#comment-14123463 ] Bogdan Calmac commented on KAFKA-1577: -- Maybe, I don't know the code well enough to draw that conclusion. But what would be a good reason to allow the exception in the first place? The exception isn't caused by an external factor but by a programming error (the assumption that the Option always has a value). This puts pressure on all methods up the call hierarchy to do a proper cleanup after a RuntimeException. The {{close()}} method I mentioned was just an example. Why look for trouble?
[jira] [Commented] (KAFKA-1577) Exception in ConnectionQuotas while shutting down
[ https://issues.apache.org/jira/browse/KAFKA-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123543#comment-14123543 ] Joel Koshy commented on KAFKA-1577: --- I think the main reason to allow the exception (as opposed to an existence check) is that it should never happen. If it does, then it is a bug and we need to know about it. We can either receive a runtime exception or do an existence check and log an error. There are a number of places elsewhere in the code where we expect keys to be present. If not, it is a (potentially serious) bug - we would rather let an exception be thrown than do an existence check and log an error, especially if it is a serious issue. In this case we traced the cause of this occurrence to a race condition that only happens during shutdown, which is why swallowing at that point is reasonable: that is the only circumstance under which a missing key is possible (and okay). Going forward, if the exception shows up at any time other than shutdown then we will need to again debug why that is the case and fix it - e.g., if it is related to the same race condition then we should fix that race condition. 
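The trade-off discussed above can be illustrated with a small Scala sketch. The map and method names are illustrative, not the actual ConnectionQuotas code: calling .get on a missing key fails fast with NoSuchElementException, while a pattern match lets execution continue past a bug.

```scala
import scala.collection.mutable

val counts = mutable.Map[String, Int]()

// Fail-fast style: a missing key is a bug, so let NoSuchElementException propagate
// and halt the caller rather than continuing with corrupt state.
def dec(key: String): Unit = {
  val current = counts.get(key).get // throws NoSuchElementException if key is absent
  if (current == 1) counts.remove(key) else counts.put(key, current - 1)
}

// Existence-check style: logs and continues, but the underlying bug can go undetected.
def decChecked(key: String): Unit = counts.get(key) match {
  case Some(1) => counts.remove(key)
  case Some(n) => counts.put(key, n - 1)
  case None    => println(s"ERROR: no connection count for $key") // execution proceeds
}
```

During shutdown, where the known race makes a missing key expected, swallowing the fail-fast exception at that one call site (as discussed above) preserves the bug-detection property everywhere else.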
[jira] [Commented] (KAFKA-1577) Exception in ConnectionQuotas while shutting down
[ https://issues.apache.org/jira/browse/KAFKA-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123563#comment-14123563 ] Bogdan Calmac commented on KAFKA-1577: -- OK, makes sense. Now, if this is the case, shouldn't the exception be swallowed as soon as possible, as below:
{code}
def close(channel: SocketChannel) {
  if (channel != null) {
    debug("Closing connection from " + channel.socket.getRemoteSocketAddress())
    // known race condition may lead to NoSuchElementException
    swallowError(connectionQuotas.dec(channel.socket.getInetAddress))
    swallowError(channel.socket().close())
    swallowError(channel.close())
  }
}
{code}
It might make no difference at runtime, but the code is more readable.
Re: Review Request 24676: Fix KAFKA-1583
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24676/ --- (Updated Sept. 5, 2014, 9:08 p.m.) Review request for kafka. Bugs: KAFKA-1583 https://issues.apache.org/jira/browse/KAFKA-1583 Repository: kafka Description (updated) --- rebase on KAFKA-1616 for checking diff files, please do not review Diffs (updated) - core/src/main/scala/kafka/api/FetchRequest.scala 51cdccf7f90eb530cc62b094ed822b8469d50b12 core/src/main/scala/kafka/api/FetchResponse.scala af9308737bf7832eca018c2b3ede703f7d1209f1 core/src/main/scala/kafka/api/OffsetCommitRequest.scala 861a6cf11dc6b6431fcbbe9de00c74a122f204bd core/src/main/scala/kafka/api/ProducerRequest.scala b2366e7eedcac17f657271d5293ff0bef6f3cbe6 core/src/main/scala/kafka/api/ProducerResponse.scala a286272c834b6f40164999ff8b7f8998875f2cfe core/src/main/scala/kafka/cluster/Partition.scala ff106b47e6ee194cea1cf589474fef975b9dd7e2 core/src/main/scala/kafka/common/ErrorMapping.scala 3fae7910e4ce17bc8325887a046f383e0c151d44 core/src/main/scala/kafka/log/Log.scala 0ddf97bd30311b6039e19abade41d2fbbad2f59b core/src/main/scala/kafka/network/BoundedByteBufferSend.scala a624359fb2059340bb8dc1619c5b5f226e26eb9b core/src/main/scala/kafka/server/DelayedFetch.scala e0f14e25af03e6d4344386dcabc1457ee784d345 core/src/main/scala/kafka/server/DelayedProduce.scala 9481508fc2d6140b36829840c337e557f3d090da core/src/main/scala/kafka/server/FetchRequestPurgatory.scala ed1318891253556cdf4d908033b704495acd5724 core/src/main/scala/kafka/server/KafkaApis.scala c584b559416b3ee4bcbec5966be4891e0a03eefb core/src/main/scala/kafka/server/OffsetManager.scala 43eb2a35bb54d32c66cdb94772df657b3a104d1a core/src/main/scala/kafka/server/ProducerRequestPurgatory.scala d4a7d4a79b44263a1f7e1a92874dd36aa06e5a3f core/src/main/scala/kafka/server/ReplicaManager.scala 68758e35d496a4659819960ae8e809d6e215568e core/src/main/scala/kafka/server/RequestPurgatory.scala cf3ed4c8f197d1197658645ccb55df0bce86bdd4 
core/src/main/scala/kafka/utils/DelayedItem.scala d7276494072f14f1cdf7d23f755ac32678c5675c core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala 03a424d45215e1e7780567d9559dae4d0ae6fc29 core/src/test/scala/unit/kafka/server/ISRExpirationTest.scala cd302aa51eb8377d88b752d48274e403926439f2 core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala a9c4ddc78df0b3695a77a12cf8cf25521a203122 core/src/test/scala/unit/kafka/server/RequestPurgatoryTest.scala a577f4a8bf420a5bc1e62fad6d507a240a42bcaa core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala ab60e9b3a4d063c838bdc7f97b3ac7d2ede87072 core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 09ed8f5a7a414ae139803bf82d336c2d80bf4ac5 Diff: https://reviews.apache.org/r/24676/diff/ Testing --- Unit tests Thanks, Guozhang Wang
[jira] [Commented] (KAFKA-1583) Kafka API Refactoring
[ https://issues.apache.org/jira/browse/KAFKA-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123581#comment-14123581 ] Guozhang Wang commented on KAFKA-1583: -- Updated reviewboard https://reviews.apache.org/r/24676/diff/ against branch origin/trunk > Kafka API Refactoring > - > > Key: KAFKA-1583 > URL: https://issues.apache.org/jira/browse/KAFKA-1583 > Project: Kafka > Issue Type: Bug >Reporter: Guozhang Wang >Assignee: Guozhang Wang > Fix For: 0.9.0 > > Attachments: KAFKA-1583.patch, KAFKA-1583_2014-08-20_13:54:38.patch, > KAFKA-1583_2014-08-21_11:30:34.patch, KAFKA-1583_2014-08-27_09:44:50.patch, > KAFKA-1583_2014-09-01_18:07:42.patch, KAFKA-1583_2014-09-02_13:37:47.patch, > KAFKA-1583_2014-09-05_14:08:36.patch > > > This is the next step of KAFKA-1430. Details can be found at this page: > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+API+Refactoring -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-1583) Kafka API Refactoring
[ https://issues.apache.org/jira/browse/KAFKA-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guozhang Wang updated KAFKA-1583: - Attachment: KAFKA-1583_2014-09-05_14:08:36.patch
[jira] [Commented] (KAFKA-1577) Exception in ConnectionQuotas while shutting down
[ https://issues.apache.org/jira/browse/KAFKA-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123613#comment-14123613 ] Joel Koshy commented on KAFKA-1577: --- We can do that, but it does allow the error to go undetected (apart from the error log) and execution continues (even in the non-shutdown case). It is a bug if the element does not exist - i.e., execution should not proceed beyond this point, which is what an exception provides. The swallow is okay in the shutdown code because we explicitly allow the key's non-existence there.
[jira] [Reopened] (KAFKA-1618) Exception thrown when running console producer with no port number for the broker
[ https://issues.apache.org/jira/browse/KAFKA-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gwen Shapira reopened KAFKA-1618: - Re-opening, since AFAIK, the patch is not committed yet. > Exception thrown when running console producer with no port number for the > broker > - > > Key: KAFKA-1618 > URL: https://issues.apache.org/jira/browse/KAFKA-1618 > Project: Kafka > Issue Type: Improvement >Affects Versions: 0.8.1.1 >Reporter: Gwen Shapira >Assignee: BalajiSeshadri > Labels: newbie > Fix For: 0.8.2 > > Attachments: KAFKA-1618.patch > > > When running console producer with just "localhost" as the broker list, I get > ArrayIndexOutOfBounds exception. > I expect either a clearer error about arguments or for the producer to > "guess" a default port. > [root@shapira-1 bin]# ./kafka-console-producer.sh --topic rufus1 > --broker-list localhost > java.lang.ArrayIndexOutOfBoundsException: 1 > at > kafka.client.ClientUtils$$anonfun$parseBrokerList$1.apply(ClientUtils.scala:102) > at > kafka.client.ClientUtils$$anonfun$parseBrokerList$1.apply(ClientUtils.scala:97) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) > at > scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) > at scala.collection.AbstractTraversable.map(Traversable.scala:105) > at kafka.client.ClientUtils$.parseBrokerList(ClientUtils.scala:97) > at > kafka.producer.BrokerPartitionInfo.<init>(BrokerPartitionInfo.scala:32) > at > kafka.producer.async.DefaultEventHandler.<init>(DefaultEventHandler.scala:41) > at kafka.producer.Producer.<init>(Producer.scala:59) > at kafka.producer.ConsoleProducer$.main(ConsoleProducer.scala:158) > at kafka.producer.ConsoleProducer.main(ConsoleProducer.scala) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-1577) Exception in ConnectionQuotas while shutting down
[ https://issues.apache.org/jira/browse/KAFKA-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123623#comment-14123623 ] Bogdan Calmac commented on KAFKA-1577: -- You're right. Thanks for the explanation. > Exception in ConnectionQuotas while shutting down > - > > Key: KAFKA-1577 > URL: https://issues.apache.org/jira/browse/KAFKA-1577 > Project: Kafka > Issue Type: Bug > Components: core >Reporter: Joel Koshy >Assignee: Sriharsha Chintalapani > Labels: newbie > Attachments: KAFKA-1577.patch, KAFKA-1577_2014-08-20_19:57:44.patch, > KAFKA-1577_2014-08-26_07:33:13.patch, > KAFKA-1577_check_counter_before_decrementing.patch > > > {code} > [2014-08-07 19:38:08,228] ERROR Uncaught exception in thread > 'kafka-network-thread-9092-0': (kafka.utils.Utils$) > java.util.NoSuchElementException: None.get > at scala.None$.get(Option.scala:185) > at scala.None$.get(Option.scala:183) > at kafka.network.ConnectionQuotas.dec(SocketServer.scala:471) > at kafka.network.AbstractServerThread.close(SocketServer.scala:158) > at kafka.network.AbstractServerThread.close(SocketServer.scala:150) > at kafka.network.AbstractServerThread.closeAll(SocketServer.scala:171) > at kafka.network.Processor.run(SocketServer.scala:338) > at java.lang.Thread.run(Thread.java:662) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-1618) Exception thrown when running console producer with no port number for the broker
[ https://issues.apache.org/jira/browse/KAFKA-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123632#comment-14123632 ] Gwen Shapira commented on KAFKA-1618: - [~balaji.sesha...@dish.com] - per Joe's comment, can you make sure URI handling is consistent across the scripts? That we always accept either hostname:port or hostname (and use default port)?
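The consistent behavior Gwen describes could look like the following sketch (the object and method names are hypothetical; the real parsing lives in kafka.client.ClientUtils.parseBrokerList): accept host:port, and fall back to a default port when only a hostname is given, instead of indexing past the end of the split array.

```scala
// Hypothetical sketch, not the actual ClientUtils implementation.
object BrokerListParser {
  val DefaultPort = 9092 // assumption: the conventional Kafka port

  // "host1:9092,host2" -> Seq(("host1", 9092), ("host2", 9092))
  def parse(brokerList: String): Seq[(String, Int)] =
    brokerList.split(",").toSeq.map { entry =>
      entry.split(":") match {
        case Array(host, port) => (host, port.toInt)
        case Array(host)       => (host, DefaultPort) // no port given: use the default
        case _                 => throw new IllegalArgumentException(s"Invalid broker entry: $entry")
      }
    }
}
```

Pattern matching on the split result avoids the ArrayIndexOutOfBoundsException seen in the stack trace, which comes from unconditionally reading index 1.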
[jira] [Comment Edited] (KAFKA-1618) Exception thrown when running console producer with no port number for the broker
[ https://issues.apache.org/jira/browse/KAFKA-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123652#comment-14123652 ] BalajiSeshadri edited comment on KAFKA-1618 at 9/5/14 9:49 PM: --- GwenShapira Can you please list the scripts for me? I will have them updated. was (Author: balaji.sesha...@dish.com): gwenshap Can you please list the scripts for me? I will have them updated.
[jira] [Commented] (KAFKA-1618) Exception thrown when running console producer with no port number for the broker
[ https://issues.apache.org/jira/browse/KAFKA-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123652#comment-14123652 ] BalajiSeshadri commented on KAFKA-1618: --- gwenshap Can you please list the scripts for me? I will have them updated.
[jira] [Comment Edited] (KAFKA-1618) Exception thrown when running console producer with no port number for the broker
[ https://issues.apache.org/jira/browse/KAFKA-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123652#comment-14123652 ] BalajiSeshadri edited comment on KAFKA-1618 at 9/5/14 9:51 PM: --- [~gwenshap] Can you please list the scripts for me? I will have them updated. was (Author: balaji.sesha...@dish.com): GwenShapira Can you please list the scripts for me? I will have them updated.
[jira] [Assigned] (KAFKA-1618) Exception thrown when running console producer with no port number for the broker
[ https://issues.apache.org/jira/browse/KAFKA-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BalajiSeshadri reassigned KAFKA-1618:
-
Assignee: Gwen Shapira (was: BalajiSeshadri)
[jira] [Commented] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest
[ https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123659#comment-14123659 ] Sriharsha Chintalapani commented on KAFKA-1482:
---
[~junrao] If you haven't started on this JIRA, can I take a look? Thanks.

> Transient test failures for kafka.admin.DeleteTopicTest
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
> Issue Type: Bug
> Reporter: Guozhang Wang
> Assignee: Jun Rao
> Labels: newbie
> Fix For: 0.8.2
>
> A couple of test cases have timing-related transient failures:
> kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic FAILED
> junit.framework.AssertionFailedError: Admin path /admin/delete_topic/test path not deleted even after a replica is restarted
> at junit.framework.Assert.fail(Assert.java:47)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:578)
> at kafka.admin.DeleteTopicTest.verifyTopicDeletion(DeleteTopicTest.scala:333)
> at kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:197)
> kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition FAILED
> junit.framework.AssertionFailedError: Replica logs not deleted after delete topic is complete
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at kafka.admin.DeleteTopicTest.verifyTopicDeletion(DeleteTopicTest.scala:338)
> at kafka.admin.DeleteTopicTest.testDeleteTopicDuringAddPartition(DeleteTopicTest.scala:216)
> kafka.admin.DeleteTopicTest > testRequestHandlingDuringDeleteTopic FAILED
> org.scalatest.junit.JUnitTestFailedError: fails with exception
> at org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:102)
> at org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:142)
> at org.scalatest.Assertions$class.fail(Assertions.scala:664)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:142)
> at kafka.admin.DeleteTopicTest.testRequestHandlingDuringDeleteTopic(DeleteTopicTest.scala:123)
> Caused by:
> org.scalatest.junit.JUnitTestFailedError: Test should fail because the topic is being deleted
> at org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:142)
> at org.scalatest.Assertions$class.fail(Assertions.scala:644)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:142)
> at kafka.admin.DeleteTopicTest.testRequestHandlingDuringDeleteTopic(DeleteTopicTest.scala:120)
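The first two failures originate in TestUtils.waitUntilTrue, i.e. a condition polled under a fixed deadline that slow machines can miss. A minimal sketch of that polling pattern is below; the class and method names are illustrative stand-ins, not Kafka's actual test utility.

```java
import java.util.function.BooleanSupplier;

// Minimal sketch of a waitUntilTrue-style helper: poll a condition until it
// holds or a deadline passes. Transient failures like the ones above arise
// when the deadline is too short for a slow CI machine. Hypothetical names,
// not the real kafka.utils.TestUtils API.
class TestWait {
    public static boolean waitUntilTrue(BooleanSupplier condition, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(50);  // poll interval between checks
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean();  // one last check at the deadline
    }
}
```

A test built on this pattern fails (as above, via Assert.fail) whenever the condition simply takes longer than the timeout, which is why such failures are timing-dependent rather than deterministic.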
Re: Review Request 24676: Fix KAFKA-1583
---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24676/
---
(Updated Sept. 5, 2014, 9:55 p.m.)

Review request for kafka.

Bugs: KAFKA-1583
https://issues.apache.org/jira/browse/KAFKA-1583

Repository: kafka

Description (updated)
---
Rebase on KAFKA-1616 and minor changes for unit tests

Diffs (updated)
-
core/src/main/scala/kafka/api/FetchRequest.scala 51cdccf7f90eb530cc62b094ed822b8469d50b12
core/src/main/scala/kafka/api/FetchResponse.scala af9308737bf7832eca018c2b3ede703f7d1209f1
core/src/main/scala/kafka/api/OffsetCommitRequest.scala 861a6cf11dc6b6431fcbbe9de00c74a122f204bd
core/src/main/scala/kafka/api/ProducerRequest.scala b2366e7eedcac17f657271d5293ff0bef6f3cbe6
core/src/main/scala/kafka/api/ProducerResponse.scala a286272c834b6f40164999ff8b7f8998875f2cfe
core/src/main/scala/kafka/cluster/Partition.scala ff106b47e6ee194cea1cf589474fef975b9dd7e2
core/src/main/scala/kafka/common/ErrorMapping.scala 3fae7910e4ce17bc8325887a046f383e0c151d44
core/src/main/scala/kafka/log/Log.scala 0ddf97bd30311b6039e19abade41d2fbbad2f59b
core/src/main/scala/kafka/network/BoundedByteBufferSend.scala a624359fb2059340bb8dc1619c5b5f226e26eb9b
core/src/main/scala/kafka/server/DelayedFetch.scala e0f14e25af03e6d4344386dcabc1457ee784d345
core/src/main/scala/kafka/server/DelayedProduce.scala 9481508fc2d6140b36829840c337e557f3d090da
core/src/main/scala/kafka/server/FetchRequestPurgatory.scala ed1318891253556cdf4d908033b704495acd5724
core/src/main/scala/kafka/server/KafkaApis.scala c584b559416b3ee4bcbec5966be4891e0a03eefb
core/src/main/scala/kafka/server/OffsetManager.scala 43eb2a35bb54d32c66cdb94772df657b3a104d1a
core/src/main/scala/kafka/server/ProducerRequestPurgatory.scala d4a7d4a79b44263a1f7e1a92874dd36aa06e5a3f
core/src/main/scala/kafka/server/ReplicaManager.scala 68758e35d496a4659819960ae8e809d6e215568e
core/src/main/scala/kafka/server/RequestPurgatory.scala cf3ed4c8f197d1197658645ccb55df0bce86bdd4
core/src/main/scala/kafka/utils/DelayedItem.scala d7276494072f14f1cdf7d23f755ac32678c5675c
core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala 03a424d45215e1e7780567d9559dae4d0ae6fc29
core/src/test/scala/unit/kafka/server/ISRExpirationTest.scala cd302aa51eb8377d88b752d48274e403926439f2
core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala a9c4ddc78df0b3695a77a12cf8cf25521a203122
core/src/test/scala/unit/kafka/server/RequestPurgatoryTest.scala a577f4a8bf420a5bc1e62fad6d507a240a42bcaa
core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala ab60e9b3a4d063c838bdc7f97b3ac7d2ede87072
core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 09ed8f5a7a414ae139803bf82d336c2d80bf4ac5

Diff: https://reviews.apache.org/r/24676/diff/

Testing
---
Unit tests

Thanks,
Guozhang Wang
[jira] [Updated] (KAFKA-1583) Kafka API Refactoring
[ https://issues.apache.org/jira/browse/KAFKA-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guozhang Wang updated KAFKA-1583:
-
Attachment: KAFKA-1583_2014-09-05_14:55:38.patch

> Kafka API Refactoring
> -
>
> Key: KAFKA-1583
> URL: https://issues.apache.org/jira/browse/KAFKA-1583
> Project: Kafka
> Issue Type: Bug
> Reporter: Guozhang Wang
> Assignee: Guozhang Wang
> Fix For: 0.9.0
>
> Attachments: KAFKA-1583.patch, KAFKA-1583_2014-08-20_13:54:38.patch, KAFKA-1583_2014-08-21_11:30:34.patch, KAFKA-1583_2014-08-27_09:44:50.patch, KAFKA-1583_2014-09-01_18:07:42.patch, KAFKA-1583_2014-09-02_13:37:47.patch, KAFKA-1583_2014-09-05_14:08:36.patch, KAFKA-1583_2014-09-05_14:55:38.patch
>
> This is the next step of KAFKA-1430. Details can be found at this page:
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+API+Refactoring
[jira] [Commented] (KAFKA-1583) Kafka API Refactoring
[ https://issues.apache.org/jira/browse/KAFKA-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123664#comment-14123664 ] Guozhang Wang commented on KAFKA-1583:
--
Updated reviewboard https://reviews.apache.org/r/24676/diff/ against branch origin/trunk
[jira] [Created] (KAFKA-1625) Sample Java code contains Scala syntax
David Chen created KAFKA-1625:
-
Summary: Sample Java code contains Scala syntax
Key: KAFKA-1625
URL: https://issues.apache.org/jira/browse/KAFKA-1625
Project: Kafka
Issue Type: Bug
Components: website
Reporter: David Chen

As I was reading the Kafka documentation, I noticed that some of the parameters use Scala syntax, even though the code appears to be Java. For example:
{code}
public static kafka.javaapi.consumer.ConsumerConnector createJavaConsumerConnector(config: ConsumerConfig);
{code}
Also, what is the reason for fully qualifying these classes? I understand that there are Scala and Java classes with the same name, but I think that fully qualifying them in the sample code would encourage that practice by users, which is not desirable in Java code.
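For illustration, the documented signature puts the type after the parameter name, Scala-style (`config: ConsumerConfig`); valid Java puts the type first. The sketch below shows the corrected declaration in isolation. The stub classes are assumptions standing in for the real kafka.javaapi.consumer and kafka.consumer types, so the snippet compiles on its own; they are not the actual Kafka classes.

```java
// Stub types standing in for the real Kafka classes (kafka.consumer.ConsumerConfig,
// kafka.javaapi.consumer.ConsumerConnector), just so the corrected declaration
// compiles in isolation.
class ConsumerConfig {}
class ConsumerConnector {}

class Consumer {
    // Valid Java: the type precedes the parameter name. The Scala-flavored
    // "createJavaConsumerConnector(config: ConsumerConfig)" from the docs
    // would not compile as Java.
    public static ConsumerConnector createJavaConsumerConnector(ConsumerConfig config) {
        return new ConsumerConnector();  // placeholder body for the sketch
    }
}
```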
[jira] [Updated] (KAFKA-1625) Sample Java code contains Scala syntax
[ https://issues.apache.org/jira/browse/KAFKA-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Chen updated KAFKA-1625:
--
Attachment: KAFKA-1625.site.0.patch

I am attaching a patch for this. How do I create an RB for a change to the website? I tried to manually create the RB and entered {{/site}} for the base directory, but RB gave me the following error:
{code}
Line undefined: Repository moved permanently to 'https://svn.apache.org/repos/asf/kafka/site/081/api.html'; please relocate
{code}
[jira] [Updated] (KAFKA-1625) Sample Java code contains Scala syntax
[ https://issues.apache.org/jira/browse/KAFKA-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Chen updated KAFKA-1625:
--
Status: Patch Available (was: Open)
[jira] [Created] (KAFKA-1626) Use Jekyll for website
David Chen created KAFKA-1626:
-
Summary: Use Jekyll for website
Key: KAFKA-1626
URL: https://issues.apache.org/jira/browse/KAFKA-1626
Project: Kafka
Issue Type: Improvement
Reporter: David Chen

Currently, the Kafka website uses Apache httpd includes. Developing the website could be made easier by switching to a static site generator such as Jekyll, which is what Samza uses for its website.
[jira] [Assigned] (KAFKA-1625) Sample Java code contains Scala syntax
[ https://issues.apache.org/jira/browse/KAFKA-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Chen reassigned KAFKA-1625:
-
Assignee: David Chen
[jira] [Updated] (KAFKA-1625) Sample Java code contains Scala syntax
[ https://issues.apache.org/jira/browse/KAFKA-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Chen updated KAFKA-1625:
--
Attachment: KAFKA-1625.site.1.patch

Attaching a new patch that removes the full qualification of class names in the sample code.