GitHub user kyleabcha opened a pull request:

    https://github.com/apache/kafka/pull/3750

    KafkaConsumer hangs when bootstrap servers do not exist

    Could anyone help me on this?
    
    We hit an issue when a non-existent host:port is entered for the bootstrap.servers property on a KafkaConsumer: the created KafkaConsumer hangs forever.
    
    **the debug message:**
    java.net.ConnectException: Connection timed out: no further information
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
        at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:95)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:359)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:432)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:199)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:223)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:200)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
    [2017-08-28 09:20:56,400] DEBUG Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
    [2017-08-28 09:20:56,400] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
    [2017-08-28 09:20:56,400] DEBUG Give up sending metadata request since no node is available (org.apache.kafka.clients.NetworkClient)
    [2017-08-28 09:20:56,450] DEBUG Initialize connection to node -1 for sending metadata request (org.apache.kafka.clients.NetworkClient)
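
    For context, a minimal consumer configuration of the kind that triggers this hang (the host, port, and group id below are hypothetical placeholders, not values from the report): with no broker reachable at bootstrap.servers, `poll()` blocks indefinitely inside `ensureCoordinatorReady`, as the stack trace shows.

    ```
    bootstrap.servers=no-such-host:9092
    group.id=test-group
    key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
    value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
    ```

    Note that the timeout argument of `poll(long)` in clients of this era does not bound the coordinator lookup; a `poll(Duration)` overload that does respect a timeout there was only added later (KIP-266, Kafka 2.0.0).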

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/kafka trunk

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/3750.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3750
    
----
commit de982ba3fbf99664f0aaa5aa4b72af8fd1881232
Author: Randall Hauch <rha...@gmail.com>
Date:   2017-06-21T00:48:32Z

    KAFKA-5472: Eliminated duplicate group names when validating connector 
results
    
    Kafka Connect was adding duplicate group names in the response from the 
REST API's validation of connector configurations. This fixes the duplicates 
and maintains the order of the `ConfigDef` objects so that the `ConfigValue` 
results are in the same order.
    
    This is a blocker and should be merged to 0.11.0.
    
    Author: Randall Hauch <rha...@gmail.com>
    
    Reviewers: Ewen Cheslack-Postava <e...@confluent.io>
    
    Closes #3379 from rhauch/KAFKA-5472

commit 76f6e14b07bd97d17f9275968a047d68b4658704
Author: Kelvin Rutt <ruttkel...@gmail.com>
Date:   2017-06-21T01:00:41Z

    KAFKA-5413; Log cleaner fails due to large offset in segment file
    
    The contribution is my original work and I license the work to the project under the project's open source license.
    
    junrao, I had already made the code change before your last comment. I've done pretty much what you said, except that I've not used the current segment because I wasn't sure whether it would always be available.
    I'm happy to change it if you prefer.
    I've run all the unit and integration tests, which all passed.
    
    Author: Kelvin Rutt <ruttkel...@gmail.com>
    Author: Kelvin Rutt <kelvin.r...@sky.com>
    
    Reviewers: Jun Rao <jun...@gmail.com>
    
    Closes #3357 from kelvinrutt/kafka_5413_bugfix

commit cae5977ed0d6b63f992973800273769c970b0a0a
Author: Matthias J. Sax <matth...@confluent.io>
Date:   2017-06-21T08:32:46Z

    MINOR: explain producer naming within Streams
    
    Author: Matthias J. Sax <matth...@confluent.io>
    
    Reviewers: Bill Bejeck <bbej...@gmail.com>, Damian Guy 
<damian....@gmail.com>
    
    Closes #3378 from mjsax/minor-producer-naming

commit 55a90938a12d8928289a30588bbad6c959c48674
Author: Eno Thereska <eno.there...@gmail.com>
Date:   2017-06-21T10:46:59Z

    MINOR: add Yahoo benchmark to nightly runs
    
    Author: Eno Thereska <eno.there...@gmail.com>
    
    Reviewers: Damian Guy <damian....@gmail.com>
    
    Closes #3289 from enothereska/yahoo-benchmark

commit 254add953477df65fe36144dc0714e0c9815c767
Author: Apurva Mehta <apu...@confluent.io>
Date:   2017-06-21T18:00:04Z

    KAFKA-5477; Lower retry backoff for first AddPartitions in transaction
    
    This patch lowers the retry backoff when receiving a CONCURRENT_TRANSACTIONS error from an AddPartitions request. The default of 100ms would mean that back-to-back transactions would be at least 100ms long, making things too slow.
    
    Author: Apurva Mehta <apu...@confluent.io>
    
    Reviewers: Guozhang Wang <wangg...@gmail.com>, Ismael Juma 
<ism...@juma.me.uk>, Jason Gustafson <ja...@confluent.io>
    
    Closes #3377 from apurvam/HOTFIX-lower-retry-for-add-partitions

commit f848e2cd681ced74fefdd38a54a44e31e4d867fb
Author: Guozhang Wang <wangg...@gmail.com>
Date:   2017-06-21T20:05:54Z

    Revert "MINOR: make flush no-op as we don't need to call flush on commit."
    
    This reverts commit 90b2a2bf664e4e40d4cd1b46c72732c5edb97cf9.

commit e6e263174300ffab05676790f2a6c963ba24e5c9
Author: Jason Gustafson <ja...@confluent.io>
Date:   2017-06-21T21:04:19Z

    MINOR: Detail message/batch size implications for conversion between old 
and new formats
    
    Author: Jason Gustafson <ja...@confluent.io>
    
    Reviewers: Ismael Juma <ism...@juma.me.uk>
    
    Closes #3373 from hachikuji/fetch-size-upgrade-notes

commit 96587f4b1ffd372d3e4f9a1fba6fc1d2f84a191d
Author: Ewen Cheslack-Postava <m...@ewencp.org>
Date:   2017-06-21T21:20:48Z

    KAFKA-5475: Connector config validation should include fields for defined 
transformation aliases
    
    Author: Ewen Cheslack-Postava <m...@ewencp.org>
    
    Reviewers: Konstantine Karantasis <konstant...@confluent.io>, Jason 
Gustafson <ja...@confluent.io>
    
    Closes #3399 from ewencp/kafka-5475-validation-transformations

commit bc47e9d6ca976ba3c15249500b2bb6f6355816bc
Author: Apurva Mehta <apu...@confluent.io>
Date:   2017-06-21T21:41:51Z

    KAFKA-5491; Enable transactions in ProducerPerformance Tool
    
    With this patch, the `ProducerPerformance` tool can create transactions of differing durations.
    
    This patch was used to collect the initial set of benchmarks for transaction performance, documented here: https://docs.google.com/spreadsheets/d/1dHY6M7qCiX-NFvsgvaE0YoVdNq26uA8608XIh_DUpI4/edit#gid=282787170
    
    Author: Apurva Mehta <apu...@confluent.io>
    
    Reviewers: Jun Rao <jun...@gmail.com>
    
    Closes #3400 from apurvam/MINOR-add-transaction-size-to-producre-perf

commit 914e42a28254ef6a4818b3fcdc2197db6fbe8e0f
Author: Matthias J. Sax <matth...@confluent.io>
Date:   2017-06-22T00:16:48Z

    KAFKA-5474: Streams StandbyTask should not checkpoint on commit if EOS is enabled
    
    <strike> - actual fix for `StandbyTask#commit()` </strike>
    
    Additionally (for debugging):
     - EOS test, does not report "expected" value correctly
     - add `IntegerDecoder` (to be use with `kafka.tools.DumpLogSegments`)
     - add test for `StreamTask` to not checkpoint on commit if EOS enabled
    
    Author: Matthias J. Sax <matth...@confluent.io>
    
    Reviewers: Bill Bejeck <bbej...@gmail.com>, Damian Guy 
<damian....@gmail.com>, Guozhang Wang <wangg...@gmail.com>
    
    Closes #3375 from mjsax/kafka-5474-eos-standby-task

commit 4e8797f54ef9d2d7f40e3100943ae8afd5496b16
Author: Jeyhun Karimov <je.kari...@gmail.com>
Date:   2017-06-22T07:40:54Z

    KAFKA-4659; Improve test coverage of CachingKeyValueStore
    
    Author: Jeyhun Karimov <je.kari...@gmail.com>
    
    Reviewers: Matthias J Sax <matth...@confluent.io>, Guozhang Wang 
<wangg...@gmail.com>, Damian Guy <damian....@gmail.com>
    
    Closes #3291 from jeyhunkarimov/KAFKA-4659

commit 7f4feda959f5a0c438487844a0754cdf75f32a46
Author: Guozhang Wang <wangg...@gmail.com>
Date:   2017-06-22T07:53:32Z

    MINOR: Turn off caching in demos for more understandable outputs
    
    Author: Guozhang Wang <wangg...@gmail.com>
    
    Reviewers: Matthias J Sax <matth...@confluent.io>, Bill Bejeck 
<bbej...@gmail.com>
    
    Closes #3403 from guozhangwang/KMinor-turn-off-caching-in-demo

commit cb5e1f0a40e9a9779a5dcabf555a593363728b33
Author: Jeyhun Karimov <je.kari...@gmail.com>
Date:   2017-06-22T11:23:58Z

    KAFKA-4785; Records from internal repartitioning topics should always use 
RecordMetadataTimestampExtractor
    
    Author: Jeyhun Karimov <je.kari...@gmail.com>
    
    Reviewers: Matthias J. Sax <matth...@confluent.io>, Eno Thereska 
<eno.there...@gmail.com>, Bill Bejeck <bbej...@gmail.com>, Damian Guy 
<damian....@gmail.com>
    
    Closes #3106 from jeyhunkarimov/KAFKA-4785

commit a4794b11b201804c39ee9f9a2f32dbdd7c2c246b
Author: Ismael Juma <ism...@juma.me.uk>
Date:   2017-06-22T11:48:53Z

    KAFKA-5486: org.apache.kafka logging should go to server.log
    
    The current config sends org.apache.kafka and any unspecified logger to
    stdout. They should go to `server.log` instead.
    
    Author: Ismael Juma <ism...@juma.me.uk>
    
    Reviewers: Damian Guy <damian....@gmail.com>
    
    Closes #3402 from ijuma/kafka-5486-org.apache.kafka-logging-server.log
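
    The fix amounts to routing those loggers to the file appender. A log4j.properties fragment of roughly this shape achieves it (the appender name here is assumed for illustration, not copied from the patch):

    ```
    # Route org.apache.kafka to the server.log file appender...
    log4j.logger.org.apache.kafka=INFO, kafkaAppender
    # ...and stop it from also reaching the root logger's stdout appender.
    log4j.additivity.org.apache.kafka=false
    ```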

commit a6799f4e14ac68a5915bce50f37343bec45c988a
Author: Tom Bentley <tbent...@redhat.com>
Date:   2017-06-22T12:42:32Z

    KAFKA-4059; API Design section under Implementation is out of date
    
    It describes the old deprecated clients and it's better to just
    remove it.
    
    The contribution is my original work and I license the work to the
    project under the project's open source license.
    
    Author: Tom Bentley <tbent...@redhat.com>
    
    Reviewers: Ismael Juma <ism...@juma.me.uk>
    
    Closes #3385 from tombentley/KAFKA-4059

commit 785d8e20caf31d4165dbd9573828b74e5859b259
Author: Kevin Sweeney <restlessdes...@users.noreply.github.com>
Date:   2017-06-22T13:03:34Z

    MINOR: Provide link to ZooKeeper within Quickstart
    
    Author: Kevin Sweeney <restlessdes...@users.noreply.github.com>
    
    Reviewers: Ismael Juma <ism...@juma.me.uk>
    
    Closes #3372 from restlessdesign/patch-1

commit adfaa1161150635b9bb0e36a573382c5b68960e2
Author: Jeyhun Karimov <je.kari...@gmail.com>
Date:   2017-06-22T14:00:41Z

    KAFKA-4655; Improve test coverage of CompositeReadOnlySessionStore
    
    Author: Jeyhun Karimov <je.kari...@gmail.com>
    
    Reviewers: Matthias J. Sax <matth...@confluent.io>, Damian Guy 
<damian....@gmail.com>
    
    Closes #3290 from jeyhunkarimov/KAFKA-4655

commit 1744a9b4c2a13545036f172064df01b34b0dded0
Author: Jeyhun Karimov <je.kari...@gmail.com>
Date:   2017-06-22T14:04:29Z

    KAFKA-4658; Improve test coverage InMemoryKeyValueLoggedStore
    
    Author: Jeyhun Karimov <je.kari...@gmail.com>
    
    Reviewers: Matthias J. Sax <matth...@confluent.io>, Damian Guy 
<damian....@gmail.com>
    
    Closes #3293 from jeyhunkarimov/KAFKA-4658

commit fc58ac594f0eb63e0928374f67eba25bfa18eaea
Author: Jason Gustafson <ja...@confluent.io>
Date:   2017-06-22T15:54:28Z

    KAFKA-5490; Skip empty record batches in the consumer
    
    The actual fix for KAFKA-5490 is in
    https://github.com/apache/kafka/pull/3406.
    
    This is just the consumer change that will allow the cleaner
    to use empty record batches without breaking 0.11.0.0
    consumers (assuming that KAFKA-5490 does not make the cut).
    This is a safe change even if we decide to go with a different option
    for KAFKA-5490 and I'd like to include it in RC2.
    
    Author: Jason Gustafson <ja...@confluent.io>
    Author: Ismael Juma <ism...@juma.me.uk>
    
    Reviewers: Damian Guy <damian....@gmail.com>, Ismael Juma 
<ism...@juma.me.uk>
    
    Closes #3408 from ijuma/kafka-5490-consumer-should-skip-empty-batches

commit 5d9563d95fb09903d13b7135e1136081feb4fc4b
Author: Ismael Juma <ism...@juma.me.uk>
Date:   2017-06-22T15:56:06Z

    MINOR: Switch ZK client logging to INFO
    
    Author: Ismael Juma <ism...@juma.me.uk>
    
    Reviewers: Jun Rao <jun...@gmail.com>
    
    Closes #3409 from ijuma/tweak-log-config

commit cd11fd787438d26324d9644c248b812d25e26b34
Author: Ewen Cheslack-Postava <m...@ewencp.org>
Date:   2017-06-22T20:00:12Z

    KAFKA-5498: ConfigDef derived from another ConfigDef did not correctly 
compute parentless configs
    
    Author: Ewen Cheslack-Postava <m...@ewencp.org>
    
    Reviewers: Gwen Shapira
    
    Closes #3412 from ewencp/kafka-5498-base-configdef-parentless-configs

commit 6d2fbfc911120c178d9ba2528179fb4d8475afc4
Author: Onur Karaman <okara...@linkedin.com>
Date:   2017-06-22T21:28:03Z

    KAFKA-5502; read current brokers from zookeeper upon processing broker 
change
    
    Dong Lin's testing of the 0.11.0 release revealed a controller-side 
performance regression in clusters with many brokers and many partitions when 
bringing up many brokers simultaneously.
    
    The regression is caused by KAFKA-5028: a Watcher receives WatchedEvent 
notifications from the raw ZooKeeper client EventThread. A WatchedEvent only 
contains the following information:
    - KeeperState
    - EventType
    - path
    
    Note that it does not actually contain the current data or current set of 
children associated with the data/child change notification. It is up to the 
user to do this lookup to see the current data or set of children.
    
    ZkClient is itself a Watcher. When it receives a WatchedEvent, it puts a 
ZkEvent into its own queue which its own ZkEventThread processes. Users of 
ZkClient interact with these notifications through listeners (IZkDataListener, 
IZkChildListener). IZkDataListener actually expects as input the current data 
of the watched znode, and likewise IZkChildListener actually expects as input 
the current set of children of the watched znode. In order to provide this 
information to the listeners, the ZkEventThread, when processing the ZkEvent in 
its queue, looks up the information (either the current data or the current set of children), simultaneously sets up the next watch, and passes the result to the listener.
    
    The regression introduced in KAFKA-5028 lies in the time at which we look up the information needed for the event processing.
    
    In the past, the result of the lookup done by the ZkEventThread during ZkEvent processing was passed into the listener, which processed it immediately afterwards. For instance, in ZkClient.fireChildChangedEvents:
    ```
    List<String> children = getChildren(path);
    listener.handleChildChange(path, children);
    ```
    Now, however, there are multiple listeners that pass information looked up 
by the ZkEventThread into a ControllerEvent which gets processed potentially 
much later. For instance in BrokerChangeListener:
    ```
    class BrokerChangeListener(controller: KafkaController) extends IZkChildListener with Logging {
      override def handleChildChange(parentPath: String, currentChilds: java.util.List[String]): Unit = {
        import JavaConverters._
        controller.addToControllerEventQueue(controller.BrokerChange(currentChilds.asScala))
      }
    }
    ```
    
    In terms of impact, this:
    - increases the odds of working with stale information by the time the 
ControllerEvent gets processed.
    - can cause the cluster to take a long time to stabilize if you bring up 
many brokers simultaneously.
    
    In terms of how to solve it:
    - (short term) just ignore the ZkClient's information lookup and repeat the 
lookup at the start of the ControllerEvent. This is the approach taken in this 
ticket.
    - (long term) try to remove a queue. This basically means getting rid of 
ZkClient. This is likely the approach that will be taken in KAFKA-5501.
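    
    The timing issue described above can be illustrated with a plain-Java sketch (the names and structure are hypothetical, not Kafka's code): an event that captures a snapshot at notification time can see stale state by the time a slow queue drains, while an event that defers the lookup to processing time sees the current state.

    ```java
    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Queue;
    import java.util.concurrent.atomic.AtomicReference;

    public class StaleLookupDemo {
        // Stand-in for a znode's current children (e.g. live broker ids).
        static final AtomicReference<List<String>> liveBrokers =
            new AtomicReference<>(List.of("b1"));
        // Stand-in for the controller's event queue.
        static final Queue<Runnable> queue = new ArrayDeque<>();

        public static void main(String[] args) {
            // Regression-style event: snapshot captured at notification time.
            List<String> snapshot = liveBrokers.get();
            queue.add(() -> System.out.println("snapshot at notify time: " + snapshot));

            // Fix-style event: lookup deferred to processing time.
            queue.add(() -> System.out.println("lookup at process time: " + liveBrokers.get()));

            // More brokers register before the queue is drained.
            liveBrokers.set(List.of("b1", "b2", "b3"));

            while (!queue.isEmpty()) queue.poll().run();
        }
    }
    ```

    The first event prints only [b1] even though three brokers are up by then, which is the stale view the short-term fix avoids by repeating the lookup when the event is processed.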
    
    Author: Onur Karaman <okara...@linkedin.com>
    
    Reviewers: Ismael Juma <ism...@juma.me.uk>, Jun Rao <jun...@gmail.com>
    
    Closes #3413 from onurkaraman/KAFKA-5502

commit 4baca9172dd662c2e45cd99c638a052eeca688c5
Author: Matthias J. Sax <matth...@confluent.io>
Date:   2017-06-22T22:00:29Z

    HOTFIX: reduce log verbosity on commit
    
    Author: Matthias J. Sax <matth...@confluent.io>
    
    Reviewers: Bill Bejeck <bbej...@gmail.com>, Eno Thereska 
<eno.there...@gmail.com>, Ismael Juma <ism...@juma.me.uk>
    
    Closes #3414 from mjsax/hotfix-commit-logging

commit b62cccd078210a5333a6dbc64881dd36f925e139
Author: Matthias J. Sax <matth...@confluent.io>
Date:   2017-06-22T23:16:07Z

    MINOR: improve test README
    
    Author: Matthias J. Sax <matth...@confluent.io>
    
    Reviewers: Ewen Cheslack-Postava <e...@confluent.io>
    
    Closes #3416 from mjsax/minor-aws

commit ac539796478d6f39933d02f52e9c8972ac07466c
Author: Matthias J. Sax <matth...@confluent.io>
Date:   2017-06-22T23:42:55Z

    MINOR: update AWS test setup guide
    
    Author: Matthias J. Sax <matth...@confluent.io>
    
    Reviewers: Joseph Rea <j...@users.noreply.github.com>, Ewen 
Cheslack-Postava <e...@confluent.io>
    
    Closes #2575 from mjsax/minor-update-system-test-readme

commit 2420491f417012ba5215a9f72fa5e3a0c586c8e8
Author: Damian Guy <damian....@gmail.com>
Date:   2017-06-23T07:59:13Z

    KAFKA-4913; prevent creation of window stores with less than 2 segments
    
    Throw IllegalArgumentException when attempting to create a `WindowStore` with fewer than 2 segments, either via `Stores` or directly with `RocksDBWindowStoreSupplier`.
    
    Author: Damian Guy <damian....@gmail.com>
    
    Reviewers: Eno Thereska <eno.there...@gmail.com>, Matthias J. Sax 
<matth...@confluent.io>, Bill Bejeck <bbej...@gmail.com>, Guozhang Wang 
<wangg...@gmail.com>
    
    Closes #3410 from dguy/kafka-4913

commit 26eea1d71e57348284ea00182045c8d943336f4e
Author: Jeyhun Karimov <je.kari...@gmail.com>
Date:   2017-06-23T10:32:47Z

    KAFKA-4656; Improve test coverage of CompositeReadOnlyKeyValueStore
    
    Author: Jeyhun Karimov <je.kari...@gmail.com>
    
    Reviewers: Matthias J. Sax <matth...@confluent.io>, Eno Thereska 
<eno.there...@gmail.com>, Damian Guy <damian....@gmail.com>
    
    Closes #3292 from jeyhunkarimov/KAFKA-4656

commit 701e318ee1046946ce6681a59223504d8c33d751
Author: Jeyhun Karimov <je.kari...@gmail.com>
Date:   2017-06-23T10:41:02Z

    KAFKA-4653; Improve test coverage of RocksDBStore
    
    Author: Jeyhun Karimov <je.kari...@gmail.com>
    
    Reviewers: Matthias J. Sax <matth...@confluent.io>, Damian Guy 
<damian....@gmail.com>
    
    Closes #3294 from jeyhunkarimov/KAFKA-4653

commit 9ada0f81695d99587b18bffe939de651065076ab
Author: ppatierno <ppatie...@live.com>
Date:   2017-06-23T13:14:18Z

    MINOR: Fixed how logging methods are used so they are consistent
    
    In the Streams library there are a few cases where we don't leverage logging method features (i.e. using the {} placeholder instead of string concatenation, or passing the exception variable).
    
    Author: ppatierno <ppatie...@live.com>
    
    Reviewers: Damian Guy <damian....@gmail.com>
    
    Closes #3419 from ppatierno/streams-consistent-logging

commit b490368735b3bfe82acab3f2db4894a757abb966
Author: Eno Thereska <eno.there...@gmail.com>
Date:   2017-06-23T14:56:29Z

    HOTFIX: Don't check metadata unless you are creating topic
    
    During a rolling broker upgrade, it's likely that not enough brokers are ready yet. If Streams does not need to create a topic, it shouldn't check how many brokers are up.
    
    The system test for this is in a separate PR: 
https://github.com/apache/kafka/pull/3411
    
    Author: Eno Thereska <eno.there...@gmail.com>
    
    Reviewers: Damian Guy <damian....@gmail.com>
    
    Closes #3418 from enothereska/hotfix-replication

----

