Hi Ismael,

Thank you for your reply. It is 0.10.0.1.

On Tue, Apr 18, 2017 at 12:52 AM Ismael Juma <ism...@juma.me.uk> wrote:

> Hi Anas,
>
> What is the version of the consumer?
>
> Ismael
>
> On Mon, Apr 17, 2017 at 5:32 PM, Anas Mosaad <anas.mos...@incorta.com>
> wrote:
>
>
>
> > Hi All,
> >
> > We have a customer that recently upgraded their brokers to 0.10.1.1. After
> > the upgrade, they are unable to consume any messages. Can someone please
> > help identify what the issue might be?
> >
> > The error being thrown is:
> >
> > > Failed to send SSL Close message
> > > [org.apache.kafka.common.network.SslTransportLayer.close]
> > > java.io.IOException: Connection reset by peer
> > >   at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> > >   at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> > >   at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> > >   at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> > >   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
> > >   at org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
> > >   at org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:163)
> > >   at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:690)
> > >   at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:47)
> > >   at org.apache.kafka.common.network.Selector.close(Selector.java:471)
> > >   at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:348)
> > >   at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
> > >   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
> > >   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
> > >   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
> > >   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
> > >   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
> > >   at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:183)
> > >   at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:974)
> > >   at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
> > >   ......
> >
> > The client configuration is:
> >
> >    - ssl.truststore.location=<path>
> >    - ssl.truststore.password=<password>
> >    - security.protocol=SASL_SSL
> >    - sasl.mechanism=PLAIN
> >
>
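[Editor's note: for reference, the settings listed above can be assembled in code. A minimal sketch of the client-side properties, assuming the broker address from the logged config below; `<path>`, `<password>`, and the JAAS file location are placeholders. On a 0.10.0.x client, the SASL/PLAIN credentials are not consumer properties but come from a JAAS file referenced by the `java.security.auth.login.config` system property.]

```java
import java.util.Properties;

public class ConsumerConfigSketch {

    // Assembles the SASL_SSL settings listed above; values in angle
    // brackets are placeholders, as in the original message.
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.broker.com:9094");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("ssl.truststore.location", "<path>");
        props.put("ssl.truststore.password", "<password>");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // A 0.10.0.x client also needs the PLAIN username/password in a
        // JAAS file, e.g. -Djava.security.auth.login.config=<jaas-file>.
        // A missing or rejected SASL login is one common reason the broker
        // drops the connection, which the client side can surface as
        // "Connection reset by peer".
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("security.protocol"));
    }
}
```

This only mirrors the logged configuration; it does not diagnose the failure by itself.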
> > When we try the above configuration from the console consumer, it works;
> > if we try it from code, it fails. It used to work with the previous 0.10
> > version without errors.
> >
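[Editor's note: since the console consumer reportedly works with the same settings, one way to narrow the gap is to put exactly those settings in a properties file and compare the working command line against the application code. A sketch with placeholder topic, paths, and credentials; the `--new-consumer` flag and JAAS setup shown apply to 0.10.x tooling.]

```shell
# consumer.properties with the same settings the application uses in code
# (<path> and <password> are placeholders, as in the original message)
cat > /tmp/consumer.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=<path>
ssl.truststore.password=<password>
EOF

# The 0.10.x console consumer also needs the PLAIN credentials from a JAAS
# file, usually exported before running the tool:
#   export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf"
#
# Then (broker address taken from the logged config; not run here):
#   bin/kafka-console-consumer.sh --new-consumer \
#       --bootstrap-server kafka.broker.com:9094 \
#       --topic <topic> --consumer.config /tmp/consumer.properties
```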
> > Below is the full client configuration being passed, taken from the logs:
> >
> > > ConsumerConfig values:
> > >   interceptor.classes = null
> > >   request.timeout.ms = 40000
> > >   check.crcs = true
> > >   ssl.truststore.password = [hidden]
> > >   retry.backoff.ms = 100
> > >   ssl.keymanager.algorithm = SunX509
> > >   receive.buffer.bytes = 65536
> > >   ssl.key.password = null
> > >   ssl.cipher.suites = null
> > >   sasl.kerberos.ticket.renew.jitter = 0.05
> > >   sasl.kerberos.service.name = null
> > >   ssl.provider = null
> > >   session.timeout.ms = 30000
> > >   sasl.kerberos.ticket.renew.window.factor = 0.8
> > >   sasl.mechanism = PLAIN
> > >   max.poll.records = 2147483647
> > >   bootstrap.servers = [kafka.broker.com:9094]
> > >   client.id = test-connection [topic-bulk-uat2]1492116006321
> > >   fetch.max.wait.ms = 500
> > >   fetch.min.bytes = 1
> > >   key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
> > >   auto.offset.reset = earliest
> > >   value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
> > >   sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > >   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > >   max.partition.fetch.bytes = 1048576
> > >   partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
> > >   ssl.endpoint.identification.algorithm = null
> > >   ssl.keystore.location = null
> > >   ssl.truststore.location = <path>
> > >   exclude.internal.topics = true
> > >   ssl.keystore.password = null
> > >   metrics.sample.window.ms = 30000
> > >   security.protocol = SASL_SSL
> > >   metadata.max.age.ms = 300000
> > >   auto.commit.interval.ms = 1000
> > >   ssl.protocol = TLS
> > >   sasl.kerberos.min.time.before.relogin = 60000
> > >   connections.max.idle.ms = 540000
> > >   ssl.trustmanager.algorithm = PKIX
> > >   group.id = Test_Kafka_10
> > >   enable.auto.commit = true
> > >   metric.reporters = []
> > >   ssl.truststore.type = JKS
> > >   send.buffer.bytes = 131072
> > >   reconnect.backoff.ms = 50
> > >   metrics.num.samples = 2
> > >   ssl.keystore.type = JKS
> > >   heartbeat.interval.ms = 3000
> >
> > *Best Regards/أطيب المنى,*
> >
> > *Anas Mosaad*
> > *Incorta Inc.*
> > *+20-100-743-4510*
> >
> --

Best Regards,
Anas Mosaad
+201007434510
