Hi Alexander,

The new Kafka consumer does not use ZooKeeper for group management; group
metadata and committed offsets are stored in Kafka itself, so the
ZooKeeper-based tooling cannot see your group. Can you add the
--new-consumer flag to the command-line arguments of
kafka-consumer-groups.sh, and point the tool at the brokers with
--bootstrap-server instead of --zookeeper?
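
For example, with one of the brokers from your bootstrap.servers list
(adjust host and port to your environment):

$ ./kafka-consumer-groups.sh --new-consumer --bootstrap-server g0601b02.some.host.com:9092 --describe --group testGroup

The same flags work with --list, which should show every group whose
metadata lives in Kafka rather than ZooKeeper:

$ ./kafka-consumer-groups.sh --new-consumer --bootstrap-server g0601b02.some.host.com:9092 --list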

Thanks,
Liquan

On Thu, Apr 14, 2016 at 2:01 PM, Alexander Cook <ac...@umn.edu> wrote:

> Hi all,
>
> I am having trouble getting details on a consumer group I am using. I would
> appreciate the help! Here is what I'm doing.
>
> *1. Launch a Kafka 0.9.0.1 broker and a consumer with group.id=testGroup
> (rough sketch of the consumer setup below).*
> *2. I start receiving messages just fine.*
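>
> For reference, the consumer is set up roughly like this (a simplified
> sketch matching the configs at the end of this mail; the topic name
> "someTopic" and the surrounding boilerplate are illustrative, not my
> exact application code):
>
> import java.util.Arrays;
> import java.util.Properties;
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
>
> Properties props = new Properties();
> props.put("bootstrap.servers", "g0601b02.some.host.com:9092");
> props.put("group.id", "testGroup");  // the group I am trying to describe
> props.put("key.deserializer",
>     "org.apache.kafka.common.serialization.StringDeserializer");
> props.put("value.deserializer",
>     "org.apache.kafka.common.serialization.StringDeserializer");
>
> KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
> consumer.subscribe(Arrays.asList("someTopic"));  // illustrative topic name
> while (true) {
>     // poll for new records and print their values
>     ConsumerRecords<String, String> records = consumer.poll(1000);
>     for (ConsumerRecord<String, String> record : records)
>         System.out.println(record.value());
> }
>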
> *3. I can see my group in the 0.9.0.1 Broker logs: *
>
> [2016-04-14 16:26:28,757] INFO [GroupCoordinator 2]: Assignment received
> from leader for group testGroup for generation 1
> (kafka.coordinator.GroupCoordinator)
> [2016-04-14 16:27:15,362] INFO [Group Metadata Manager on Broker 2]:
> Removed 0 expired offsets in 0 milliseconds.
> (kafka.coordinator.GroupMetadataManager)
> [2016-04-14 16:32:59,467] INFO [GroupCoordinator 2]: Preparing to
> restabilize group testGroup with old generation 1
> (kafka.coordinator.GroupCoordinator)
> [2016-04-14 16:33:01,783] INFO [GroupCoordinator 2]: Stabilized group
> testGroup generation 2 (kafka.coordinator.GroupCoordinator)
>
> *4. I can see my group in the Consumer logs: *
> 14 Apr 2016 16:26:28.760 [32345] DEBUG
> #splapptrc,J[14],P[119],KafkaStreamTest
>
> M[AbstractCoordinator.java:org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle:423]
>  - Received successful sync group response for group testGroup:
> {error_code=0,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=50
> cap=50]}
> 14 Apr 2016 16:26:28.762 [32345] DEBUG
> #splapptrc,J[14],P[119],KafkaStreamTest
>
> *5. BUT... when I try to examine the group, I do:*
> $ ./kafka-consumer-groups.sh --zookeeper myZkHost:2181 --describe --group
> testGroup
> No topic available for consumer group provided
> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
>
> $  ./kafka-consumer-groups.sh --zookeeper myZkHost:2181 --list
> 1451409432.77249
> mygroup
> 1451409552.27855
>
> My group is nowhere to be found! Thanks for any help or ideas!
>
> Alex
>
>
> Here are my Consumer configs:
>
> 14 Apr 2016 16:26:28.360 [32152] INFO
> #splapptrc,J[14],P[119],KafkaStreamTest
>
> M[AbstractConfig.java:org.apache.kafka.common.config.AbstractConfig.logAll:165]
>  - ConsumerConfig values:
> metric.reporters = []
> metadata.max.age.ms = 300000
> value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
> group.id = testGroup
> partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
> reconnect.backoff.ms = 50
> sasl.kerberos.ticket.renew.window.factor = 0.8
> max.partition.fetch.bytes = 1048576
> bootstrap.servers = [g0601b02.some.host.com:9092, g0601b03.some.host.com:9092, g0601b04.some.host.com:9092]
> retry.backoff.ms = 100
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> ssl.keystore.type = JKS
> ssl.trustmanager.algorithm = PKIX
> enable.auto.commit = true
> ssl.key.password = null
> fetch.max.wait.ms = 500
> sasl.kerberos.min.time.before.relogin = 60000
> connections.max.idle.ms = 540000
> ssl.truststore.password = null
> session.timeout.ms = 30000
> metrics.num.samples = 2
> client.id =
> ssl.endpoint.identification.algorithm = null
> key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
> ssl.protocol = TLS
> check.crcs = true
> request.timeout.ms = 40000
> ssl.provider = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> ssl.keystore.location = null
> heartbeat.interval.ms = 3000
> auto.commit.interval.ms = 5000
> receive.buffer.bytes = 32768
> ssl.cipher.suites = null
> ssl.truststore.type = JKS
> security.protocol = PLAINTEXT
> ssl.truststore.location = null
> ssl.keystore.password = null
> ssl.keymanager.algorithm = IbmX509
> metrics.sample.window.ms = 30000
> fetch.min.bytes = 1
> send.buffer.bytes = 131072
> auto.offset.reset = latest
>



-- 
Liquan Pei
Software Engineer, Confluent Inc
