[ https://issues.apache.org/jira/browse/KAFKA-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064478#comment-15064478 ]
Gwen Shapira commented on KAFKA-3006:
-------------------------------------

I'm not seeing how stream processing frameworks are impacted. Stream processing frameworks ship with the Kafka client as part of their libraries/dependencies, so their users don't interact with Kafka directly (i.e., I depend on Spark libraries, which bring whatever Kafka they need, and I only use Spark APIs, not Kafka ones). We are changing nothing on the broker side, so no matter which API they choose to go with, it will work.

> Make collection default container type for sequences in the consumer API
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-3006
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3006
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients
>    Affects Versions: 0.9.0.0
>            Reporter: Pierre-Yves Ritschard
>              Labels: patch
>
> The KafkaConsumer API has some annoying inconsistencies in its usage of
> collection types. For example, subscribe() takes a list, but subscription()
> returns a set. Similarly for assign() and assignment(). We also have pause(),
> seekToBeginning(), seekToEnd(), and resume(), which annoyingly take a
> variable-argument array, which means you have to copy the result of
> assignment() into an array if you want to pause all assigned partitions. We can
> solve these issues by adding the following variants:
> {code}
> void subscribe(Collection<String> topics);
> void subscribe(Collection<String> topics, ConsumerRebalanceListener listener);
> void assign(Collection<TopicPartition> partitions);
> void pause(Collection<TopicPartition> partitions);
> void resume(Collection<TopicPartition> partitions);
> void seekToBeginning(Collection<TopicPartition> partitions);
> void seekToEnd(Collection<TopicPartition> partitions);
> {code}
> This issue supersedes KAFKA-2991.
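To illustrate the ergonomics the ticket describes, here is a minimal sketch of pausing all assigned partitions with the 0.9.0.0 varargs pause() versus the proposed Collection-based variant. The broker address, group id, and topic name are placeholders, not details from the ticket, and a single poll() is assumed to be enough to receive an assignment.

{code}
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PauseAllExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "example-group");           // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("example-topic")); // placeholder topic
            consumer.poll(0); // join the group; in practice more polls may be needed before assignment

            // 0.9.0.0 API: pause() is varargs, so the Set returned by
            // assignment() must first be copied into an array.
            consumer.pause(consumer.assignment().toArray(new TopicPartition[0]));

            // With the Collection-based variant proposed in this ticket, the
            // result of assignment() could be passed straight through:
            // consumer.pause(consumer.assignment());
        }
    }
}
{code}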