Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/1420#discussion_r20131759
--- Diff: external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala ---
@@ -151,4 +153,23 @@ object KafkaUtils {
createStream[K, V, U, T](
jssc.ssc, kafkaParams.toMap,
Map(topics.mapValues(_.intValue()).toSeq: _*), storageLevel)
}
+
+ /**
+ * Delete the consumer group's ZooKeeper metadata immediately, forcing the
+ * consumer to ignore any previously committed offset and read directly from
+ * the beginning or end of the partition. Which of the two happens depends on
+ * the Kafka parameter 'auto.offset.reset':
+ * When 'auto.offset.reset' = 'smallest', read from the beginning of the
+ * partition, re-reading all of its data.
+ * When 'auto.offset.reset' = 'largest' (the default in Kafka 0.8), read
+ * from the end, ignoring old, unwanted data.
+ *
+ * To avoid deleting existing ZooKeeper metadata in each Receiver when
+ * multiple consumers are launched, this should be called before createStream().
+ * @param zkQuorum Zookeeper quorum (hostname:port,hostname:port,..).
+ * @param groupId The group id for this consumer.
+ */
+ def resetOffset(zkQuorum: String, groupId: String) {
--- End diff --
So maybe we can just remove this function and let users delete the ZK
metadata themselves if they know what they are doing.
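
If the function is removed, a user who really wants this reset can do it with a few lines against ZooKeeper directly. A minimal sketch, assuming the org.I0Itec ZkClient library that Kafka 0.8 itself depends on is on the classpath; `deleteConsumerGroup` is a hypothetical helper name, not part of KafkaUtils:

```scala
import org.I0Itec.zkclient.ZkClient

// Sketch only: requires a running ZooKeeper ensemble reachable at zkQuorum.
def deleteConsumerGroup(zkQuorum: String, groupId: String): Unit = {
  // sessionTimeout and connectionTimeout in milliseconds
  val zkClient = new ZkClient(zkQuorum, 10000, 10000)
  try {
    // Kafka 0.8 high-level consumers keep offsets and ownership info
    // under /consumers/<groupId>; deleting the subtree resets the group.
    zkClient.deleteRecursive("/consumers/" + groupId)
  } finally {
    zkClient.close()
  }
}
```

Called once before createStream(), this would have the same effect as the proposed resetOffset(), without Spark having to own the behavior.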