I have data in a Kafka topic-partition and I am reading it from Spark with the direct stream API:

    JavaPairInputDStream<String, String> directKafkaStream =
        KafkaUtils.createDirectStream(streamingContext,
            String.class, String.class,                // key class, value class
            StringDecoder.class, StringDecoder.class,  // key decoder, value decoder
            kafkaParams, topics);                      // map of Kafka parameters, set of topics to consume

I want messages from a given Kafka partition to always land on the same machine in the Spark RDD, so I can cache some decoration data locally and later reuse it with other messages that belong to the same key. Can anyone tell me how to achieve this?

Thanks
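For reference, one way this is often approached: the direct stream creates one Spark partition per Kafka partition, but Spark does not guarantee which executor a given partition is scheduled on. If the real goal is grouping by key, an option is to re-partition each batch with the same HashPartitioner, so records sharing a key always map to the same Spark partition index; with long-lived executors and default locality settings that partition tends to run on the same machine, though this is not a hard guarantee. Below is a minimal sketch of the idea, assuming Spark 1.x with the Kafka 0.8 connector; the broker address, topic name, partition count, and class name are placeholders, not from the original post.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;

    import org.apache.spark.HashPartitioner;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairDStream;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public class KeyAffinitySketch {
      public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("KeyAffinitySketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092"); // placeholder broker
        Set<String> topics = new HashSet<>();
        topics.add("my-topic");                                  // placeholder topic

        JavaPairInputDStream<String, String> directKafkaStream =
            KafkaUtils.createDirectStream(jssc,
                String.class, String.class,
                StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // Re-shuffle every batch with the same HashPartitioner: records that
        // share a key are always routed to the same partition index, so a
        // per-executor cache (e.g. a static map keyed by message key) can be
        // reused across batches as long as the executors stay alive.
        int numPartitions = 8; // placeholder; tune to the cluster
        JavaPairDStream<String, String> byKey = directKafkaStream.transformToPair(
            rdd -> rdd.partitionBy(new HashPartitioner(numPartitions)));

        byKey.print();

        jssc.start();
        jssc.awaitTermination();
      }
    }

If the decoration data can be modeled as per-key state, updateStateByKey builds on this same key-based partitioning and keeps the state colocated with its keys for you, without relying on executor placement at all.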