Any pointers on how I can run this with better performance would be of great help.

The streaming job fails with the following assertion error:

java.lang.AssertionError: assertion failed: Failed to get records for test

The Kafka consumer is configured as follows:
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> kafkaBrokers,
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "auto.offset.reset" -> "latest",
  "heartbeat.interval.ms" -> Integer.valueOf(2),
  "session.timeout.ms" -> Integer.valueOf(6),
  "request.timeout.ms" -> Integer.valueOf(9),
  "enable.auto.commit" -> (false: java.lang.Boolean),
  "spark.streaming.kafka.consumer.cache.enabled" -> "false",
  "group.id" -> "test1"
)
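One thing worth double-checking in the map above: these timeout keys are in milliseconds, so values of 2, 6, and 9 are several orders of magnitude below Kafka's usual defaults (seconds to tens of seconds), and brokers reject a session.timeout.ms that falls outside the group.min/max.session.timeout.ms range. Purely as a hedged point of comparison, not a recommendation, the same three keys at default-like magnitudes might look like:

// Illustrative magnitudes only; tune for your cluster.
val timeoutParams = Map[String, Object](
  "heartbeat.interval.ms" -> Integer.valueOf(3000),  // should stay well below session.timeout.ms
  "session.timeout.ms" -> Integer.valueOf(10000),    // must be within the broker's allowed range
  "request.timeout.ms" -> Integer.valueOf(30000)     // should exceed session.timeout.ms
)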
val hubbleStream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](topicsSet, kafkaParams)
)
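Since enable.auto.commit is false above, offsets have to be committed explicitly after each batch is processed, or the consumer group will never advance. A minimal sketch of the usual pattern with the spark-streaming-kafka-0-10 API (the per-record println is just a placeholder for the real processing):

import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

hubbleStream.foreachRDD { rdd =>
  // Capture the offset ranges first, while the RDD still has its
  // one-to-one mapping to Kafka partitions.
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // Placeholder for the actual per-batch work.
  rdd.foreach(record => println(record.value))

  // Commit asynchronously back to Kafka once processing succeeds.
  hubbleStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}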
Questions:

Q1 - Is my Kafka configuration correct, or should it be changed?

Q2 - I also looked into checkpointing, but in my use case data checkpointing is not required while metadata checkpointing is. Can I achieve this, i.e. enable metadata checkpointing but not data checkpointing?
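For what it's worth, my current understanding from the Spark Streaming docs is that metadata checkpointing is enabled simply by setting a checkpoint directory on the StreamingContext, while data (RDD) checkpointing only kicks in for stateful transformations such as updateStateByKey or mapWithState, so a stateless job like the one above would effectively checkpoint metadata only. A sketch, with a made-up application name and checkpoint path:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("hubble-streaming") // hypothetical app name
  val ssc = new StreamingContext(conf, Seconds(10))
  // Setting a checkpoint directory enables metadata checkpointing
  // (configuration, DStream graph, incomplete batches) for driver
  // recovery; with no stateful transformations, no RDD data is
  // checkpointed.
  ssc.checkpoint("hdfs:///tmp/hubble-checkpoint") // hypothetical path
  ssc
}

// Rebuild from the checkpoint after a driver restart, or create fresh.
val ssc = StreamingContext.getOrCreate("hdfs:///tmp/hubble-checkpoint", createContext _)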
Thanks
Abhishek Patel
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Kafka-Direct-Streaming-tp23685.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.