stUpdate)
> .select(to_json(struct("*")) as 'value)
> .writeStream
> .format("kafka")
> .option("kafka.bootstrap.servers", kafkaCluster.kafkaNodesString)
> //original
> .option("topic", Variables.OUTPUT_TOPIC
Hi Peter,
What parameters are you using in your Spark Streaming configuration? How many
executor instances and how many cores per executor are you setting for your
Spark job?
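For reference, those settings usually go either on spark-submit or on the
SparkSession builder; the numbers below are purely illustrative, not a
recommendation for your workload:

    // Illustrative values only (spark-shell style snippet).
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("streaming-sizing-example")
      .config("spark.executor.instances", "4")  // number of executors
      .config("spark.executor.cores", "2")      // cores per executor
      .config("spark.executor.memory", "4g")    // memory per executor
      .getOrCreate()

    // Equivalent spark-submit flags:
    //   spark-submit --num-executors 4 --executor-cores 2 --executor-memory 4g ...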
Best,
Khaled
On Mon, Oct 15, 2018 at 9:18 PM Peter Liu wrote:
> Hi there,
>
> I have a syste
Hi,
I'm a PhD student trying to model the performance (processing delay,
throughput) of Spark Streaming jobs, and I wonder whether there is a way to do
a live migration of a Spark Streaming job from one configuration to another?
(i.e. without having to interrupt the job and then re-submit it under the new
configuration)
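Today I would stop the query and re-submit it with new settings, relying on the
checkpoint so it resumes from the last committed offsets rather than reprocessing
everything; a minimal sketch of that pattern (broker, topic and paths are
illustrative placeholders, and the spark-sql-kafka-0-10 connector is assumed):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("restartable-streaming-job")
      .config("spark.executor.instances", "8")  // value changed between restarts
      .getOrCreate()

    // Because the query has a checkpoint location, stopping it and
    // re-submitting the job with a different configuration resumes from the
    // saved Kafka offsets instead of starting over.
    val query = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "input-topic")
      .load()
      .selectExpr("CAST(value AS STRING) AS value")
      .writeStream
      .format("parquet")
      .option("path", "/tmp/streaming-output")
      .option("checkpointLocation", "/tmp/streaming-checkpoint")
      .start()

    query.awaitTermination()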