You can use .option("auto.offset.reset", "earliest") while reading from
Kafka.
With this, the new stream will read from the first offset available for the topic.
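For reference, a minimal sketch of such a read in PySpark (broker address and topic name are placeholders). One caveat: the Structured Streaming Kafka source rejects the raw consumer setting `kafka.auto.offset.reset` and instead exposes the equivalent `startingOffsets` option, which is what is shown here:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-earliest").getOrCreate()

# Read from the beginning of the topic. "kafka:9092" and "events" are
# placeholder names for the broker and topic. In Structured Streaming,
# startingOffsets replaces the consumer's auto.offset.reset setting.
df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka:9092")
      .option("subscribe", "events")
      .option("startingOffsets", "earliest")
      .load())
```

Note that `startingOffsets` only applies when a query starts fresh; once a checkpoint exists, the stream resumes from the checkpointed offsets.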
On Wed, May 23, 2018 at 11:32 AM, karthikjay wrote:
> Chris,
>
> Thank you for responding. I get it.
>
> But, if I am using a console sink w
The purpose of a broadcast variable is different.
@Malveeka, could you please explain your use case and issue?
If the fat/uber JAR does not include the required dependent JARs, the Spark
job will fail at startup.
What is the scenario in which you want to add new jars?
Also, what do you mean by
Hi
With spark-submit we can start a new Spark job, but it will not add new
JAR files to an already running job.
~Sushil
On Wed, May 23, 2018, 17:28 kedarsdixit wrote:
> Hi,
>
> You can add dependencies in spark-submit as below:
>
> ./bin/spark-submit \
> --class <main-class> \
> --master <master-url> \
> --deploy
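The quoted command above is cut off in the archive. For reference, a complete `spark-submit` invocation that ships extra dependencies via `--jars` might look like the following sketch; the class name, master URL, and all paths are placeholders:

```shell
# Submit an application together with extra dependency jars.
# --jars takes a comma-separated list; these jars are distributed to the
# driver and executors when the job is submitted (not to a running job).
./bin/spark-submit \
  --class com.example.Main \
  --master spark://master:7077 \
  --deploy-mode cluster \
  --jars /path/to/dep1.jar,/path/to/dep2.jar \
  /path/to/app.jar
```

This is consistent with the point above: dependencies are fixed at submit time, so jars cannot be injected into a job that is already running.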