I am using Spark (Java) Structured Streaming in continuous trigger mode,
connecting to a Kafka broker. The use case is simple: apply some custom
filter/transformation logic via a plain Java method and ingest the data into
an external system. The Kafka topic has 6 partitions, so the application runs
6 executors. I have a requirement to change the behavior of the
filter/transformation logic each executor is running, based on an external
event (for example, a property change in an S3 file). This is toward the goal
of building a highly resilient architecture where the Spark application runs
in two cloud regions and reacts to an external event.
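For reference, a minimal sketch of the kind of pipeline I am describing
(broker address, topic names, checkpoint path, and the MyFilters.accept
helper are placeholders, not my actual code):

    import org.apache.spark.api.java.function.FilterFunction;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;
    import org.apache.spark.sql.streaming.Trigger;

    public class ContinuousKafkaPipeline {
      public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("continuous-kafka-filter")
            .getOrCreate();

        // Read from a 6-partition Kafka topic; in continuous mode Spark keeps
        // one long-running task per partition, hence the 6 executors.
        Dataset<Row> source = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")  // placeholder
            .option("subscribe", "events")                      // placeholder topic
            .load();

        // Custom filter/transform implemented as a plain Java method.
        // MyFilters.accept is a placeholder for that method.
        Dataset<Row> filtered = source
            .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
            .filter((FilterFunction<Row>) row -> MyFilters.accept(row.getString(1)));

        // Kafka sink shown here for simplicity; the real job writes to the
        // external system.
        StreamingQuery query = filtered.writeStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("topic", "filtered-events")
            .option("checkpointLocation", "s3a://bucket/checkpoints/")  // placeholder
            .trigger(Trigger.Continuous("1 second"))
            .start();

        query.awaitTermination();
      }
    }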

What is the best way to send a signal to the running Spark application and
propagate it to each executor?

The approach I have in mind is to create a new component that periodically
re-reads the S3 file to look for the trigger event. This component would be
integrated with the logic running on each Spark executor JVM; a rough sketch
follows.
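A minimal sketch of what I mean, assuming the flag lives in a small
properties file on S3 and using the AWS SDK v1 client (the bucket, key,
property name, and 30-second refresh interval are all placeholders):

    import java.io.StringReader;
    import java.util.Properties;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    /**
     * One instance per executor JVM. A daemon thread periodically re-reads a
     * small properties file from S3, and the filter/transform logic consults
     * the latest snapshot on every record.
     */
    public final class FilterConfigWatcher {
      private static volatile FilterConfigWatcher instance;

      private final AtomicReference<Properties> current = new AtomicReference<>(new Properties());
      private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
      private final String bucket;
      private final String key;

      private FilterConfigWatcher(String bucket, String key, long refreshSeconds) {
        this.bucket = bucket;
        this.key = key;
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
              Thread t = new Thread(r, "filter-config-refresh");
              t.setDaemon(true);  // do not keep the executor JVM alive
              return t;
            });
        scheduler.scheduleAtFixedRate(this::refresh, 0, refreshSeconds, TimeUnit.SECONDS);
      }

      /** Lazily create the per-JVM singleton from inside the filter method. */
      public static FilterConfigWatcher get(String bucket, String key) {
        if (instance == null) {
          synchronized (FilterConfigWatcher.class) {
            if (instance == null) {
              instance = new FilterConfigWatcher(bucket, key, 30);
            }
          }
        }
        return instance;
      }

      private void refresh() {
        try {
          String text = s3.getObjectAsString(bucket, key);
          Properties p = new Properties();
          p.load(new StringReader(text));
          current.set(p);
        } catch (Exception e) {
          // Keep the previous snapshot on failure; log in a real implementation.
        }
      }

      public String mode() {
        return current.get().getProperty("filter.mode", "default");
      }
    }

The filter method running on each executor would then call something like
FilterConfigWatcher.get("my-config-bucket", "trigger/flags.properties").mode()
per record, so a change to the S3 file is picked up within one refresh
interval, on every executor, without restarting the query. The singleton is
created lazily inside the executor-side code rather than serialized from the
driver.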

Please advise.

Thanks and Regards
 


