Hello everyone:
I have been facing a problem with Spark Streaming memory.
I have been running two Spark Streaming jobs concurrently. The jobs read
data from Kafka with a batch interval of 1 minute, perform aggregation, and
sink the computed data to MongoDB using the stratio-mongodb connector.
Hi,
I am running Spark v1.6.1 on a single machine in standalone mode, with
64GB RAM and 16 cores.
I have created five worker instances to get five executors, since in
standalone mode there cannot be more than one executor per worker node.
*Configuration*:
SPARK_WORKER_INSTANCES 5
SPARK_WORK
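For context, in standalone mode these settings normally live in conf/spark-env.sh. The values below are an illustrative sketch based on the machine described above (five workers on a 64GB/16-core box), not the poster's actual configuration:

```shell
# conf/spark-env.sh -- illustrative values only, not the poster's settings
# Run five worker JVMs on the single machine, one executor each.
SPARK_WORKER_INSTANCES=5
# Assumed split: ~3 cores and ~12GB per worker, leaving headroom
# for the driver, the OS, and MongoDB on the same host.
SPARK_WORKER_CORES=3
SPARK_WORKER_MEMORY=12g
```

With multiple workers on one host, the per-worker memory and core limits matter, since the five executor JVMs otherwise compete for the full 64GB.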