You would probably need dynamic allocation, which is only available on YARN
and Mesos, or wait for the ongoing Spark-on-Kubernetes integration.
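For reference, dynamic allocation is enabled through Spark configuration properties; a minimal sketch of a spark-submit invocation on YARN might look like the following (the executor counts and jar name are illustrative assumptions, not from this thread):

```shell
# Sketch: enabling dynamic executor allocation on YARN.
# Requires the external shuffle service to be running on each node.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  your-streaming-app.jar   # hypothetical application jar
```

Spark then grows and shrinks the executor pool between the configured min and max based on pending task backlog.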
On March 15, 2017 at 1:54 AM, "Pranav Shukla" wrote:
How can we scale, or possibly auto-scale, a Spark Streaming application
consuming from Kafka using Kafka direct streams? We are using Spark 1.6.3
and cannot move to 2.x unless there is a strong reason.
Scenario:
Kafka topic with 10 partitions
Standalone cluster running on Kubernetes with 1 master and 2