Yes, that's a great option when the modeling process itself doesn't
need Spark. You can use any modeling tool you want and get the
parallelism in tuning via hyperopt's Spark integration.
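For the archive, here is a minimal sketch of that pattern: hyperopt's
SparkTrials distributing the hyperparameter search while each trial trains a
plain sklearn model on one executor. The dataset, search space, and
parallelism value are illustrative, not from this thread.

    # Sketch: single-node sklearn training, with hyperopt fanning trials
    # out across a Spark cluster. Values below are illustrative.
    from hyperopt import fmin, tpe, hp, SparkTrials, STATUS_OK
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)  # small, driver-local data

    def objective(params):
        model = RandomForestClassifier(
            n_estimators=int(params["n_estimators"]),
            max_depth=int(params["max_depth"]),
        )
        # Each trial trains entirely on one executor; only the
        # hyperparameter search itself is distributed.
        score = cross_val_score(model, X, y, cv=3).mean()
        return {"loss": -score, "status": STATUS_OK}

    space = {
        "n_estimators": hp.quniform("n_estimators", 50, 300, 25),
        "max_depth": hp.quniform("max_depth", 2, 10, 1),
    }

    # SparkTrials runs up to `parallelism` trials concurrently as Spark tasks.
    best = fmin(fn=objective, space=space, algo=tpe.suggest,
                max_evals=32, trials=SparkTrials(parallelism=4))
    print(best)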
On Thu, Apr 1, 2021 at 10:50 AM Williams, David (Risk Value Stream) wrote:
Many thanks for the info. So you wouldn't use sklearn with Spark for large
datasets, but would use it with smaller datasets, using hyperopt to build
models in parallel for hyperparameter tuning on Spark?
From: Sean Owen
Sent: 26 March 2021 13:53
To: Williams, David (Risk Value Stream)
'start-master.sh': No such file or directory. It looks like the settings I had
made in the container were gone and were not saved when I exited.
Doesn't the container save its settings when the user exits? Thank you.