--
“Overfitting” is not about an excessive amount of physical exercise...
Hi,
I want to connect a local Jupyter Notebook to a remote Spark cluster.
The cluster is running Spark 2.0.1, while the Jupyter notebook is based on
Spark 1.6 and runs in a Docker image (Link). I try to initialize the
SparkContext like this:
import pyspark
sc = pyspark.SparkContext('spark://:7077')
Hello,
is there a way to also get the class probabilities during the predict()
phase, like I would get in sklearn?
Cheers,
Klaus
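For reference, a minimal sketch (assuming scikit-learn is available) of the sklearn behavior the question refers to, i.e. predict_proba() returning per-class probabilities. In Spark ML's DataFrame-based API, classification models analogously add a "probability" column when you call model.transform(); the data and model below are purely illustrative.

```python
# Illustrative sketch of sklearn's predict_proba(), the behavior asked
# about above; the toy data here is an assumption for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)

# One row per sample, one column per class; each row sums to 1.
proba = clf.predict_proba(X)
print(proba.shape)  # (4, 2)
```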
--
Klaus Schaefers
Senior Optimization Manager
Ligatus GmbH
Hohenstaufenring 30-32
D-50674 Köln
Tel.: +49 (0) 221 / 56 939 - 784
Fax: +49 (0) 221 / 56 939 - 599
E-Mail: klaus.schaef...@ligatus.com
Hi,
is there a kind of adapter to use Google Cloud Storage with Spark?
Cheers,
Klaus