Hi,

I am following the code in examples/src/main/scala/org/apache/spark/examples/mllib/BinaryClassification.scala. For setting the parameters and parsing the command-line options, I am just reusing that code. Params is defined as follows:
  case class Params(
      input: String = null,
      numIterations: Int = 100,
      stepSize: Double = 1.0,
      algorithm: Algorithm = LR,
      regType: RegType = L2,
      regParam: Double = 0.1)

I use the command-line option --regType to choose L1 or L2, and --regParam to set the regularization parameter (for example, to 0.0). The option-parser code in the example above parses the options and creates the LogisticRegression object: it calls setRegParam(regParam) to set the regularization parameter and picks the updater that corresponds to the chosen regType (see the first sketch at the end of this message).

To run LR, I am again using the code from the example:

  algorithm.run(training).clearThreshold()

The code in the example computes AUC. To compute the accuracy of the test-data classification, I map the prediction to class 0 if it is < 0.5 and to class 1 otherwise. Then I compare the predictions with the corresponding labels; the number of matches is correctCount, and

  val accuracy = correctCount.toDouble / predictionAndLabel.count

(see the second sketch below).

thanks
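For reference, here is roughly how the example wires regType and regParam into the optimizer. This is a sketch from memory rather than the exact source; `params`, `training`, and the L1/L2 values of RegType all come from the example.

  import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
  import org.apache.spark.mllib.optimization.{L1Updater, SquaredL2Updater}

  // Pick the updater that corresponds to --regType (L1 or L2).
  val updater = params.regType match {
    case L1 => new L1Updater()
    case L2 => new SquaredL2Updater()
  }

  // Configure the optimizer, including the regularization parameter from --regParam.
  val algorithm = new LogisticRegressionWithSGD()
  algorithm.optimizer
    .setNumIterations(params.numIterations)
    .setStepSize(params.stepSize)
    .setUpdater(updater)
    .setRegParam(params.regParam)

  // Train on the training split and clear the threshold so that
  // predict() returns raw scores instead of 0/1 labels.
  val model = algorithm.run(training).clearThreshold()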
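And this is roughly my accuracy computation (again a sketch; `test` is the held-out split from the example, and predictionAndLabel pairs the raw score returned by predict() with the true label):

  // Raw scores on the test set, paired with the true labels.
  val predictionAndLabel = test.map { point =>
    (model.predict(point.features), point.label)
  }

  // Threshold the raw score at 0.5 to get a hard 0/1 prediction,
  // then count how many predictions match the label.
  val correctCount = predictionAndLabel
    .map { case (score, label) => (if (score < 0.5) 0.0 else 1.0, label) }
    .filter { case (predicted, label) => predicted == label }
    .count()

  val accuracy = correctCount.toDouble / predictionAndLabel.count()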