Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3099#issuecomment-61775481
@srowen Glad to hear that you like it :) Your feedback will be greatly
appreciated, but I don't want to waste your time on minor details. Let's keep
the discussion in the main thread, so it stays visible when I update the
code. One thing I have been struggling with is how we handle parameters
here. Please check the last section of the PR description. Basically, I think
~~~
val lr = new LogisticRegression()
  .setMaxIter(50)
  .setRegParam(0.1)
~~~
looks better than
~~~
val lr = new LogisticRegression()
lr.set(lr.maxIter, 50)
  .set(lr.regParam, 0.1)
~~~
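To make the trade-off concrete, here is a minimal sketch of the param-key approach in the second snippet. All names (`Param`, the internal map, `get`) are hypothetical and do not necessarily match the PR code: each parameter is exposed as a key object owned by the estimator, and `set` records the value in an internal map.

```scala
// Hypothetical sketch of the param-key design; not the PR's actual API.
class Param[T](val parent: AnyRef, val name: String)

class LogisticRegression {
  private val paramMap = scala.collection.mutable.Map.empty[Param[_], Any]

  // Each parameter is a key, not a value.
  val maxIter = new Param[Int](this, "maxIter")
  val regParam = new Param[Double](this, "regParam")

  // Returning this.type allows chained calls.
  def set[T](param: Param[T], value: T): this.type = {
    paramMap(param) = value
    this
  }

  def get[T](param: Param[T]): T = paramMap(param).asInstanceOf[T]
}

val lr = new LogisticRegression()
lr.set(lr.maxIter, 50)
  .set(lr.regParam, 0.1)
```

The upside of keys is that the same `Param` object can also identify a parameter outside the estimator, e.g. in a parameter map for grid search.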
But if we use setters/getters, it is a little weird that `lr.maxIter` is a
parameter key instead of its value. Alternatively, we could let `lr.maxIter`
store the value (so the setters/getters can be generated with `@BeanProperty`),
and then use something like `lr.maxIter_` (with a trailing underscore) as the
parameter key. The code would become
~~~
val lr = new LogisticRegression()
  .setRegParam(0.1) // attach regParam = 0.1 to the object "lr"
val lrParamMaps = new ParamMapBuilder()
  .addMulti(lr.maxIter_, Array(10, 20)) // specify parameters (with underscore) outside the object "lr"
  .build()
// multi-model training
val models = lr.fit(dataset, lrParamMaps)
~~~
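A sketch of what the underscore variant could look like, with all names hypothetical: `maxIter` stores the value directly, `maxIter_` is the key, and a `ParamMapBuilder` collects (key, values) pairs for multi-model training. Chainable setters are written by hand here because `@BeanProperty` would generate `Unit`-returning setters, which break the fluent style.

```scala
// Hypothetical sketch of the underscore variant; not the PR's actual API.
class Param[T](val parent: AnyRef, val name: String)

class LogisticRegression {
  // The fields store the values directly.
  var maxIter: Int = 100
  var regParam: Double = 0.0

  // The underscore-suffixed members are the parameter *keys*.
  val maxIter_ = new Param[Int](this, "maxIter")
  val regParam_ = new Param[Double](this, "regParam")

  // Hand-written setters return this.type so calls can be chained.
  def setMaxIter(value: Int): this.type = { maxIter = value; this }
  def setRegParam(value: Double): this.type = { regParam = value; this }
}

// Collects (key, candidate values) pairs, e.g. for grid search.
class ParamMapBuilder {
  private val grid =
    scala.collection.mutable.ListBuffer.empty[(Param[_], Array[_])]
  def addMulti[T](param: Param[T], values: Array[T]): this.type = {
    grid += ((param, values)); this
  }
  def build(): Seq[(Param[_], Array[_])] = grid.toList
}

val lr = new LogisticRegression().setRegParam(0.1)
val lrParamMaps = new ParamMapBuilder()
  .addMulti(lr.maxIter_, Array(10, 20))
  .build()
```

The cost of this variant is that every parameter needs two members (`maxIter` and `maxIter_`), which is more boilerplate per parameter than a single key.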
Does it look better than the current version?