Hey Spark devs

I noticed that we now have a large number of examples for ML and MLlib in the
examples project: 57 for ML and 67 for MLlib, to be precise. This is bound
to get larger as we add features (though I know there are some PRs to clean
up duplicated examples).

What do you think about organizing them into packages that match the use
cases and the structure of the code base? For example:

org.apache.spark.examples.ml.recommendation

org.apache.spark.examples.ml.feature

and so on...

Is it worth doing? The doc pages that use include_example would need
updating, and the class names passed to the run_example script would change
slightly to include the new package. Have I missed any potential issues?

N
