Hi.
On Wed, 8 Jun 2016 23:50:00 +0300, Artem Barger wrote:
> On Wed, Jun 8, 2016 at 12:25 AM, Gilles <gil...@harfang.homelinux.org>
> wrote:
>> According to JIRA, among 180 issues currently targeted for the
>> next major release (v4.0), 139 have been resolved (75 of which
>> were not in v3.6.1).
> Huh, it's above 75% completion :)
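
(Indeed: 139 resolved out of 180 targeted is 139 / 180 ~ 77%, so a bit
more than three quarters.)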
>> Everybody is welcome to review the "open" issues and comment
>> about them.
> I guess someone needs to prioritize them according to their
> importance for the release.
Importance is relative... :-}
IMO, it is important to not release unsupported code.
So the priority would be higher for issues that would be included
in the release of the new Commons components.
Hence the need to figure out what these components will be.
>>>> Of course, anyone who wishes to maintain some of these codes
>>>> (answer user questions, fix bugs, create enhancements, etc.)
>>>> is most welcome to step forward.
>>> I can try to cover some of these and maintain relevant code
>>> parts.
>> Which ones?
> I will look into JIRA and provide the issue numbers, and of course
> I can cover and assist with the ML part, in particular clustering.
Thanks.
>> IMO, a maintainer is someone who is able to respond to user
>> questions and to figure out whether a bug report is valid.
> I've been subscribed to the mailing list for quite a while and
> haven't seen a lot of questions coming from users.
The "user" ML has always been fairly quiet.
Does it mean that the code is really easy to use?
Or feature-complete (I doubt that)?
Or that there are very few users for the most complex features?
The "dev" ML was usually (much) more active.
The point is that when someone asks a question or proposes a
contribution, there must be someone to answer.
> I think that the clustering part could be generalized to an ML
> package as a whole.
Fine I guess, since currently the "neuralnet" sub-package's only
concrete functionality is also a clustering method.
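
For readers not familiar with that part of the library, here is a
minimal sketch of the clustering functionality being discussed. It is
written against the current 3.x API
(org.apache.commons.math3.ml.clustering); the class name and the sample
data are only illustrative, and the 4.0 package layout may of course
end up different:

import java.util.Arrays;
import java.util.List;
import org.apache.commons.math3.ml.clustering.CentroidCluster;
import org.apache.commons.math3.ml.clustering.DoublePoint;
import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;

public class ClusteringSketch {
    public static void main(String[] args) {
        // A few 2-D points forming two obvious groups.
        List<DoublePoint> points = Arrays.asList(
            new DoublePoint(new double[] { 1.0, 1.1 }),
            new DoublePoint(new double[] { 1.2, 0.9 }),
            new DoublePoint(new double[] { 8.0, 8.2 }),
            new DoublePoint(new double[] { 7.9, 8.1 }));

        // k-means++ clustering with k = 2.
        KMeansPlusPlusClusterer<DoublePoint> clusterer =
            new KMeansPlusPlusClusterer<>(2);

        // Each CentroidCluster exposes its center and its points.
        for (CentroidCluster<DoublePoint> cluster : clusterer.cluster(points)) {
            System.out.println(cluster.getCenter()
                               + " <- " + cluster.getPoints());
        }
    }
}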
> I was also wondering whether the ML package is meant to be extended
> in the future
Really there was no plan, or as many plans as there were developers...
Putting all these codes (with different designs, different coding
practices, different intended audiences, different levels of expertise,
etc.) in a single library was not sustainable.
That's why I strongly favour cutting this monolith into pieces
with a limited scope.
> with additional functionality, since I think I can provide my code
> for several classification algorithms.
That sounds nice.
Which algorithms?
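
Just to make the discussion concrete, here is a rough sketch of what a
classification contract could look like if it followed the style of the
existing ml.clustering interfaces. None of these names (Classifier,
LabelledPoint, train, classify) exist in Commons Math today; they are
purely an illustration of the kind of API such contributions could
target:

import java.util.Collection;
import org.apache.commons.math3.ml.clustering.Clusterable;

// Hypothetical contract: train on labelled feature vectors, then
// predict the label of an unseen vector.
public interface Classifier<P extends Clusterable, L> {

    // Fit the model from labelled training samples.
    void train(Collection<LabelledPoint<P, L>> trainingSet);

    // Predict the label of a new point.
    L classify(P point);
}

// Hypothetical holder pairing a feature vector with its label.
final class LabelledPoint<P extends Clusterable, L> {
    private final P point;
    private final L label;

    LabelledPoint(P point, L label) {
        this.point = point;
        this.label = label;
    }

    P getPoint() { return point; }
    L getLabel() { return label; }
}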
Regards,
Gilles
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org