Hey Bonino,
Sounds great. Since we have not set up the website for Flink ML yet, how
about we create PRs for https://github.com/apache/flink-ml and put those
Markdown files under flink-ml/docs?
Best Regards,
Dong
On Sat, Jan 22, 2022 at 12:25 AM Bonino Dario wrote:
Hi Dong,
We assembled a first, very small Markdown document providing a
jump-start description using a kMeans example. I could already share it
with you to check whether we are pointing in the right direction. I had a
look at the Flink contribution guidelines; however, the flink-ml project
is som
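(For reference, a minimal sketch of what such a kMeans quickstart could
look like with the Flink ML 2.0 Java API. The KMeans/KMeansModel class
names, the setK/setSeed parameters, and the fit/transform flow reflect my
reading of the 2.0 API and should be double-checked against the actual
release.)

    import org.apache.flink.ml.clustering.kmeans.KMeans;
    import org.apache.flink.ml.clustering.kmeans.KMeansModel;
    import org.apache.flink.ml.linalg.DenseVector;
    import org.apache.flink.ml.linalg.Vectors;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;
    import org.apache.flink.util.CloseableIterator;

    public class KMeansQuickstart {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

            // A tiny in-memory dataset: two well-separated groups of 2-d points.
            DataStream<DenseVector> points = env.fromElements(
                    Vectors.dense(0.0, 0.0),
                    Vectors.dense(0.0, 0.3),
                    Vectors.dense(0.3, 0.0),
                    Vectors.dense(9.0, 0.0),
                    Vectors.dense(9.0, 0.6),
                    Vectors.dense(9.6, 0.0));
            Table input = tEnv.fromDataStream(points).as("features");

            // Train a k-means model with k = 2 and apply it back to the same data.
            KMeans kmeans = new KMeans().setK(2).setSeed(1L);
            KMeansModel model = kmeans.fit(input);
            Table output = model.transform(input)[0];

            // Print each point together with the cluster id it was assigned to.
            for (CloseableIterator<Row> it = output.execute().collect(); it.hasNext(); ) {
                Row row = it.next();
                DenseVector features = (DenseVector) row.getField(kmeans.getFeaturesCol());
                int clusterId = (Integer) row.getField(kmeans.getPredictionCol());
                System.out.println("features=" + features + " cluster=" + clusterId);
            }
        }
    }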
Hi Bonino,
Definitely, it will be great to build up the Flink ML docs together based
on your experience.
Thanks!
Dong
On Wed, Jan 19, 2022 at 4:32 PM Bonino Dario wrote:
Hi Dong,
Thank you for the reply. Since we are actually experimenting with the
Flink ML libraries, if you think it is worthwhile, we may contribute some
documentation, e.g., a tutorial based on what we learn while setting up
our test project with Flink ML. Is it something that might be of
interest fo
Hi Bonino,
Thanks for your interest!
Flink ML is currently ready for experienced algorithm developers to try it
out because we have set up the basic APIs and infrastructure to develop
algorithms. Five algorithms (i.e., k-means, naive Bayes, kNN, logistic
regression, and one-hot encoder) have been implemented
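(These algorithms all share the same Estimator/Model pattern as the
kMeans example above. Below is a rough, untested sketch using logistic
regression; the LogisticRegression/LogisticRegressionModel class names
and the setFeaturesCol/setLabelCol parameters are my assumptions about
the 2.0 API and may need adjusting.)

    import org.apache.flink.ml.classification.logisticregression.LogisticRegression;
    import org.apache.flink.ml.classification.logisticregression.LogisticRegressionModel;
    import org.apache.flink.ml.linalg.Vectors;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class LogisticRegressionSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

            // A toy labeled dataset: a feature vector plus a 0/1 label column.
            DataStream<Row> train = env.fromElements(
                    Row.of(Vectors.dense(1.0, 2.0), 0.0),
                    Row.of(Vectors.dense(2.0, 1.0), 0.0),
                    Row.of(Vectors.dense(10.0, 11.0), 1.0),
                    Row.of(Vectors.dense(11.0, 10.0), 1.0));
            Table trainTable = tEnv.fromDataStream(train).as("features", "label");

            // Estimator#fit produces a Model; Model#transform appends a prediction column.
            LogisticRegression lr = new LogisticRegression()
                    .setFeaturesCol("features")
                    .setLabelCol("label");
            LogisticRegressionModel model = lr.fit(trainTable);
            Table prediction = model.transform(trainTable)[0];

            prediction.execute().print();
        }
    }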
I am adding a couple of people who worked on it. Hopefully, they will be
able to answer you.
On 17/01/2022 13:39, Bonino Dario wrote:
Dear List,
We are in the process of evaluating Flink ML version 2.0 in the context
of some ML tasks mainly concerned with classification and clustering.
While algorithms for these two domains are already present in the latest
release of Flink ML (perhaps in a limited form), we did not