Hello all Apache Ignite ML developers:
I understand that Ignite currently can't save a model after training in a
way that lets the model be re-imported by another Ignite cluster. Correct me
if saving and reloading a model is possible, but I don't think it is.
Anyway, I'd like to know if you have recommendations.
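One possible workaround, not an official Ignite feature: a trained model is ultimately a Java object, so if the concrete model class happens to implement java.io.Serializable, standard Java serialization can round-trip it through a file or byte stream. The `SimpleModel` class below is a hypothetical stand-in for a trained model, just to show the mechanics:

```java
import java.io.*;

public class ModelRoundTrip {
    // Hypothetical stand-in for a trained model; a real Ignite ML model
    // would only round-trip this way if it implements Serializable.
    static class SimpleModel implements Serializable {
        private final double weight;
        SimpleModel(double weight) { this.weight = weight; }
        double predict(double x) { return weight * x; }
    }

    public static void main(String[] args) throws Exception {
        SimpleModel mdl = new SimpleModel(2.5);

        // Serialize the model to a byte array (writing to a file is the same).
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(mdl);
        }

        // Deserialize it back, as a second cluster/process would.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            SimpleModel restored = (SimpleModel) ois.readObject();
            System.out.println(restored.predict(4.0)); // 10.0
        }
    }
}
```

Whether this works across clusters depends on the model class being Serializable and on both sides having compatible Ignite ML versions on the classpath.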
Have any of you integrated H2O with Ignite to import an extracted feature
data set directly as input to the Ignite training engine?
--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
I have attempted to add this call to a RandomForestModel in order to obtain
accuracy:

double accuracy = Evaluator.evaluate(
    dataCache,
    randomForestMdl,
    vectorizer,
    new Accuracy<>()
);
Andrei,
I am also working with Apache Ignite ML and am interested in providing
wrappers for the Ignite ML API. But instead of simply recreating the
low-level Java ML API inside Python, what about creating some higher-level
"AutoML" workflow services? For example:
1. here is raw
Hello all,
I've searched through the examples and so far have only seen how to use the
one-hot encoder for model fitting or for the evaluator, but I can't figure
out how to apply it for the predict call. For example, we see one-hot output
used as input to:
1. RF_MODEL = trainer.fit(
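A conceptual sketch of the underlying point, in plain Java rather than the Ignite API: the encoding learned at fit time has to be stored and reapplied to each raw row at predict time, so the model always sees vectors in the same layout. The `fitEncoder`/`encode` helpers here are illustrative, not Ignite classes:

```java
import java.util.*;

public class OneHotSketch {
    // "Fit" step: learn a category -> column-index mapping from training data.
    static Map<String, Integer> fitEncoder(List<String> trainCategories) {
        Map<String, Integer> index = new LinkedHashMap<>();
        for (String c : trainCategories)
            index.putIfAbsent(c, index.size());
        return index;
    }

    // "Transform" step: apply the SAME learned mapping to any row,
    // including rows seen only at predict time.
    static double[] encode(Map<String, Integer> index, String category) {
        double[] v = new double[index.size()];
        Integer i = index.get(category);
        if (i != null)
            v[i] = 1.0; // categories unseen during fit stay all-zero
        return v;
    }

    public static void main(String[] args) {
        Map<String, Integer> enc =
            fitEncoder(Arrays.asList("red", "green", "blue", "red"));

        // At predict time: encode the raw feature with the stored encoder,
        // then pass the resulting vector to the model's predict call.
        System.out.println(Arrays.toString(encode(enc, "green")));
    }
}
```

In Ignite ML terms, the analogous move is to keep the fitted preprocessor around after training and run each raw key/value pair through it before calling `predict`, instead of using the encoder only inside `fit` or the `Evaluator`.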