Hi Qing,
Thanks for the numbers. They look very good. I am wondering if we can have
DJL integrated with some existing Flink AI ecosystem projects.
For example, the project flink-ai-extended [1] provides the capability to
run a distributed TF/PyTorch cluster on top of Flink, which allows people to
Hi Becket,
Talking about the IPC, DJL leverages JNI/JNA directly to connect to the DL
engines' C/C++ APIs, so the latency between C++ and Java is minimal (~10ns).
Performance-wise, DJL offers true multi-threaded Java inference, meaning you
can load a model once and use it in as many threads as you
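
A minimal sketch of that "load once, predict from many threads" pattern with the DJL model zoo API; the artifact id ("resnet"), input file, and thread count are illustrative placeholders, not something specified in this thread:

import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;

import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MultiThreadedInference {

    public static void main(String[] args) throws Exception {
        Criteria<Image, Classifications> criteria =
                Criteria.builder()
                        .setTypes(Image.class, Classifications.class)
                        .optArtifactId("resnet") // example model zoo artifact
                        .build();

        // The model (and its native weights) is loaded exactly once.
        try (ZooModel<Image, Classifications> model = ModelZoo.loadModel(criteria)) {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 4; i++) {
                pool.submit(() -> {
                    // Each thread gets its own lightweight Predictor handle
                    // onto the shared model.
                    try (Predictor<Image, Classifications> predictor = model.newPredictor()) {
                        Image img = ImageFactory.getInstance()
                                .fromFile(Paths.get("kitten.jpg")); // hypothetical input
                        System.out.println(predictor.predict(img));
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }
}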
Hi Qing,
Thanks for raising the discussion. It is great to learn about the DJL project.
If I understand correctly, the discussion is mostly about inference. DJL
essentially provides a uniform Java API for people to use different deep
learning engines. It is useful for people to combine Flink and DJL so
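
One rough sketch (not from the thread) of what combining the two could look like: load a DJL model once per parallel Flink task in open() and reuse a predictor for each record in map(). The model artifact, input/output types, and class name are illustrative assumptions:

import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Maps an image URL to the top-1 class name predicted by a DJL model.
public class DjlClassifyFunction extends RichMapFunction<String, String> {

    private transient ZooModel<Image, Classifications> model;
    private transient Predictor<Image, Classifications> predictor;

    @Override
    public void open(Configuration parameters) throws Exception {
        Criteria<Image, Classifications> criteria =
                Criteria.builder()
                        .setTypes(Image.class, Classifications.class)
                        .optArtifactId("resnet") // illustrative model zoo artifact
                        .build();
        model = ModelZoo.loadModel(criteria);
        predictor = model.newPredictor(); // one predictor per parallel task
    }

    @Override
    public String map(String imageUrl) throws Exception {
        Image img = ImageFactory.getInstance().fromUrl(imageUrl);
        return predictor.predict(img).best().getClassName();
    }

    @Override
    public void close() {
        if (predictor != null) {
            predictor.close();
        }
        if (model != null) {
            model.close();
        }
    }
}

A job would then simply apply it as stream.map(new DjlClassifyFunction()).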
Hi all,
On behalf of the AWS DJL team, I would like to discuss Apache Flink's ML
integration development. We would like to contribute more Deep Learning (DL)
based applications to Flink, including but not limited to TensorFlow,
PyTorch, Apache MXNet, Apache TVM and more throu