On Sun, Sep 12, 2021 at 11:13 PM Dave Airlie <airl...@gmail.com> wrote:

> For userspace components as well these communities of experts need to
> exist for each domain, and we need to encourage upstream first
> processes across the board for these split kernel/userspace stacks.
>
> The habanalabs compiler backend is an LLVM fork, I'd like to see the
> effort to upstream that LLVM backend into LLVM proper.

I couldn't agree more.

A big part of the problem with inference engines / NPUs is the lack of a
standardized userspace. Several of the machine learning initiatives
from some years back now have stale git repositories and are
visibly unmaintained, c.f. Caffe https://github.com/BVLC/caffe
(last commit two years ago).

In a discussion thread at LWN I raised Apache TVM as a community that
is currently quite obviously alive and kicking, with the stated
ambition to provide "an open source machine learning compiler
framework for CPUs, GPUs, and machine learning accelerators".
https://tvm.apache.org/
At least all the relevant companies' logos are on their homepage,
so there is some kind of commitment.
For example, Arm has an RFC for supporting real HW accelerator code
with Apache TVM, using (out of tree) Linux kernel drivers:
https://discuss.tvm.apache.org/t/rfc-ethosn-arm-ethos-n-integration/6680

Then there is Google's TensorFlow. How open is that for a random
HW vendor who wants to integrate their accelerator, and how open is
it to working with the kernel community? Then there is PyTorch.
All of these are apparently active. Well, CPU vendors often support
two different compilers, so I guess they could very well support
three machine learning userspaces, why not.

What confuses me is what kind of time horizon and longevity these
projects have, and what level of commitment and ambition is
involved. Especially to what extent they would care about
working with the Linux kernel community. (TVM has a mail
address so I added them on CC.)

Habanalabs proposes an LLVM fork as its compiler, yet the Intel
logo is on the Apache TVM website, with no sign of integrating with
that project. They also claim to support TensorFlow.

The way I perceive it, there simply isn't any GCC/LLVM or
Gallium 3D of NPUs yet: these people haven't decided that "here
is the userspace we are all going to use". Or have they?

LLVM? TVM? TensorFlow? PyTorch? Some other one?

What worries me is that I don't see a single developer being
able to say "this one definitely, and they will work with the kernel
community", and that is what we need to hear.

Yours,
Linus Walleij

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@tvm.apache.org
For additional commands, e-mail: dev-h...@tvm.apache.org