On Mon, 13 Sept 2021 at 08:52, Linus Walleij <linus.wall...@linaro.org> wrote:
>
> On Sun, Sep 12, 2021 at 11:13 PM Dave Airlie <airl...@gmail.com> wrote:
>
> > For userspace components as well these communities of experts need to
> > exist for each domain, and we need to encourage upstream first
> > processes across the board for these split kernel/userspace stacks.
> >
> > The habanalabs compiler backend is an LLVM fork, I'd like to see the
> > effort to upstream that LLVM backend into LLVM proper.
>
> I couldn't agree more.
>
> A big part of the problem with inference engines / NPUs is the lack
> of a standardized userspace. Several of the machine learning
> initiatives from some years back now have stale git repositories and
> are visibly unmaintained, cf. Caffe (https://github.com/BVLC/caffe,
> last commit two years ago).
>
> In a discussion thread at LWN I raised Apache TVM as a community
> that is quite obviously alive and kicking, and these people have
> the ambition to provide "an open source machine learning compiler
> framework for CPUs, GPUs, and machine learning accelerators".
> https://tvm.apache.org/
> At least they have all the relevant companies' logos on their
> homepage, so there is some kind of commitment.
> For example, Arm has an RFC for supporting real HW accelerators
> with Apache TVM using (out-of-tree) Linux kernel drivers:
> https://discuss.tvm.apache.org/t/rfc-ethosn-arm-ethos-n-integration/6680
>
> Then there is Google's TensorFlow. How open is that to a random
> HW vendor who wants to integrate their accelerator, and how open is
> it to working with the kernel community? Then there is PyTorch.
> All of these are apparently active. Well, CPU vendors often support
> two different compilers, so I guess they could very well support
> three machine learning userspaces, why not.
>
> What confuses me is what kind of time horizon and longevity these
> projects have, and what level of commitment and ambition is
> involved. Especially to what extent they would care about
> working with the Linux kernel community. (TVM has a mailing list
> address, so I added them on CC.)
>
> Habanalabs proposes an LLVM fork as its compiler, yet the Intel
> logo is on the Apache TVM website and there is no sign of
> integration with that project. They also claim to support TensorFlow.
>
> The way I perceive it is that there simply isn't a GCC/LLVM or
> Gallium3D of NPUs yet: these people haven't decided that "here
> is the userspace we are all going to use". Or have they?
>
> LLVM? TVM? TensorFlow? PyTorch? Some other one?

Yeah, I've been doing the same research, and I think there is also the
Glow project to add to the list.

The thing is control: everyone wants it. When it comes to Linux,
nearly all the vendors have realised they've lost that control and
have learned to live with it, but the second they move into userspace,
the attitude becomes "we need to be in charge of every single piece of
this", which loses the Linux kernel's advantage of pooling engineering
expertise across vendors.

I certainly don't want to be the distro packager having to package 30
forks of LLVM for 20 different vendor accelerators, with 20 runtime
APIs and 20 forks of TVM/TensorFlow/PyTorch.
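
To make the alternative concrete: with a shared compiler userspace
like TVM, a vendor accelerator shows up as just another target, not a
whole forked toolchain. A minimal sketch against TVM's Relay API as of
its 2021 releases (the tensor shape and the "llvm" target string are
illustrative only, not tied to any particular device):

    import tvm
    from tvm import relay

    # Build a trivial one-operator model in Relay IR.
    x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

    # Compile the module for a target. A vendor accelerator would
    # appear here as another target or a BYOC partition, rather than
    # as a separate fork of the entire stack.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm")

One codebase, many backends is the model worth encouraging, rather
than per-vendor forks of everything from the compiler down.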

Enabling that behaviour by just merging kernel drivers and washing our
hands of the rest seems to me like a large misstep for the future
maintainability of the kernel, especially as these devices start
interacting with GPUs or RDMA and we get locked into unmovable
interfaces that we can't even analyse for deadlocks and the like.

Dave.
