@masahi I think my effort to create 
[MetalXLA](https://github.com/philipturner/metal-xla) would be the perfect 
opportunity to experiment with using AutoTVM to accelerate training. It's a 
real-time ML context where you have to balance compilation cost against code 
optimization. You would also either compete with or work alongside MPSGraph, 
giving a realistic scenario where another framework's compiler might sometimes 
be better than TVM. Unlike CUDA XLA or PyTorch, which are relatively 
established, this backend is very open to change. I could even add features 
just to help out with TVM experimentation.
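
To make the compilation-cost vs. optimization trade-off concrete, here is a rough sketch of the knob I have in mind. It uses TVM's auto-scheduler rather than template-based AutoTVM only because it needs less boilerplate, and it assumes TVM is built with Metal support and a Metal-capable device is attached; the matmul workload and the trial count are just placeholders for whatever MetalXLA would actually emit:

```python
import tvm
from tvm import te, auto_scheduler

# Placeholder workload: a plain matmul stands in for the ops MetalXLA would emit.
@auto_scheduler.register_workload
def matmul(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    return [A, B, C]

target = tvm.target.Target("metal")  # assumes a Metal-enabled TVM build
task = auto_scheduler.SearchTask(func=matmul, args=(512, 512, 512, "float32"), target=target)

# num_measure_trials is the dial: a real-time training context keeps it small to
# bound compile time, while an offline build can raise it for better kernels.
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=64,
    measure_callbacks=[auto_scheduler.RecordToFile("matmul_metal.json")],
    verbose=0,
)
task.tune(tune_option)

# Apply the best schedule found within the trial budget and build a Metal kernel.
sch, args = task.apply_best("matmul_metal.json")
mod = tvm.build(sch, args, target)
```

The same idea applies to classic AutoTVM tuners via `n_trial`; the point is that the trial budget is exactly the dial between compile time and code quality mentioned above.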

The timeframe for when such experimentation would happen is also ideal. 
There's a gap of several months between now and when S4TF may be resurrected 
and I finish some collaboration with PyTorch on ops such as 3D convolutions. 
That gives ample time for you and others at TVM to debate whether it's a good 
investment. I will also be developing MetalSLC*, which would provide vital data 
for an AI algorithm aimed at predicting performance.

*Can't provide a link because of this forum's restriction on new users.

I read this research paper on using ML to predict the computational cost of 
models: https://arxiv.org/abs/1811.11880. That research focused only on NVIDIA 
GPUs. Several parties besides NVIDIA (Intel, Imagination, Apple) have recently 
been making GPUs with good ML capabilities. Investing time into experimenting 
with a Metal project would help break the ML community out of NVIDIA's walled 
garden.




