Does TVM support training models on Apple devices, or only inference? There's a 
lot of work out there about accelerating inference, but that's not what I need. 
I'm looking to accelerate a resurrected Swift for TensorFlow (a framework for 
ML training, not inference) with Metal, and I'm not clear whether TVM is 
something I can use for that.

The reason I ask is that I saw OctoML running BERT on an M1 extremely quickly 
(https://github.com/octoml/Apple-M1-BERT), but they didn't clarify whether that 
was inference-only. I have tried to find out before whether TVM supports only 
inference, but I never found a clear answer.

In addition, I need to be able to train models on iOS itself, so the solution 
can't depend on any Python code at runtime when S4TF runs on iOS.

---
[Visit Topic](https://discuss.tvm.apache.org/t/question-about-what-tvm-does/11775/1) to respond.
