> I'm curious how it integrates with the PyTorch frontend. Do we convert every op not supported to relay.torchop, run the BYOC flow to get TorchScript subgraphs, and send them to libtorch? Sounds interesting!
This is how I'd like it to work out. I've been thinking about what the best "level" is; while the operator level might seem attractive, I'm not sure there is an easy way to run individual operators. It won't help with our favourite in-place problems, though.

https://github.com/apache/tvm/pull/7401#issuecomment-773154955