Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-03-08 Thread Thomas Viehmann
Are you on the tvm discord or so to quickly discuss? -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/pull/7401#issuecomment-1061504189 You are receiving this because you are subscribed to this thread.

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-03-07 Thread Masahiro Masuda
Great! Can you remove `WIP` from the title now? -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/pull/7401#issuecomment-1061502358

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-03-07 Thread Thomas Viehmann
Hi, so I rebased this finally and it all compiles and runs one test against a current PyTorch master, so I think I'm back in business with this PR (unless it has been obsoleted, but from what I understand, the bridge is in the other direction).

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-01-21 Thread Thomas Viehmann
M. Ruberry of the PyTorch team re-landed the update of dlpack.h in PyTorch. If this still holds next week, it'll be exciting to bring this up to date. :) -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/pull/7401#issuecomment-1018688381

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-01-11 Thread Masahiro Masuda
Another interesting use case this fallback could enable is mmdetection (https://github.com/open-mmlab/mmdetection). It has a lot of cool detection models, but they rely on many custom ops that cannot be converted to Relay.

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-01-10 Thread Thomas Viehmann
So I thought I could wait it out, but I'll look into working around the version discrepancy in the next few weeks. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/pull/7401#issuecomment-1008961724

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-01-09 Thread Masahiro Masuda
As commented by the author of PyTorchTVM (https://github.com/apache/tvm-rfcs/pull/25#issuecomment-908041324), many people are interested in this feature. Also, people are actively talking about deeper integration of TVM into the PyTorch workflow, so we should definitely land this. As for me personal

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-01-09 Thread Masahiro Masuda
> I wonder if we could work around it by providing a "dlpack-compat" header

Does this mean the same thing as https://github.com/pytorch/pytorch/pull/65047#issuecomment-972734912 (which sounds good to me)? Anyway, it seems we cannot count on the PyTorch side to change, so I'd say anything that c

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-01-09 Thread Thomas Viehmann
@masahi So I had hoped to get the dlpack header version in PyTorch bumped (see the linked bug), but Facebook has internal uses that make it insist on the old one. I wonder if we could work around it by providing a "dlpack-compat" header that defines the names for the types / fields? Or I could try

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2022-01-09 Thread Masahiro Masuda
@t-vi Is this still WIP? I'm happy to merge whenever you think it is ready. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/pull/7401#issuecomment-1008443597

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-11-18 Thread Thomas Viehmann
Just a quick note that when I tried to revive this back in the summer, it got a bit stalled around pytorch/pytorch#65047.

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-09-15 Thread masahi
I was going to suggest using `MergeCompilerRegions`, but I saw you are already using it, so I like your current approach. Sending many small functions to torch sounds like a non-trivial overhead, and I think "piece things back together into a graph" is essentially what `MergeCompilerRegions` does a
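The effect described above can be illustrated with a small dependency-free sketch. This is not TVM's actual `MergeCompilerRegions` implementation (which operates on the Relay dataflow graph, not a linear op list); it only shows the idea that adjacent operators assigned to the same external compiler are coalesced, so the fallback is entered once per maximal region rather than once per op. All op and target names are illustrative.

```python
# Conceptual sketch: merge adjacent ops that share a compiler target
# into one region, mimicking what MergeCompilerRegions achieves.

def merge_regions(ops):
    """ops: list of (op_name, target) pairs in topological order.
    Returns a list of (target, [op_names]) regions."""
    regions = []
    for name, target in ops:
        if regions and regions[-1][0] == target:
            regions[-1][1].append(name)   # extend the current region
        else:
            regions.append((target, [name]))  # start a new region
    return regions

ops = [
    ("conv2d", "tvm"),
    ("custom_nms", "torch"),   # unsupported op -> fallback target
    ("custom_roi", "torch"),   # adjacent fallback op, merged into same region
    ("dense", "tvm"),
]
print(merge_regions(ops))
# -> [('tvm', ['conv2d']), ('torch', ['custom_nms', 'custom_roi']), ('tvm', ['dense'])]
```

With merging, the two custom ops cost one round-trip into the fallback instead of two, which is the overhead concern raised in the comment.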

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-09-15 Thread Thomas Viehmann
So I have been mulling over the best granularity / approach. Currently I'm taking TorchScript functions / graphs as the unit I'm working with. An alternative could be to move to the PyTorch operator level (so one aten::...-call) - which would seem to be more natural in Relay - but then one woul

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-05-31 Thread Thomas Viehmann
> I would really appreciate getting at least your fix to solve this issue merged into upstream. Maybe in a separate PR as this is not really related to the TorchScript use case.

I'm all for it, but I wouldn't know how to add tests in lieu of something using it. If you or @masahi have any opi

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-05-23 Thread Philipp van Kempen
Hey @t-vi, the idea of a fallback for unsupported TorchScript ops is great. I am currently pursuing a similar approach for unsupported (and custom) TFLite ops. I also stumbled over the issue that `num_inputs == -1` leads to problems in the `type_infer` step and "solved" it in a quite bad way b

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-02-05 Thread Thomas Viehmann
Yeah, the general idea is to use this as the fallback. I can add the fallback generation here in the PR if that is better. Also, I added a bit of a pro-con discussion regarding single op vs. program on the forum thread; if you have opinions, I'd be very grateful if you could chime in.

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-02-04 Thread Cody Yu
I have the same question as masahi. IIUC, after this PR, the PyTorch frontend has the capability to convert all unsupported ops to `torchop` so that we can guarantee the flow would work. This is an interesting idea, and this would be the first BYOC use case that could potentially incorporate two

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-02-04 Thread Thomas Viehmann
> I'm curious how it integrates with PyTorch frontend. Do we convert every op not supported to relay.torchop, run BYOC flow to get TorchScript subgraphs, and send them to libtorch? Sounds interesting!

This is how I'd like it to work out. I've been thinking about what the best "level" is, and while

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-02-03 Thread masahi
I'm curious how it integrates with the PyTorch frontend. Do we convert every op not supported to `relay.torchop`, run the BYOC flow to get TorchScript subgraphs, and send them to libtorch? Sounds interesting!
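The flow asked about here can be sketched in a dependency-free way: ops the frontend can convert run on the compiled path, while everything else is routed to a fallback executor that stands in for libtorch running a TorchScript subgraph. The op names, the supported-op set, and both stand-in executors are illustrative, not the PR's actual API.

```python
# Conceptual sketch of the dispatch: supported ops -> compiled path,
# unsupported ops -> torch fallback (stand-in for libtorch).

SUPPORTED = {"add", "multiply"}           # ops the frontend can convert

def run_tvm(op, x):
    """Stand-in for a TVM-compiled kernel."""
    return {"add": x + 1, "multiply": x * 2}[op]

def run_torch_fallback(op, x):
    """Stand-in for executing a TorchScript subgraph via libtorch."""
    return x - 3 if op == "custom_nms" else x

def execute(ops, x):
    for op in ops:
        x = run_tvm(op, x) if op in SUPPORTED else run_torch_fallback(op, x)
    return x

print(execute(["add", "custom_nms", "multiply"], 10))  # -> (10 + 1 - 3) * 2 = 16
```

In the real design, the unsupported region would first be partitioned out by the BYOC passes and serialized as a TorchScript graph rather than dispatched op-by-op at runtime.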

Re: [apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-02-03 Thread masahi
This is an interesting use case of BYOC, cc @zhiics @comaniac -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/pull/7401#issuecomment-772591355

[apache/tvm] WIP/RFC: initial stab at TorchScript fallback (#7401)

2021-02-03 Thread Thomas Viehmann
This patch adds support for calling TorchScript. This can be used as a fallback for when Torch operators are not yet implemented, or if one wants to incorporate bespoke PyTorch custom ops into TVM with ease. It adds: - a new Relay `torchop` that takes a variable number of inputs and executes a provi
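The "variable number of inputs, executes a provided graph" idea from the description can be sketched with a tiny wrapper. A plain callable stands in for a TorchScript graph so the sketch needs no torch installation; the class name `TorchOpStub` is hypothetical and not the PR's API.

```python
# Illustrative sketch of a variadic op that forwards its inputs to a
# provided function (standing in for a TorchScript graph run by libtorch).

class TorchOpStub:
    def __init__(self, scripted_fn):
        self.fn = scripted_fn          # in the real PR: a TorchScript graph

    def __call__(self, *inputs):       # variable number of inputs
        return self.fn(*inputs)

add3 = TorchOpStub(lambda a, b, c: a + b + c)
print(add3(1, 2, 3))  # -> 6
```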