Are you on the TVM Discord or somewhere similar where we could discuss this quickly?
--
Great! Can you remove `WIP` from the title now?
--
Hi, so I finally rebased this, and it all compiles and runs one test against
current PyTorch master, so I think I'm back in business with this PR (unless it
has been obsoleted, but from what I understand, the bridge is in the other
direction).
--
M. Ruberry of the PyTorch team re-landed the update of dlpack.h in PyTorch.
If this still holds next week, it'll be exciting to bring this up to date. :)
--
Another interesting use case this fallback could enable is mmdetection
(https://github.com/open-mmlab/mmdetection). It has a lot of cool detection
models, but they rely on many custom ops that cannot be converted to Relay.
--
So I thought I could wait it out, but I'll look into working around the
version discrepancy in the next few weeks.
--
As commented by the author of PyTorchTVM
(https://github.com/apache/tvm-rfcs/pull/25#issuecomment-908041324), many people
are interested in this feature. People are also actively talking about deeper
integration of TVM into the PyTorch workflow, so we should definitely land this.
As for me personal
--
> I wonder if we could work around it by providing a "dlpack-compat" header
Does this mean the same thing as
https://github.com/pytorch/pytorch/pull/65047#issuecomment-972734912? (That
sounds good to me.) Anyway, it seems we cannot count on the PyTorch side to
change, so I'd say anything that c
--
@masahi So I had hoped to get the dlpack header version in PyTorch bumped (see
the linked bug), but Facebook has internal uses that lead it to insist on the
old one.
I wonder if we could work around it by providing a "dlpack-compat" header that
defines the names for the types / fields? Or I could try
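For context, a minimal sketch of the DLPack round trip that the version
discrepancy complicates, assuming the current `torch.utils.dlpack` and
`tvm.nd` APIs rather than anything in this PR:

```python
import torch
import tvm
from torch.utils import dlpack as torch_dlpack

# Round-trip a tensor through DLPack; this zero-copy exchange is what breaks
# when the two sides are built against incompatible dlpack.h versions.
t = torch.arange(4, dtype=torch.float32)
capsule = torch_dlpack.to_dlpack(t)               # PyTorch tensor -> DLPack capsule
arr = tvm.nd.from_dlpack(capsule)                 # capsule -> tvm NDArray
back = torch_dlpack.from_dlpack(arr.to_dlpack())  # and back to PyTorch
assert torch.equal(t, back)
```
--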
@t-vi Is this still WIP? I'm happy to merge whenever you think it is ready.
--
Just a quick note that when I tried to revive this back in the summer, it got
a bit stalled around pytorch/pytorch#65047.
--
I was going to suggest using `MergeCompilerRegions`, but I saw you are already
using it. So I like your current approach. Sending many small functions to
torch sounds like a non-trivial overhead, and I think "piece things back
together into a graph" is essentially what `MergeCompilerRegions` does a
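For readers who don't know the pass, here is a minimal sketch of the standard
BYOC partitioning pipeline that `MergeCompilerRegions` is part of; the toy
module and the "torch" target name are illustrative assumptions, not this PR's
actual registration:

```python
import tvm
from tvm import relay
from tvm.relay import transform

# A toy module standing in for a converted PyTorch graph (illustration only).
x = relay.var("x", shape=(1, 8))
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

seq = tvm.transform.Sequential([
    transform.AnnotateTarget("torch"),  # mark ops claimed by the external codegen
    transform.MergeCompilerRegions(),   # fuse adjacent regions into larger ones
    transform.PartitionGraph(),         # lift each region into its own function
])
partitioned = seq(mod)
```
--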
So I have been mulling over the best granularity / approach. Currently I'm
taking TorchScript functions / graphs as the unit I'm working with. An
alternative could be to move to the PyTorch operator level (i.e., a single
aten::... call), which would seem more natural in Relay, but then one
woul
--
> I would really appreciate getting at least your fix to solve this issue
> merged into upstream. Maybe in a separate PR, as this is not really related to
> the TorchScript use case.
I'm all for it, but I wouldn't know how to add tests in lieu of something using
it. If you or @masahi have any opi
--
Hey @t-vi, the idea of a fallback for unsupported TorchScript ops is great. I
am currently pursuing a similar approach for unsupported (and custom) TFLite
ops.
I also stumbled over the issue that `num_inputs == -1` leads to problems in
the `type_infer` step and "solved" it in a quite bad way b
--
Yeah, the general idea is to use this as the fallback. I can add the fallback
generation here in the PR if that is better.
Also, I added a bit of a pro-con discussion regarding single op vs. program on
the forum thread; if you have opinions, I'd be very grateful if you could chime
in.
--
I have the same question as masahi. IIUC, after this PR, the PyTorch frontend
has the capability to convert all unsupported ops to `torchop` so that we can
guarantee the flow would work. This is an interesting idea, and this would be
the first BYOC use case that could potentially incorporate two
--
> I'm curious how it integrates with PyTorch frontend. Do we convert every op
> not supported to relay.torchop, run BYOC flow to get TorchScript subgraphs,
> and send them to libtorch? Sounds interesting!
This is how I'd like it to work out. I've been thinking about what the best
"level" is and while
--
I'm curious how it integrates with PyTorch frontend. Do we convert every op not
supported to `relay.torchop`, run BYOC flow to get TorchScript subgraphs, and
send them to libtorch? Sounds interesting!
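As a rough illustration of that flow, a sketch built on the existing frontend
entry point; the fallback behavior described in the trailing comment is the
proposal under discussion, not current API:

```python
import torch
import tvm
from tvm import relay

# A tiny model standing in for something with unsupported ops.
class Tiny(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

scripted = torch.jit.trace(Tiny().eval(), torch.randn(1, 4))
mod, params = relay.frontend.from_pytorch(scripted, [("x", (1, 4))])
# Under the proposal, any op the converter cannot handle would become a
# relay `torchop` call instead of an error; the BYOC flow would then gather
# those calls into TorchScript subgraphs that the runtime hands to libtorch.
```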
--
This is an interesting use case of BYOC, cc @zhiics @comaniac
--
This patch adds support for calling TorchScript. This can be used as a
fallback for when Torch operators are not yet implemented, or if one wants to
incorporate bespoke PyTorch custom ops into TVM with ease.
It adds
- a new relay `torchop` that takes a variable number of inputs and executes a
  provided TorchScript function
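To make the unit of execution concrete, here is the kind of TorchScript
artifact such a call would wrap; a sketch in plain PyTorch, independent of the
PR's actual registration code:

```python
import torch

# A scripted function whose graph libtorch can execute at runtime; a relay
# `torchop` call would carry something like this along with its inputs.
@torch.jit.script
def bespoke_op(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return x * y + y

print(bespoke_op.graph)  # the TorchScript IR the fallback would invoke
```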