@jcf94 @junrushao1994 Sorry, neither of you understood my question correctly.
I mean that the original TE is a declarative language, so it knows all
transformations before it starts to generate the low-level AST. But the new
schedule primitives are applied imperatively. In the original TE, we can share
some analysis results (e.g., dependency analysis).
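A minimal sketch of the distinction (my own illustration, not from the post): in TE, schedule primitives only record transformations on the schedule object; nothing is applied until lowering, so the whole transformation sequence is visible to analyses at once.

```python
import tvm
from tvm import te

# Declarative model: split() only *records* the transformation on the
# schedule; nothing is applied until tvm.lower walks the whole schedule,
# so analyses can see every transformation together.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2, name="B")

s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=32)  # recorded, not yet applied

# Lowering is the point where the recorded schedule is materialized.
print(tvm.lower(s, [A, B], simple_mode=True))
```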
Hello, we're a team from NTHU (National Tsing-Hua University), Taiwan. Our
team mainly focuses on supporting TVM on the RISC-V architecture with SIMD
instructions. In this RFC, we target the RISC-V P extension (RVP), the
extension for RISC-V DSP and subword SIMD computation.
[quote="merrymercy, post:37, topic:7872"]
I mean the original TE is a declarative language so it can know all
transformation before it starts to generate low-level AST. But the new schedule
primitives are done imperatively. In the original TE, we can share some
analysis results (e.g. dependenc
Thanks for the RFC!
While I'm not familiar with the current RISC-V applications, I'm curious about
the purpose of running the Spike simulator and what the usual next step after
it would be.
I also have some questions/thoughts about the implementation. In general, I'm
wondering whether it would be better
Oh, I see, thanks for your kind reply.
Thanks @yrchen and colleagues for the RFC! Overall it's very exciting work. A
couple of thoughts:
- Is your eventual target bare-metal devices, or does your runtime require a
kernel?
- `riscv_cpu` target: in the past we had introduced a special `micro_dev`
target for µTVM work. Recently, we de
Thanks for the great discussion. I agree that it would be really nice to make
use of the µTVM RPC runtime with Spike in place of a special-purpose runtime.
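On the target question above: a minimal sketch (my assumption, not the RFC's actual `riscv_cpu` target definition) of how a RISC-V CPU can already be addressed through TVM's stock LLVM backend:

```python
import tvm

# Sketch under assumptions: the RFC's exact target string isn't shown here,
# but a plain RISC-V CPU can be reached via the LLVM backend by supplying a
# RISC-V triple; a dedicated `riscv_cpu` target would presumably wrap
# something similar plus P-extension attributes.
target = tvm.target.Target("llvm -mtriple=riscv64-unknown-linux-gnu")
print(target.kind.name)  # "llvm"
```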
[quote="wrongtest, post:3, topic:7960"]
If I have some common neural network structure such as resnet50 at hand, can I
just use autodiff to get backward computation graph?
[/quote]
graph-wise I think you can refer to
[relay.transform.gradient](https://github.com/apache/incubator-tvm/blob/master
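A minimal sketch of how that pass is typically invoked (my example, not from the post; `relay.transform.gradient` does exist, but the toy function below is my own):

```python
import tvm
from tvm import relay

# Build a tiny Relay function: f(x, y) = x * y.
x = relay.var("x", shape=(3, 4))
y = relay.var("y", shape=(3, 4))
func = relay.Function([x, y], x * y)

# gradient() requires a type-checked function.
mod = tvm.IRModule.from_expr(func)
mod = relay.transform.InferType()(mod)

# Returns a function computing the original output plus input gradients;
# mode="first_order" avoids the higher-order AD machinery.
grad_func = relay.transform.gradient(mod["main"], mode="first_order")
print(grad_func)
```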
I wonder whether TVM supports the following operators:
1. matrix determinant (`linalg_det`).
2. matrix inversion (`linalg_inverse`).
I searched the topi and relay libraries but failed to find these operators, and
also failed to come up with an op-level equivalent transformation solution
based on existing ops.
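One way to double-check programmatically (a minimal sketch of my own; the operator names above are the ones the question asks about, not names I'm claiming exist):

```python
from tvm import relay

# relay.op.get raises for an operator name that is not registered, so this
# confirms whether the ops exist in a given TVM build.
for name in ["linalg_det", "linalg_inverse"]:
    try:
        relay.op.get(name)
        print(name, "is registered")
    except Exception:
        print(name, "is not in the Relay op registry")
```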