This sounds like you need the polyhedral model?
https://github.com/apache/incubator-tvm/issues/5809#issuecomment-649246713
When I used VMExecutor to run a CNN model, it threw an error:
```
RuntimeError: Check failed: VerifyMemory(func): Direct host side access to device memory is detected. Did you forget to bind?
PrimFunc([placeholder, transform_weight]) attrs={"global_symbol": "fused_nn_contrib_conv2d_winograd_wei
```
The proto representation looks good to me. I have a couple of suggestions based
on prior experience designing proto-based data formats.
- I recommend the use of enums rather than strings for values that are
constrained to a small, fixed-size set. For example, the dtype field in the
Tensor mes
Dear podling,
This email was sent by an automated system on behalf of the Apache
Incubator PMC. It is an initial reminder to give you plenty of time to
prepare your quarterly board report.
The board meeting is scheduled for Wed, 15 July 2020.
The report for your podling will form a part of the In
cc @merrymercy @zhiics @haichen @FrozenGene @comaniac @ajtulloch @antinucleon
@junrushao1994
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-canonicalizing-autotvm-log-format/7038/13) to respond.
I think the main benefit of keeping the ProtoBuf opaque is avoiding the
unnecessary effort of fleshing out a schema that will change very soon.
However, since I have a full specification described here already, I prefer to go ahead with it, unless there are other concerns I have missed.
I sugges
Welcome to the TVM community :)
Mali doesn't really have an equivalent to Nvidia's shared memory; it uses the system RAM backed by an unconfigurable cache. "Local" is just OpenCL's term for CUDA's "shared". This means that using explicit cache reads/writes to shared/local isn't advised when optim
OpenCL Mali's analog to "shared" is "local", IIRC. On Nvidia GPUs, "shared" is a configurable part of L2, and I believe Mali GPUs call the configurable part of L2 "local", if I'm not mistaken.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/2
I think you can just write `Map` or `Array`. Let me know if you run into any issues :-)
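If it helps, a small hedged sketch of how plain Python containers get turned into the runtime containers on the Python side (the exact returned class names may vary between TVM versions):

```python
import tvm

# Plain Python list/dict values are converted into TVM container objects.
arr = tvm.runtime.convert(["NCHW", "NHWC"])   # becomes a TVM Array container
mp = tvm.runtime.convert({"layout": "NCHW"})  # becomes a TVM Map container

print(type(arr), arr[0])
print(type(mp), mp["layout"])
```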
---
[Visit Topic](https://discuss.tvm.ai/t/discuss-runtime-array-containers-array-adt-string/4582/46) to respond.
You can follow the example in vta to override the implementation for a specific target.
https://github.com/apache/incubator-tvm/blob/master/vta/python/vta/top/op.py#L63
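Roughly, the pattern in that file is a per-target registration of an op strategy. A hedged sketch of the shape of such a registration (the target key and the compute/schedule functions below are placeholders, not the actual vta code):

```python
from tvm.relay.op import strategy as _strategy
from tvm.relay.op.op import OpStrategy

# Hypothetical TOPI-style compute and schedule for the custom target.
from my_backend import my_conv2d_compute, my_conv2d_schedule

@_strategy.conv2d_strategy.register("my_target")  # target key is a placeholder
def conv2d_strategy_my_target(attrs, inputs, out_type, target):
    strategy = OpStrategy()
    strategy.add_implementation(
        _strategy.wrap_compute_conv2d(my_conv2d_compute),
        _strategy.wrap_topi_schedule(my_conv2d_schedule),
        name="my_conv2d.my_target",
    )
    return strategy
```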
---
[Visit Topic](https://discuss.tvm.ai/t/relay-op-strategy/5247/21) to respond.
@junrushao1994: I totally agree with you! The "attr" usage in this case is
pretty confusing.
I think "config" is better that "attr". :+1:
As these are various user options for target gen. How about "add_user_option"
or "add_target_option" ?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc
As an alternative, we can try to add enough overloads to make sure that the object itself behaves like a list or a number, so that such casts are not necessary. We have already done that for strings (by having String subclass str).
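A hedged illustration of that kind of overloading, in plain Python rather than the actual TVM classes: an int-like wrapper that can be used anywhere an int is expected while still carrying extra metadata, in the same spirit as String subclassing str.

```python
class IntImmLike(int):
    """Illustration only: behaves like a plain int wherever one is expected."""

    def __new__(cls, value, dtype="int32"):
        obj = super().__new__(cls, value)
        obj.dtype = dtype  # extra metadata carried alongside the integer value
        return obj


x = IntImmLike(224)
print(x + 1)     # arithmetic works without an explicit int(x)
print([0] * x)   # usable anywhere an index or length is expected
print(x.dtype)   # while still exposing the extra attribute
```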
---
[Visit Topic](https://discuss.tvm.ai/t/discuss-runtime-array-c
My impression is that one drawback of this switch is that in the Python
interface, a lot of conversions to standard types seem to be needed. When using
shapes outside of TVM, one has to convert Arrays of IntImm to lists of int.
Even when feeding shapes back to TVM I seem to need a lot of list(
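For reference, the kind of boilerplate being described looks roughly like this (a sketch using a standalone Relay var so it is self-contained; real code would pull the shape from an existing module):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")

# Array of IntImm -> plain Python list of int, for use outside TVM.
shape = [int(dim) for dim in x.type_annotation.shape]
print(shape)  # [1, 3, 224, 224]

# Feeding it back: wrap again when a TVM container is expected.
arr = tvm.runtime.convert(shape)
```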
## Motivation
[Arm Compute Library](https://github.com/ARM-software/ComputeLibrary) (ACL) is an open source project that provides hand-crafted assembler routines for Arm CPUs and GPUs. This integration will look at how we can accelerate CPU performance for Arm devices in TVM using ACL. The
Hi, can I use StringList or Map in Relay? Is there an example?
---
[Visit Topic](https://discuss.tvm.ai/t/discuss-runtime-array-containers-array-adt-string/4582/43) to respond.
Adding "-libs=cblas" in target and building tvm with MKLDNN will use mkl gemm
for dense.
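A hedged sketch of what that looks like at build time; `mod` and `params` are assumed to come from an imported model, and the offload only happens if TVM was compiled with the corresponding BLAS option enabled:

```python
import tvm
from tvm import relay

# "-libs=cblas" asks Relay to offload dense to the external BLAS (MKL) GEMM,
# provided the TVM build enabled the matching USE_BLAS / MKLDNN setting.
target = "llvm -libs=cblas"

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
```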
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/26) to respond.