+1 (non-binding)
--
+1. Thanks @hogepodge for your hard work on this!
--
+1
--
Thanks @manupa-arm @Mousius and @leandron for bringing these points up.

Building off what you said, I wanted to point out that this vote is really two
questions rolled into one:
**Q1:** Whether the current system of tagging all code owners is working, and
whether we should revert it
@areusch I definitely agree with everything you said. To clarify, I'm in favor
of this going forward given the impact it has on the quality of life of the
code shepherds, so I guess I'll officially vote +1.
I just wanted to mention these concerns in a place where we already have some
discussion.
+1, I noticed a typo in the overview though: robust is misspelled as "robost"
:laughing:
--
@jroesch @mbs-octoml @mikepapadim please take a look and let me know if you
have any feedback.
You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm-rfcs/pull/44
-- Commit Summary --
* Add virtual device as a first-class field to Relay expressions
Closed #44.
--
Oops, meant to open this on my fork first. Will post the polished RFC soon.
--
Thanks for the feedback @tqchen @jwfromm. I'll move the code to the namespace
`tvm.utils.data`, and set `batch_size` and `num_batches` through the
`@property` decorator.
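For concreteness, here is a rough sketch of what exposing those values through
`@property` could look like; the `DataLoader` name and attributes are only
illustrative, not the final `tvm.utils.data` API.

```python
# Hypothetical sketch of a tvm.utils.data-style wrapper; names are illustrative.
class DataLoader:
    def __init__(self, data, batch_size):
        self._data = data
        self._batch_size = batch_size

    @property
    def batch_size(self):
        # Exposed read-only via @property rather than as a plain attribute.
        return self._batch_size

    @property
    def num_batches(self):
        # Derived from the data, so it cannot drift out of sync with batch_size.
        return len(self._data) // self._batch_size
```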
I do agree that future support of zero-copy through DLPack is interesting, so
it's worth considering using `tvm.runtime.ndarray`.
Also, it appears that `tvm.runtime.ndarray` only has one method for comparing
ndarrays, `same_as`, which checks object identity equality rather than value
equality.
If the output of running a relay mod is a `tvm.runtime.ndarray`, and the labels
are also a `tvm.runtime.ndarray`, it seems that th
I guess having the user transform them into numpy before comparison is OK for
now, and to be consistent I'll make both data and labels
`tvm.runtime.ndarray`s. I can put a note in the documentation that they need to
convert them to numpy arrays before comparing them.
It would be nice if there
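To make that concrete, here is a minimal sketch of the comparison workflow,
assuming a recent TVM where `NDArray` exposes `.numpy()` (older versions use
`.asnumpy()`):

```python
import numpy as np
import tvm

output = tvm.nd.array(np.array([1.0, 2.0, 3.0], dtype="float32"))
labels = tvm.nd.array(np.array([1.0, 2.0, 3.0], dtype="float32"))

# same_as only checks object identity, so two equal-valued arrays compare False.
print(output.same_as(labels))  # False

# Value comparison goes through NumPy instead.
print(np.array_equal(output.numpy(), labels.numpy()))  # True
np.testing.assert_allclose(output.numpy(), labels.numpy(), rtol=1e-5)
```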
@altanh Thanks for the input. I think you're right, knowledge of the layout is
not required, and I can remove that.
With regard to your concern about the list of ndarrays: the ndarrays in the
list are meant to be batched (I should make this clearer in the documentation,
though). The intenti
@mikeseven
Yes, the goal is to create a fully quantized graph, and we do recognize that
this transformation will change the output of the graph. For this reason, we're
not going to present the rewrite as a Relay pass. And I definitely agree that
we should allow for user-defined handling.
A
[quote="anijain2305, post:20, topic:9775"]
I am trying to understand why we need `qnn.conv2d*` (* represents operator
along the lines of `qnn.simulated_conv2d`) during calibration. The only reason
would be if you want to propagate the error from previous operators while
**calibrating** current
Also, as part of the standardization of QNN, we could ensure that all QNN
"compute" ops go from `int8 -> int8`. I believe that `qnn.conv2d` is the only
QNN op that outputs an accumulation dtype, so we could change `qnn.conv2d` to
take in bias in addition to the data and weight.
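As a rough sketch of the asymmetry (parameter values are placeholders, and the
bias-in-`qnn.conv2d` variant suggested above is hypothetical, not an existing
op signature):

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 3, 224, 224), dtype="int8")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="int8")
bias = relay.var("bias", shape=(16,), dtype="int32")

# Today qnn.conv2d outputs an accumulation dtype (int32)...
conv = relay.qnn.op.conv2d(
    data, weight,
    input_zero_point=relay.const(0, "int32"),
    kernel_zero_point=relay.const(0, "int32"),
    input_scale=relay.const(0.05, "float32"),
    kernel_scale=relay.const(0.02, "float32"),
    kernel_size=(3, 3), channels=16, out_dtype="int32")

# ...so the bias add and the requantize back to int8 are separate ops.
biased = relay.nn.bias_add(conv, bias)
out = relay.qnn.op.requantize(
    biased,
    input_scale=relay.const(0.001, "float32"),
    input_zero_point=relay.const(0, "int32"),
    output_scale=relay.const(0.05, "float32"),
    output_zero_point=relay.const(0, "int32"),
    out_dtype="int8")
# Folding the bias into qnn.conv2d would let the op itself go int8 -> int8
# like the other QNN "compute" ops.
```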
This RFC is a case study on unifying the `lower` API, which had implementations
in both Python and C++. I'll call the duplication of APIs in Python and C++
"bifurcation" or "bifurcation of the API". I'll conclude by analyzing how this
bifurcation happened in the first place, and then presen
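As a rough illustration of what "unified" means here (a sketch only; the
registered global function name below is made up, not TVM's actual
registration), the Python-side API becomes a thin wrapper over a single C++
implementation exposed through the FFI, rather than a second, parallel Python
implementation that can drift:

```python
import tvm

def lower(mod, name="main"):
    # Look up the C++ implementation registered under a global name and call it.
    # No lowering logic lives on the Python side.
    f = tvm.get_global_func("example.LowerModule")  # hypothetical registration
    return f(mod, name)
```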