Okay, for some reason `opt_level=3` was set here. I changed it to 2,
and now it fails with:
```python
  File "/home/martin/Dev/xyz/src/tvm/compile_model.py", line 112, in compile_model
    lib.export_library(lib_name)
  File "/home/martin/.local/lib/python3.6/site-packages/tvm-0.6.dev
```
@merrymercy Could you elaborate a bit on the 4 types (simple reduction,
complex reduction, direct compute, and location-tunable compute)? It would
also be helpful if you could give an example of what the DAG looks like.
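To make the DAG question concrete, here is a small compute graph written with the tvm.te API (a sketch only; how the auto-scheduler classifies these nodes among the four types is exactly what the question above asks):
```python
import tvm
from tvm import te

# A two-stage compute DAG: C is a reduction (matmul over axis k),
# and D is a plain elementwise compute consuming C.
n = 128
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
C = te.compute((n, n),
               lambda i, j: te.sum(A[i, k] * B[k, j], axis=k),
               name="C")
D = te.compute((n, n),
               lambda i, j: tvm.tir.max(C[i, j], tvm.tir.const(0.0, "float32")),
               name="D")

# Resulting DAG:  A, B -> C (reduction) -> D (elementwise)
print(D.op.input_tensors)  # shows C as the sole input of D
```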
--
Closed #2279.
--
Closing, as the first part of the scaffolding is in. Let us open new RFCs for
new error class proposals.
--
@merrymercy How much work is there per backend? I'm looking over the code now
and will follow up with more questions later.
--
I think we should consider it. Having the tuner sit in Python is okay; the
more important bit is having the schedules and other compiler pieces in C++
for integrating the compiler. I talked with some PyTorch people today, and
they suggested that a Python-free version of the compiler would be important.
Thank you for opening this RFC! I have a question regarding the user API. Is
the hardware information needed by the autotvm.AutoSchedulerOptions(**kwargs)
function pre-defined for different hardware architectures? If so, how much more
information does a user need to provide to differentiate between d
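For illustration only, here is a hypothetical shape the call in question might take. AutoSchedulerOptions is the name proposed in this RFC, but the kwargs below (core count, vector width, cache line size) are guesses at what "hardware information" could mean, not the RFC's actual parameter list:
```python
from tvm import autotvm

# Hypothetical call: these kwargs are illustrative assumptions,
# not part of any shipped TVM API.
options = autotvm.AutoSchedulerOptions(
    num_cores=4,           # assumed hardware parameter
    vector_unit_bytes=32,  # assumed hardware parameter
    cache_line_bytes=64,   # assumed hardware parameter
)
```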
@FrozenGene The default schedule here for x86 eliminates most layout
transformations. It should have performance similar to "apply_history_best".
I'll update the data for "apply_history_best".
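For reference, a minimal sketch of the comparison being discussed, using the stock autotvm.apply_history_best context manager (the single-conv2d workload and log file name are placeholders):
```python
import tvm
from tvm import autotvm, relay

# Placeholder workload: a single conv2d.
x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
w = relay.var("w", shape=(16, 3, 3, 3), dtype="float32")
func = relay.Function([x, w], relay.nn.conv2d(x, w))

# Default (fallback) x86 schedules:
graph, lib, params = relay.build(func, target="llvm")

# Schedules picked from previously tuned records:
with autotvm.apply_history_best("tuning.log"):  # placeholder log file
    graph, lib, params = relay.build(func, target="llvm")
```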
--
# Auto-Scheduler
TVM decouples kernel implementation into compute and schedule. The compute part
is a friendly DSL that can describe algorithms intuitively. However, the
schedule part still requires strong expert knowledge and time-consuming tuning
to provide decent performance. The tuning proce
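As a concrete illustration of that decoupling, here is a minimal vector-add in the tvm.te API (a sketch; exact module paths vary across TVM versions):
```python
import tvm
from tvm import te

# Compute: declares *what* to calculate, with no loops, threads,
# or memory layout specified.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Schedule: declares *how* to calculate it. Even for this trivial
# kernel, splitting and vectorizing the loop are manual expert
# decisions, which is the part an auto-scheduler would take over.
s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=64)
s[C].vectorize(inner)

f = tvm.build(s, [A, B, C], target="llvm")
```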
I want to highlight that, due to the public archive principle, the summary of
an in-person discussion only serves as summary information and suggestions
rather than a final design decision.
The design decision should be made in this thread, allowing everyone to
participate. So at this moment, the disc
Should we open an RFC to discuss how to port autotvm and topi to C++?
--