We haven't planned that far yet; currently we lower a Relay function to a TE
compute, which relies on the Relay op strategy to map Relay ops to TOPI computes.
I'm not familiar with custom Relay ops, but it would be great if you have any
suggestions that could make this RFC potentially work for custom ops.
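To make the current flow concrete, below is a minimal sketch of the op-strategy pattern the lowering relies on. It only defines (and never registers) a strategy function; `example_dense_strategy` and the `"dense.generic.example"` name are hypothetical, while the helpers are the ones from `tvm.relay.op.strategy.generic`.

```python
# Minimal sketch: how a Relay op strategy maps an op to a TOPI compute/schedule
# pair. It mirrors the pattern of the built-in strategies; the function is
# defined but never registered, so it has no side effects.
from tvm import topi
from tvm.relay.op import op as _op
from tvm.relay.op.strategy.generic import wrap_compute_dense, wrap_topi_schedule


def example_dense_strategy(attrs, inputs, out_type, target):
    """Hypothetical strategy picking a TOPI compute and schedule for nn.dense."""
    strategy = _op.OpStrategy()
    strategy.add_implementation(
        wrap_compute_dense(topi.nn.dense),                # TE compute
        wrap_topi_schedule(topi.generic.schedule_dense),  # TOPI schedule
        name="dense.generic.example",
    )
    return strategy
```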
---
Met the same problem when turning on `USE_THRUST=ON` without using `xgboost`.
My environment: CUDA 10.1, thrust 1.9.5.
---
[Visit Topic](https://discuss.tvm.apache.org/t/conflict-with-xgboost-when-thrust-is-enabled/6889/5) to respond.
---
@tkonolige Thanks a lot for your help.
Regarding `tvm.lower(s, args)`, you can find the generated code below.
Before tuning, I got:
```
#[version = "0.0.5"]
primfn(A_1: handle, W_1: handle, output_unpack_1: handle) -> ()
attr = {"global_symbol": "main", "tir.noalias": True}
buffer
```
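For reference, a dump like the one above comes from printing the result of `tvm.lower`; the snippet below is only a minimal sketch with a placeholder compute instead of the actual conv2d workload.

```python
import tvm
from tvm import te

# Minimal sketch: build a toy schedule and print its lowered TIR,
# analogous to the dump above (placeholder compute, not the real conv2d).
A = te.placeholder((16, 16), name="A")
B = te.compute((16, 16), lambda i, j: A[i, j] * 2.0, name="B")
s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```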
---
I also wrote a minimal example to reproduce the problem.
```
"""Test for NCHW[x]c convolution"""
import numpy as np
import tvm
from tvm import te
from tvm import autotvm
from tvm import topi
import tvm.testing
import tvm.topi.testing
from tvm.contrib.pickle_memoize import memoize
from tvm.top
```
---
I'm not super familiar with autotvm and auto scheduling, but I've got a couple
of questions:
1. What is the interaction between the auto-scheduler and autotvm going to be
in the future? Will we be unifying the user API for autotvm and auto scheduling?
Can you mix auto scheduling and autotvm?
2. Why is the `GraphR
---
I believe this line is the issue, as it occurs before `threadIdx.z` is defined.
[quote="OValery16, post:6, topic:8338"]
`allocate(compute, int32, [(((floordiv(((threadIdx.z: int32*2) + 1), 4)*32) +
32) - (floordiv(threadIdx.z, 2)*32))]);`
[/quote]
However, I cannot reproduce this issue with the
---
1. We haven't figured out the plan yet, but mixing them up is definitely a trend.
2. In order to make task extraction and schedule application align, we follow
the same flow as building a model to extract tasks. Both AutoTVM and
auto_scheduler leverage this approach.
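To illustrate what "the same flow" means at the API level, here is a rough sketch contrasting the two extraction entry points. The ResNet workload and variable names are placeholders, and the exact signatures may differ between TVM versions.

```python
# Rough sketch: both tuners extract tasks by tracing the same Relay build flow.
from tvm import autotvm, auto_scheduler
from tvm.relay import testing as relay_testing

target = "llvm"
mod, params = relay_testing.resnet.get_workload(num_layers=18)

# AutoTVM: template-based tasks, keyed by op-strategy implementations.
autotvm_tasks = autotvm.task.extract_from_program(mod["main"], params=params, target=target)

# auto_scheduler (Ansor): tasks are whole fused subgraphs, no templates needed.
ansor_tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
```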
---
I have one question about `use_topi_schedule`. I assume that after we set it to
False, it will always use the Ansor scheduler to schedule the ops. Will there
be a case where we want to have a mix of TOPI schedules and Ansor schedules?
---
This is a good question. It is possible with the current implementation,
because we use the Relay op strategy to define auto_scheduler tasks as well. In
other words, we use Relay FuseOps to define the task scope, and should be able
to choose either the TOPI (AutoTVM) or the auto_scheduler schedule for each task.
---
I agree it could be part of the PassContext, but perhaps not as a top-level
option like opt_level; rather as a sub-level attribute, like the other
attributes for loop unrolling.
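For example, something in the spirit of the existing loop-unrolling options (a sketch assuming the current `tir.UnrollLoop` config group; a compile-engine option would of course live under its own key):

```python
import tvm

# Sketch: PassContext options are grouped per component, e.g. the loop
# unrolling knobs live under "tir.UnrollLoop". A use_topi_schedule-style
# option could follow the same sub-level convention.
with tvm.transform.PassContext(
    opt_level=3,
    config={"tir.UnrollLoop": {"auto_max_step": 16}},
):
    print(tvm.transform.PassContext.current().config)
```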
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-a-general-task-extraction-mechanism-for-auto-scheduler/8444/13) to respond.
hi @cgerum, I have a prototype of P0
[here](https://github.com/areusch/incubator-tvm/tree/aot-experiment). it's not
ready to merge and I think we should move to the P1 approach before we do. Feel
free to take a look at it if you like.
Andrew
---
[Visit Topic](https://discuss.tvm.apache.o
In collaboration with @tqchen
See also: [PoC](https://github.com/apache/incubator-tvm/pull/6917)
## Overview
In RAM-limited deployment scenarios (i.e. µTVM), it's desirable to place as
much constant data as possible in a separate binary section and use it directly
from that section. To that
---
So you meant the use case would be like the following?
```python
with auto_scheduler.ApplyHistoryBest(log_file):
    with PassContext(opt_level=opt_level, config={"use_topi_schedule": False}):
        lib = relay.build(mod, target=target, params=params)
```
---
[Visit
Topic](https://discuss.
```python
with auto_scheduler.ApplyHistoryBest(log_file):
    with PassContext(opt_level=opt_level, config={
        "relay.CompileEngine": {"use_topi_schedule": False}
    }):
        lib = relay.build(mod, target=target, params=params)
```