[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread Cody H. Yu via Apache TVM Discuss
We haven’t planned that far yet: currently we lower a Relay function to a TE compute, which relies on the Relay op strategy to map Relay ops to TOPI computes. I’m not familiar with custom Relay ops, but it would be great if you have any suggestions that could make this RFC potentially work for custom ops.
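(For context, a sketch of the extraction entry point this RFC is heading toward; the name `extract_tasks` and its signature follow the proposal in this thread rather than a released API, so treat them as illustrative. The tiny Relay module is only there to make the sketch self-contained.)

```python
import tvm
from tvm import relay, auto_scheduler

# Tiny Relay module so the sketch stands alone.
data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(16, 3, 3, 3))
conv = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# Proposed-style extraction: lower each fused Relay function to a TE
# compute (via the op strategy) and collect one task per unique compute.
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], {}, "llvm")
for task in tasks:
    print(task.compute_dag)
```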

[Apache TVM Discuss] [Development] Conflict with XGBoost when Thrust is enabled

2020-11-13 Thread nolan via Apache TVM Discuss
I met the same problem when enabling `USE_THRUST=ON`, without using `xgboost`. My environment: CUDA 10.1, Thrust 1.9.5. --- [Visit Topic](https://discuss.tvm.apache.org/t/conflict-with-xgboost-when-thrust-is-enabled/6889/5) to respond.

[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-13 Thread Olivier Valery via Apache TVM Discuss
@tkonolige Thanks a lot for your help. Regarding `tvm.lower(s, args)`, you can find the generated code below. Before tuning, I got:

```
#[version = "0.0.5"]
primfn(A_1: handle, W_1: handle, output_unpack_1: handle) -> ()
  attr = {"global_symbol": "main", "tir.noalias": True}
  buffer
```
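(For anyone following along, a minimal self-contained sketch of the inspection step being discussed here; the trivial schedule below stands in for the actual conv3d workload, which is not part of this excerpt.)

```python
import tvm
from tvm import te

# Stand-in workload: any schedule works for demonstrating the step.
A = te.placeholder((1, 16, 8, 32, 32), name="A")
B = te.compute(A.shape, lambda *i: A(*i) * 2, name="B")
s = te.create_schedule(B.op)

# Dump the lowered TIR for the schedule, as in the post above.
print(tvm.lower(s, [A, B], simple_mode=True))
```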

[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-13 Thread Olivier Valery via Apache TVM Discuss
I also wrote a minimal example to reproduce the problem.

```python
"""Test for NCHW[x]c convolution"""
import numpy as np
import tvm
from tvm import te
from tvm import autotvm
from tvm import topi
import tvm.testing
import tvm.topi.testing
from tvm.contrib.pickle_memoize import memoize
from tvm.top
```
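(The example above is cut off by the digest; purely as an illustration of the kind of minimal NCDHW conv3d test it describes, here is a self-contained sketch built on the generic TOPI compute. All shapes are made up for the sketch.)

```python
import tvm
from tvm import te, topi

# Hypothetical shapes, chosen only to make the sketch run.
A = te.placeholder((1, 16, 8, 32, 32), name="A")   # NCDHW input
W = te.placeholder((32, 16, 3, 3, 3), name="W")    # OIDHW kernel
C = topi.nn.conv3d_ncdhw(A, W, stride=1, padding=1, dilation=1)

# Default schedule, just to lower and inspect the generated TIR.
s = te.create_schedule(C.op)
print(tvm.lower(s, [A, W, C], simple_mode=True))
```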

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread Tristan Konolige via Apache TVM Discuss
I'm not super familiar with autotvm and auto scheduling, but I've got a couple of questions:

1. What is the interaction between the auto scheduler and autotvm going forward? Will we be unifying the user API for autotvm and auto scheduling? Can you mix auto scheduling and autotvm?
2. Why is the `GraphR

[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-13 Thread Tristan Konolige via Apache TVM Discuss
I believe this line is the issue as it occurs before `threadIdx.z` is defined. [quote="OValery16, post:6, topic:8338"] `allocate(compute, int32, [(((floordiv(((threadIdx.z: int32*2) + 1), 4)*32) + 32) - (floordiv(threadIdx.z, 2)*32))]);` [/quote] However, I cannot reproduce this issue with the

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread Cody H. Yu via Apache TVM Discuss
1. We haven't figured out the plan yet, but mixing them is definitely the trend.
2. To keep task extraction and schedule application aligned, we follow the same flow as building the model to extract tasks. Both AutoTVM and auto_scheduler take this approach.
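(For comparison with point 2, AutoTVM's existing extraction entry point also compiles the model and records the tasks it encounters along the way; the tiny Relay module below is only there to make the sketch stand alone.)

```python
import tvm
from tvm import autotvm, relay

# Tiny Relay module standing in for a real model.
data = relay.var("data", shape=(1, 3, 32, 32))
weight = relay.var("weight", shape=(8, 3, 3, 3))
body = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], body))

# Extraction builds the model and collects tunable tasks as it goes;
# the RFC gives auto_scheduler the same flow.
tasks = autotvm.task.extract_from_program(mod["main"], params={}, target="llvm")
print(tasks)
```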

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread Haichen Shen via Apache TVM Discuss
I have one question about `use_topi_schedule`. I assume that once we set it to False, it will always use the Ansor scheduler to schedule the ops. Will there be a case where we want to have a mix of TOPI schedules and Ansor schedules?

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread Cody H. Yu via Apache TVM Discuss
This is a good question. This is possible with the current implementation, because we use the Relay op strategy to define auto_scheduler tasks as well. In other words, we use Relay FuseOps to define the task scope, and we should be able to choose either a TOPI (AutoTVM) or an auto_scheduler schedule for each task.
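(A purely hypothetical sketch of the idea in the reply above; none of the names below are real TVM APIs. It only illustrates how, once FuseOps defines the task scope, each fused function could pick its schedule source independently.)

```python
# Hypothetical illustration only: pick_backend is not a TVM API.
def pick_backend(num_ops_in_fused_func: int) -> str:
    # For example, send large fused functions to auto_scheduler and
    # keep small ones on hand-written TOPI (AutoTVM) schedules.
    return "auto_scheduler" if num_ops_in_fused_func > 3 else "topi"

# Stand-ins for fused functions of varying size.
for size in [1, 2, 5, 8]:
    print(size, "->", pick_backend(size))
```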

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread tqchen via Apache TVM Discuss
I agree it could be part of the PassContext, but perhaps not at the top level like opt_level; rather as a sub-level attribute, like the existing attributes for loop unrolling. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-a-general-task-extraction-mechanism-for-auto-scheduler/8444/13)
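(For reference, a sketch of the precedent alluded to here, assuming the `tir.UnrollLoop` config key available around this time: the loop-unrolling knobs already live under their own scoped key in `PassContext` rather than at the top level.)

```python
from tvm import transform

# Loop-unrolling attributes sit under a scoped config key, not at the
# top level of PassContext; the RFC's flag could follow the same shape.
with transform.PassContext(
    opt_level=3,
    config={"tir.UnrollLoop": {"auto_max_step": 16, "explicit_unroll": True}},
):
    # relay.build(...) or tvm.lower(...) run here would pick up the
    # scoped configuration.
    pass
```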

[Apache TVM Discuss] [Development/RFC] [Guideline] Relay AOT

2020-11-13 Thread Andrew Reusch via Apache TVM Discuss
Hi @cgerum, I have a prototype of P0 [here](https://github.com/areusch/incubator-tvm/tree/aot-experiment). It's not ready to merge, and I think we should move to the P1 approach before we do. Feel free to take a look at it if you like. Andrew

[Apache TVM Discuss] [Development/RFC] [RFC] Linked Parameters for CPU Targets

2020-11-13 Thread Andrew Reusch via Apache TVM Discuss
In collaboration with @tqchen. See also: [PoC](https://github.com/apache/incubator-tvm/pull/6917)

## Overview

In RAM-limited deployment scenarios (e.g. µTVM), it's desirable to place as much constant data as possible in a separate binary section and use it directly from that section. To that
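(For contrast with the overview, a sketch of the flow this would replace, where parameters ship as a separate blob and are deserialized into RAM at startup. The `--link-params` target flag is taken from the PoC linked above and should be treated as an assumption, not a settled interface.)

```python
import numpy as np
import tvm
from tvm import relay

# Tiny model standing in for the real one.
x = relay.var("x", shape=(1, 8))
w = relay.var("w", shape=(8, 8))
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))
params = {"w": tvm.nd.array(np.zeros((8, 8), dtype="float32"))}

# Assumed usage per the PoC: the flag asks codegen to emit the constant
# params into the generated module itself, instead of shipping them as
# a separate blob that must be copied into RAM at startup.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm --link-params", params=params)
```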

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread Cody H. Yu via Apache TVM Discuss
So you meant the use case would be like the following?

```python
with auto_scheduler.ApplyHistoryBest(log_file):
    with PassContext(opt_level=opt_level, config={"use_topi_schedule": False}):
        lib = relay.build(mod, target=target, params=params)
```

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread tqchen via Apache TVM Discuss
```python
with auto_scheduler.ApplyHistoryBest(log_file):
    with PassContext(
        opt_level=opt_level,
        config={"relay.CompileEngine": {"use_topi_schedule": False}},
    ):
        lib = relay.build(mod, target=target, params=params)
```