[TVM Discuss] [Questions] BYOC: Runtime access Tensor values

2020-09-02 Thread Cody H. Yu via TVM Discuss
Here is an example of Zhi's response. The expression `data_entry_[eid]->data` accesses the values of the tensor. https://github.com/apache/incubator-tvm/blob/master/src/runtime/contrib/dnnl/dnnl_json_runtime.cc#L71 --- [Visit Topic](https://discuss.tvm.ai/t/byoc-runtime-access-tensor-valu

[TVM Discuss] [Questions] Why can't I save the file

2020-08-25 Thread Cody H. Yu via TVM Discuss
@zhiics it looks like a problem of a missing `SaveToFile` implementation. --- [Visit Topic](https://discuss.tvm.ai/t/why-cant-i-save-the-file/7718/2) to respond. You are receiving this because you enabled mailing list mode. To unsubscribe from these emails, [click here](https://discuss.tvm.a

[TVM Discuss] [Questions] [BYOC] Codegen, JSON, non-composite

2020-08-25 Thread Cody H. Yu via TVM Discuss
The decision to use a JSON (or any customized) runtime versus a C source module is based on whether the generated code is GCC-compilable. If you are going to generate assembly code for your ISA, and the assembly code needs to be compiled by your own compiler, then basing it on the JSON runtime would be a bet

[TVM Discuss] [Questions] How to define search space an bypass some of them

2020-08-23 Thread Cody H. Yu via TVM Discuss
We don’t support non-grid tuning spaces in AutoTVM. On the other hand, the new auto-scheduler that is being upstreamed will support symbolic tuning parameters with conditional expressions. --- [Visit Topic](https://discuss.tvm.ai/t/how-to-define-search-space-an-bypass-some-of-them/7699/2)

[TVM Discuss] [Questions] Pattern matching for TupleGetItem

2020-06-23 Thread Cody H. Yu via TVM Discuss
https://github.com/apache/incubator-tvm/pull/5909 --- [Visit Topic](https://discuss.tvm.ai/t/pattern-matching-for-tuplegetitem/7069/5) to respond.

[TVM Discuss] [Questions] Pattern matching for TupleGetItem

2020-06-23 Thread Cody H. Yu via TVM Discuss
Yeah. I would suggest enhancing the current one to `is_tuple_get_item(pat: DFPattern, index: Optional[int] = None)`. @t-vi Are you going to extend it? I might have some time tomorrow or later this week to do so if needed. --- [Visit Topic](https://discuss.tvm.ai/t/pattern-matching-for-tu

[TVM Discuss] [Questions] Pattern matching for TupleGetItem

2020-06-23 Thread Cody H. Yu via TVM Discuss
As you mentioned, that would require extending the `TupleGetItemPattern` to allow an unspecified index. Currently I can only come up with the following workaround:

```
op = is_op(...)
out = is_tuple_get_item(op, 0) | is_tuple_get_item(op, 1) | is_tuple_get_item(op, 2)
```

cc @mbrookhart --- [Vi

[TVM Discuss] [Questions] Is there a way to pass arguments to an external Codegen?

2020-06-23 Thread Cody H. Yu via TVM Discuss
Just FYI: while the RFC should be out this week as I mentioned, I am not sure how long it will take for the community to converge on the design and review the PR. I would guess it will take 2-3 weeks in total. --- [Visit Topic](https://discuss.tvm.ai/t/is-there-a-way-to-pass-arguments-to

[TVM Discuss] [Questions] Is there a way to pass arguments to an external Codegen?

2020-06-23 Thread Cody H. Yu via TVM Discuss
Thanks for the interest. Currently we don't have a way to pass additional arguments to the external codegen, but we do have a short-term plan to add this feature. The RFC should be out this week. In the meantime, a workaround I can think of is using environment variables. That is, s
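The environment-variable workaround can be sketched in plain Python; the `MY_CODEGEN_*` variable name and the helper function are hypothetical illustrations, not part of any TVM API:

```python
import os

# Hypothetical sketch: the codegen reads an option from an environment
# variable set by the user before compilation. The MY_CODEGEN_* naming
# scheme is illustrative only, not part of TVM.
def get_codegen_option(name, default):
    return os.environ.get("MY_CODEGEN_" + name.upper(), default)

# User side: export the option before invoking the build.
os.environ["MY_CODEGEN_VECTOR_WIDTH"] = "8"
print(get_codegen_option("vector_width", "4"))  # prints 8
```

The codegen then consults the same variable at compile time, falling back to a default when the user has not set it.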

[TVM Discuss] [Questions] Correct target string for x86 CPU

2020-06-22 Thread Cody H. Yu via TVM Discuss
It doesn't mean your target is wrong. It just means TopHub doesn't have a pre-tuned schedule config for your workload on Cascade Lake. You should use AutoTVM to tune the model yourself. --- [Visit Topic](https://discuss.tvm.ai/t/correct-target-string-for-x86-cpu/7056/4) to respond.

[TVM Discuss] [Questions] Correct target string for x86 CPU

2020-06-22 Thread Cody H. Yu via TVM Discuss
Should be `llvm -mcpu=cascadelake` I think? --- [Visit Topic](https://discuss.tvm.ai/t/correct-target-string-for-x86-cpu/7056/2) to respond.

[TVM Discuss] [Questions] Is there a way to serialize a tvm module into a string like the serialized engine in tensorrt?

2020-06-21 Thread Cody H. Yu via TVM Discuss
You can try `lib.save`? --- [Visit Topic](https://discuss.tvm.ai/t/is-there-a-way-to-serialize-a-tvm-module-into-a-string-like-the-serialized-engine-in-tensorrt/7042/2) to respond.

[TVM Discuss] [Questions] Same shape pattern

2020-06-18 Thread Cody H. Yu via TVM Discuss
I agree with @matt-arm. The `checked_type_` would be empty when a node is created, until `InferType` is run or a new function is added to the module. This means a node processed later may not get the type of its parents if the parents were replaced with new nodes without properly propagating

[TVM Discuss] [Questions] Same shape pattern

2020-06-17 Thread Cody H. Yu via TVM Discuss
Could you provide example graphs before and after the pattern matching and rewriting to better illustrate your requirements? --- [Visit Topic](https://discuss.tvm.ai/t/same-shape-pattern/7012/2) to respond.

[TVM Discuss] [Questions] Difference in execuion time of model calculated using time_evaluator and sum of each layer latency

2020-06-14 Thread Cody H. Yu via TVM Discuss
Are you talking about the debug graph runtime? If so, did you check whether there are duplicated lines in the log? If the info of one layer is too long, the debug graph runtime will split it into two lines to make the print pretty. However, the execution time of that layer will also be printed twice

[TVM Discuss] [Questions] [PattenLang]How to match op according to element type of input and output

2020-06-08 Thread Cody H. Yu via TVM Discuss
I personally like `ShapePattern` and `DTypePattern` more as they are more straightforward in the pattern language, but I'd also like to hear others' opinions. cc @zhiics @tqchen @masahi --- [Visit Topic](https://discuss.tvm.ai/t/pattenlang-how-to-match-op-according-to-element-type-of-inpu

[TVM Discuss] [Questions] [PattenLang]How to match op according to element type of input and output

2020-06-08 Thread Cody H. Yu via TVM Discuss
Makes sense. The ideal interface for this case would be leveraging `has_type` like other type matching. We may need new patterns like `AnyShape`, or support for `Wildcard` in tensor shapes (which seems much harder to me). --- [Visit Topic](https://discuss.tvm.ai/t/pattenlang-how-to-match-op-according-to-ele

[TVM Discuss] [Questions] Relay 'conv2d' layer performance after auto-tuning same as fallback

2020-06-04 Thread Cody H. Yu via TVM Discuss
There are some possibilities:

1. Try to use `pick_best` to identify the best config for each workload in a log file. AutoTVM will apply the best config across all tasks with the same workload. In other words, if you tune `direct` and `winograd` for the same conv2d workload and put them in the lo
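Conceptually, `pick_best` keeps the single fastest record per workload. A plain-Python sketch of that selection (the record format here is illustrative, not the real AutoTVM log schema):

```python
# Each record: (workload key, template, measured latency in seconds).
# Values are made up for illustration.
records = [
    ("conv2d_A", "direct",   0.012),
    ("conv2d_A", "winograd", 0.008),
    ("conv2d_B", "direct",   0.020),
]

# Keep only the fastest config per workload, as pick_best does conceptually.
best = {}
for workload, template, latency in records:
    if workload not in best or latency < best[workload][1]:
        best[workload] = (template, latency)

print(best["conv2d_A"])  # ('winograd', 0.008)
```

So when `direct` and `winograd` records for the same workload coexist in the log, only the faster one survives.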

[TVM Discuss] [Questions] Relay 'conv2d' layer performance after auto-tuning same as fallback

2020-06-04 Thread Cody H. Yu via TVM Discuss
You only tuned for 100 trials? If so, please try 3,000 or 4,000 trials. --- [Visit Topic](https://discuss.tvm.ai/t/relay-conv2d-layer-performance-after-auto-tuning-same-as-fallback/6888/2) to respond.

[TVM Discuss] [Questions] A problem for the optimized model

2020-06-03 Thread Cody H. Yu via TVM Discuss
Isn't `p0` the weight of conv2d and `p1` the bias? --- [Visit Topic](https://discuss.tvm.ai/t/a-problem-for-the-optimized-model/6878/2) to respond.

[TVM Discuss] [Questions] PyTorch to Relay IR

2020-06-01 Thread Cody H. Yu via TVM Discuss
It targets the Relay IR. https://docs.tvm.ai/tutorials/frontend/from_pytorch.html#sphx-glr-tutorials-frontend-from-pytorch-py --- [Visit Topic](https://discuss.tvm.ai/t/pytorch-to-relay-ir/6864/2) to respond.

[TVM Discuss] [Questions] [PattenLang]How to match op according to element type of input and output

2020-06-01 Thread Cody H. Yu via TVM Discuss
I see your point and it seems fair. In the current implementation, one solution I can think of is leveraging the `check` function:

```python
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import *

def check(pre):
    return (pre.args[0].checked_type.dtype == 'float32' and
```

[TVM Discuss] [Questions] [PattenLang]How to match op according to element type of input and output

2020-05-30 Thread Cody H. Yu via TVM Discuss
Of course. Shape is a part of the type. --- [Visit Topic](https://discuss.tvm.ai/t/pattenlang-how-to-match-op-according-to-element-type-of-input-and-output/6846/4) to respond.

[TVM Discuss] [Questions] [PattenLang]How to match op according to element type of input and output

2020-05-30 Thread Cody H. Yu via TVM Discuss
`has_type` is not limited to matching the final expression. It can also be used like the following to check the type of an op:

```python
in1 = wildcard()
in2 = wildcard()
pat = is_op('add')(in1, in2).has_type(relay.TensorType((10, 10), 'float32'))
x = relay.var('x', shape=(10, 10), dtype='float
```

[TVM Discuss] [Questions] How to do auto-tuning for a list of specific ops?

2020-05-28 Thread Cody H. Yu via TVM Discuss
Good questions in general.

1. This would be a problem if you have limited memory on the machine.
2. Currently AutoTVM doesn't have an official API to export tasks. At this moment, one solution I can think of is writing a script to extract the necessary task information and save it to a JSON file.
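Such a script could be as simple as dumping the fields you need to JSON and reading them back later. The task fields below are illustrative, not an official AutoTVM schema:

```python
import json
import os
import tempfile

# Hypothetical task descriptions: template name plus argument shapes.
tasks = [
    {"name": "conv2d_NCHWc.x86", "args": [[1, 3, 224, 224], [64, 3, 7, 7]]},
    {"name": "dense_nopack.x86", "args": [[1, 512], [1000, 512]]},
]

# Save to a JSON file, then load it back to recreate the task list.
path = os.path.join(tempfile.gettempdir(), "autotvm_tasks.json")
with open(path, "w") as f:
    json.dump(tasks, f)

with open(path) as f:
    restored = json.load(f)
print(restored == tasks)  # True
```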

[TVM Discuss] [Questions] How to do auto-tuning for a list of specific ops?

2020-05-27 Thread Cody H. Yu via TVM Discuss
The official solution to this question is leveraging `extract_from_multiple_programs`. You can give that API multiple models at once, and it will return a list of unique tasks. By tuning the list of tasks, your log file can be used for all models you provided. --- [Visit Topic](https://d

[TVM Discuss] [Questions] Traversing Relay Graph order (source to sink, sink to source)

2020-05-20 Thread Cody H. Yu via TVM Discuss
Both orders can be implemented using `post_order_visit`.

Postorder: in this case the first node that runs "do something" would be the source node in a DAG.

```python
def visit(self, node):
    super().visit(node)
    # do something
```

Preorder: in this case the first node that processes "d
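The difference between the two orders can be illustrated with a plain-Python DAG walk (no TVM dependency; the tuple-based node representation is illustrative only):

```python
# Each node is (name, [children]); "do something" here appends the name.
def post_order(node, out):
    for child in node[1]:
        post_order(child, out)
    out.append(node[0])  # process after children: source first

def pre_order(node, out):
    out.append(node[0])  # process before children: sink first
    for child in node[1]:
        pre_order(child, out)

# A tiny chain: x -> add -> relu (relu is the sink).
x = ("x", [])
add = ("add", [x])
relu = ("relu", [add])

post, pre = [], []
post_order(relu, post)
pre_order(relu, pre)
print(post)  # ['x', 'add', 'relu']
print(pre)   # ['relu', 'add', 'x']
```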

[TVM Discuss] [Questions] Example of JSON runtime for BYOC

2020-05-14 Thread Cody H. Yu via TVM Discuss
We will be working on it based on this RFC: https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579 --- [Visit Topic](https://discuss.tvm.ai/t/example-of-json-runtime-for-byoc/6670/2) to respond.

[TVM Discuss] [Questions] How to reset Relay to use the default config?

2020-05-11 Thread Cody H. Yu via TVM Discuss
This is a tricky question. Theoretically you only need to build the module again without `apply_history_best` and measure its latency. However, if you build the same module twice, the compile engine will reuse the previously built programs to reduce the build time the second time. This can be

[TVM Discuss] [Questions] How to use autotuing after runing the tutorials?

2020-05-08 Thread Cody H. Yu via TVM Discuss
Could you post your script? It's hard to locate the problem from your screenshots. In addition, the tuning space size for your resnet-18 looks weird to me. For example, Task 1 only has 400 candidates. What are your input shapes? --- [Visit Topic](https://discuss.tvm.ai/t/how-to-us

[TVM Discuss] [Questions] How to use autotuing after runing the tutorials?

2020-05-07 Thread Cody H. Yu via TVM Discuss
Did you uncomment the `tune_and_evaluate` function call in the tutorial? --- [Visit Topic](https://discuss.tvm.ai/t/how-to-use-autotuing-after-runing-the-tutorials/6620/2) to respond.

[TVM Discuss] [Questions] Autotvm.task_extract_from_program in TFLite

2020-05-05 Thread Cody H. Yu via TVM Discuss
I'm not familiar with the QNN module, so I'm calling @anijain2305 for help. I would suggest opening another topic with a proper title for a new problem next time; otherwise it is easy to miss. --- [Visit Topic](https://discuss.tvm.ai/t/autotvm-task-extract-from-program-in-tflite/6578/8

[TVM Discuss] [Questions] Autotvm.task_extract_from_program in TFLite

2020-05-04 Thread Cody H. Yu via TVM Discuss
So your model is already in NCHW layout? From the log it seems the model is still in NHWC. You can see that both the selected implementation (e.g., `conv2d_nhwc.x86`) and the warning (e.g., NHWC layout is not optimized for x86) are about the NHWC layout. You may need to check if the layout conv

[TVM Discuss] [Questions] Autotvm.task_extract_from_program in TFLite

2020-05-04 Thread Cody H. Yu via TVM Discuss
It looks like your model is in NHWC layout, but TVM currently supports NCHW layout better, and AFAIK TVM doesn't have a tunable template for NHWC layout on x86. You may need to use the `ConvertLayout` pass to transform your model to NCHW layout and then extract tasks. --- [Visit Topic](https://discuss

[TVM Discuss] [Questions] [BYOC] Problem about subgraph with TupleTypeNode inputs

2020-04-29 Thread Cody H. Yu via TVM Discuss
@matt-arm the PR you posted solved the issue of tuple constant propagation, but it does not seem to solve the tuple var node issue. In this particular case, for example, we will still have a tuple of data in the first argument. --- [Visit Topic](https://discuss.tvm.ai/t/byoc-problem-about-subg

[TVM Discuss] [Questions] [BYOC] Problem about subgraph with TupleTypeNode inputs

2020-04-28 Thread Cody H. Yu via TVM Discuss
At this moment we suggest that your codegen flatten the tuple here: https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L141 In addition, your codegen can retrieve the tuple information when processing concatenate nodes by
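The flattening itself amounts to expanding each tuple input into its individual fields; a plain-Python sketch under that assumption (names are illustrative, not the DNNL codegen API):

```python
# Expand tuple arguments into individual entries, in order, so the
# generated function signature only sees flat tensors.
def flatten_args(args):
    flat = []
    for arg in args:
        if isinstance(arg, tuple):
            flat.extend(arg)  # a tuple input contributes one entry per field
        else:
            flat.append(arg)
    return flat

# A tuple of three tensors plus a plain weight tensor:
print(flatten_args([("t0", "t1", "t2"), "w"]))  # ['t0', 't1', 't2', 'w']
```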

[TVM Discuss] [Questions] Autotvm error: cudaErrorCudartUnloading: initialization error' error_no=2

2020-04-26 Thread Cody H. Yu via TVM Discuss
That's a good finding. Maybe we can pass this parameter via `LocalBuilder` (https://github.com/apache/incubator-tvm/blob/96873076ebe895f967a04886cbcc95d209ee3980/python/tvm/autotvm/measure/measure_methods.py#L93). @merrymercy do you have any suggestions? --- [Visit Topic](https://discuss.

[TVM Discuss] [Questions] Autotvm error: cudaErrorCudartUnloading: initialization error' error_no=2

2020-04-24 Thread Cody H. Yu via TVM Discuss
I think this has nothing to do with AutoTVM yet. Did you try to build the model directly without running AutoTVM? If you encounter the same error without running AutoTVM, then it means your change results in errors in the generated CUDA code. If you didn't encounter the error without running Au

[TVM Discuss] [Questions] Relationship between json and TVM Runtime: How operators are selected for execution

2020-04-24 Thread Cody H. Yu via TVM Discuss
From my understanding, all operators are executed sequentially. --- [Visit Topic](https://discuss.tvm.ai/t/relationship-between-json-and-tvm-runtime-how-operators-are-selected-for-execution/6477/2) to respond.

[TVM Discuss] [Questions] Question about conv2d x86 schedule template

2020-04-24 Thread Cody H. Yu via TVM Discuss
Because the kernel includes the channels of not only the input but also the output. --- [Visit Topic](https://discuss.tvm.ai/t/question-about-conv2d-x86-schedule-template/6436/8) to respond.

[TVM Discuss] [Questions] Question about conv2d x86 schedule template

2020-04-20 Thread Cody H. Yu via TVM Discuss
It's true, but it should be fine in my opinion, because the search space for x86 workloads is still acceptable in most cases. --- [Visit Topic](https://discuss.tvm.ai/t/question-about-conv2d-x86-schedule-template/6436/6) to respond.

[TVM Discuss] [Questions] Question about conv2d x86 schedule template

2020-04-20 Thread Cody H. Yu via TVM Discuss
You can search `tile_ow` in the Github repo for the use cases. For example: https://github.com/apache/incubator-tvm/blob/0cfdecdae99582998dae5c2c3fdfd7a2700f10c0/topi/python/topi/x86/conv2d_avx_1x1.py#L64 --- [Visit Topic](https://discuss.tvm.ai/t/question-about-conv2d-x86-schedule-templat

[TVM Discuss] [Questions] Can TVM now support batched inference? Autotvm runs twice as long as tensorflow

2020-04-20 Thread Cody H. Yu via TVM Discuss
I am not sure if the graph tuner is still applicable when cBLAS is used. Maybe @kevinthesun could provide more details about it. --- [Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-inference-autotvm-runs-twice-as-long-as-tensorflow/6405/7) to respond.

[TVM Discuss] [Questions] Can TVM now support batched inference? Autotvm runs twice as long as tensorflow

2020-04-17 Thread Cody H. Yu via TVM Discuss
Dense is another issue, though. In this case you have to tune the model with batch size 500. Did you try the graph tuner after tuning each op? Another option is enabling cBLAS for dense ops by setting `target=llvm -libs=cblas`. --- [Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-

[TVM Discuss] [Questions] Can TVM now support batched inference? Autotvm runs twice as long as tensorflow

2020-04-16 Thread Cody H. Yu via TVM Discuss
You can try using batch size 1 for tuning and 500 for inference. The time should be just around (batch size) * (single-batch inference time). Current TVM NCHW/NHWC conv2d does not tune the batch size, but some work is ongoing. --- [Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-bat

[TVM Discuss] [Questions] Meaning of first numbers in auto-tuning logs

2020-04-16 Thread Cody H. Yu via TVM Discuss
Latency is the execution time of that op during tuning. However, it may not directly reflect the end-to-end model execution time, so we usually only use it to compare configs. --- [Visit Topic](https://discuss.tvm.ai/t/meaning-of-first-numbers-in-auto-tuning-logs/6399/6) to respond.

[TVM Discuss] [Questions] Meaning of first numbers in auto-tuning logs

2020-04-16 Thread Cody H. Yu via TVM Discuss
Selection logic is not shown in the log. When calling `ApplyHistoryBest`, it will select the log entry with the minimal latency and error_no=0. --- [Visit Topic](https://discuss.tvm.ai/t/meaning-of-first-numbers-in-auto-tuning-logs/6399/4) to respond.

[TVM Discuss] [Questions] Meaning of first numbers in auto-tuning logs

2020-04-16 Thread Cody H. Yu via TVM Discuss
`[[latency in seconds], error_no, all_cost, timestamp]`. If you set `repeat` > 1 in the tuning option, you will have more than one latency in the first field (the largest and smallest latencies will be treated as outliers and removed when `repeat` > 3). p.s. This is the old format. Please us
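Reading such records and picking the best one (minimal latency with error_no=0, as described for `ApplyHistoryBest` above) can be sketched in plain Python; the sample values are made up:

```python
# Old-format result tuples: [[latencies in seconds], error_no, all_cost, timestamp].
results = [
    [[0.012, 0.011], 0, 1.5, 1587000000.0],
    [[0.009],        2, 1.2, 1587000100.0],  # error_no != 0: skipped
    [[0.010, 0.010], 0, 1.4, 1587000200.0],
]

def best_result(results):
    # Keep error-free records only, then rank by mean latency.
    valid = [r for r in results if r[1] == 0]
    return min(valid, key=lambda r: sum(r[0]) / len(r[0]))

print(best_result(results)[3])  # 1587000200.0
```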

[TVM Discuss] [Questions] Auto-tuned convolution achieving higher than V100's theoretical peak single-precision FLOPS

2020-04-10 Thread Cody H. Yu via TVM Discuss
You may want to remeasure the throughput using the best config. We've observed that AutoTVM measurement can be inaccurate for various reasons. For example, [this recent PR](https://github.com/apache/incubator-tvm/pull/5200) fixed one for Intel, but there must be others. --- [Visit Top

[TVM Discuss] [Questions] How to use AutoTVM with manually created TOPI computations?

2020-04-06 Thread Cody H. Yu via TVM Discuss
Not exactly. The task name corresponds to a pair of compute and schedule functions. When creating a task (or even a TVM program), we always create a compute first, and create a schedule accordingly, so does AutoTVM. --- [Visit Topic](https://discuss.tvm.ai/t/how-to-use-autotvm-with-manual

[TVM Discuss] [Questions] How to use AutoTVM with manually created TOPI computations?

2020-04-06 Thread Cody H. Yu via TVM Discuss
You are filling the compute, so the compute function at line 30 is what you should look at. --- [Visit Topic](https://discuss.tvm.ai/t/how-to-use-autotvm-with-manually-created-topi-computations/4895/9) to respond.

[TVM Discuss] [Questions] How to use AutoTVM with manually created TOPI computations?

2020-04-06 Thread Cody H. Yu via TVM Discuss
As the error message suggests, you need to specify all required arguments. You are still missing 4 of them. --- [Visit Topic](https://discuss.tvm.ai/t/how-to-use-autotvm-with-manually-created-topi-computations/4895/7) to respond.

[TVM Discuss] [Questions] How to use AutoTVM with manually created TOPI computations?

2020-04-06 Thread Cody H. Yu via TVM Discuss
You don't need to provide `cfg` in arguments. The first `cfg` argument in the template is handled by the template registration decorator and will be provided by the current context. --- [Visit Topic](https://discuss.tvm.ai/t/how-to-use-autotvm-with-manually-created-topi-computations/4895/

[TVM Discuss] [Questions] [RUNTIME] Can tvm call TensorRT as third-party runtime engine?

2020-04-02 Thread Cody H. Yu via TVM Discuss
TRT integration is now working but only on the [AWS forked repo](https://github.com/neo-ai/tvm). @trevor-m is working hard to make it upstream. Since the external codegen infra still requires some improvements, it might take some more time. --- [Visit Topic](https://discuss.tvm.ai/t/runt

[TVM Discuss] [Questions] [Solved] Relationship between strategy/compute/schedule?

2020-04-01 Thread Cody H. Yu via TVM Discuss
Yes, that's the one for your question. Would you mind changing the title to include [Solved] if the document addresses your question? --- [Visit Topic](https://discuss.tvm.ai/t/solved-relationship-between-strategy-compute-schedule/6175/3) to respond.

[TVM Discuss] [Questions] [CI][LINT] Enabling clang-format based lint checks

2020-04-01 Thread Cody H. Yu via TVM Discuss
@tqchen Would it be possible for you to run clang-format over the entire code base so that we can add a checker to CI? If we have concerns about correctness issues that clang-format could potentially introduce, we might be able to assign a few people to do so? --- [Visit Topic](https://discus

[TVM Discuss] [Questions] External codegen with CUDA target

2020-03-31 Thread Cody H. Yu via TVM Discuss
Ah, I see. One reason might be an empty host module in this case. I'd call out @trevor-m since he has experience offloading subgraphs to TRT while keeping the rest on CUDA. --- [Visit Topic](https://discuss.tvm.ai/t/external-codegen-with-cuda-target/6159/4) to respond.

[TVM Discuss] [Questions] External codegen with CUDA target

2020-03-31 Thread Cody H. Yu via TVM Discuss
No, that's a different flow. TVM already has cuBLAS and cuDNN support ([example](https://github.com/apache/incubator-tvm/blob/master/python/tvm/contrib/cudnn.py)). If you set the target with `-libs`, it uses the TVM builtin one instead of your codegen. To use your codegen, now you

[TVM Discuss] [Questions] [External CodeGen] Status of Annotating composite functions?

2020-03-30 Thread Cody H. Yu via TVM Discuss
You should be able to use the op-based annotation now. The PR we merged today provides a pass to combine supported ops in a single subgraph: https://github.com/apache/incubator-tvm/pull/5134 --- [Visit Topic](https://discuss.tvm.ai/t/external-codegen-status-of-annotating-composite-functio

[TVM Discuss] [Questions] [AutoTVM] Find the default optimization configuration of the kernel

2020-03-27 Thread Cody H. Yu via TVM Discuss
> Is *fallback_with_reference_log* the one I can use? I'm not sure since it does not return any configuration. (Seems like it applies directly to the kernel?)

Yes, this is the one. It doesn't return anything because it is a method of a config entity. What it does is copy the config map from oth

[TVM Discuss] [Questions] [AutoTVM] Find the default optimization configuration of the kernel

2020-03-27 Thread Cody H. Yu via TVM Discuss
It depends on the schedule function. For example:

1. Directly define a default configuration: https://github.com/apache/incubator-tvm/blob/master/topi/python/topi/x86/conv2d.py#L36
2. Load the best configurations of similar workloads and make one: https://github.com/apache/incubator-tvm/blo

[TVM Discuss] [Questions] [AutoTVM] Selective tuning of hotspots

2020-03-25 Thread Cody H. Yu via TVM Discuss
Well... if two or more convs have the same shape (both input and weight), they will map to the same tuning task. The tricky part is that it's not straightforward to see the weight shape from the debug runtime log. --- [Visit Topic](https://discuss.tvm.ai/t/autotvm-selective-tuning-of-ho

[TVM Discuss] [Questions] [External Codegen] Constant tensors in c-codegen

2020-03-25 Thread Cody H. Yu via TVM Discuss
The larger constant tensor issue was also considered before, but since we have no idea how developers will deal with constant tensors, we leave this part up to the developers. As a result, you can do anything you think is better, including writing them out to a separate file. --- [

[TVM Discuss] [Questions] Task create function template_key keywork

2020-03-25 Thread Cody H. Yu via TVM Discuss
`template_key` is deprecated. In 0.7.dev1, you don't have to specify the template for a task. Instead, task name implies the template it uses. When extracting tasks from a model, Relay op strategy will generate multiple tasks for different templates. For example, one conv2d op in a model may r

[TVM Discuss] [Questions] [AutoTVM] Selective tuning of hotspots

2020-03-25 Thread Cody H. Yu via TVM Discuss
It's a bit tricky. For now you can only match op type and shape. --- [Visit Topic](https://discuss.tvm.ai/t/autotvm-selective-tuning-of-hotspots/6083/2) to respond.

[TVM Discuss] [Questions] [External Codegen] Constant tensors in c-codegen

2020-03-24 Thread Cody H. Yu via TVM Discuss
No, I think you could just assign the values one by one. One optimization you could consider is to make sure you assign values only on the first visit. --- [Visit Topic](https://discuss.tvm.ai/t/external-codegen-constant-tensors-in-c-codegen/5890/9) to respond.