Thanks for the discussion. Here are my thoughts.

### API Usage
The API for tuning a whole neural network will be the same as in autotvm
(extract tasks and tune all of them).
The API for writing templates is still under development, but it will be
similar to autotvm's.
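
The "extract tasks and tune all of them" flow can be sketched as follows. This is a hypothetical illustration only: the names (`extract_tasks`, `tune_task`, `n_trials`) are placeholders, not the final Ansor API, and stub functions stand in for the real search.

```python
# Placeholder sketch of the autotvm-style workflow: extract one tunable
# task per operator, then tune each task for a fixed trial budget.

def extract_tasks(mod):
    """Stub: pull one tunable task out of the network per operator."""
    return [{"name": op} for op in mod["ops"]]

def tune_task(task, n_trials):
    """Stub: pretend to search n_trials schedules and return a log entry."""
    return {"task": task["name"], "trials": n_trials}

mod = {"ops": ["conv2d", "dense", "softmax"]}      # a fake network
tasks = extract_tasks(mod)
logs = [tune_task(t, n_trials=64) for t in tasks]  # tune every extracted task
```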

### Performance in absolute time
We didn't run on c5.9xlarge. On our test machine (a 20-core Cascade Lake CPU),
we get around a 10% improvement on ResNet-50, which corresponds to roughly a
0.5 ms speedup.

### Dense schedule
Ansor significantly outperforms AutoTVM on dense and can match MKLDNN, so
this may not be a big issue. Combining MKLDNN and TVM is orthogonal to this
RFC.

### Quantized models
@FrozenGene got promising results on ARM CPU, but we expect more work is
needed on tensorization.

### Replacing AutoTVM
Currently, I am confident that Ansor can replace all fp32 AutoTVM templates.
I agree that the current AutoTVM serves as a handy tool for manual exploration,
and we should not deprecate that functionality. We should first support easy
manual customization in Ansor, and only then replace AutoTVM.

### Code generation without tuning
This is on my to-do list. We have a better, unified cost model (one model
for all operators), so we should be able to get some results in this direction.
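
To illustrate what "one model for all operators" means: every candidate schedule, whatever its operator, is mapped into a shared feature vector, so a single regressor can rank all of them. The features and weights below are invented for illustration and are not Ansor's actual featurization.

```python
# Toy unified cost model: one linear regressor over features that any
# loop nest exposes, shared across conv2d, dense, and every other op.

def featurize(schedule):
    # Hypothetical features common to any loop nest.
    return [schedule["flops"], schedule["mem_bytes"], schedule["parallelism"]]

def predict_cost(weights, schedule):
    # A single model shared across operators.
    return sum(w * f for w, f in zip(weights, featurize(schedule)))

weights = [1e-9, 2e-9, -0.01]  # made-up coefficients
conv  = {"flops": 2e9, "mem_bytes": 1e8, "parallelism": 16}
dense = {"flops": 5e8, "mem_bytes": 4e7, "parallelism": 16}
```

With one model, candidates from different operators become directly comparable, which is what enables code generation without per-operator tuning.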

### Hybrid Script
This is not supported and would not be easy to support: Ansor only accepts
tvm.compute as input.

### New backend
This requires modifications to the search space. Ansor supports search-space
customization by letting users register composable rules, so the framework is
general across different backends.
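
The registration mechanism can be sketched like this. This is a hypothetical illustration of the composable-rule idea, not Ansor's actual API: the rule names and the list-based "state" are made up.

```python
# Placeholder sketch of search-space customization through registered,
# composable rules: each rule maps a candidate state to a new state, and
# the search space is whatever the registered rules can generate together.

RULES = []

def register_rule(fn):
    """Register a rule; a new backend can add its own rules the same way."""
    RULES.append(fn)
    return fn

@register_rule
def multi_level_tile(state):
    return state + ["tile"]

@register_rule
def vectorize_inner(state):
    return state + ["vectorize"]

def expand(state):
    # Apply every registered rule to derive the next candidate states.
    return [rule(list(state)) for rule in RULES]
```

Because the search itself only sees the rule list, porting to a new backend amounts to swapping in backend-appropriate rules rather than changing the framework.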





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/12) to respond.
