Re: [dmlc/tvm] [VOTE] Apache Transition Plan (#2973)

2019-04-06 Thread xqdan
+1

-- 
You are receiving this because you commented.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/2973#issuecomment-480507855

Re: [dmlc/tvm] [VOTE] Add "Organizations using and contributing to TVM" Section to Community Webpage (#4162)

2019-10-24 Thread xqdan
+1

-- 
You are receiving this because you commented.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/4162#issuecomment-546161435

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-10-27 Thread xqdan
When will we have the 0.6 release? Thanks.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/2623#issuecomment-546780527

Re: [dmlc/tvm] [RFC][DEV] TVM Project Repo Migration (#4212)

2019-10-31 Thread xqdan
Are we going to release 0.6 in the new repo? @tqchen 

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/4212#issuecomment-548286904

Re: [dmlc/tvm] [RFC][DEV] TVM Project Repo Migration (#4212)

2019-10-31 Thread xqdan
@tqchen Thanks. Both are OK for us, as long as we can get a release in one or two months. Is that possible?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/4212#issuecomment-548680219

Re: [apache/incubator-tvm] [VOTE] Release Apache TVM (incubating) v0.6.0.rc2 (#4443)

2019-12-03 Thread xqdan
+1

-- 
You are receiving this because you commented.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-tvm/issues/4443#issuecomment-561196336

Re: [apache/incubator-tvm] [RFC] Data-flow Analysis Functionality on TVM IR (#4468)

2019-12-09 Thread xqdan
@tqchen, what's your suggestion? IMO, the low-level IR has been there for a while, and we already have experience with and understanding of it. The unified IR post, to me, is just a high-level proposal; the details still need to be discussed further. For example:

The most valuable thing to me is that a unified IR lets us make ops white boxes, that is to say, we can analyze inside the ops. This is totally different from the current approach, where ops are black boxes in the graph framework or in a separate IR. With white-box ops, we don't need to care about an op's name or the formula it implements; we can optimize ops in a general way. See what XLA is doing.
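
To make the white-box point concrete, here is a minimal sketch with the te API (my own illustration, not from the unified IR post; "my_vendor_op" is a hypothetical external kernel):

```
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")

# White box: the body A[i] + 1.0 is a visible expression tree,
# so a pass can reason about and rewrite the computation itself.
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

# Black box: only the packed-call boundary is visible; nothing inside
# the (hypothetical) external kernel can be analyzed.
C = te.extern(
    (n,), [A],
    lambda ins, outs: tvm.tir.call_packed("my_vendor_op", ins[0], outs[0]),
    name="C", dtype="float32")
```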

The abstraction level of the high-level IR matters. You don't want to lower op bodies too far, from tensor expressions into nested loops, since analyzing IR with that much context is a big cost.

So I suggest we keep moving this work forward on this thread while we discuss how to reuse the solution.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-tvm/issues/4468#issuecomment-563318743

Re: [apache/tvm-rfcs] Additional Target Hooks RFC (#10)

2021-08-24 Thread xqdan
This is a great discussion. Actually, we are supporting a DSA with TVM; let me share our practice.

1. We reuse only some of the TVM Relay/TIR passes, fewer than 10, such as StorageFlatten. We don't need most of the TVM passes; keeping them in our flow just wastes compilation time.
2. We develop our own passes and enhance TVM passes for our target, such as StorageRewrite.
3. We developed a hybrid IR on top of TIR, in which we can do unified memory allocation and schedule instructions across operators.

So we are excited to see TVM support these in the mainline.

What we would like to have:
1. A customizable compilation flow for both the Relay and the TIR flow (see the sketch after this list).
2. A unified IR that covers both the graph and TIR, supporting inter-/intra-pass development.
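
As a rough sketch of point 1 (my own illustration, not the mechanism this RFC proposes), the existing pass infra already lets us assemble a trimmed, target-specific Relay pipeline:

```
import tvm
from tvm import relay

# A custom, trimmed Relay pipeline: run only the handful of passes our
# DSA flow actually needs, instead of the full default lowering flow.
custom_flow = tvm.transform.Sequential([
    relay.transform.InferType(),
    relay.transform.FoldConstant(),
    relay.transform.FuseOps(fuse_opt_level=0),
])

x = relay.var("x", shape=(8,), dtype="float32")
f = relay.Function([x], relay.add(x, relay.const(1.0, "float32")))
mod = tvm.IRModule.from_expr(f)

with tvm.transform.PassContext(opt_level=3):
    mod = custom_flow(mod)
print(mod)
```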


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm-rfcs/pull/10#issuecomment-904635448

Re: [apache/tvm-rfcs] [RFC] TVMScript Metaprogramming (PR #79)

2022-07-11 Thread xqdan
@yelite It's a great RFC, and this is what we need right now.
The requirements we have:
1) Compute fusion. With TE compute, it's easy to concatenate TE computes along producer-consumer relations to get a fused compute, for example conv + elementwise op fusion (see the first sketch after this list). We should have a similar capability in TVMScript. Which thread is related to this requirement?
2) Conditional lowering. We may have attributes at the graph/Relay level which further decide how to lower into different TIR. With the old IR builder / TE compute, we can do that (see the second sketch below). F4 in this RFC will cover this, correct?
3) Reducing boilerplate code. F3 is a good idea. Another one: define a TIR function (with or without host Python code) and reuse it elsewhere. We see this in F4, which focuses on conditional lowering, but I think it should be defined/declared as a standalone feature.
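
A minimal sketch of requirement 1 with today's TE (my own example, not from the RFC): an elementwise consumer over a producer, fused into one loop nest by the schedule:

```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")   # producer
C = te.compute((n,), lambda i: B[i] + 1.0, name="C")   # consumer

s = te.create_schedule(C.op)
s[B].compute_at(s[C], C.op.axis[0])  # fuse the producer into the consumer's loop
print(tvm.lower(s, [A, C], simple_mode=True))
```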
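
And a sketch of requirement 2 with the old IR builder: a host-side Python flag (standing in for a hypothetical graph-level attribute) decides which TIR gets emitted:

```
import tvm
from tvm import te

def make_op(use_fast: bool):
    n = 16
    A = te.placeholder((n,), name="A")

    def gen(ins, outs):
        ib = tvm.tir.ir_builder.create()
        a = ib.buffer_ptr(ins[0])
        out = ib.buffer_ptr(outs[0])
        with ib.for_range(0, n, name="i") as i:
            if use_fast:             # host-side condition: picks the emitted TIR
                out[i] = a[i] * 2.0
            else:
                out[i] = a[i] + a[i]
        return ib.get()

    return A, te.extern((n,), [A], gen, name="C", dtype="float32")
```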

Looking forward to seeing this RFC upstream!

-- 
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm-rfcs/pull/79#issuecomment-1181184802
You are receiving this because you are subscribed to this thread.

Re: [apache/tvm] [VOTE] Establish TVM Unity Connection Technical Strategy (Issue #12651)

2022-09-01 Thread xqdan
+1

-- 
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/issues/12651#issuecomment-1233949370
You are receiving this because you commented.

Re: [apache/tvm] [VOTE] Clarify Community Strategy Decision Process (Issue #15521)

2023-08-13 Thread xqdan
+1

-- 
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/issues/15521#issuecomment-1676584840
You are receiving this because you are subscribed to this thread.

Re: [apache/tvm] [VOTE] Transition Main to Unity (Issue #16368)

2024-01-08 Thread xqdan
+1

-- 
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/issues/16368#issuecomment-1882134532
You are receiving this because you are subscribed to this thread.

[TVM Discuss] [Development] Google lasted work: MLIR Primer

2019-04-08 Thread Xqdan via TVM Discuss


My take is:

MLIR is a replacement for HalideIR, offering 1) compiler infrastructure support, like CFG/DFA/SSA; with these, we can avoid pattern-matching-style passes on HalideIR, which are hard to maintain; 2) other, better utilities, like a text IR; 3) a unified IR across levels, for both graph and tensor.

I agree with the idea of having an MLIR phase in TVM. If it's indeed better, we can move our work to MLIR gradually, or just write new optimization passes on MLIR.





---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/17) 
to respond.

You are receiving this because you enabled mailing list mode.


[TVM Discuss] [Development/RFC] [IR] Unified TVM IR Infra

2020-04-27 Thread Xqdan via TVM Discuss


@tqchen do we have the following abstractions in TVM's unified IR infra?

1. Multi-stage IR for relay::Function:
```
c = IRModule A(a, b){
  a = a + 1;
  b = b + 1;
  return a+b;
}

e = IRModule B(c, d){
  c = c + 1;
  d = d + 1;
  return c+d;
}
```
With this abstraction, we can express complex/big ops in terms of a limited set of small ops; we can also treat a big op as a white-box op, so we can do some computation optimizations globally.

2. Multi-stage IR for tir::PrimFunc:
```
c = IRModule A(a, b){
 lowered tir
}

e = IRModule B(c, d){
  lowered tir
}
```
With this abstraction, we can do some low-level global optimizations, like DMA preload for local buffers.
We may need to break `lower` into several APIs, so we can utilize the different properties TIR has at different stages, e.g. before or after StorageFlatten (see the sketch below).
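
A sketch of what I mean (my own illustration, using the existing `tir.add_lower_pass` hook rather than new APIs): callbacks at phase 0 see TIR before StorageFlatten, callbacks at phase 1 see it after.

```
import tvm
from tvm import te

def dump(tag):
    # Build a trivial pass that prints the PrimFunc and returns it unchanged.
    @tvm.tir.transform.prim_func_pass(opt_level=0)
    def _pass(func, mod, ctx):
        print("===", tag, "===")
        print(func)
        return func
    return _pass

n = 16
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# Hook our dumps into two different lowering phases of tvm.lower.
with tvm.transform.PassContext(config={"tir.add_lower_pass": [
        (0, dump("before StorageFlatten")),
        (1, dump("after StorageFlatten"))]}):
    tvm.lower(s, [A, B])
```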





---
[Visit Topic](https://discuss.tvm.ai/t/ir-unified-tvm-ir-infra/4801/6) to 
respond.

You are receiving this because you enabled mailing list mode.



[TVM Discuss] [Development/RFC] [IR] Unified TVM IR Infra

2020-04-29 Thread Xqdan via TVM Discuss


@tqchen That's great!
BTW, I noticed you deleted the IR dump in a recent PR, but this is a very important utility for compiler development in hardware projects. Do we have alternatives in TVM?
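
One alternative might be the built-in `PrintIR` pass (a sketch, assuming `tvm.transform.PrintIR` from the unified pass infra):

```
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
f = relay.Function([x], relay.multiply(relay.add(x, relay.const(1.0, "float32")),
                                       relay.const(2.0, "float32")))
mod = tvm.IRModule.from_expr(f)

seq = tvm.transform.Sequential([
    relay.transform.FoldConstant(),
    tvm.transform.PrintIR(),       # dumps the module at this point in the pipeline
    relay.transform.FuseOps(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```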





---
[Visit Topic](https://discuss.tvm.ai/t/ir-unified-tvm-ir-infra/4801/8) to 
respond.

You are receiving this because you enabled mailing list mode.



[TVM Discuss] [Development/RFC] [IR] Unified TVM IR Infra

2020-04-29 Thread Xqdan via TVM Discuss


Do we support round-trip IR, i.e., parsing a readable IR text form and constructing IR objects as input to the compiler?
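
For example, the kind of round trip I have in mind (a sketch; `from_source` and `script()` are my assumption of how the text format would be exposed):

```
import tvm
from tvm import te
from tvm.script import from_source

n = 16
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
mod = tvm.lower(te.create_schedule(B.op), [A, B])

text = mod.script()               # print IR as readable text
mod2 = from_source(text)          # parse the text back into IR objects
assert tvm.ir.structural_equal(mod, mod2)
```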





---
[Visit Topic](https://discuss.tvm.ai/t/ir-unified-tvm-ir-infra/4801/10) to 
respond.

You are receiving this because you enabled mailing list mode.



[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-19 Thread Xqdan via TVM Discuss


We have a poly + TVM solution for Davinci, which will be released soon, maybe next week.





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/19) to respond.

You are receiving this because you enabled mailing list mode.



[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-20 Thread Xqdan via TVM Discuss


https://gitee.com/mindspore/akg





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/20) to respond.

You are receiving this because you enabled mailing list mode.



[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-22 Thread Xqdan via TVM Discuss


We do support Ascend 310 op codegen on the AKG side, but not in MindSpore for now.





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/23) to respond.

You are receiving this because you enabled mailing list mode.



[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss


This is the right way to go. However, I have two concerns:
1) How do we fuse ops as much as possible? Fusion is basically copy-propagation optimization in compilers, which relies on data-flow analysis, but TVM still lacks that kind of program analysis.
2) TE tensorize cannot handle some complex pattern matching; see https://github.com/apache/incubator-tvm/pull/1053. Can we do 100% pattern matching in TIR?
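
On concern 1, a sketch of fusion-as-copy-propagation with the schedule API this RFC proposes (my own example; names are illustrative):

```
import tvm
from tvm import te

n = 128
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
C = te.compute((n,), lambda i: B[i] + 1.0, name="C")

# Build a PrimFunc, then inline B into C: fusion as copy propagation.
func = te.create_prim_func([A, C])
sch = tvm.tir.Schedule(tvm.IRModule({"main": func}))
sch.compute_inline(sch.get_block("B"))
print(sch.mod.script())
```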





---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/29) to respond.

You are receiving this because you enabled mailing list mode.



[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss


Is fusion in Ansor based on TIR?
For other transforms, you may check out the link below; that's what we've done in AKG. I can explain some of it if you are interested.

https://github.com/mindspore-ai/akg/blob/master/src/codegen/build_module.cc#L439





---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/31) to respond.

You are receiving this because you enabled mailing list mode.



[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss


@junrushao1994 It's better to know whether loops are vectorizable, permutable, or distributable; isl can provide this information, so we can do loop optimization and tensorization/vectorization automatically.





---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/32) to respond.

You are receiving this because you enabled mailing list mode.



[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2021-04-14 Thread Xqdan via Apache TVM Discuss


One issue with the old schedule ops is that we cannot get accurate bounds from InferBound. What will this look like in the new schedule system? Thanks.





---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/64) to respond.

You are receiving this because you enabled mailing list mode.



[Apache TVM Discuss] [Application] TVM Community Survey

2021-06-23 Thread Xqdan via Apache TVM Discuss


[quote="hogepodge, post:1, topic:10305"]
What platforms are you using TVM for?

* [ ] X86 CPU
* [ ] ARM CPU
* [ ] Other CPU
* [ ] NVidia GPU
* [ ] AMD GPU
* [ ] Other GPU
* [ ] Embedded Platform
[/quote]

We are using TVM for a DSA NPU; can you add an option for that? Thanks!





---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-community-survey/10305/2) to 
respond.

You are receiving this because you enabled mailing list mode.
