Re: [dmlc/tvm] [RFC][Relay] Implicit type casting vs explicit dtype conversion (#2659)

2019-03-19 Thread Haichen Shen
Closed #2659. -- You are receiving this because you are subscribed to this thread. Reply to this email directly or view it on GitHub: https://github.com/dmlc/tvm/issues/2659#event-2215789315

Re: [dmlc/tvm] [RFC][Relay] Implicit type casting vs explicit dtype conversion (#2659)

2019-03-19 Thread Haichen Shen
Yes, I agree that we should just keep the explicit casting. -- https://github.com/dmlc/tvm/issues/2659#issuecomment-474697807

[dmlc/tvm] [RFC] Register Relay VM design (#2915)

2019-03-27 Thread Haichen Shen
# Register VM Design

The current Relay VM RFC (#2810) proposes a stack-based VM design that uses push and pop to maintain a unified stack. Though the design itself is simple to implement, it is cumbersome for tasks such as dataflow analysis and enforces a certain order of execution. I propose a re
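The contrast above can be sketched with a toy interpreter. This is a hypothetical minimal register machine, not the actual Relay VM: each instruction names its source and destination registers explicitly, so def/use sets for dataflow analysis can be read straight off the instruction, and no implicit stack order is imposed.

```python
class RegisterVM:
    """Toy register-based VM: operands are named registers, not stack slots."""

    def __init__(self, num_regs):
        self.regs = [None] * num_regs

    def run(self, program):
        # Each instruction is (opcode, dst, *srcs); srcs/dst are register ids.
        for op, dst, *srcs in program:
            if op == "load_const":
                self.regs[dst] = srcs[0]        # immediate value
            elif op == "add":
                self.regs[dst] = self.regs[srcs[0]] + self.regs[srcs[1]]
        return self.regs

prog = [
    ("load_const", 0, 2),   # r0 <- 2
    ("load_const", 1, 3),   # r1 <- 3
    ("add", 2, 0, 1),       # r2 <- r0 + r1
]
regs = RegisterVM(3).run(prog)
print(regs[2])  # -> 5
```

A stack machine would express the same computation as `push 2; push 3; add`, where `add` implicitly pops its two operands, which is what makes operand flow harder to analyze.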

Re: [dmlc/tvm] [RFC] Register Relay VM design (#2915)

2019-03-27 Thread Haichen Shen
I proposed an alternative VM design that uses registers instead of a stack. We can discuss and compare which one works better. cc @jroesch @tqchen @wweic @zhiics @MarisaKirisame @junrushao1994 @abergeron @ajtulloch @antinucleon @Ravenwater

Re: [dmlc/tvm] [VOTE] Apache Transition Plan (#2973)

2019-04-08 Thread Haichen Shen
+1 -- https://github.com/dmlc/tvm/issues/2973#issuecomment-480992101

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Haichen Shen
+1 -- https://github.com/dmlc/tvm/issues/2994#issuecomment-481432475

Re: [dmlc/tvm] [RFC] Register Relay VM design (#2915)

2019-04-10 Thread Haichen Shen
I propose two changes to the instructions.
- `AllocTensor` uses a shape stored in a register instead of a hardcoded shape in the instruction. This can help the VM support dynamic shapes in the future. We can store constant shapes in the constant list and use `LoadConst` to load them.
- Change `Phi` to
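The `AllocTensor` change can be sketched as follows. The instruction helpers and register/constant-list layout here are hypothetical, purely to illustrate the proposal: `LoadConst` first places a shape from the constant list into a register, and `AllocTensor` then reads the shape from that register rather than from a field hardcoded in the instruction.

```python
import math

constants = [(2, 3)]        # constant list; holds the shape as a constant
regs = {}                   # register file, indexed by register id

def load_const(dst, const_idx):
    regs[dst] = constants[const_idx]

def alloc_tensor(dst, shape_reg, dtype):
    shape = regs[shape_reg]                 # shape comes from a register
    regs[dst] = {"shape": shape, "dtype": dtype,
                 "data": [0.0] * math.prod(shape)}

load_const(0, 0)                 # r0 <- (2, 3), loaded from the constant list
alloc_tensor(1, 0, "float32")    # r1 <- tensor whose shape is held in r0
print(regs[1]["shape"])          # -> (2, 3)
```

Because the shape reaches `AllocTensor` through a register, the same instruction also works when the shape is computed at runtime instead of loaded from a constant.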

[dmlc/tvm] [Community] @antinucleon -> Reviewer (#3214)

2019-05-20 Thread Haichen Shen
This PR adds @antinucleon to the reviewer list of TVM. He has been contributing to TVM and the Relay runtime. - [Commits](https://github.com/dmlc/tvm/commits?author=antinucleon) - [Code Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Aantinucleon) - [Community Engagement](https

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-06-02 Thread Haichen Shen
# TVM Monthly - May 2019 https://discuss.tvm.ai/t/tvm-monthly-may-2019/2793 -- https://github.com/dmlc/tvm/issues/2623#issuecomment-498106945

[dmlc/tvm] [RFC][Frontend] Return module for Relay frontend converter (#3346)

2019-06-11 Thread Haichen Shen
As we have a pass manager in Relay optimization and mostly use a module as the optimization unit, I suggest we change the Relay frontend converters to return a module instead of a function. The change also makes it easier to add the prelude and global vars in the frontend converter in the future. Frontend converter

Re: [dmlc/tvm] [Relay][RFC] VM Object and Interpreter value (#3209)

2019-06-18 Thread Haichen Shen
I'm working on a PR that separates VM Object from Interpreter Value (#3391). After the PR, the VM will directly return an Object to Python instead of converting it to an Interpreter Value. Currently I haven't dealt with `ClosureObject`, since it won't appear in the return value.

Re: [dmlc/tvm] [Community] @joshpoll -> Reviewer (#3412)

2019-06-21 Thread Haichen Shen
Merged #3412 into master. -- https://github.com/dmlc/tvm/pull/3412#event-2431035620

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-22 Thread Haichen Shen
One question I have is whether we should put the `length` of *all fields* at the beginning, or just put the `length` of the *one array field* `val*` immediately before that field. It's possible that we may have an instruction with more than one array field in the future. However, the current d

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-22 Thread Haichen Shen
@zhiics I mean, if there are two fields with variable length, though not very likely, how do you plan to support that? Also, do we expect every field to have the same data type? Another point Marisa is making is that we don't need to put a `length` for every instruction, since many have fixed lengths.
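The length-prefix option under discussion can be sketched with `struct`. The opcode and field layout below are made up for illustration, not the actual VM serialization format: the length of the one variable-length field is written immediately before the field itself, so the decoder needs no up-front table of lengths.

```python
import struct

def encode_instr(opcode, fields):
    """opcode: small int; fields: the variable-length array of the instruction."""
    buf = struct.pack("<B", opcode)
    buf += struct.pack("<I", len(fields))            # length right before the array
    buf += struct.pack(f"<{len(fields)}q", *fields)  # the array field itself
    return buf

def decode_instr(buf):
    opcode, = struct.unpack_from("<B", buf, 0)
    n, = struct.unpack_from("<I", buf, 1)            # read length, then the array
    fields = list(struct.unpack_from(f"<{n}q", buf, 5))
    return opcode, fields

blob = encode_instr(7, [1, 2, 3])
print(decode_instr(blob))   # -> (7, [1, 2, 3])
```

Supporting a second variable-length field would simply repeat the same length-then-payload pattern, while fixed-length instructions could omit the length word entirely, which is the point Marisa raised.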

[dmlc/tvm] [Community] slyubomirsky -> Reviewer (#3673)

2019-07-30 Thread Haichen Shen
This PR adds @slyubomirsky to the reviewer list of tvm. He has been contributing to many core features in Relay. - [Commits](https://github.com/dmlc/tvm/commits?author=slyubomirsky) - [Code Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Aslyubomirsky) - [Community Engage

[dmlc/tvm] [Community] MarisaKirisame -> Reviewer (#3755)

2019-08-11 Thread Haichen Shen
This PR adds @MarisaKirisame to the reviewer list of TVM. He has been contributing to Relay IR, passes, and ADT. - [Commits](https://github.com/dmlc/tvm/commits?author=MarisaKirisame) - [Code Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3AMarisaKirisame) - [Community En

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-08-12 Thread Haichen Shen
# TVM Monthly - July 2019 https://discuss.tvm.ai/t/tvm-monthly-july-2019 -- https://github.com/dmlc/tvm/issues/2623#issuecomment-520475436

[dmlc/tvm] [Community] Luis Vega -> Reviewer (#3909)

2019-09-06 Thread Haichen Shen
This PR adds Luis Vega (@vegaluisjose) to the reviewer list of TVM. He has been contributing to VTA and TSIM. - [Commits](https://github.com/dmlc/tvm/commits?author=vegaluisjose) - [Code Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Avegaluisjose) - [Community Engagemen

Re: [dmlc/tvm] [COMMUNITY] @yongwww-> reviewer (#3997)

2019-09-24 Thread Haichen Shen
Merged #3997 into master. -- https://github.com/dmlc/tvm/pull/3997#event-2660143912

Re: [dmlc/tvm] [COMMUNITY] ajtulloch -> committer (#4043)

2019-10-01 Thread Haichen Shen
Merged #4043 into master. -- https://github.com/dmlc/tvm/pull/4043#event-2678553862

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-10-03 Thread Haichen Shen
# TVM Monthly - September 2019 https://discuss.tvm.ai/t/tvm-monthly-september-2019 -- https://github.com/dmlc/tvm/issues/2623#issuecomment-538074210

Re: [dmlc/tvm] [RFC] Unifying Object Protocol in the Stack (#4116)

2019-10-14 Thread Haichen Shen
I have one question about `_type_child_slots`. If a child class is itself a base class for others and also defines `_type_child_slots`, will you check whether it overflows its parent's `_type_child_slots`?

Re: [dmlc/tvm] [VOTE] Add "Organizations contributing using and contributing to TVM" Section to Community Webpage (#4162)

2019-10-21 Thread Haichen Shen
+1 -- https://github.com/dmlc/tvm/issues/4162#issuecomment-544583390

Re: [dmlc/tvm] [RFC] Dynamic Shape Support - Graph Dispatching (#4118)

2019-10-23 Thread Haichen Shen
@soiferj 1. A shape function is used to compute the output shape(s) of an op at runtime, when they cannot be determined at compilation time. And yes, for now, we have to register a shape function for all ops to support dynamic shapes. 2. We could do this. But we need to change the attribute of `full`
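The idea of a shape function can be sketched in plain Python. This is illustrative only, not TVM's actual shape-function API: for an op like `arange`, the output shape depends on the input *values*, so it can only be computed once those values are known at runtime.

```python
import math

def arange_shape_func(start, stop, step):
    """Output shape of arange(start, stop, step).

    The number of elements depends on the runtime values of start/stop/step,
    so the compiler cannot fold this into a static shape.
    """
    n = max(0, math.ceil((stop - start) / step))
    return (n,)

print(arange_shape_func(0, 10, 3))   # -> (4,)
```

At runtime the VM would first run such a shape function, then allocate the output tensor with the computed shape before invoking the op's kernel.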

Re: [dmlc/tvm] [RFC] [AutoTVM] Implementing an auto-tuning library/cache (#4150)

2019-10-24 Thread Haichen Shen
@mbarrett97 I wonder why not just use transfer learning in AutoTVM. With transfer learning, AutoTVM will skip tasks that have been tried before. See the example at https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html#begin-tuning

[apache/incubator-tvm] [Community] @weberlo -> reviewer (#4390)

2019-11-20 Thread Haichen Shen
This PR welcomes @weberlo as a new reviewer of TVM. He has contributed to uTVM and Relay ADT. - [Commits](https://github.com/dmlc/tvm/commits?author=weberlo) - [Reviews](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Aweberlo) - [Community forum engagement](https://discuss.tvm.a

Re: [apache/incubator-tvm] [VOTE] Release Apache TVM (incubating) v0.6.0.rc2 (#4443)

2019-11-28 Thread Haichen Shen
+1 -- https://github.com/apache/incubator-tvm/issues/4443#issuecomment-559610219

Re: [apache/incubator-tvm] [RFC] Lower Bound for Shape Variables (#4487)

2019-12-09 Thread Haichen Shen
Yes, I agree we should have a general `AssertExpr` instead of `AssertLowerBound`, and put the asserted condition in the expr.

Re: [apache/incubator-tvm] [COMMUNITY] @wweic -> committer (#4636)

2020-01-06 Thread Haichen Shen
Merged #4636 into master. -- https://github.com/apache/incubator-tvm/pull/4636#event-2925568847

[apache/incubator-tvm] [COMMUNITY] @MarisaKirisame -> committer (#4645)

2020-01-07 Thread Haichen Shen
Please join us in welcoming Marisa Kirisame (@MarisaKirisame) as a committer of TVM. Marisa has a strong background and knowledge in programming languages and compilers. He contributed to many aspects of Relay and TVM, including the high-level Relay IR, the partial evaluator, the higher-order/first-order AD pass, AD

Re: [apache/incubator-tvm] [COMMUNITY] @MarisaKirisame -> committer (#4645)

2020-01-07 Thread Haichen Shen
Merged #4645 into master. -- https://github.com/apache/incubator-tvm/pull/4645#event-2929677304

[apache/incubator-tvm] Bump up dev version (#4941)

2020-02-25 Thread Haichen Shen
Not for release purposes. This is only for convenience, to differentiate TVM with and without op strategy. cc @tqchen @comaniac You can view, comment on, or merge this pull request online at: https://github.com/apache/incubator-tvm/pull/4941 -- Commit Summary -- * bump up dev version

Re: [apache/incubator-tvm] Bump up dev version (#4941)

2020-02-25 Thread Haichen Shen
@comaniac Thanks for the reminder. Updated now. -- https://github.com/apache/incubator-tvm/pull/4941#issuecomment-591171149

Re: [apache/incubator-tvm] [COMMUNITY] Add @abergeron -> reviewer (#5064)

2020-03-13 Thread Haichen Shen
Merged #5064 into master. -- https://github.com/apache/incubator-tvm/pull/5064#event-3128926179

Re: [apache/incubator-tvm] [VOTE] VTA HW/SW refactor (#5102)

2020-03-19 Thread Haichen Shen
+1 -- https://github.com/apache/incubator-tvm/issues/5102#issuecomment-601398752

Re: [apache/incubator-tvm] [COMMUNITY] @wpan11nv -> Reviewer (#5790)

2020-06-12 Thread Haichen Shen
Merged #5790 into master. -- https://github.com/apache/incubator-tvm/pull/5790#event-3439973095

Re: [apache/incubator-tvm] [COMMUNITY] Siju Samuel -> Committer (#5817)

2020-06-15 Thread Haichen Shen
Merged #5817 into master. -- https://github.com/apache/incubator-tvm/pull/5817#event-3445991608

[apache/incubator-tvm] [COMMUNITY] Matthew Brookhart -> Reviewer (#5886)

2020-06-22 Thread Haichen Shen
Please join us in welcoming @mbrookhart as a new reviewer of the TVM community. He has been actively contributing to the non-recursive graph visitor, the Relay pattern language and matcher, and ONNX frontend conversions. - [Commits](https://github.com/apache/incubator-tvm/commits?author=mbrookhart) - [Cod

Re: [apache/incubator-tvm] [COMMUNITY] @kparzysz-quic -> committer (#6290)

2020-08-17 Thread Haichen Shen
Merged #6290 into master. -- https://github.com/apache/incubator-tvm/pull/6290#event-3663875063

Re: [apache/incubator-tvm] [DISCUSS][RFC] Apache TVM Graduation (#6299)

2020-08-19 Thread Haichen Shen
+1. The TVM community is getting good traction. Super excited to be part of the community. -- https://github.com/apache/incubator-tvm/issues/6299#issuecomment-676830190

Re: [apache/incubator-tvm] [VOTE] Apache TVM Graduation (#6332)

2020-08-24 Thread Haichen Shen
+1 (binding) -- https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679411683

Re: Test to see if podling is listening

2020-08-27 Thread Haichen Shen
Ack

Re: [apache/tvm] [VOTE] Adopt the New RFC Process (#7991)

2021-05-06 Thread Haichen Shen
+1 -- https://github.com/apache/tvm/issues/7991#issuecomment-833926694

[TVM Discuss] [Development] Should we use PureExtern for some TVM ops?

2019-06-13 Thread Haichen Shen via TVM Discuss
The Halide doc provides the definition of "intrinsic" and "extern" functions: https://halide-lang.org/docs/struct_halide_1_1_internal_1_1_call.html#a45d847325694df85e74150f770c1e393 "Pure" just means that the function is side-effect-free.

[TVM Discuss] [Development/RFC] Performing Relay Passes Non-Recursively

2020-03-30 Thread Haichen Shen via TVM Discuss
Yes, that sounds good to me. --- [Visit Topic](https://discuss.tvm.ai/t/performing-relay-passes-non-recursively/5696/21)

[TVM Discuss] [Development] Gather_nd semantics

2020-04-06 Thread Haichen Shen via TVM Discuss
Currently the Relay `gather_nd` op uses the [mxnet semantics](https://mxnet.apache.org/api/python/docs/api/ndarray/op/index.html#mxnet.ndarray.op.gather_nd), in which each column in `indices` indicates the indices into `data`. However, TensorFlow [`gather_nd`](https://www.tensorflow.org/api_docs/python
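The two conventions can be sketched in pure Python (illustrative only; the real implementations live in MXNet and TensorFlow). In the MXNet-style semantics each *column* of `indices` is one coordinate into `data`, so `indices` has shape `(index_depth, num_picks)`; in the TensorFlow-style semantics each *row* is one coordinate, so `indices` has shape `(num_picks, index_depth)`.

```python
def lookup(data, coord):
    # Index nested lists with a coordinate tuple.
    for i in coord:
        data = data[i]
    return data

def gather_nd_mxnet(data, indices):
    # Coordinates are the COLUMNS of `indices`.
    return [lookup(data, col) for col in zip(*indices)]

def gather_nd_tf(data, indices):
    # Coordinates are the ROWS of `indices`.
    return [lookup(data, row) for row in indices]

data = [[1, 2], [3, 4]]
# Same three picks (0,1), (1,0), (1,1), written in each layout:
print(gather_nd_mxnet(data, [[0, 1, 1], [1, 0, 1]]))        # -> [2, 3, 4]
print(gather_nd_tf(data, [[0, 1], [1, 0], [1, 1]]))         # -> [2, 3, 4]
```

The layouts are transposes of each other, which is why converting a TensorFlow model to Relay requires transposing (or reinterpreting) the `indices` argument.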

[TVM Discuss] [Development/RFC] [RFC] TVM Target Specification

2020-06-03 Thread Haichen Shen via TVM Discuss
I stand with Tianqi on the `target_host` attribute, as it encapsulates the information required to compile for a device and can simplify the transformation passes in the TVM stack. I have a few questions about the new target specification. 1. How will generic functions and dispatching work wi

[TVM Discuss] [Development/RFC] [RFC] TVM Target Specification

2020-06-04 Thread Haichen Shen via TVM Discuss
Keys are an important field in the target, needed to make other modules work. Since the target can be created from JSON, I'm worried that if people forget to add certain keys to the target, it might cause some undesired behavior.

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-11 Thread Haichen Shen via TVM Discuss
Agree with @junrushao1994. I think we should use fp32 as the default instead of fp64, as it's more common in deep learning. --- [Visit Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/5)

[TVM Discuss] [Development/RFC] Dynamic Ops in Relay

2020-06-11 Thread Haichen Shen via TVM Discuss
I'm also in favor of the A1 approach. I have one more question about dynamic ops. Currently Relay allows using a symbolic var to represent a dimension. In the world of A1, if the attributes contain a symbolic var, such as the new shape in `reshape`, are we treating the op as a dynamic op or a static op?

[TVM Discuss] [RFC] Canonicalizing AutoTVM Log Format

2020-06-19 Thread Haichen Shen via TVM Discuss
I agree with @tqchen. Probably we should wait and see what the Ansor log looks like and include it in the design. We could have @merrymercy comment on this. At a high level, I suggest we have five fields: **target**, workload, config, results, version. The only change is taking the target out o
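A record with the five suggested top-level fields could look like the sketch below. The field contents are made-up placeholders for illustration, not the actual AutoTVM schema; the point is one self-describing JSON object per log line with **target** promoted to its own field.

```python
import json

record = {
    "target": "llvm -mcpu=skylake",                          # taken out of the workload
    "workload": ["conv2d", [1, 3, 224, 224], [64, 3, 7, 7]], # task identity
    "config": {"tile_x": 8, "tile_y": 4},                    # the tried schedule knobs
    "results": {"costs": [0.00123], "error_no": 0},          # measured outcome
    "version": 0.2,                                          # log-format version
}
line = json.dumps(record)            # one JSON object per log line
assert json.loads(line)["target"] == "llvm -mcpu=skylake"
```

Keeping `version` as a top-level field is what lets later tools (e.g., an Ansor-style scheduler) extend the `config` payload without breaking old readers.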

[TVM Discuss] [RFC] Canonicalizing AutoTVM Log Format

2020-06-22 Thread Haichen Shen via TVM Discuss
Probably we can canonicalize the target as well (e.g., as a protobuf buffer instead of a string). We can align the target format with https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844. @tqchen

[TVM Discuss] [Development/RFC] Relay op strategy

2020-06-24 Thread Haichen Shen via TVM Discuss
You can follow the example in VTA to override the implementation for a specific target: https://github.com/apache/incubator-tvm/blob/master/vta/python/vta/top/op.py#L63 --- [Visit Topic](https://discuss.tvm.ai/t/relay-op-strategy/5247/21)

[Apache TVM Discuss] [Development] Make binary distributation

2020-09-10 Thread Haichen Shen via Apache TVM Discuss
Yes, @tqchen and I will post an RFC soon for the binary distribution. --- [Visit Topic](https://discuss.tvm.apache.org/t/make-binary-distributation/7867/4)

[Apache TVM Discuss] [Development/RFC] [RFC] tlcpack: Thirdparty Binary Packages

2020-09-14 Thread Haichen Shen via Apache TVM Discuss
So far we have only released source code in past TVM releases, and we will continue to do so. As we continue to develop TVM, we also see some demand for convenient binary packages, such as wheels or Docker binaries. One important factor for such binary packages is the potential links with third part

[Apache TVM Discuss] [Development/RFC] [RFC] tlcpack: Thirdparty Binary Packages

2020-09-15 Thread Haichen Shen via Apache TVM Discuss
The wheels are built on a newer version of CentOS. The pip wheel for CPU has manylinux2010 compatibility and the wheels for CUDA have manylinux2014 compatibility. Releasing wheels with different CUDA versions is to accommodate different development and deployment environments.

[Apache TVM Discuss] [Development] Remove extra reshape

2020-10-30 Thread Haichen Shen via Apache TVM Discuss
Sure, I think you can add this to the `SimplifyExpr` pass. I thought about this before, but I think the reshape op might be inlined into the fused op. --- [Visit Topic](https://discuss.tvm.apache.org/t/remove-extra-reshape/8212/2)
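The rewrite being discussed can be sketched on a toy expression representation. This is illustrative only, not the actual `SimplifyExpr` pass: since `reshape(reshape(x, s1), s2)` produces the same result as `reshape(x, s2)`, the inner reshape can be dropped.

```python
def simplify_reshape(expr):
    """Collapse consecutive reshapes in a toy expression tree.

    Expressions are nested tuples ("reshape", operand, shape) or leaf names.
    """
    if isinstance(expr, tuple) and expr[0] == "reshape":
        inner = simplify_reshape(expr[1])       # simplify the operand first
        if isinstance(inner, tuple) and inner[0] == "reshape":
            # reshape(reshape(x, s1), s2) -> reshape(x, s2)
            return ("reshape", inner[1], expr[2])
        return ("reshape", inner, expr[2])
    return expr

e = ("reshape", ("reshape", "x", (4, 4)), (2, 8))
print(simplify_reshape(e))   # -> ('reshape', 'x', (2, 8))
```

The fusion caveat from the reply above still applies: if the inner reshape has already been inlined into a fused operator, this expression-level rewrite never sees it.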

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread Haichen Shen via Apache TVM Discuss
I have one question about `use_topi_schedule`. I assume that after we set it to False, it will always use the Ansor scheduler to schedule the ops. Will there be a case where we want a mix of topi schedules and Ansor schedules?

[Apache TVM Discuss] [Development/RFC] [RFC] Rename TVMContext to TVMDevice

2021-02-06 Thread Haichen Shen via Apache TVM Discuss
I propose to rename `TVMContext` to `TVMDevice`. Currently, `TVMContext` is used to represent the device that a model is executed on. Two main reasons for this change: 1. the name `TVMContext` doesn't intuitively reflect its semantics as a device; 2. mainstream frameworks including [

[Apache TVM Discuss] [Development/RFC] [RFC] Rename TVMContext to TVMDevice

2021-02-07 Thread Haichen Shen via Apache TVM Discuss
1. TVM uses [RPCSession](https://tvm.apache.org/docs/api/python/rpc.html#tvm.rpc.RPCSession.context) to create a remote context/device. We can also change this API to `device`. 2. In fact, I also plan to create an RFC to dlpack for the context name change.

[Apache TVM Discuss] [Development/RFC] [RFC] Rename TVMContext to TVMDevice

2021-02-08 Thread Haichen Shen via Apache TVM Discuss
I don't know much about µTVM; I need to take a look before commenting on this. I think we can make the change in TVM first and change it again after the change is pushed to DLPack. Because there will be more dependencies in DLPack, it'll probably take a longer time to change there. You can also check out

[Apache TVM Discuss] [Development/RFC] [RFC] Rename TVMContext to TVMDevice

2021-02-09 Thread Haichen Shen via Apache TVM Discuss
I think we can keep `TVMDevice` in the C++ backend but use `tvm.device` in the Python frontend. Keeping the `TVM` prefix can reduce confusion when integrating TVM into other frameworks.

[Apache TVM Discuss] [Development/RFC] [RFC] Rename TVMContext to TVMDevice

2021-03-05 Thread Haichen Shen via Apache TVM Discuss
Since everyone agrees on A0, I'll go ahead and prepare the PR next week. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-tvmcontext-to-tvmdevice/9090/27)

[Apache TVM Discuss] [Development/RFC] [RFC] API change: asnumpy -> numpy

2021-05-06 Thread Haichen Shen via Apache TVM Discuss
As per the discussion, I will prepare a PR to change the API to `numpy` and add a deprecation warning message to `asnumpy`. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-api-change-asnumpy-numpy/9846/9)
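The migration path described above can be sketched with a deprecation shim. `FakeArray` is a stand-in for an NDArray-like class, not TVM code: `numpy` becomes the primary API, and `asnumpy` remains as a thin alias that emits a `DeprecationWarning` before delegating.

```python
import warnings

class FakeArray:
    """Stand-in for an NDArray-like class with the old and new conversion APIs."""

    def __init__(self, values):
        self._values = values

    def numpy(self):
        # The new primary API.
        return list(self._values)

    def asnumpy(self):
        # The old name survives but warns, then delegates to the new API.
        warnings.warn("asnumpy is deprecated, use numpy instead",
                      DeprecationWarning, stacklevel=2)
        return self.numpy()

a = FakeArray([1, 2, 3])
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert a.asnumpy() == [1, 2, 3]     # old API still works...
assert issubclass(caught[0].category, DeprecationWarning)  # ...but warns
```

`stacklevel=2` makes the warning point at the caller's line rather than at the shim, which is the conventional choice for deprecation aliases.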