[dmlc/tvm] [Community] Jian Weng -> Committer (#3359)

2019-06-13 Thread Yizhi Liu
We are glad to welcome @were as a new committer of TVM. Jian is the primary author of Hybrid Script for TVM. The tool lets people write complicated compute logic in pure Python and have it transformed directly to TVM tensor IR. It makes life much easier for implementing operators like non-maximum suppression.
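A minimal sketch of what Hybrid Script looks like (assuming the `te.hybrid.script` decorator from recent TVM releases; the dmlc/tvm releases of this era exposed the same decorator as `tvm.hybrid.script`). The loop body is plain Python, and calling the decorated function on placeholders yields a tensor that can be scheduled like any other op:

```python
import tvm
from tvm import te

@te.hybrid.script
def outer_product(a, b):
    """Outer product written with ordinary Python loops."""
    c = output_tensor((a.shape[0], b.shape[0]), a.dtype)  # hybrid-script intrinsic
    for i in range(a.shape[0]):
        for j in range(b.shape[0]):
            c[i, j] = a[i] * b[j]
    return c

A = te.placeholder((16,), name="A", dtype="float32")
B = te.placeholder((16,), name="B", dtype="float32")
C = outer_product(A, B)          # lowered to TVM tensor IR
s = te.create_schedule(C.op)     # schedulable like any other compute op
```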

Re: [dmlc/tvm] [Community] Jian Weng -> Committer (#3359)

2019-06-13 Thread Tianqi Chen
Merged #3359 into master. https://github.com/dmlc/tvm/pull/3359#event-2411096126

[TVM Discuss] [Development] Should we use PureExtern for some TVM ops?

2019-06-13 Thread Haichen Shen via TVM Discuss
The Halide doc provides the definitions of "intrinsic" and "extern" functions: https://halide-lang.org/docs/struct_halide_1_1_internal_1_1_call.html#a45d847325694df85e74150f770c1e393. "Pure" just means that the function is side-effect-free.
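For context, a hedged sketch of how a side-effect-free external call can be expressed on the TVM side (assuming the `tvm.tir.call_pure_extern` helper from recent releases; older releases exposed it as `tvm.call_pure_extern`):

```python
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")

# Each element calls the external C function `tanhf`. Marking the call "pure"
# declares it side-effect-free, so the compiler may reorder, hoist, or
# deduplicate the calls.
B = te.compute(
    (n,),
    lambda i: tvm.tir.call_pure_extern("float32", "tanhf", A[i]),
    name="B",
)
s = te.create_schedule(B.op)
```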

[TVM Discuss] [Development] Explore Optimizations for Concat

2019-06-13 Thread hlu1 via TVM Discuss
Sounds good. Will do. --- [Visit Topic](https://discuss.tvm.ai/t/explore-optimizations-for-concat/2435/11) to respond.

Re: [dmlc/tvm] [RFC] Frontend layout transformation (#2519)

2019-06-13 Thread Tianqi Chen
Closed #2519. https://github.com/dmlc/tvm/issues/2519#event-2412073840

Re: [dmlc/tvm] [RFC] Frontend layout transformation (#2519)

2019-06-13 Thread Tianqi Chen
This thread is concluded; we shall move layout transformation into passes in Relay. https://github.com/dmlc/tvm/issues/2519#issuecomment-501899964

Re: [dmlc/tvm] [RFC] [VTA] [TSIM] Enabling Cycle-Accurate Hardware Simulation for VTA (#3009)

2019-06-13 Thread Tianqi Chen
https://github.com/dmlc/tvm/pull/3010

Re: [dmlc/tvm] [RFC] Pytorch Support for TVM (#2494)

2019-06-13 Thread Tianqi Chen
Closed #2494. https://github.com/dmlc/tvm/issues/2494#event-2412076073

Re: [dmlc/tvm] [RFC] [VTA] [TSIM] Enabling Cycle-Accurate Hardware Simulation for VTA (#3009)

2019-06-13 Thread Tianqi Chen
Closed #3009. https://github.com/dmlc/tvm/issues/3009#event-2412076941

Re: [dmlc/tvm] [RFC] Pytorch Support for TVM (#2494)

2019-06-13 Thread Tianqi Chen
Closed for now; we can likely get related support in https://github.com/pytorch/tvm, thanks to @bwasti. https://github.com/dmlc/tvm/issues/2494#issuecomment-501900208

Re: [dmlc/tvm] [RFC][Relay] Text Format Part 2 (#3016)

2019-06-13 Thread Tianqi Chen
Please conclude and summarize the RFC so we can start a vote on the text format. https://github.com/dmlc/tvm/issues/3016#issuecomment-501900987

Re: [dmlc/tvm] [RFC][VTA] Support Intel FPGA in VTA (#1656)

2019-06-13 Thread Tianqi Chen
Closed #1656. https://github.com/dmlc/tvm/issues/1656#event-2412082736

Re: [dmlc/tvm] [RFC][VTA] Support Intel FPGA in VTA (#1656)

2019-06-13 Thread Tianqi Chen
Closed in favor of the most recent Chisel RFC. https://github.com/dmlc/tvm/issues/1656#issuecomment-501901239

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-06-13 Thread Animesh Jain
@tqchen @FrozenGene @jackwish I have added a prototype patch. I think it will be helpful to use that patch to drive the discussion further.

[TVM Discuss] [Development] [Discuss] Should the Relay module include the prelude by default?

2019-06-13 Thread Steven S. Lyubomirsky via TVM Discuss
There doesn't seem to be any particular reason I can think of for the Relay module not to import the prelude by default, with a flag present for when it should not be imported (e.g., if you want to reclaim the names for some reason). It shouldn't lead to any overhead at run time, since dead code elimination can remove any prelude definitions a program never uses.
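For reference, a sketch of the explicit prelude import being discussed (names follow recent TVM, where the module type is `tvm.IRModule`; older releases used `relay.Module()`):

```python
import tvm
from tvm.relay.prelude import Prelude

mod = tvm.IRModule()
Prelude(mod)   # injects the list/option ADTs and helpers (map, foldl, ...) into mod

# After import, the prelude entries are ordinary global definitions, so any
# that a program never references can be removed by dead code elimination.
print(mod.get_global_vars()[:5])
```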

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-06-13 Thread Zhao Wu
@anijain2305 I took a quick look at the code and I understand your idea (combining operators to implement q_conv2d). However, as commented before, how do we integrate with QNNPACK when we don't have output_min / output_max? I think we could keep these two arguments; if MXNet doesn't have them, we could leave them at the default values.
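A hypothetical sketch of the default that the comment alludes to (not the actual API under discussion): when a frontend such as MXNet supplies no explicit clamp values, output_min / output_max could simply fall back to the full representable range of the output dtype, so a QNNPACK-style backend still receives concrete bounds:

```python
import numpy as np

def default_output_range(dtype="uint8"):
    """Fallback clamp bounds when the frontend supplies no output_min/output_max."""
    info = np.iinfo(np.dtype(dtype))
    return info.min, info.max

out_min, out_max = default_output_range("uint8")
print(out_min, out_max)   # 0 255: the clamp is effectively a no-op for uint8 outputs
```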

[TVM Discuss] [Development] [ONNX][Relay] Inconsistency between Int32 and Int64 after .view() operation

2019-06-13 Thread Ligeng Zhu via TVM Discuss
# Problem Description

I am trying to deploy a [PyTorch model](https://github.com/mit-han-lab/ProxylessNAS) to TVM. When loading the ONNX version via `relay.frontend.from_onnx`, it throws the following errors:

```python
%239 = take(%238, int64(0), axis=0)
%240 = expand_dims(%239, axis=0)
```
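A hedged sketch of the PyTorch pattern that typically produces this Relay fragment: exporting `.view()` with a dynamic size to ONNX emits Shape/Gather/Unsqueeze nodes whose scalars are int64, which is where the `take(..., int64(0), axis=0)` / `expand_dims` pair and the int32/int64 mismatch come from:

```python
import torch

class Flatten(torch.nn.Module):
    def forward(self, x):
        # x.size(0) is traced as an int64 scalar (Shape -> Gather) in the ONNX graph
        return x.view(x.size(0), -1)

torch.onnx.export(Flatten(), torch.randn(1, 3, 224, 224), "flatten.onnx")
# relay.frontend.from_onnx then sees int64 shape scalars mixed with Relay's
# default int32 shapes, which is the inconsistency reported above.
```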