[TVM Discuss] [Development] Google lasted work: MLIR Primer

2019-04-09 Thread Junru Shao via TVM Discuss


It’s true. Handcrafting doesn’t scale when # of ASICs increases.





---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/23) 
to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click 
here](https://discuss.tvm.ai/email/unsubscribe/c458aaba3e942f579bedf35a44105dca916924ce536e1d08ea3e835886d10387).

Tianqi Chen, UW, Seattle, WA, 98105, United States
http://tracking.discuss.tvm.ai/tracking/unsubscribe?msgid=vn5Hx01XCgDeArEUFV-w3A2

Re: [dmlc/tvm] [VOTE] Apache Transition Plan (#2973)

2019-04-09 Thread Lianmin Zheng
+1

-- 
You are receiving this because you commented.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/2973#issuecomment-481145860

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-09 Thread Lianmin Zheng
@eqy "injective" ops are considered "direct compute"; typically they will be inlined.

Serializable Template + Serializable Config seems to be a good direction to go.

-- 
https://github.com/dmlc/tvm/issues/2954#issuecomment-481152553

[TVM Discuss] [Development] Google lasted work: MLIR Primer

2019-04-09 Thread aca88 via TVM Discuss


[quote="junrushao1994, post:23, topic:1721, full:true"]
It’s true. Handcrafting doesn’t scale when # of ASICs increases.
[/quote]

Hmm, I don't think TVM really has a big hand-crafting problem (read my 
comment on the next quote). I also think every ASIC developer would have to 
commit to at least defining TVM scheduling rules. Getting that for free would 
obviously be nice, but I don't think it's realistic. That way, scaling in the 
number of ASICs would be completely transparent to the development of the TVM 
infrastructure.

There is some flexibility in TVM's scheduling rules.
I mean that given a certain layer type, with (or without) possible fusions, you 
can have more than one scheduling rule.
You would have a higher-level decision-making module (which is purely SW) to 
actually pick which of the scheduling rules to use. Yes, the scheduling rules 
are then hand-crafted, but most likely somewhat templated, so that at least to 
some degree you can generate diverse "flavours" of the routine (imagine varying 
the block sizes and the ordering of loops).
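The "templated rule plus flavour generation" idea can be sketched in a few lines of plain Python. Everything below is illustrative only (not TVM's autotvm API): the template is just the cross product of candidate tile sizes and loop orderings.

```python
from itertools import permutations, product

def schedule_flavours(loop_names, tile_candidates):
    """Enumerate variants of one hand-written schedule template:
    every tile-size combination crossed with every loop ordering."""
    for tiles in product(*(tile_candidates[name] for name in loop_names)):
        for order in permutations(loop_names):
            yield {"tiles": dict(zip(loop_names, tiles)), "order": order}

# A hypothetical template with two tiled loops, x and y.
flavours = list(schedule_flavours(["x", "y"], {"x": [8, 16], "y": [4, 8]}))
```

The higher-level decision module would then score or autotune over the 4 x 2 = 8 candidates, instead of requiring a new hand-written schedule per variant.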

[quote="yzhliu, post:22, topic:1721, full:true"]
polyhedral optimization (or at least the ability to easily apply 
polyhedral-like analysis) might be attractive for ASICs though, it could help 
to build a smarter tensorizer.
[/quote]

I am no expert in polyhedral scheduling, but that sounds like a very complex 
problem to solve (at least fully automated).

Polyhedral would technically not require these templates, but it would require 
the scheduling algorithm to conform to the capabilities of the ASIC: datapaths, 
address-generation patterns, accelerator system resources (possible scratchpad 
usage), etc., and this for any kind of operator fusion. Here I would guess that 
some templated schedules or constraints would again be handcrafted.
The set of loop optimizations that TVM natively supports is a subset of all 
those possible with polyhedral, so it would be interesting to know which are 
not available (not even through a mix of TVM scheduling primitives). The only 
one I can think of is loop skewing (to generate a SW pipeline), but even then I 
have a rough sketch of how it could still be realized without any extension of 
the TVM primitives.
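For readers unfamiliar with it, loop skewing for a software pipeline can be shown with a toy Python sketch (no TVM primitives involved; `load` and `compute` are stand-ins for the two pipeline stages):

```python
def sequential(n, load, compute):
    """Baseline: load and compute strictly alternate."""
    out = []
    for i in range(n):
        out.append(compute(load(i)))
    return out

def skewed(n, load, compute):
    """Skewed (software-pipelined) version: n + 1 steps, where step t
    computes on the value loaded at step t - 1 while issuing load(t)."""
    out, staged = [], None
    for t in range(n + 1):
        if t > 0:
            out.append(compute(staged))  # stage 2 of iteration t - 1
        if t < n:
            staged = load(t)             # stage 1 of iteration t
    return out
```

The two functions produce identical results; the skewed form simply makes the load of iteration t available for overlap with the compute of iteration t - 1, which is what the hardware pipeline would exploit.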

**If someone is a poly expert and totally disagrees with what I say, __please__ 
contribute to the thread or contact me.**

@tqchen
There is one thing I think TVM could do better, and which would probably fit 
into the MLIR vision: allowing the NNVM/Relay fusion rules for nodes to be an 
input supplied by ASIC backend developers.
Obviously, one path is to turn off all fusion and then implement "glue fusion" 
routines that are more target-dependent (each ASIC developer would have to do 
this), but I am not sure whether that would break some of the reusability of 
TVM code (e.g., TVM routines for visiting nodes in a graph, or something like 
that). I guess another path would be to overwrite some layer-type definitions 
(for example, if I want to fuse conv and pool, define pool as an element-wise 
operation; again, every ASIC developer would have to do this), but then again I 
have no idea what extra problems that brings down the road.
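As a toy illustration of that second path, here is a table-driven fuser where the backend overrides an op's pattern kind. The pattern names and the greedy rule are made up for this sketch; Relay's real pattern kinds and fusion pass differ in detail.

```python
# Made-up pattern kinds; Relay's real ones differ in detail.
DEFAULT_PATTERNS = {"conv2d": "complex", "pool": "opaque", "relu": "elementwise"}

def fuse_chain(ops, patterns):
    """Greedily group a linear chain of ops: an op tagged
    'elementwise' joins the group of the op before it."""
    groups = []
    for op in ops:
        if groups and patterns[op] == "elementwise":
            groups[-1].append(op)
        else:
            groups.append([op])
    return groups

# Default rules: pool is opaque, so conv2d+relu and pool stay separate.
default_groups = fuse_chain(["conv2d", "relu", "pool"], DEFAULT_PATTERNS)
# An ASIC backend re-declares pool as elementwise and gets one fused group.
asic_patterns = dict(DEFAULT_PATTERNS, pool="elementwise")
asic_groups = fuse_chain(["conv2d", "relu", "pool"], asic_patterns)
```

The point is only that the fusion behaviour is driven by a table the backend can override, rather than being hard-coded in the fuser itself.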





---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/24) 
to respond.


[TVM Discuss] [Development] Google lasted work: MLIR Primer

2019-04-09 Thread tqchen via TVM Discuss


A good tensorizer is an open problem that we all need to solve; poly has 
neither an advantage nor a disadvantage in this problem. This is a technical 
direction we should push to solve in TVM.

The common ground between poly and TVM is the use of integer and integer-set 
analysis. On that end, TVM's and Halide's approach is generally faster and has 
some advantages, while dropping some less commonly used capabilities like loop 
skewing. I believe that is a direction where MLIR will learn from us, based on 
my past conversations with the related folks.
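To illustrate the interval flavour of integer-set analysis mentioned above, here is a toy class (not TVM's actual `IntSet`): intervals propagate through arithmetic, which is enough to bound a loop-index expression and hence a buffer size.

```python
class Interval:
    """Closed integer interval [lo, hi]: the simplest integer-set abstraction."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The extremes of a product lie at the corner combinations.
        corners = [a * b for a in (self.lo, self.hi)
                   for b in (other.lo, other.hi)]
        return Interval(min(corners), max(corners))

# Bound of the index expression i * 4 + j for i in [0, 7], j in [0, 3]:
r = Interval(0, 7) * Interval(4, 4) + Interval(0, 3)
# r now covers [0, 31], so a buffer of 32 elements suffices.
```

Richer representations (unions of intervals, full polyhedra) trade speed for precision, which is the design axis being discussed here.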





---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/25) 
to respond.


[dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Bing Xu
As we discussed in https://github.com/dmlc/tvm/issues/2715, most of us agree 
with switching to Python3 starting from the 0.6 release. I want to start a vote 
to deprecate Python2 support & testing in the dev branch now, so that starting 
from the 0.6 release, only Python3 is supported.

-- 
https://github.com/dmlc/tvm/issues/2994

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Nick Hynes
Ah finally! So much +1

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481341057

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Thierry Moreau
+1



-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481363286

[TVM Discuss] [Development] [DISCUSS] Contributing new docs for InferBound

2019-04-09 Thread Jessica Davies via TVM Discuss


Hi all. There's been lots of discussion about improving TVM documentation. I 
noticed that there isn't much documentation for the InferBound pass, but I 
think it's an interesting part of the code that's fundamental to understanding 
how lowering works in TVM. After all, InferBound is essential to determining 
loop extents and buffer sizes.

So I decided to write documentation for the InferBound pass, both to clarify my 
own understanding, and to help others gain a deeper understanding of this pass 
too.

I want to contribute this documentation to the developer docs, but before 
opening an issue/PR I'd like to solicit feedback from the community. I very 
much appreciate the time anyone takes to read the documentation below and 
provide comments.

## InferBound Overview
The InferBound pass is run after normalize, and before ScheduleOps 
[[build_module.py:308](https://github.com/dmlc/tvm/blob/master/python/tvm/build_module.py#L308
 "build_module.py:308")]. The main job of InferBound is to create the bounds 
map, which specifies a Range for each IterVar in the program. These bounds are 
then passed to ScheduleOps, where they are used to set the extents of For loops 
[[MakeLoopNest](https://github.com/dmlc/tvm/blob/master/src/op/op_util.cc#L98 
"MakeLoopNest")], and to set the sizes of allocated buffers 
[[BuildRealize](https://github.com/dmlc/tvm/blob/master/src/op/compute_op.cc#L241
 "BuildRealize")], among other uses.

The output of InferBound is a map from IterVar to Range:

```cpp
Map<IterVar, Range> InferBound(const Schedule& sch);
```

Therefore, let's review the Range and IterVar classes:
```cpp
namespace HalideIR {
namespace IR {
class RangeNode : public Node {
 public:
  Expr min;
  Expr extent;
  // remainder omitted
};
}}
```
```cpp
namespace tvm {
class IterVarNode : public Node {
 public:
  Range dom;
  Var var;
  // remainder omitted
};
}
```
Note that IterVarNode also contains a Range 'dom'. This dom may or may not have 
a meaningful value, depending on when the IterVar was created. For example, 
when tvm.compute is called, an [IterVar is 
created](https://github.com/dmlc/tvm/blob/master/src/op/compute_op.cc#L82 
"IterVar is created") for each axis and reduce axis, with doms equal to the 
shape supplied in the call to tvm.compute.

On the other hand, when tvm.split is called, [IterVars are 
created](https://github.com/dmlc/tvm/blob/master/src/schedule/schedule_lang.cc#L50
 "IterVars are created") for the inner and outer axes, but these IterVars are 
not given a meaningful 'dom' value.

In any case, the 'dom' member of an IterVar is never modified during 
InferBound. However, keep in mind that the 'dom' member of an IterVar is 
sometimes used as the default value for the Ranges that InferBound computes.

We next review some TVM codebase concepts that are required to understand the 
InferBound pass.

Recall that InferBound takes one argument, a Schedule. This schedule object 
and its members contain all information about the program being compiled.

A TVM schedule is composed of Stages. Each Stage has exactly one Operation, 
e.g., a ComputeOp or a TensorComputeOp. Each operation has a list of 
root_iter_vars, which in the case of ComputeOp consists of the axis IterVars 
and the reduce-axis IterVars. An operation can also contain many other 
IterVars, but all of them are related by the operation's list of 
IterVarRelations. Each IterVarRelation represents a split, compute_at (rebase), 
or fuse in the schedule. For example, in the case of split, the IterVarRelation 
specifies the parent IterVar that was split, and the two children IterVars: 
inner and outer.

```cpp
namespace tvm {
class ScheduleNode : public Node {
 public:
  Array<Operation> outputs;
  Array<Stage> stages;
  Map<Operation, Stage> stage_map;
  // remainder omitted
};

class StageNode : public Node {
 public:
  Operation op;
  Operation origin_op;
  Array<IterVar> all_iter_vars;
  Array<IterVar> leaf_iter_vars;
  Array<IterVarRelation> relations;
  // remainder omitted
};

class OperationNode : public Node {
 public:
  virtual Array<IterVar> root_iter_vars();
  virtual Array<Tensor> InputTensors();
  // remainder omitted
};

class ComputeOpNode : public OperationNode {
 public:
  Array<IterVar> axis;
  Array<IterVar> reduce_axis;
  Array<Expr> body;
  Array<IterVar> root_iter_vars();
  // remainder omitted
};
}
```
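To get a feel for how bounds flow through an IterVarRelation, here is a toy version of what InferBound derives for a split. This is illustrative Python only; the real pass operates on the C++ structures above.

```python
def infer_split_bounds(parent_extent, factor):
    """Ranges a split produces: inner iterates over [0, factor) and
    outer over [0, ceil(parent_extent / factor)), together covering
    (and possibly overshooting) the parent domain."""
    outer_extent = -(-parent_extent // factor)  # ceiling division
    return {"outer": (0, outer_extent), "inner": (0, factor)}

bounds = infer_split_bounds(100, 16)
# outer gets extent 7 (7 * 16 = 112 >= 100), inner gets extent 16;
# the overshoot is why a split can introduce boundary conditions.
```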

Tensors haven't been mentioned yet, but in the context of TVM, a Tensor 
represents the output of an operation.

```cpp
class TensorNode : public Node {
 public:
  // The source operation; can be None.
  // This Tensor is an output of that op.
  Operation op;
  // The output index of the op that this Tensor corresponds to.
  int value_index;
  // remainder omitted
};
```

[TVM Discuss] [Development] Disable assert in runtime

2019-04-09 Thread Baowenlei via TVM Discuss


Hi there,

In the current TVM, src/codegen/llvm/codegen_cpu.cc generates assert 
statements. It would be nice to have a build-config option to disable these 
runtime TVM asserts.

Below is just an example of disabling them with an environment variable; it 
would be better to use a build config to control it.

Please let me know your thoughts.
Thanks,
-W



```
diff --git a/src/codegen/llvm/codegen_cpu.cc b/src/codegen/llvm/codegen_cpu.cc
index fcad0f7b..5842e2ed 100644
--- a/src/codegen/llvm/codegen_cpu.cc
+++ b/src/codegen/llvm/codegen_cpu.cc
@@ -705,25 +705,32 @@ llvm::Value* CodeGenCPU::CreateIntrinsic(const Call* op) {
 }
 
 void CodeGenCPU::VisitStmt_(const AssertStmt* op) {
-  using llvm::BasicBlock;
-  llvm::Value* cond = MakeValue(op->condition);
-  std::ostringstream os;
-  os << "Assert fail: " << op->condition;
-  if (op->message.as<StringImm>()) {
-    os << ", " << op->message.as<StringImm>()->value;
-  }
-  llvm::Value* msg = GetConstString(os.str());
-  BasicBlock* fail_block = BasicBlock::Create(
-      *ctx_, "assert_fail", function_);
-  BasicBlock* end_block = BasicBlock::Create(
-      *ctx_, "assert_end", function_);
-  builder_->CreateCondBr(cond, end_block, fail_block, md_very_likely_branch_);
-  // fail condition.
-  builder_->SetInsertPoint(fail_block);
-  builder_->CreateCall(RuntimeTVMAPISetLastError(), {msg});
-  builder_->CreateRet(ConstInt32(-1));
-  // otherwise set it to be new end.
-  builder_->SetInsertPoint(end_block);
+#ifndef NDEBUG
+  bool use_tvm_asserts = true;
+#else
+  bool use_tvm_asserts = (std::getenv("TVM_USE_ASSERT_STMT") != nullptr);
+#endif  // !NDEBUG
+  if (use_tvm_asserts) {
+    using llvm::BasicBlock;
+    llvm::Value* cond = MakeValue(op->condition);
+    std::ostringstream os;
+    os << "Assert fail: " << op->condition;
+    if (op->message.as<StringImm>()) {
+      os << ", " << op->message.as<StringImm>()->value;
+    }
+    llvm::Value* msg = GetConstString(os.str());
+    BasicBlock* fail_block = BasicBlock::Create(
+        *ctx_, "assert_fail", function_);
+    BasicBlock* end_block = BasicBlock::Create(
+        *ctx_, "assert_end", function_);
+    builder_->CreateCondBr(cond, end_block, fail_block, md_very_likely_branch_);
+    // fail condition.
+    builder_->SetInsertPoint(fail_block);
+    builder_->CreateCall(RuntimeTVMAPISetLastError(), {msg});
+    builder_->CreateRet(ConstInt32(-1));
+    // otherwise set it to be new end.
+    builder_->SetInsertPoint(end_block);
+  }
   CodeGenLLVM::VisitStmt_(op);
 }
```





---
[Visit Topic](https://discuss.tvm.ai/t/disable-assert-in-runtime/2152/1) to 
respond.


[TVM Discuss] [Development] Export LookupLLVMIntrinsic for C++ users

2019-04-09 Thread tqchen via TVM Discuss


Since TVM depends on LLVM, perhaps one way to do so is to directly use the 
related function in LLVM. Alternatively, you can get the feature by obtaining 
the corresponding PackedFunc from the global registry and calling it from there.





---
[Visit 
Topic](https://discuss.tvm.ai/t/export-lookupllvmintrinsic-for-c-users/2111/4) 
to respond.


[TVM Discuss] [Development] Disable assert in runtime

2019-04-09 Thread tqchen via TVM Discuss


This is a reasonable feature that I agree we could put into a build config 
option. Contributions are welcome!





---
[Visit Topic](https://discuss.tvm.ai/t/disable-assert-in-runtime/2152/2) to 
respond.


[TVM Discuss] [Development] Export LookupLLVMIntrinsic for C++ users

2019-04-09 Thread Baowenlei via TVM Discuss


Thanks for the reply. I totally understand your point, but I believe there is 
a point to exporting LookupLLVMIntrinsic in C++: it could open the door to 
fine-grained control, letting users directly call the LLVM intrinsics they 
want, e.g. AVX2-related instructions. Users would have more flexibility, IMHO.
Besides, it is already exported in Python, if I am correct.

Thanks,
-W





---
[Visit 
Topic](https://discuss.tvm.ai/t/export-lookupllvmintrinsic-for-c-users/2111/3) 
to respond.


Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Theodore Omtzigt
+1

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481411885

[TVM Discuss] [Development] Google lasted work: MLIR Primer

2019-04-09 Thread tqchen via TVM Discuss


Good discussions here. The design principle of the TVM stack is to "be 
intelligent and pragmatic". This means we want as much automation as possible, 
but we also provide ways to make use of human domain information, such as 
schedule templates and tensorized micro-kernels, when necessary. We will 
likely continue to use this principle.





---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/26) 
to respond.


Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Tianqi Chen
+1. I think we agreed on Python 3.5 support to be safe, because many users' 
systems still only have 3.5 installed.

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481419067

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread 雾雨魔理沙
+1. We (the Relay crew) often find not having py3 features (f-strings, 
keyword-only arguments, type annotations, nonlocal) inconvenient. Can we get 
it to 3.6? f-strings (a 3.6 feature) are used heavily in the Relay 
ahead-of-time compiler that will be open-sourced soon.
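For reference, trivial examples of the Python-3-only features mentioned above:

```python
def describe(x, *, verbose=False):   # bare '*' makes verbose keyword-only (3.0+)
    label: str = "value"             # variable annotation (3.6+)
    msg = f"{label}={x}"             # f-string (3.6+)
    return msg + " (verbose)" if verbose else msg

def counter():
    n = 0
    def bump():
        nonlocal n                   # rebind the enclosing n (3.0+)
        n += 1
        return n
    return bump
```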

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481417065

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Haichen Shen
+1

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481432475

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Zhi
+1

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481453561

Re: [dmlc/tvm] [RFC][Graph Tuner] Graph level auto-tuning (#1585)

2019-04-09 Thread Yao Wang
@FrozenGene Data of "apply_history_best" updated.
@yzhliu Updated some implementation details.

-- 
https://github.com/dmlc/tvm/issues/1585#issuecomment-481464375

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Leyuan Wang
+1

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481469077

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Siva
+1

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481480998

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Jared Roesch
👍 

-- 
https://github.com/dmlc/tvm/issues/2994#issuecomment-481482279

Re: [dmlc/tvm] [VOTE] Apache Transition Plan (#2973)

2019-04-09 Thread Jared Roesch
+1

-- 
https://github.com/dmlc/tvm/issues/2973#issuecomment-481488643

[TVM Discuss] [Development] Google lasted work: MLIR Primer

2019-04-09 Thread Vinod Grover via TVM Discuss


Actually, the current MLIR document says that the polyhedral IR is an 
experimental dialect of MLIR. I find it a bit odd that they would call it 
"experimental".

BTW, I presented polyhedral compilation of ML graphs at C4ML, and I think that 
polyhedral and functional approaches like Relay IR are the way to go, though I 
think Relay goes too far on the functional side (e.g. recursion and lists). 
That is not bad; just more work needs to be done there.





---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/27) 
to respond.


[TVM Discuss] [Development] [DISCUSS] Contributing new docs for InferBound

2019-04-09 Thread Jared Roesch via TVM Discuss


Haven't had the chance to read the full post yet, but I wanted to say this 
looks great, and is the kind of thing we need more of! We've been chatting 
about outlining a new structure for the docs, and I think this kind of 
technical documentation would be good to put into a dev's guide to working on 
TVM.

I think it would be good to put this in .rst and open a PR if possible.

Will follow up with more later.





---
[Visit 
Topic](https://discuss.tvm.ai/t/discuss-contributing-new-docs-for-inferbound/2151/2)
 to respond.


[TVM Discuss] [Development] [DISCUSS] Contributing new docs for InferBound

2019-04-09 Thread tqchen via TVM Discuss


Looks good. We can also highlight the design considerations besides the 
current restrictions.

For example, the usage of IntSet as an abstraction is an important design 
choice. While the current realization mainly depends on intervals, we can 
certainly improve the integer-set representation without changing any of the 
current logic.





---
[Visit 
Topic](https://discuss.tvm.ai/t/discuss-contributing-new-docs-for-inferbound/2151/3)
 to respond.
