For `alter_op_layout`, we alter the weight layout. Normally we change it to 5D, where the last dimension is queried from our AutoTVM log file. For example:
```
if topi_tmpl == "conv2d_nchw_spatial_pack.arm_cpu":
    assert data_layout == "NCHW" and kernel_layout == "OIHW"
```
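To make the 5D layout concrete, here is a small NumPy sketch (an illustration only, not the TVM implementation; the shapes and the tile size `vc` are assumed, where in practice `vc` would be read from the tuned `tile_co` split in the log):

```
import numpy as np

# Assumed OIHW kernel shape and tile size; in practice vc comes from
# the AutoTVM log (e.g. cfg['tile_co'].size[-1]).
O, I, H, W = 32, 16, 3, 3
vc = 8
kernel = np.random.randn(O, I, H, W).astype("float32")

# OIHW -> OIHW<vc>o: split the output channels and move the inner
# split to the (new) last dimension.
kernel_5d = kernel.reshape(O // vc, vc, I, H, W).transpose(0, 2, 3, 4, 1)
assert kernel_5d.shape == (O // vc, I, H, W, vc)
```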
Hi @FrozenGene,
I think I see why we don't want to change the layout when there is no workload (no workload means we don't even know the strategy, I think). What I am missing is why we don't want to change the layout when `cfg.is_fallback`. In that case, the strategy is defined, so we know how the weight layout should be altered.
[quote="giuseros, post:11, topic:8253"]
What I am missing is why we don't want to change the layout when `cfg.is_fallback`. In that case, the strategy is defined
[/quote]
When we enter into the fallback configuration, it means we didn't find the configuration of this workload in the tuning log. So…
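For reference, a minimal sketch of the guard being discussed (an illustrative helper, not the actual TVM implementation; `cfg` is assumed to be the AutoTVM config entity queried for the workload):

```
def alter_conv2d_layout(cfg, new_attrs):
    # Workload not found in the tuning log: the knobs hold untrusted
    # defaults, so keep the original NCHW/OIHW layouts.
    if cfg.is_fallback:
        return None
    # Tuned case: encode the last dimension of the output-channel
    # split into a 5D kernel layout.
    vc = cfg["tile_co"].size[-1]
    new_attrs["kernel_layout"] = "OIHW%do" % vc
    return new_attrs
```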
Great to see this tricky topic is being tackled!
As the text above states, "the defacto standard Python package manager is pip". Despite its limitations, at this point in time, that fact is unavoidable. Users expect "pip install foo" to reliably install a working instance of foo.
As a slight aside, on "package managers": we've used pipenv in a number of internal projects. Despite the fact that many of us really like the UI and concept of pipenv, we've recently made the decision to back out of using it due to the widely discussed instability it has with the time taken to resolve dependencies…
## Introduction and motivation
This RFC is the third set of optimizations to enhance quantized convolution on
Arm architectures. To give a brief summary:
* Basic Armv8-A convolution implementation (through gemm):
https://discuss.tvm.apache.org/t/rfc-improve-quantized-convolution-performance-f
cc: @anijain2305, @FrozenGene, @matt-arm, @ramana-arm
Hi all,
I would like to contribute to this project by implementing 8-bit quantization
for 3d convolution. Currently my implementation works fine without auto-tuning.
It is quite similar to what happens in 2D (a sketch of the first step follows the list):
1. Reshape the input data and the kernel such that the convolution computation can be carried out as a matrix multiplication (GEMM)…
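A minimal NumPy sketch of that first step under assumed shapes (unit stride, no padding; this illustrates the idea rather than the poster's actual implementation):

```
import numpy as np

# Assumed NCDHW input and OIDHW kernel shapes.
N, C, D, H, W = 1, 4, 8, 8, 8
O, KD, KH, KW = 16, 3, 3, 3
data = np.random.randint(-128, 128, (N, C, D, H, W)).astype("int8")
kernel = np.random.randint(-128, 128, (O, C, KD, KH, KW)).astype("int8")

# im2col generalized to 3D: gather every receptive field into a row.
OD, OH, OW = D - KD + 1, H - KH + 1, W - KW + 1
cols = np.empty((OD * OH * OW, C * KD * KH * KW), dtype="int8")
idx = 0
for d in range(OD):
    for h in range(OH):
        for w in range(OW):
            cols[idx] = data[0, :, d:d + KD, h:h + KH, w:w + KW].ravel()
            idx += 1

# The convolution is now a single int32-accumulating GEMM.
out = kernel.reshape(O, -1).astype("int32") @ cols.T.astype("int32")
out = out.reshape(O, OD, OH, OW)
```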
Maybe I am wrong, but are you sure that when `cfg.is_fallback`, parameters like `cfg['tile_co']` are not defined? We usually set them to some default values (I think). But even if we don't set them, IIUC they will get "some" value among the possible ones. Am I missing something?
Ah... you are right, @giuseros, sorry I misled you. I remembered wrong before. We will have one default value; it is 1 if I remember correctly. But even though we have one value, the value is not trusted, because we haven't tuned it. Maybe we could fix it to 4 or 8, but I think it does…
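A sketch of the alternative mentioned here, i.e. pinning the split to a fixed width instead of trusting the fallback default of 1 (the helper name and the default are illustrative; the right width would depend on the target's vector registers):

```
def choose_vc(cfg, fixed_vc=8):
    # Fallback config: the split knob holds an untuned default (1),
    # so pin the vector width rather than trusting it.
    if cfg.is_fallback:
        return fixed_vc
    # Tuned config: use the value recorded in the AutoTVM log.
    return cfg["tile_co"].size[-1]
```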
Sure, I think you can add this to the `SimplifyExpr` pass. I thought about this before, but I think the reshape op might be inlined into the fused op.
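For context, a small example of the kind of redundant-reshape pattern such a pass could fold (whether `SimplifyExpr` covers this depends on the TVM version; the shapes are made up):

```
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 16, 16), dtype="float32")
# Back-to-back reshapes: the outer one makes the inner one redundant.
y = relay.reshape(relay.reshape(x, (1, 256)), (256,))
mod = tvm.IRModule.from_expr(relay.Function([x], y))
mod = relay.transform.SimplifyExpr()(mod)
print(mod)  # ideally a single reshape to (256,) remains
```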
@mjs thanks for your reply! I agree we should avoid overly restrictive version constraints in our released packages. My thoughts are that specifying a range for semver packages, and especially for complex dependencies with significant API exposure to tvm (i.e. most frontend packages: tensorflow, to…
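As an illustration of the ranged-constraint approach (the package names and bounds below are hypothetical examples, not tvm's actual requirements):

```
# setup.py fragment: semver-style ranges instead of exact pins.
install_requires = [
    "numpy>=1.16,<2.0",      # stable API; a wide range is safe
    "tensorflow>=2.1,<2.4",  # large API surface exposed to tvm; keep the range tighter
]
```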