My desktop doesn't have such big and small cores, so I am not able to reproduce the result.
I did see that the performance improves as the number of cores increases.
However, small clusters outperforming big clusters still makes no sense to me.
May I kindly ask if anyone has any thoughts on this?
@ZephyrSails
I guess you can take a look at
https://discuss.tvm.apache.org/t/can-tvm-split-work-into-different-layers-and-assign-layers-into-different-cores/11161/10?u=popojames
I think this is what you are looking for.
---
Hello,
This is a continuation of [Use all cores in a big.LITTLE architecture](https://discuss.tvm.apache.org/t/use-all-cores-in-a-big-little-architecture/8474/6):
I am wondering how we can adjust the CPU affinity and the number of threads **locally**, without using the "remote setting".
To be more precise: I am working on a HiKey 970 with 4 little cores (core ids 0-3) and 4 big cores (core ids 4-7). I am splitting the entire Relay graph into subgraphs and running them in a pipeline.
I know that with the following command, I can control the CPU number
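The command itself is cut off; it is presumably the `runtime.config_threadpool` call quoted later in this thread. A minimal sketch of the remote (RPC) version, assuming an established RPC session `remote` and the 4+4 core layout described above:

```
config_threadpool = remote.get_function('runtime.config_threadpool')

# affinity_mode: kBig = 1, kLittle = -1, kDefault = 0
config_threadpool(1, 4)   # run 4 threads pinned to the big cluster
config_threadpool(-1, 4)  # run 4 threads pinned to the little cluster
```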
After rebuilding the entire TVM and trying it again, I get the normal result now.
This time I used the new TVM build, and this is what I got.
Note: affinity_mode: kBig = 1, kLittle = -1, kDefault = 0. Pass 1 or -1 to control the cores.
H=512 L=12 BERT
affinity mode is: -1 , core number is: 2
According to this post:
[quote="FrozenGene, post:5, topic:8474"]
I assume you have got the 'remote' handle correctly. Then we could get the func:
```
config_threadpool = remote.get_function('runtime.config_threadpool')
# affinity_mode: kBig = 1, kLittle = -1, kDefault = 0. Pass 1 or -1 to control the big/little cores.
config_threadpool(affinity_mode, num_threads)
```
[/quote]
I fixed this problem myself.
I was able to set it locally, without the RPC remote.

What I did is very similar to the aforementioned setting: hooking the C++ function into Python.
[quote="popojames, post:3, topic:11306"]
I tried to create a function
[/quote]
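A minimal sketch of the local version, assuming the same `runtime.config_threadpool` packed function is registered in the local TVM runtime (it is the function the RPC path ends up calling):

```
import tvm

# Fetch the packed function from the local runtime instead of an RPC session.
config_threadpool = tvm.get_global_func('runtime.config_threadpool')

# affinity_mode: kBig = 1, kLittle = -1, kDefault = 0
config_threadpool(-1, 2)  # e.g. 2 threads pinned to the little cluster
```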
Hello @hjiang,
I have followed your work and made some extensions to support the splitting I mentioned in the previous discussions:
https://discuss.tvm.apache.org/t/setting-the-cpu-affinity-and-number-of-cores-locally-without-rpc-remote/11306/4?u=popojames.
https://discuss.tvm.apache.org
Thanks for replying. I will try it out and keep an eye on the new patches.
:slight_smile:
---
Hello @hjiang,
Thanks for answering. I was able to feed the params into the model by adopting your change.
But I added one more change: I changed the following code
> if params:
>     for param in params:
>         self.graph_modules_[mod_idx].set_input(**params)

into the following:
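The replacement code itself is cut off here. A plausible reconstruction (my assumption, not the confirmed original), in which each parameter is set by name on the sub-module instead of unpacking the whole dict on every iteration:

```
# Hypothetical reconstruction of the changed snippet.
if params:
    for name, value in params.items():
        # set one named parameter at a time on the sub-module
        self.graph_modules_[mod_idx].set_input(name, value)
```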
Hello TVM developers and community,
I have been working on running inference with TVM on CPU only.
Specifically, I am working on ARM big.LITTLE CPU cores.
I am wondering: is it possible for TVM to capture the communication cost between the big cores and the little cores of an ARM big.LITTLE CPU?
Hello TVM developers and community,
I am trying to convert Transformer-like models such as BERT from different platforms (TensorFlow or PyTorch) to Relay models.
For the TensorFlow models, I was able to convert them into Relay models successfully by referring to this tutorial:
[Deploy a Hugging Face Pruned Model on CPU]
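For the PyTorch side, the route discussed later in this thread is to trace the model with `torch.jit.trace` and convert the trace with `relay.frontend.from_pytorch`. A minimal sketch, assuming `model` is a BERT-style classifier that already returns plain tensors:

```
import torch
from tvm import relay

# Trace the model on a dummy input, then convert the trace to Relay.
input_ids = torch.randint(0, 30000, (1, 128))
traced = torch.jit.trace(model, input_ids).eval()

shape_list = [("input_ids", input_ids.shape)]
mod, params = relay.frontend.from_pytorch(traced, shape_list)
```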
Hello @AndrewZhaoLuo, @masahi, thanks for your answers.
@AndrewZhaoLuo: yes, I can definitely try converting the model → ONNX → Relay, but I still want to try PyTorch for now.
@masahi: I have used `torch.jit.trace` to produce the traced model, and it looks normal:
> SqueezeBertForSequenceClassification
Update:
According to [PyTorch convert function for op 'dictconstruct' not implemented · Issue #1157 · apple/coremltools (github.com)](https://github.com/apple/coremltools/issues/1157), I changed my code from
> model = transformers.SqueezeBertForSequenceClassification(config)

into
>
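The replacement line is cut off. Based on the linked coremltools issue, the usual fix is to make the model return tuples instead of a dict before tracing, e.g. via the `torchscript` config flag (an assumption here, not the confirmed original):

```
import transformers

# Hypothetical fix: torchscript=True makes forward() return a tuple,
# so the trace contains no prim::DictConstruct node.
config = transformers.SqueezeBertConfig(torchscript=True)
model = transformers.SqueezeBertForSequenceClassification(config)
```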
Same question when trying to convert ConvBERT. Any help?
---
Hello TVM community,
I have a question regarding how to read out intermediate values in the Relay IR.
For a mod that the user creates manually, I know we can set arbitrary outputs with the proper settings.
For example, to read out output_0, output_1, and output_2, we can set:
> data =
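The snippet is cut off. A minimal sketch of the idea, assuming a hand-built Relay function whose body is a tuple of the intermediates to expose (all names here are illustrative):

```
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 16))
output_0 = relay.nn.relu(data)
output_1 = relay.add(output_0, relay.const(1.0))
output_2 = relay.multiply(output_1, relay.const(2.0))

# Returning a tuple makes all three intermediates graph outputs.
func = relay.Function([data], relay.Tuple([output_0, output_1, output_2]))
mod = tvm.IRModule.from_expr(func)
```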
Thanks for your reply.
I will take a look at how to use the debug_executor to enable such a function.
Also, I am asking this question because it is somehow related to pipeline execution.
Thus, I am still wondering: is it possible for the user to register operations in the Relay IR as new outputs?
May I kindly ask if anyone has any thoughts on that?
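For the debug_executor route mentioned above, a minimal sketch, assuming `lib` comes from `relay.build` and `dev` is the target device; the debug executor dumps every node's output, so intermediates can be inspected without modifying the graph:

```
from tvm.contrib.debugger import debug_executor

# lib = relay.build(mod, target, params=params); dev = tvm.cpu(0)
m = debug_executor.create(lib.get_graph_json(), lib, dev, dump_root="/tmp/tvmdbg")
m.set_input("data", input_data)
m.run()  # per-node outputs are dumped under dump_root
```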
---
Hello @masahi,
I have asked my questions before and found the solution myself in
[quote="masahi, post:2, topic:11978"]
Issue: Converting model from pytorch to relay model - #5 by popojames
[/quote]
Now I am facing the prim::DictConstruct issue again, because I am customizing the BERT model with a BERT config.
Hello @masahi,
I have tried using your TraceWrapper and it works on the **BertForSequenceClassification** model.
Thanks :)
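For readers hitting the same prim::DictConstruct error, the wrapper pattern from this thread returns a tensor instead of a dict before tracing. A minimal sketch (the dict key, input shape, and `model` are illustrative):

```
import torch

class TraceWrapper(torch.nn.Module):
    """Wrap a HuggingFace model so the trace sees a tensor, not a dict."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids):
        out = self.model(input_ids)
        return out["logits"]  # unwrap the dict output

wrapped = TraceWrapper(model).eval()
traced = torch.jit.trace(wrapped, torch.randint(0, 30000, (1, 128)))
```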
---