[Apache TVM Discuss] [Questions] Benchmark results do not behave as expected

2021-10-22 Thread popojames via Apache TVM Discuss
My desktop doesn't have such big and little cores, so I am not able to reproduce the result. I did see that performance improves as the number of cores increases. However, small clusters outperforming big clusters still makes no sense to me. May I kindly ask if there are any thoughts on this?

[Apache TVM Discuss] [Questions] How to manually control CPU affinity in a multithreading scenario?

2021-10-22 Thread popojames via Apache TVM Discuss
@ZephyrSails I suggest you take a look at https://discuss.tvm.apache.org/t/can-tvm-split-work-into-different-layers-and-assign-layers-into-different-cores/11161/10?u=popojames. I think this is what you are looking for. --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-manually-con

[Apache TVM Discuss] [Questions] Setting the CPU affinity and number of cores locally without RPC Remote

2021-10-23 Thread popojames via Apache TVM Discuss
Hello, this is a continuing discussion from [Use all cores in a big.LITTLE architecture](https://discuss.tvm.apache.org/t/use-all-cores-in-a-big-little-architecture/8474/6). I am wondering how we can adjust the CPU affinity and the number of threads **locally**, without using the "remote setting".

[Apache TVM Discuss] [Questions] Setting the CPU affinity and number of cores locally without RPC Remote

2021-10-24 Thread popojames via Apache TVM Discuss
To be more precise about this question: I am working on a HiKey 970 with 4 little cores (core IDs 0-3) and 4 big cores (core IDs 4-7). I am splitting the entire Relay graph into subgraphs and running them in a pipeline. I know that with the following command, I can control the CPU number

[Apache TVM Discuss] [Questions] Benchmark results do not behave as expected

2021-10-26 Thread popojames via Apache TVM Discuss
After rebuilding the entire TVM and trying again, I now get the expected result. This time I used the new TVM build, and this is what I got. Note: affinity_mode: kBig = 1, kLittle = -1, kDefault = 0; pass 1 or -1 to control the cores. H=512 L=12 BERT: affinity mode is -1, core number is 2

[Apache TVM Discuss] [Questions] Setting the CPU affinity and number of cores locally without RPC Remote

2021-10-26 Thread popojames via Apache TVM Discuss
According to this post: [quote="FrozenGene, post:5, topic:8474"] I assume you have got 'remote' handle correctly. Then we could get the func:

```
config_threadpool = remote.get_function('runtime.config_threadpool')
# affinity_mode: kBig = 1, kLittle = -1. kDefault = 0. pass 1 or -1 to control
```
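The quoted snippet can be fleshed out into a short sketch. The `runtime.config_threadpool` name and the affinity codes come from the quoted post; the `remote` handle, the helper name, and the default thread count are assumptions for illustration:

```python
# Affinity codes as quoted in the post: kBig = 1, kLittle = -1, kDefault = 0.
AFFINITY_MODES = {"kBig": 1, "kLittle": -1, "kDefault": 0}

def configure_remote_threadpool(remote, mode="kBig", num_threads=4):
    """Fetch runtime.config_threadpool from an RPC session and apply it.

    `remote` is assumed to be a valid tvm.rpc session handle obtained
    elsewhere (e.g. via rpc.connect); this sketch does not create one.
    """
    config_threadpool = remote.get_function("runtime.config_threadpool")
    config_threadpool(AFFINITY_MODES[mode], num_threads)
```

The mode-to-code mapping is the part worth keeping handy; the call itself is a thin wrapper over the packed function.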

[Apache TVM Discuss] [Questions] Setting the CPU affinity and number of cores locally without RPC Remote

2021-10-27 Thread popojames via Apache TVM Discuss
I fixed this problem myself. I was able to set it locally without an RPC remote. What I did is very similar to the aforementioned setting: hooking C++ into Python. [quote="popojames, post:3, topic:11306"] I tried to create a function
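Since `runtime.config_threadpool` is a registered global function, the local (no-RPC) variant likely amounts to fetching it with `tvm.get_global_func` instead of `remote.get_function`. A minimal sketch, assuming a local TVM build that registers this function (the helper name and defaults are invented):

```python
def configure_local_threadpool(affinity_mode=-1, num_threads=2):
    """Call runtime.config_threadpool on the local device, no RPC session.

    Sketch only: assumes TVM is installed and exposes this global func
    locally, mirroring what the RPC path does on a remote device.
    """
    import tvm  # imported inside so the sketch stays self-contained

    config_threadpool = tvm.get_global_func("runtime.config_threadpool")
    config_threadpool(affinity_mode, num_threads)  # e.g. kLittle, 2 threads
```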

[Apache TVM Discuss] [Questions] How to set parameter into pipeline module

2021-11-01 Thread popojames via Apache TVM Discuss
Hello @hjiang, I have followed your work and made some extensions to support the splitting I mentioned in the previous discussions: https://discuss.tvm.apache.org/t/setting-the-cpu-affinity-and-number-of-cores-locally-without-rpc-remote/11306/4?u=popojames. https://discuss.tvm.apache.org

[Apache TVM Discuss] [Questions] How to set parameter into pipeline module

2021-11-02 Thread popojames via Apache TVM Discuss
Thanks for replying, I will try it and keep an eye on the new patches. :slight_smile: --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-set-parameter-into-pipeline-module/11375/3) to respond. You are receiving this because you enabled mailing list mode. To unsubscribe from these emails

[Apache TVM Discuss] [Questions] How to set parameter into pipeline module

2021-11-02 Thread popojames via Apache TVM Discuss
Hello @hjiang, thanks for answering. I was able to feed the params into the model by adopting your change, but I added one more change. I changed the following code

> if params:
>     for param in params:
>         self.graph_modules_[mod_idx].set_input(**params)

into the following:
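The quoted loop ignores the loop variable and re-sends the whole `params` dict on every iteration. A toy mock (entirely hypothetical, standing in for `graph_modules_[mod_idx]`) makes the redundancy visible:

```python
class MockGraphModule:
    """Stand-in for a pipeline graph module; records set_input calls."""

    def __init__(self):
        self.inputs = {}
        self.calls = 0

    def set_input(self, **params):
        self.calls += 1
        self.inputs.update(params)


params = {"w1": 1, "w2": 2, "w3": 3}

# Quoted form: the loop body never uses `param`, so the full dict is
# re-applied once per parameter.
mod = MockGraphModule()
for param in params:
    mod.set_input(**params)
assert mod.calls == len(params)  # 3 redundant full updates

# A single call is enough to set every parameter exactly once.
mod2 = MockGraphModule()
mod2.set_input(**params)
assert mod2.calls == 1 and mod2.inputs == params
```

The post truncates before showing the author's actual replacement, so this only illustrates why the original loop is worth changing, not the exact fix adopted.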

[Apache TVM Discuss] [Questions] Can TVM capture the communication cost between Big core and Little core of ARM Big Little CPU?

2021-11-18 Thread popojames via Apache TVM Discuss
Hello TVM developers and community, I have been working on running inference with TVM on CPU only, specifically on ARM big.LITTLE CPU cores. I am wondering: is it possible for TVM to capture the communication cost between a big core and a little core of ARM

[Apache TVM Discuss] [Questions] Issue: Converting model from pytorch to relay model

2021-11-23 Thread popojames via Apache TVM Discuss
Hello TVM developers and community, I am trying to convert Transformer-like models such as BERT from different platforms (TensorFlow or PyTorch) to Relay models. For TensorFlow models, I was able to convert them into Relay models successfully by referring to this tutorial: [Deploy a Huggin

[Apache TVM Discuss] [Questions] Issue: Converting model from pytorch to relay model

2021-11-23 Thread popojames via Apache TVM Discuss
Hello @AndrewZhaoLuo @masahi, thanks for your answers. @AndrewZhaoLuo Yes, I can definitely try converting the model → ONNX → Relay, but I still want to try PyTorch for now. @masahi I have used `torch.jit.trace` to produce a traced model, and it looks normal: > SqueezeBertForSequenceCl
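The trace-then-import flow being discussed can be sketched as below. The input name and shapes are placeholders, not taken from the post; it assumes `torch` and `tvm` are installed:

```python
def trace_to_relay(model, example_input, input_name="input_ids"):
    """Trace a PyTorch model and import it into Relay.

    Sketch only: `input_name` must match how the Relay frontend will
    bind the traced graph input; the example tensor defines the shape.
    """
    import torch
    from tvm import relay

    model.eval()
    with torch.no_grad():
        traced = torch.jit.trace(model, example_input)
    shape_list = [(input_name, tuple(example_input.shape))]
    mod, params = relay.frontend.from_pytorch(traced, shape_list)
    return mod, params
```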

[Apache TVM Discuss] [Questions] Issue: Converting model from pytorch to relay model

2021-11-23 Thread popojames via Apache TVM Discuss
Update: according to [PyTorch convert function for op 'dictconstruct' not implemented · Issue #1157 · apple/coremltools (github.com)](https://github.com/apple/coremltools/issues/1157), after changing my code from > model = transformers.SqueezeBertForSequenceClassification(config) into >
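The post truncates before showing the changed line, so the exact fix is elided. One common workaround for this class of error (an assumption here, not necessarily what the author did) is to build the HuggingFace config with `torchscript=True`, which makes the model return tuples instead of dicts, so tracing never emits `prim::DictConstruct`:

```python
def build_traceable_model():
    """Sketch of a common DictConstruct workaround (assumed, post truncated).

    torchscript=True tells HuggingFace transformers models to return
    plain tuples rather than ModelOutput dicts, which torch.jit.trace
    can handle.
    """
    import transformers

    config = transformers.SqueezeBertConfig(torchscript=True)
    return transformers.SqueezeBertForSequenceClassification(config)
```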

[Apache TVM Discuss] [Questions] NotImplementedError: The following operators are not implemented: ['aten::im2col']

2021-11-25 Thread popojames via Apache TVM Discuss
Same question when trying to convert ConvBERT. Any help? --- [Visit Topic](https://discuss.tvm.apache.org/t/notimplementederror-the-following-operators-are-not-implemented-aten-im2col/10334/2) to respond.

[Apache TVM Discuss] [Questions] How to read out the intermediate value in Relay IR?

2022-02-14 Thread popojames via Apache TVM Discuss
Hello TVM community, I have a question about how to read out intermediate values in Relay IR. For a mod that the user creates manually, I know we can expose arbitrary outputs with the proper setting. For example, to read out output_0, output_1, and output_2, we can set: > data =
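The "proper setting" for a hand-built mod is presumably returning the intermediates in a tuple, along these lines. The names output_0/1/2 mirror the post; the ops, shapes, and constants are invented for illustration:

```python
def build_multi_output_module():
    """Sketch: expose intermediate values by returning them in a relay.Tuple.

    Assumes TVM is installed; each tuple field becomes a separate output
    of the compiled module, readable with get_output(0), get_output(1), ...
    """
    import tvm
    from tvm import relay

    data = relay.var("data", shape=(1, 4), dtype="float32")
    output_0 = relay.nn.relu(data)
    output_1 = relay.add(output_0, relay.const(1.0))
    output_2 = relay.multiply(output_1, relay.const(2.0))
    func = relay.Function([data], relay.Tuple([output_0, output_1, output_2]))
    return tvm.IRModule.from_expr(func)
```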

[Apache TVM Discuss] [Questions] How to read out the intermediate value in Relay IR?

2022-02-15 Thread popojames via Apache TVM Discuss
Thanks for your reply. I will take a look at how to use the debug_executor to enable such a function. Also, I am asking because this is related to pipeline execution. Thus, I am still wondering whether the user can register operations in Relay IR as new outputs
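For reference, the debug_executor route mentioned in the reply can be sketched as follows. It assumes an already-compiled graph (`graph_json`, `lib`) and a TVM device handle; the parameter names are illustrative:

```python
def read_intermediate(graph_json, lib, dev, node_index, out_buffer):
    """Sketch: read an intermediate tensor via TVM's debug executor.

    Assumes tvm is installed; debug_get_output copies the output of the
    node at `node_index` into `out_buffer` (a pre-allocated tvm.nd.array).
    """
    from tvm.contrib.debugger import debug_executor

    m = debug_executor.create(graph_json, lib, dev)
    m.run()
    return m.debug_get_output(node_index, out_buffer)
```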

[Apache TVM Discuss] [Questions] How to read out the intermediate value in Relay IR?

2022-02-17 Thread popojames via Apache TVM Discuss
May I kindly ask if anyone has any thoughts on this? --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-read-out-the-intermediate-value-in-relay-ir/12084/5) to respond.

[Apache TVM Discuss] [Questions] How to deal with prim::DictConstruct

2022-06-14 Thread popojames via Apache TVM Discuss
Hello @masahi, I previously asked my question and found the solution myself in [quote="masahi, post:2, topic:11978"] Issue: Converting model from pytorch to relay model - #5 by popojames [/quote] I am facing the prim::DictConstruct issue again, because I am customizing a BERT model with a BERT config

[Apache TVM Discuss] [Questions] How to deal with prim::DictConstruct

2022-06-14 Thread popojames via Apache TVM Discuss
Hello @masahi, I have tried using your TraceWrapper and it works on the **BertForSequenceClassification** model. Thanks :) --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-deal-with-prim-dictconstruct/11978/6) to respond.
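For readers following along, the TraceWrapper idea referenced here can be sketched as below. This is a generic reconstruction, not masahi's exact code; the output key `"logits"` is an assumption:

```python
def make_trace_wrapper(model, key="logits"):
    """Sketch of the TraceWrapper pattern: wrap a model whose forward
    returns a dict and extract one tensor, so torch.jit.trace only ever
    sees tensors and never emits prim::DictConstruct.
    """
    import torch

    class TraceWrapper(torch.nn.Module):
        def __init__(self, inner):
            super().__init__()
            self.inner = inner

        def forward(self, *args):
            out = self.inner(*args)
            return out[key]  # assumed dict key; adjust for your model

    return TraceWrapper(model)
```

Tracing `make_trace_wrapper(model)` instead of `model` then proceeds with the usual `torch.jit.trace` flow.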