@aakah18151 We don't yet have good enough debugging here for me to be certain, but based on your stack trace, and given that the failure is inside

>`   [bt] (6) 
>/tvm_micro_with_debugger/tvm/build/libtvm.so(tvm::runtime::RPCClientSession::AllocDataSpace(DLContext,
> unsigned long, unsigned long, DLDataType)+0x2b7) [0x7f7abd9b11b7]`

I suspect your runtime doesn't reserve enough memory for that model (or fragmentation caused by the JSON parsing is making the model require much more memory than it should). We are working to address this with the AOT runtime and memory-planner improvements. In the meantime, see if you can increase the amount of global memory made available in `main.c`, or switch to the Zephyr heap allocator as demonstrated 
[here](https://github.com/areusch/microtvm-blogpost-eval/blob/master/runtimes/zephyr/src/main.c#L196).

---
[Visit 
Topic](https://discuss.tvm.apache.org/t/measuring-utvm-inference-time/9064/6) 
to respond.
