For #6 (export stats), I think you're absolutely right; there could be other 
interesting on-device stats (e.g. IRQs triggered, # of function executions, 
etc.). This is also the last item on the roadmap, since it's a bit less planned 
out relative to the others.
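
As a strawman for what collecting such stats could look like on the host side, here's a minimal Python sketch. The class name and stat keys below are hypothetical and purely illustrative — the RFC doesn't fix a schema for exported stats.

```python
from collections import Counter


class DeviceStats:
    """Hypothetical host-side aggregator for stats reported by a device.

    The stat names ("irqs_triggered", "function_executions") are
    illustrative, not part of any agreed-upon schema.
    """

    def __init__(self):
        self._counters = Counter()

    def record(self, name, value=1):
        # Accumulate a counter reported by the device runtime.
        self._counters[name] += value

    def export(self):
        # Return a plain dict snapshot suitable for logging or upload.
        return dict(self._counters)


stats = DeviceStats()
stats.record("irqs_triggered")
stats.record("function_executions", 3)
print(stats.export())  # e.g. {'irqs_triggered': 1, 'function_executions': 3}
```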

On #2, I think some part should run in the presubmit. I don't think we should 
include custom hardware in the TVM presubmit for a few reasons:
1. It's harder for contributors to reproduce errors. Only contributors with 
access to that hardware could resolve CI errors that happen there.
2. Hardware is more prone to heisenbugs, and we shouldn't let their presence or 
absence gate TVM code submission.
3. There are logistical challenges around hosting hardware for a CI. This one 
we can overcome, but we should think about how to place the CI for a given 
piece of hardware closer to those with specific knowledge of it, in case some 
offline troubleshooting is needed.

Right now for the presubmit, I'm thinking that we should run a suite of 
"black-box acceptance tests" against an x86 RPC server running in a child 
process. Those can also serve to validate the C runtime on x86, when compiled 
standalone.
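
To make the black-box idea concrete, here's a minimal sketch of the shape such a test could take: spawn a server in a child process, then talk to it over a socket from the test. The echo server below is just a stand-in for the standalone x86 RPC server (the real thing would exercise the C runtime's RPC protocol), and the port number and length-prefixed framing are illustrative assumptions.

```python
import socket
import struct
import time
from multiprocessing import Process


def serve(port):
    """Stand-in for the x86 RPC server: echo back one length-prefixed message."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            header = conn.recv(4)
            (length,) = struct.unpack("<I", header)
            body = b""
            while len(body) < length:
                body += conn.recv(length - len(body))
            conn.sendall(header + body)


def run_acceptance_test(port=39017):
    """Black-box test driver: launch the server in a child process, round-trip a message."""
    child = Process(target=serve, args=(port,))
    child.start()
    try:
        # Retry until the child process has bound its listening socket.
        for _ in range(100):
            try:
                sock = socket.create_connection(("127.0.0.1", port), timeout=1)
                break
            except ConnectionRefusedError:
                time.sleep(0.05)
        with sock:
            msg = b"hello-runtime"
            sock.sendall(struct.pack("<I", len(msg)) + msg)
            header = sock.recv(4)
            (length,) = struct.unpack("<I", header)
            reply = b""
            while len(reply) < length:
                reply += sock.recv(length - len(reply))
        return reply
    finally:
        child.join(timeout=5)


if __name__ == "__main__":
    assert run_acceptance_test() == b"hello-runtime"
```

The point of the child-process setup is that the test only sees the server's external protocol, so the same driver could later point at a real device instead of the x86 stand-in.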

I do think some regular automated job against hardware is important, too. I 
need to think a bit more about how we might put this together--open to thoughts 
from the community as well!

---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-standalone-tvm-roadmap/6987/5) 
to respond.
