Adding notes from a few video chats, so that there is a record of the discussion

> From @tkonolige: confirmed that the current implementation of
> `@tvm.testing.parametrize_targets` shows skipped targets if they are
> explicitly listed in the decorator, but not if they come from the
> `TVM_TEST_TARGETS` environment variable.

> From @tkonolige: some of the unit tests require a significant amount of setup
> before the loop over `enabled_targets()`, and repeating that setup for many
> targets would increase the runtime of the tests.  I agree, this would
> definitely be an issue, and there should be a recommended style to avoid
> duplicating work.  I think the best style would be to use xunit-style
> `setup_class` to perform the setup, then have the test methods within the
> class implemented using parametrized fixtures.  I'll test out a few options
> and get back on it.
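
As a rough illustration of that style (not TVM's actual helpers — the target list below is a stand-in for `enabled_targets()`, and the expensive setup is hypothetical), the expensive work can run once in `setup_class` while the parametrized test methods reuse it per target:

```python
import pytest

# Stand-in for tvm.testing.enabled_targets(); hypothetical for this sketch.
TARGETS = ["llvm", "cuda", "vulkan"]


class TestExpensiveOp:
    @classmethod
    def setup_class(cls):
        # Expensive setup runs once for the whole class,
        # not once per parametrized target.
        cls.inputs = list(range(1024))
        cls.expected = [x * 2 for x in cls.inputs]

    @pytest.mark.parametrize("target", TARGETS)
    def test_doubling(self, target):
        # Each parametrized case reuses cls.inputs/cls.expected
        # instead of repeating the setup.
        result = [x * 2 for x in self.inputs]
        assert result == self.expected
```

With this layout, pytest reports one test result per target while the setup cost is paid only once per class.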

> From @areusch: recommended using `xfail` instead of `skip` for tests marked
> as known failing.  I agree, and will make that change to the PR.
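
The practical difference in pytest (a generic sketch, not TVM's marking mechanism): a skipped test never runs, while an `xfail` test still runs and is reported as XPASS if it unexpectedly starts passing, so fixed tests don't stay hidden:

```python
import pytest

# A skip mark: the test body is never executed, so a fix goes unnoticed.
skipped = pytest.mark.skip(reason="known failing on this target")

# An xfail mark: the test body IS executed; a failure is reported as XFAIL,
# and a pass is reported as XPASS (or as a failure, with strict=True).
known_failing = pytest.mark.xfail(reason="known failing on this target",
                                  strict=False)


@known_failing
def test_known_failing_target():
    # Still executed under xfail; reported as XFAIL rather than hidden.
    assert False
```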

> From @areusch: recommended removing the global variables
> `tvm_excluded_targets` and `tvm_known_failing_targets`, making the decorator
> the only method by which to apply these traits.  A typo in a global variable
> fails silently, whereas a typo in a decorator raises an error.  I agree, and
> will make that change to the PR.
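
A small generic Python illustration of the failure modes (deliberate typos; this is not TVM's actual collection logic):

```python
# A misspelled global (e.g. `tvm_exclude_targets` instead of
# `tvm_excluded_targets`) is just an unused variable -- collection code
# looking up the correctly spelled name silently finds nothing:
tvm_exclude_targets = ["cuda"]  # typo: silently ignored
excluded = globals().get("tvm_excluded_targets", [])
assert excluded == []  # the intended exclusion never took effect

# A misspelled decorator, by contrast, fails loudly at import time:
caught = ""
try:
    @parametrize_targts("llvm", "cuda")  # typo: raises NameError immediately
    def test_op():
        pass
except NameError as err:
    caught = str(err)
```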

> From @areusch: though perhaps out of scope of this particular RFC, it would
> be good to have some discussion as to which unit tests should be required to
> have parametrized targets, and which are allowed to be target-specific (i.e.
> where reviewers should request changes in PRs that add non-parametrized
> tests).  My first thought would be to rearrange the current
> `tvm/tests/python` folder to mimic the organization of the top-level `src`
> and `python` directories, rather than the current division into
> unit/integration tests.  Then, tests specific to the `target` subdirectory
> and to the target-specific subdirectories in `runtime` would be allowed to
> be non-parametrized, while all others would be parametrized.

Someone mentioned that @masahi was looking into ways to compare CUDA/Vulkan 
correctness across all topi models, and that this might be of interest to him.  
The topi tests were exactly the ones I had in mind as a starting point for 
converting to parametrized targets, for exactly the same reason.

---
[Visit 
Topic](https://discuss.tvm.apache.org/t/rfc-parametrized-unit-tests/9946/4) to 
respond.
