You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm/pull/18543
-- Commit Summary --
* Set CMAKE_CXX_STANDARD from 17 to 20
* Modify UNIVERSAL_PATH
* Fix bugs in the original version
* Use Hyper-threading
* Bring Your Own Datatypes
* Fix bug in constant folding for custom datatypes
* Fix function name error
* Fix sqrt: set require_float_dtype to false
* Add "bring your own datatype" test program
* Comment out pow datatype check
* Add posit operation
* Comment out softmax datatype check
* Set unary arith op require_float_dtype to false
* Unsupported numpy or ml_dtypes dtype('O') when importing ONNX model using Relax frontend
* Add _convert_struct_info
-- File Changes --
A 3rdparty/universal (1)
M CMakeLists.txt (2)
M cmake/modules/contrib/Posit.cmake (2)
A docs/how_to/tutorials/TinyLlama-1.1B-Chat-v1.0 (1)
M docs/how_to/tutorials/optimize_llm.py (8)
A docs/how_to/tutorials/register.py (138)
A example/BYODT/register.py (275)
A example/BYODT/sch_handed.py (116)
A example/BYODT/test_posites2.py (148)
A example/LLM/GPT-2/Compile_GPT2.py (259)
A example/LLM/GPT-2/ToMixedPrecision.md (219)
A example/LLM/GPT-2/benchmark_result.csv (4)
A example/LLM/GPT-2/out_fp16 (883)
A example/LLM/GPT-2/out_mixed (1289)
A example/LLM/GPT-2/run_benchmark_tvm (22)
A example/LLM/GPT-2/test.py (216)
A example/LLM/GPT-2/test_comparison.py (395)
A example/LLM/Qwen/Compile_Qwen.py (257)
A example/LLM/Qwen/NUMERICAL_SENSITIVITY_REPORT.md (137)
A example/LLM/Qwen/analyze_numerical_sensitivity.py (232)
A example/LLM/Qwen/benchmark_result.csv (8)
A example/LLM/Qwen/check_normalization_precision.py (134)
A example/LLM/Qwen/root_cause_summary.py (187)
A example/LLM/Qwen/run_benchmark_tvm (28)
A example/LLM/Qwen/test.py (219)
A example/LLM/Qwen/test_comparison.py (418)
A example/LLM/Whisper/Compile_Decoder.py (213)
A example/LLM/Whisper/Compile_Decoder_with_past.py (221)
A example/LLM/Whisper/Compile_Encoder.py (211)
A example/LLM/Whisper/benchmark_result.csv (3)
A example/LLM/Whisper/data.npy (0)
A example/LLM/Whisper/run_benchmark_tvm (20)
A example/LLM/Whisper/test.py (305)
A example/LLM/llama/Compile_Llama.py (257)
A example/LLM/llama/benchmark_result.csv (17)
A example/LLM/llama/run_benchmark_tvm (28)
A example/LLM/llama/sch_handed.py (221)
A example/LLM/llama/test.py (218)
A example/LLM/llama/test_comparison.py (418)
A python/tvm/relax/frontend/change_datatype.py (108)
M python/tvm/relax/frontend/onnx/onnx_frontend.py (34)
M python/tvm/relax/transform/legalize_ops/datatype.py (3)
M python/tvm/target/datatype.py (7)
M src/relax/op/nn/nn.cc (12)
M src/relax/op/tensor/unary.cc (12)
M src/runtime/threading_backend.cc (4)
M src/target/datatype/posit/posit-wrapper.cc (447)
M src/target/llvm/codegen_nvptx.cc (4)
M src/tir/op/op.cc (2)
-- Patch Links --
https://github.com/apache/tvm/pull/18543.patch
https://github.com/apache/tvm/pull/18543.diff