discuss-archive
Messages by Thread
[PR] [Relax] Add TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix AssertionError: Unsupported function types `randn.default` [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix AssertionError: Unsupported function types `randn.default` [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix AssertionError: Unsupported function types `randn.default` [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix AssertionError: Unsupported function types `randn.default` [tvm]
via GitHub
[I] [Feature Request] support torch 1.14 [tvm-ffi]
via GitHub
[PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
[PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
[PR] [Relax] Fix HardSigmoid returns 1.0 for NaN input [tvm]
via GitHub
Re: [PR] [Relax] Fix HardSigmoid returns 1.0 for NaN input [tvm]
via GitHub
Re: [PR] [Relax] Fix HardSigmoid returns 1.0 for NaN input [tvm]
via GitHub
[GH] (tvm-ffi/add-array-equality): Workflow run "CI" is working again!
GitBox
[PR] [KVCache] Enable sliding window for ragged prefill (`SelfAttention`) [tvm]
via GitHub
Re: [PR] [KVCache] Enable sliding window for ragged prefill (`SelfAttention`) [tvm]
via GitHub
Re: [PR] [KVCache] Enable sliding window for ragged prefill (`SelfAttention`) [tvm]
via GitHub
[GH] (tvm-ffi/add-array-equality): Workflow run "CI" failed!
GitBox
[PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
[GH] (tvm-ffi/fix-array-negative-index-check): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/fix-compiler-warnings): Workflow run "CI" failed!
GitBox
[PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
[PR] feat: add array __contains__ support [tvm-ffi]
via GitHub
Re: [PR] feat: add array __contains__ support [tvm-ffi]
via GitHub
Re: [PR] feat: add array __contains__ support [tvm-ffi]
via GitHub
Re: [PR] feat: add array __contains__ support [tvm-ffi]
via GitHub
[PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
Re: [PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
Re: [PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
Re: [PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
Re: [PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
[PR] fix: integer overflow in GetDataSize [tvm-ffi]
via GitHub
Re: [PR] fix: integer overflow in GetDataSize [tvm-ffi]
via GitHub
Re: [PR] fix: integer overflow in GetDataSize [tvm-ffi]
via GitHub
Re: [PR] fix: add bounds checking for size() and stride() methods [tvm-ffi]
via GitHub
Re: [PR] fix: add bounds checking for size() and stride() methods [tvm-ffi]
via GitHub
Re: [PR] fix: add bounds checking for size() and stride() methods [tvm-ffi]
via GitHub
[GH] (tvm-ffi/cpp_dtype): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/cpp_dtype): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/cpp_dtype): Workflow run "CI" failed!
GitBox
[PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
Re: [PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
Re: [PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
Re: [PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
Re: [PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
[PR] doc: c++ toolchain [tvm-ffi]
via GitHub
Re: [PR] doc: c++ toolchain [tvm-ffi]
via GitHub
Re: [PR] doc: c++ toolchain [tvm-ffi]
via GitHub
[PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
[PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
Re: [PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
Re: [PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
Re: [PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
Re: [PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
[PR] [Fix] Fix typo in file header comment [tvm]
via GitHub
Re: [PR] [Fix] Fix typo in file header comment [tvm]
via GitHub
Re: [PR] [Fix] Fix typo in file header comment [tvm]
via GitHub
[PR] [Relax] Fixes for metal to support bfloat16 [tvm]
via GitHub
Re: [PR] [Relax] Fixes for metal to support bfloat16 [tvm]
via GitHub
Re: [PR] [Relax] Fixes for metal to support bfloat16 [tvm]
via GitHub
Re: [PR] [Relax] Fixes for metal to support bfloat16 [tvm]
via GitHub
[PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
Re: [PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
Re: [PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
Re: [PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
[PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
[GH] (tvm-ffi/2025-12-23/doc-tensor): Workflow run "CI" is working again!
GitBox
[PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
[GH] (tvm-ffi/feat/u64): Workflow run "CI" failed!
GitBox
[PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
[PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
Re: [PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
Re: [PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
Re: [PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
[PR] [CUDA] Fix cuModuleUnload crash during interpreter shutdown [tvm]
via GitHub
Re: [PR] [CUDA] Fix cuModuleUnload crash during interpreter shutdown [tvm]
via GitHub
Re: [PR] [CUDA] Fix cuModuleUnload crash during interpreter shutdown [tvm]
via GitHub
Re: [PR] [CUDA] Fix cuModuleUnload crash during interpreter shutdown [tvm]
via GitHub
[PR] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
Re: [PR] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
Re: [PR] [Relax] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
Re: [PR] [Relax] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
[PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
[PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
[GH] (tvm-ffi/2025-12-28/docs-release-process): Workflow run "CI" failed!
GitBox
[PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
[PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
[I] [RESULT][VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [RESULT][VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
[PR] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
[PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
[PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
[I] [Bug] Segfault in `tvm.compile` on **LLVM (CPU) target** when `tir.ptx_ldg32=1`: unexpectedly runs `tir::transform::InjectPTXLDG32` / `PTXRewriter` and crashes in `BufferStore` [tvm]
via GitHub
[PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
[GH] (tvm/onnx-edge-pad): Workflow run "CI" is working again!
GitBox
[PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
[PR] [Relax] Refactoring Duplicate cuBLAS/hipBLAS Tests [tvm]
via GitHub
Re: [PR] [Relax] Refactoring Duplicate cuBLAS/hipBLAS Tests [tvm]
via GitHub
Re: [PR] [Relax] Refactoring Duplicate cuBLAS/hipBLAS Tests [tvm]
via GitHub
Re: [PR] [Relax] Refactoring Duplicate cuBLAS/hipBLAS Tests [tvm]
via GitHub
[PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
Re: [PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
Re: [PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
Re: [PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
Re: [PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
[I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Bug] MetaSchedule tune_tir crashes with ScheduleError in RewriteFuseSplitParallelVectorize [tvm]
via GitHub
Re: [I] [Bug] MetaSchedule tune_tir crashes with ScheduleError in RewriteFuseSplitParallelVectorize [tvm]
via GitHub
[I] [Bug] Segfault in `tvm.compile` (Relax→TIR, CUDA target) inside `tir::transform::InjectPTXLDG32` / `PTXRewriter::VisitStmt_(BufferStore)` when compiling `torch.export` model returning `(tril, triu)` tuple [tvm]
via GitHub
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[PR] [BREAKING CHANGE] Prefix all type keys and function names with "tvm." [tvm]
via GitHub
Re: [PR] [BREAKING CHANGE] Prefix all type keys and function names with "tvm." [tvm]
via GitHub
Re: [PR] [BREAKING CHANGE] Prefix all type keys and function names with "tvm." [tvm]
via GitHub
[GH] (tvm/junrus/2025-12-24/update-tvm-ffi): Workflow run "CI" failed!
GitBox
[GH] (tvm/junrus/2025-12-24/update-tvm-ffi): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-12-24/use-slots): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-12-24/use-slots): Workflow run "CI" failed!
GitBox
[PR] feat: Restrict `__slots__=()` for all subclasses of `tvm_ffi.Object` [tvm-ffi]
via GitHub
Re: [PR] feat: Restrict `__slots__=()` for all subclasses of `tvm_ffi.Object` [tvm-ffi]
via GitHub
Re: [PR] feat: Restrict `__slots__=()` for all subclasses of `tvm_ffi.Object` [tvm-ffi]
via GitHub
[PR] Update TVM-FFI to v0.1.6 [tvm]
via GitHub
Re: [PR] Update TVM-FFI to v0.1.6 [tvm]
via GitHub