gemini-code-assist[bot] commented on code in PR #371:
URL: https://github.com/apache/tvm-ffi/pull/371#discussion_r2654653010
##########
tests/cpp/extra/test_c_env_api.cc:
##########
@@ -59,6 +59,14 @@ int TestDLPackManagedTensorAllocatorError(DLTensor* prototype, DLManagedTensorVe
return -1;
}
+TEST(CEnvAPI, TVMFFIEnvSetDLPackManagedTensorAllocator) {
+ auto old_allocator = TVMFFIEnvGetDLPackManagedTensorAllocator();
+ DLPackManagedTensorAllocator pre_allocator;
+ TVMFFIEnvSetDLPackManagedTensorAllocator(TestDLPackManagedTensorAllocator, 0, &pre_allocator);
+ EXPECT_EQ(old_allocator, pre_allocator);
+ TVMFFIEnvSetDLPackManagedTensorAllocator(old_allocator, 0, nullptr);
+}
Review Comment:

The added test case covers the basic scenario, but it is not comprehensive. It passes only because both the thread-local and the global allocators are initially `nullptr`. It does not cover the scenario where a global allocator is active but no thread-local one is set. In that case, `TVMFFIEnvGetDLPackManagedTensorAllocator` would return the global allocator, yet `TVMFFIEnvSetDLPackManagedTensorAllocator` would incorrectly report `nullptr` as the "previous" allocator. This points to a logic issue in `SetDLPackManagedTensorAllocator`.
I suggest adding a test case to cover this scenario, which would look
something like this:
```cpp
TEST(CEnvAPI, TVMFFIEnvSetDLPackManagedTensorAllocator_Global) {
  // Setup: set a global allocator, but no thread-local one.
  TVMFFIEnvSetDLPackManagedTensorAllocator(TestDLPackManagedTensorAllocatorError, 1, nullptr);
  TVMFFIEnvSetDLPackManagedTensorAllocator(nullptr, 0, nullptr);  // Unset thread-local
  // The effective allocator should be the global one.
  auto old_allocator = TVMFFIEnvGetDLPackManagedTensorAllocator();
  EXPECT_EQ(old_allocator, TestDLPackManagedTensorAllocatorError);
  // Set a new thread-local allocator and request the previous one.
  DLPackManagedTensorAllocator pre_allocator;
  TVMFFIEnvSetDLPackManagedTensorAllocator(TestDLPackManagedTensorAllocator, 0, &pre_allocator);
  // The returned previous allocator should be the effective (global) one.
  EXPECT_EQ(old_allocator, pre_allocator);
  // Cleanup: clear the global allocator.
  TVMFFIEnvSetDLPackManagedTensorAllocator(nullptr, 1, nullptr);
}
```
This test would fail with the current implementation and highlight the need
for a fix in `src/ffi/extra/env_context.cc`.
##########
src/ffi/extra/env_context.cc:
##########
@@ -65,7 +65,6 @@ class EnvContext {
 void SetDLPackManagedTensorAllocator(DLPackManagedTensorAllocator allocator,
                                      int write_to_global_context,
                                      DLPackManagedTensorAllocator* opt_out_original_allocator) {
- dlpack_allocator_ = allocator;
Review Comment:

Removing this assignment from the beginning of the function is the correct
first step to fix the bug where the new allocator was returned instead of the
previous one.
However, a more subtle logic issue remains. After this change, the function
will return the previous *thread-local* allocator. But
`GetDLPackManagedTensorAllocator` returns the *effective* allocator
(thread-local if available, otherwise global). For consistency,
`SetDLPackManagedTensorAllocator` should also return the previously *effective*
allocator.
To fix this, you should capture the result of
`GetDLPackManagedTensorAllocator()` *before* modifying any state, and return
that as the original allocator.
For example:
```cpp
void SetDLPackManagedTensorAllocator(...) {
  if (opt_out_original_allocator != nullptr) {
    *opt_out_original_allocator = GetDLPackManagedTensorAllocator();
  }
  if (write_to_global_context != 0) {
    GlobalTensorAllocator() = allocator;
  }
  dlpack_allocator_ = allocator;
}
```
```
This would make the behavior consistent and robust. I've also suggested a
test case in `test_c_env_api.cc` that would fail with the current
implementation but pass with this suggested fix.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]