Hi Paul and Emilio,

> -----Original Message-----
> From: Paul Gevers <elb...@debian.org>
>
> (Paul)
> Will you think about fixing this (eventually)? Having libraries
> co-installable is a common theme, particularly during transitions and
> during upgrades.

I would like to, but I currently have little idea how to handle it, i.e. how to
load different versions of libtorch into Python simultaneously. I am not clear
on all the details, but loading more than one certainly leads to problems, as
shown in [1]:

terminate called after throwing an instance of 'c10::Error'
  what():  Tried to register multiple backend fallbacks for the same dispatch 
key Conjugate; previous registration registered at 
./aten/src/ATen/ConjugateFallback.cpp:17, new registration registered at 
./aten/src/ATen/ConjugateFallback.cpp:17
Exception raised from registerFallback at 
./aten/src/ATen/core/dispatch/Dispatcher.cpp:426 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, 
std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> 
>) + 0x92 (0x7f219317a7f2 in /lib/x86_64-linux-gnu/libc10.so.2.5)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, 
std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > 
const&) + 0x76 (0x7f2193120476 in /lib/x86_64-linux-gnu/libc10.so.2.5)
frame #2: c10::Dispatcher::registerFallback(c10::DispatchKey, 
c10::KernelFunction, std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> >) + 0xeb4 (0x7f21857d1a94 in 
/lib/x86_64-linux-gnu/libtorch_cpu.so.2.5)
frame #3: torch::Library::_fallback(torch::CppFunction&&) & + 0x42e 
(0x7f218581d90e in /lib/x86_64-linux-gnu/libtorch_cpu.so.2.5)

Note that the libtorch_cpu.so.2.5 here is pulled in by the old pytorch-scatter,
while pytorch definitely wants *ONLY* libtorch_cpu.so.2.6 to be used.
Since upstream manages dependencies in a tightly coupled way, they are
unlikely to provide useful support for this.

[1]: 
https://buildd.debian.org/status/fetch.php?pkg=pytorch-sparse&arch=amd64&ver=0.6.18-2%2Bb5&stamp=1739960729&raw=0
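
As a small diagnostic sketch of my own (not taken from the bug log; importing 
the old rdep is only described in a comment), the following shows which 
libtorch_cpu sonames end up mapped into a process. Seeing both .so.2.5 and 
.so.2.6 at once is exactly the situation that produces the c10::Error above:

#!/usr/bin/env python3
# Diagnostic sketch: list the libtorch_cpu sonames mapped into this process.
import torch    # python3-torch pulls in libtorch_cpu.so.2.6

# Importing an rdep whose extension was built against the old ABI (e.g.
# torch_sparse, as in the failing autopkgtest) would additionally map
# libtorch_cpu.so.2.5 into the same process at this point.

with open("/proc/self/maps") as maps:
    sonames = {line.rsplit("/", 1)[-1].strip()
               for line in maps if "libtorch_cpu" in line}
print(sonames)  # more than one entry means conflicting libtorch copies loaded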

> (Emilio)
> Britney schedules the pytorch tests against rdeps in testing, to ensure they 
> don't break, as pytorch could migrate before and independently of its rdeps. 
> If 
> the tests break that way, and it's a test-only thing, we could ignore it. If 
> the 
> rdeps actually break, then perhaps pytorch should gain Breaks against the old 
> rdep versions. Then the autopkgtests will be scheduled properly.
>
> (Paul)
> Maybe not elegant per se, but it would be correct: add a versioned
> Breaks to pytorch to the versions in testing that it breaks. Then the
> migration software will also be able to schedule the tests with the
> right things from unstable (I think. But while I'm thinking about it,
> I'm not 100% sure if it handles the binNMU'd case correctly).

Thank you both for explaining! I will add the corresponding Breaks to pytorch.
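
Concretely, I am thinking of something along these lines in debian/control of
src:pytorch (a rough sketch only: which binary package should carry the Breaks,
the list of rdeps, and the version bounds are placeholders I still need to work
out, and the bounds have to be chosen so that binNMUs built against the old
libtorch are covered as well):

Package: libtorch2.6
...
Breaks: python3-torch-scatter (<< FIRST-VERSION-BUILT-AGAINST-2.6~),
        python3-torch-sparse (<< FIRST-VERSION-BUILT-AGAINST-2.6~)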

-- 
Thanks,
Shengqi Chen
