https://bugs.kde.org/show_bug.cgi?id=497938
--- Comment #4 from chair-tweet-de...@duck.com ---
Indeed, the fact that it's Python might complicate things. It would likely be possible to export the model to ONNX, but that would require hosting the converted model, since I don't think it's published in that format. The ONNX Runtime is also quite large: roughly 50 MB to 300 MB depending on the OS and whether GPU support is included. On top of that, using ONNX would add extra complexity. There is also LibTorch, which could run the model without ONNX, but the model would still need to be converted, and LibTorch is yet another dependency.