>>>>> "Bill" == Bill Allombert <ballo...@debian.org> writes:


    Bill> Without the original training data, we have no way to know
    Bill> what it is "inside" the model. The model could generate
    Bill> backdoors and non-free copyrighted material or even more
    Bill> harmful content.

And yet we have accepted x86 machine code as the preferred form of
modification.
Inspectability (as opposed to preferred form of modification) has never
been at the core of the DFSG.
Typically, modifiability has come with some degree of inspectability.

Machine learning models are a case where those two properties split.
And there is sufficient history, in my mind, that we do not require
inspectability the same way we prefer modifiability.

I also think we will start to develop black box inspection tools for
machine learning models, and so the level of inspectability we get with
model weights will improve over time.
