On Wed, Apr 01, 2020 at 10:37:07AM +0200, Gard Spreemann wrote:
> Or, another example that I can imagine plausibly arising in practice:
> suppose a terabyte of raw data was collected from a scientific
> experiment or simulation in order to produce (among other things) a
> plot in the form of a 100 KB image that it is useful to distribute in
> documentation that goes along with some DFSG-free code. Clearly the
> "preferred form of modification" is the raw data, together with the
> code that processed it, but it seems impractical to expect the
> maintainer to go through a laborious process that perhaps even
> requires highly specialized expertise in order to distribute the raw
> data and reassemble the plot from it.
In this case I agree that distributing the resulting image as-is is the
sensible thing to do.

Another interesting point came to my mind: if the resulting product is
a pre-trained neural network instead, I reach the completely opposite
conclusion. Further, some neural networks are trained on Wikipedia
dumps (CC-licensed). Are we going to upload Wikipedia dumps to the
archive when our users need to use these models?

What, then, is the essential difference between a JPEG picture and a
pretrained neural network? From an abstract perspective they are both
multi-dimensional numerical arrays, but a normal human can understand
and modify the picture's pixels, while being unable to understand or
modify the network's parameters.
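To make the comparison concrete, here is a minimal Python sketch
(assuming numpy, Pillow, and PyTorch are installed; the file names
"plot.png" and "model.pt" are hypothetical) showing that both objects
look the same once loaded:

    # Both an image and a pretrained network are numerical arrays.
    import numpy as np
    from PIL import Image
    import torch

    # The 100 KB plot: a (height, width, 3) uint8 array. A human can
    # open it in GIMP and meaningfully edit the pixels.
    pixels = np.asarray(Image.open("plot.png"))
    print(pixels.shape, pixels.dtype)

    # The pretrained model: a dict of float tensors. Structurally
    # similar arrays, but no human can meaningfully edit them by hand;
    # the preferred form for modification is the training data plus
    # the training code.
    weights = torch.load("model.pt", map_location="cpu")
    for name, tensor in weights.items():
        print(name, tuple(tensor.shape))

The data structures are interchangeable; only the human ability to
inspect and modify them differs.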