On Tue, Feb 27, 2024 at 5:58 PM Tim Flink <[email protected]> wrote:
>
>
>
> On 2/26/24 19:06, Richard Fontana wrote:
>
> <snip>
>
> >> 4. Is it acceptable to package code which downloads pre-trained weights 
> >> from a non-Fedora source upon first use post-installation by a user if 
> >> that model and its associated weights are
> >>      a. For a specific model?

What do you mean by "upon first use post-installation"? Does that mean
I install the package, and the first time I launch it or whatever, it
automatically downloads some set of pre-trained weights, or is this
something that would be controlled by the user? The example you gave
suggests the latter but I wasn't sure if I was misunderstanding.

Richard




> >>      b. For a user-defined model which may or may not exist at the time of 
> >> packaging?
> >>
> >>
> >>
> >> I can provide examples of any of these situations if that would be helpful.
> >
> > Can you elaborate on 4a/4b with examples?
>
> There are two simple examples for the two cases I mentioned (4a and 4b) at 
> the bottom of this email.
>
> Tim
>
> -----------------------------------------------------------------
> 4a - code that downloads pre-trained weights for a specific model
> -----------------------------------------------------------------
>
> torchvision [1] is a pytorch-adjacent library which contains "Datasets, 
> Transforms and Models specific to Computer Vision". torchvision contains code 
> implementing several pre-defined model structures which can be used with or 
> without pre-trained weights [2]. torchvision is distributed under a BSD 
> 3-clause license [3] and is currently packaged in Fedora as 
> python-torchvision, but all of the model-specific code is removed at package 
> build time and is not distributed in the Fedora package.
>
> As an example, to instantiate a vision transformer (ViT) base model variant 
> with a 16x16 input patch size and download pre-trained weights, the following 
> python code could be used:
>
> ```
> import torchvision
>
> # Requesting weights triggers the download on first use; calling
> # vit_b_16() with no arguments builds the model with random weights.
> vitb16 = torchvision.models.vit_b_16(
>     weights=torchvision.models.ViT_B_16_Weights.DEFAULT)
> ```
>
> The code describing the vit_b_16 model is included in torchvision but the 
> weights are downloaded from an external site when the model is first used. At 
> the time I write this, the weights are downloaded from 
> https://download.pytorch.org/models/vit_b_16-c867db91.pth
>
> In this case and for all the other models contained in torchvision, the exact 
> links to the pretrained weights are all contained within the torchvision code.
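
The download-on-first-use pattern described above can be sketched generically: 
a helper checks a local cache directory and fetches the weights file only if 
it is missing, so the network is touched once per installation. This is a 
minimal illustration, not torchvision's actual implementation (torchvision 
delegates to torch.hub's cache under ~/.cache/torch); the names fetch_weights 
and fetcher are hypothetical, and the fetcher callable is injected so the 
sketch can be exercised without network access.

```python
import pathlib
import tempfile

def fetch_weights(url, cache_dir, fetcher):
    """Download `url` into `cache_dir` on first use; reuse the cached copy
    on every later call. `fetcher` is any callable taking a URL and
    returning bytes (e.g. a thin wrapper around urllib.request.urlopen)."""
    cache_dir = pathlib.Path(cache_dir)
    cache_dir.mkdir(parents=True, exist_ok=True)
    dest = cache_dir / url.rsplit("/", 1)[-1]
    if not dest.exists():              # first use after installation
        dest.write_bytes(fetcher(url))
    return dest

# Exercise the sketch with a stub fetcher that records each network hit.
calls = []
def stub_fetcher(url):
    calls.append(url)
    return b"fake-weights"

with tempfile.TemporaryDirectory() as tmp:
    url = "https://download.pytorch.org/models/vit_b_16-c867db91.pth"
    first = fetch_weights(url, tmp, stub_fetcher)
    second = fetch_weights(url, tmp, stub_fetcher)
```

The second call returns the same cached path without invoking the fetcher, 
which is why a user sees the download exactly once, on first use 
post-installation.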
>
> Something worth noting is that the weights for vit_b_16 are from Facebook's 
> SWAG project [4], which is distributed as CC-BY-NC-4.0 [5] and would not be 
> acceptable for use in a Fedora package. For the other models in torchvision, 
> some of the pre-trained weights have an explicit license (like ViT), but many 
> of them are not distributed under any explicit license (ResNet [6], for 
> example).
>
> [1] https://github.com/pytorch/vision
> [2] https://github.com/pytorch/vision/tree/main/torchvision/models
> [3] https://github.com/pytorch/vision/blob/main/LICENSE
> [4] https://github.com/facebookresearch/SWAG
> [5] https://github.com/facebookresearch/SWAG/blob/main/LICENSE
> [6] https://pytorch.org/hub/pytorch_vision_resnet/
>
>
> ----------------------------------------------------
> 4b - code that downloads a somewhat arbitrary model
> ----------------------------------------------------
>
> One of the newer features of pytorch (which is still considered to be in 
> beta) is the ability to interface with "PyTorch Hub" [7] to use pre-defined 
> and pre-trained models which have been uploaded by other users. At the time 
> of this writing, the pytorch hub appears to be moderated by the pytorch 
> team, but the underlying code supports loading semi-arbitrary models from 
> user-defined locations at runtime.
>
> As an example, this code loads a MiDaS v3 large model with pre-trained 
> weights directly from Intel's github repo [8]:
>
> ```
> import torch
>
> model_type = "DPT_Large"
> midas = torch.hub.load("intel-isl/MiDaS", model_type)
> ```
>
> Similar to the ViT example above, this model will download weights from a 
> url (https://github.com/isl-org/MiDaS/releases/download/v3/dpt_large_384.pt 
> at the time of this writing). Unlike the ViT example, however, the model 
> definition and the location of the weights are determined by code contained 
> in the github repository specified by the user [9], which is itself 
> downloaded at runtime to resolve the exact links to any code and pre-trained 
> weights. The MiDaS repository is distributed under an MIT license [10].
>
>
> [7] https://pytorch.org/hub/
> [8] https://github.com/isl-org/MiDaS
> [9] https://github.com/isl-org/MiDaS/blob/master/hubconf.py#L218
> [10] https://github.com/isl-org/MiDaS/blob/master/LICENSE
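
The extra indirection in the 4b case can be sketched as follows: torch.hub 
turns the "owner/repo[:ref]" string into a GitHub archive URL, downloads and 
imports the repository's hubconf.py, and then calls the module-level function 
named by the second argument. This is a simplified illustration, not 
torch.hub's real code (the real implementation also handles caching, default 
branch detection via the GitHub API, and trust prompts, and this sketch 
hardcodes "main" as the fallback ref); resolve_hub_repo and load_entrypoint 
are hypothetical names.

```python
import types

def resolve_hub_repo(repo, ref="main"):
    """Turn a torch.hub "owner/repo[:ref]" spec into the archive URL that
    would be downloaded at runtime (simplified; real torch.hub queries the
    repo's actual default branch rather than assuming "main")."""
    if ":" in repo:
        repo, ref = repo.split(":", 1)
    owner, name = repo.split("/")
    return f"https://github.com/{owner}/{name}/zipball/{ref}"

def load_entrypoint(hubconf_module, name, *args, **kwargs):
    """Once the archive is unpacked and hubconf.py imported, an entrypoint
    is just a module-level callable looked up by name and invoked."""
    fn = getattr(hubconf_module, name)
    return fn(*args, **kwargs)

# Simulate an already-imported hubconf.py exposing one entrypoint, the way
# MiDaS's hubconf.py exposes DPT_Large.
fake_hubconf = types.SimpleNamespace(DPT_Large=lambda: "midas-model")
archive_url = resolve_hub_repo("intel-isl/MiDaS")
model = load_entrypoint(fake_hubconf, "DPT_Large")
```

The licensing-relevant point is visible in the sketch: both the code that is 
imported and the weight URLs it embeds come from whatever repository string 
the user supplies, so neither is fixed at packaging time.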
>


--
_______________________________________________
legal mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/[email protected]
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue
