On Fri, Jul 15, 2022 at 09:47:27AM +0530, Ani Sinha wrote:
> > Instead of all this mess, can't we just spawn e.g. "git clone --depth 1"?
> > And if the directory exists I would fetch and checkout.
>
> There are two reasons I can think of why I do not like this idea:
>
> (a) a git clone of a whole directory would download all versions of the
> binary whereas we want only a specific version.

You mention shallow clone yourself, and I used --depth 1 above.
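A clone at --depth 1 only downloads the tree at the requested revision, not
every historical version of the binary. To be concrete, something along
these lines is what I have in mind (untested sketch; the repository URL,
directory and tag name are placeholders, not anything that exists today):

  REPO=https://example.org/qemu-acpi-data.git   # placeholder URL
  DIR=acpi-data                                 # placeholder directory
  TAG=v1                                        # placeholder tag name

  if test -d "$DIR/.git"; then
      # directory already exists: fetch only the wanted tag and check it out
      git -C "$DIR" fetch --depth 1 origin "refs/tags/$TAG" &&
          git -C "$DIR" checkout --detach FETCH_HEAD
  else
      # first run: shallow clone of the wanted tag only
      git clone --depth 1 --branch "$TAG" "$REPO" "$DIR"
  fi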
> Downloading a single file
> by shallow cloning or creating a git archive is overkill IMHO when a wget
> style retrieval works just fine.

However, it does not provide for versioning, tagging etc, so you have to
implement your own schema.

> (b) we may later move the binary archives to a ftp server or a google
> drive. git/version control mechanisms are not the best place to store
> binary blobs IMHO. In this case also, wget also works.

Surely neither ftp nor google drive are reasonable dependencies for a free
software project. But qemu does maintain an http server already, so that's
a plus.

I am not insisting on git, but I do not like that security, mirroring,
caching and versioning all have to be hand-rolled and then figured out by
users and maintainers, who frankly have other things to do besides learning
yet another boutique configuration language. And I worry that after a while
we will come up with a new organization schema for the files, old ones will
be moved around, and nothing relying on the URLs will work. git is kind of
good in that it enforces the idea that history is immutable.

If not vanilla git, can we find another utility we can reuse? git lfs? It
seems to be supported by both github and gitlab, though bizarrely github
has bandwidth limits on git lfs but apparently not on vanilla git. Hosting
on qemu.org will require maintaining a server there, though.

All that said, maybe we should just run with it as it is, just so we get
*something* in the door, and then worry about getting the storage side
straight before making this test a requirement for all acpi developers.

-- 
MST